\section{Introduction} Massive multiple-input multiple-output (MIMO) has been regarded as one of the enabling technologies in next-generation wireless communications \cite{marzetta10twc,resek13spmag,larsson14commag,lu14jstsp}. Considering the large number of antennas at the base station (BS), it becomes almost imperative to rely on the time-division duplexing (TDD) channel reciprocity to learn the downlink (DL) channel state information (CSI) from the uplink (UL) channel measurements at the BS, thereby avoiding the huge CSI feedback overhead incurred in frequency-division duplexing (FDD) systems. Even though the antenna array at the BS is large, each user is only equipped with a few antennas, e.g., a single antenna. A user thus only needs to send one UL pilot sequence per transmit antenna to enable the BS to acquire reliable estimates of the many corresponding UL channels from the user to the large antenna array. To ensure the best CSI estimation quality, we want to allocate orthogonal UL pilot sequences to different users so that the pilot transmissions do not interfere with each other. In reality, however, within a limited time period and a limited bandwidth, only a limited number of orthogonal pilot sequences exist. As the number of users becomes large, non-orthogonal pilot sequences are reused by the users served by different BSs, which gives rise to the so-called pilot contamination \cite{marzetta10twc,lu14jstsp} and is a limiting factor in multi-cell massive MIMO systems. Various approaches have been proposed to alleviate the pilot contamination issue in massive MIMO. Recent works include \cite{yin2013jsac,yin2014jstsp,yin2015spawc,hu16twc,muller14jstsp}; see also \cite{lu14jstsp} and references therein for a brief overview. In \cite{yin2013jsac} and \cite{yin2014jstsp}, pilot decontamination was achieved by utilizing the fact that users with non-overlapping angles of arrival (AoA) enjoy asymptotically orthogonal covariance matrices. 
In \cite{yin2015spawc}, AoA diversity and amplitude-based projection were jointly exploited to null the pilot contamination and achieve better channel estimation. A blind singular value decomposition (SVD) based method was proposed in \cite{muller14jstsp} to separate the signal subspace from the interference subspace. In \cite{hu16twc}, a least-squares (LS) channel estimate was derived by treating the blindly detected UL data as pilot symbols, and the pilot contamination effect was shown to diminish as the data length grew. Unlike previous studies, where a single narrow-band channel was typically assumed, in this paper we look into the issue of intra-cell pilot orthogonalization and schemes for mitigating the inter-cell pilot contamination with a realistic massive MIMO OFDM system model. First, we examine how to align the power-delay profiles (PDP, a.k.a. delay power spectrum in \cite{DigitalComBook}) of different users served by one BS so that the pilots sent within one common OFDM symbol are orthogonal. From the aligning rule, we can see that many more users can be sounded in the same OFDM symbol when their channels are sparse in time. In the case of massive MIMO, to alleviate the pilot contamination when the schemes in \cite{yin2013jsac,yin2014jstsp,yin2015spawc} do not apply well due to interfering users' overlapping AoAs, we further propose to exploit the fact that different paths in time are associated with different AoAs, so that the pilot contamination can be significantly reduced by aligning the PDPs of the users served by different BSs appropriately. Thus, PDP aligning can serve as a new baseline design philosophy for massive MIMO UL pilots. This paper is organized as follows: Section \ref{SecSysModel} describes the massive MIMO OFDM system model and the channel model. Section \ref{OrthoPilot} provides a sufficient condition for orthogonal pilot design through PDP aligning, which is also applicable to conventional MIMO systems. 
Then we explain how to mitigate the pilot interference in the case of massive MIMO by PDP alignment in Section \ref{IntfReduction}. Low-complexity pilot designs are provided in Section \ref{Protocol}. Corroborating simulation results are provided in Section \ref{Performance}, and Section \ref{conclusion} concludes the paper. {{\it Notations:} ${\sf Diag}\{\cdots\}$ denotes the diagonal matrix with diagonal elements defined inside the curly brackets. ${A}(i,j)$ refers to the $(i,j)$-th entry of matrix $\bm A$ and $a(i)$ stands for the $i$-th entry of the vector ${\bm a}$. $\bm I_N$ denotes the $N\times N$ identity matrix. ${\sf E}[\cdot]$, ${\tt Tr}(\cdot)$, $(\cdot)^{\dagger}$, $(\cdot)^{T}$, and $(\cdot)^*$ represent expectation, matrix trace, Hermitian transpose, transpose, and conjugate operations, respectively.} \section{System Model and Channel Model}\label{SecSysModel} We consider a MIMO-OFDM system with $B$ macro BSs. Each BS is equipped with a massive antenna array of size $M$ and serves $K$ single-antenna users. Regarding the OFDM waveform, we adopt the following notations: \begin{itemize} \item $N$: total number of sub-carriers, a.k.a. tones; \item $T$: time duration of one OFDM symbol; \item $T_c:=T/N$: time duration of one chip; \item $N_{cp}$: cyclic prefix length in units of $T_c$. \end{itemize} Since the delay spread of each user's channel is less than $N_{cp}$ chips, after standard OFDM receiver processing, the received signal at the $m$-th antenna in BS-$b$ can be expressed as \begin{eqnarray} {\bm y}_m^{(b)}=\sum_{l=1}^{B}\sum_{k=1}^{K}\sqrt{\rho_k^{(l)}}{\bm S}_k^{(l)}{\bm H}_{k,m}^{(l,b)}+{\bm w}_m^{(b)},\label{SysModel} \end{eqnarray} where ${\bm y}_m^{(b)}$ is an $N\times1$ vector containing the received signal over all the tones and the summation is taken over all the BSs and all the served users. 
For user-$k$ in BS-$l$ (we will denote it as user-$(l,k)$ in the sequel), $\rho_k^{(l)}$ denotes the transmitted power over each tone and ${\bm S}_k^{(l)}$ is an $N\times N$ diagonal matrix with diagonal entries being the transmitted pilot sequence. The frequency response of the channel from user-$(l,k)$ to the $m$-th antenna at BS-$b$ is ${\bm H}_{k,m}^{(l,b)}$ and ${\bm w}_m^{(b)}$ represents the receiver noise with covariance ${\sf E}[{\bm w}_m^{(b)}{\bm w}_m^{(b)\dagger}]=\sigma^2 {\bm I}$. The channel frequency response is the discrete Fourier transform (DFT) of the corresponding channel impulse response (CIR) in time domain, i.e. \begin{equation} {\bm H}_{k,m}^{(l,b)}={\bm F}{\bm h}_{k,m}^{(l,b)}, \label{CFR_CIR} \end{equation} where ${\bm F}$ stands for the unitary FFT matrix defined as ${F}(k,n)=\exp\{-j2\pi (k-1)(n-1)/N\}/\sqrt{N}$ and ${\bm h}_{k,m}^{(l,b)}$ is the CIR with the following PDP: \begin{equation} {\bm P}_k^{(l,b)}:={\sf E}[{\bm h}_{k,m}^{(l,b)}{\bm h}_{k,m}^{(l,b)\dagger}], \end{equation} where we have assumed the channels from one user to the antennas at one BS share a common PDP. Assuming uncorrelated scattering as in \cite{DigitalComBook}, i.e. the scattering at two different paths is uncorrelated, the PDP matrix ${\bm P}_k^{(l,b)}$ becomes diagonal. Combining (\ref{SysModel}) and (\ref{CFR_CIR}), we have \begin{eqnarray} {\bm y}_m^{(b)}=\sum_{l=1}^{B}\sum_{k=1}^{K}\sqrt{\rho_k^{(l)}}{\bm S}_k^{(l)}{\bm F}{\bm h}_{k,m}^{(l,b)}+{\bm w}_m^{(b)}.\label{SysModel2} \end{eqnarray} \begin{figure}[t] \centering \epsfig{file=ChannelModel.eps,width=0.43\textwidth} \caption{Spatial channel model.} \label{ChModelFig} \vspace{-0.3cm} \end{figure} In order to characterize the spatial covariance among multiple receive antennas at the BSs, we adopt the multi-path spatial channel model (SCM) defined in \cite{SCM} and \cite{cost2100}, which is illustrated in Fig. \ref{ChModelFig}. 
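As a quick numerical sanity check on (\ref{CFR_CIR}), the following sketch (a toy $N$ and a random CIR, chosen purely for illustration) confirms that the matrix ${\bm F}$ defined above is the unitary DFT matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 16  # toy FFT size, illustrative only
# F(k,n) = exp{-j 2π (k-1)(n-1)/N} / sqrt(N), as defined in the text
F = np.exp(-2j*np.pi*np.outer(np.arange(N), np.arange(N))/N) / np.sqrt(N)
h = rng.standard_normal(N) + 1j*rng.standard_normal(N)  # toy CIR

# (CFR_CIR): the channel frequency response is the (unitary) DFT of the CIR
H = F @ h
print(np.allclose(H, np.fft.fft(h) / np.sqrt(N)))   # True
print(np.allclose(F.conj().T @ F, np.eye(N)))       # True: F is unitary
```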
Each resolvable channel path corresponds to one independent scatterer including $Q$ sub-paths: \begin{equation} \left[h_{k,1}^{(l,b)}(n),...,h_{k,M}^{(l,b)}(n)\right]^T= \sqrt{\frac{P_{k}^{l,b}(n,n)}{Q}}\sum_{q=1}^Q {\bm a}(\theta_{k,n,q}^{(l,b)})e^{j\phi_q},\label{channelModel} \end{equation} where the arriving angles: $\{\theta_{k,n,q}^{(l,b)}\}_{q=1}^{Q}$ of the sub-paths are uniformly distributed within the angle spread (AS) of this path, the phases: $\{\phi_q\}$ are drawn from a uniform distribution over $[0,2\pi]$, and the vector ${\bm a}(\theta)$ stands for the steering vector of the receive antenna array when the AoA of the incoming path is $\theta$. Assuming a uniform linear array (ULA), the steering vector can be expressed as: \begin{equation} {\bm a}(\theta)=\left[1, e^{-j2\pi D/\lambda\cos(\theta)},..., e^{-j2\pi(M-1)D/\lambda\cos(\theta)}\right]^T, \end{equation} where $D$ is the antenna spacing and $\lambda$ denotes the carrier wavelength. As $M\rightarrow\infty$, we can obtain one noticeable result as follows: $\forall \theta_1\neq \theta_2\in(0,\pi)$, as $D<\lambda/2$, \begin{eqnarray} &&\lim_{M\rightarrow\infty}\frac{{\bm a}(\theta_1)^{\dagger}{\bm a}(\theta_2)}{\sqrt{{\bm a}(\theta_1)^{\dagger}{\bm a}(\theta_1)}\sqrt{{\bm a}(\theta_2)^{\dagger}{\bm a}(\theta_2)}}=\nonumber\\ &&\lim_{M\rightarrow\infty}\left|\frac{\sin(M\pi D(\cos(\theta_1)-\cos(\theta_2))/\lambda)}{M\sin(\pi D(\cos(\theta_1)-\cos(\theta_2))/\lambda)}\right|\leq\nonumber\\ &&\lim_{M\rightarrow\infty}\left|\frac{1}{M\sin(\pi D(\cos(\theta_1)-\cos(\theta_2))/\lambda)}\right|=0, \label{AsympOrtho} \end{eqnarray} where we have assumed $\theta_1$ and $\theta_2$ are independent of $M$. This result indicates the asymptotic orthogonality of the paths arriving from different angles. In the following sections, we will take advantage of this important fact to design UL pilots. 
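The asymptotic orthogonality in (\ref{AsympOrtho}) is easy to check numerically. In the sketch below, the angles, the spacing $D=0.45\lambda$, and the array sizes are arbitrary illustrative choices (not taken from the paper's simulations); the normalized correlation of two steering vectors decays like $1/M$:

```python
import numpy as np

def steering(theta, M, d=0.45):
    # ULA steering vector a(θ) with spacing D = d·λ, d < 1/2
    return np.exp(-2j*np.pi*d*np.arange(M)*np.cos(theta))

def normalized_correlation(theta1, theta2, M):
    # |a(θ1)^† a(θ2)| / (‖a(θ1)‖ ‖a(θ2)‖), the quantity in (AsympOrtho)
    a1, a2 = steering(theta1, M), steering(theta2, M)
    return abs(np.vdot(a1, a2)) / (np.linalg.norm(a1)*np.linalg.norm(a2))

for M in (10, 100, 1000, 10000):
    print(M, round(normalized_correlation(0.6, 1.1, M), 5))
```

The printed values shrink roughly as $1/M$, in line with the bound in the last step of (\ref{AsympOrtho}).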
\section{Orthogonal Designs via PDP Aligning} \label{OrthoPilot} Using the observation in (\ref{SysModel2}), we can obtain the MMSE estimate for the channel between user-$(b,u)$ and the $m$-th antenna at BS-$b$ as follows: \begin{eqnarray} &&\hspace{-0.8cm}\hat{{\bm h}}_{u,m}^{(b,b)}={\sf E}[{\bm h}_{u,m}^{(b,b)}{\bm y}_m^{(b)\dagger}]\left({\sf E}[{\bm y}_m^{(b)}{\bm y}_m^{(b)\dagger}]\right)^{-1}{\bm y}_m^{(b)}\nonumber\\ &&\hspace{-0.8cm}=\sqrt{\rho_{u}^{(b)}}{\bm P}_{u}^{(b,b)}{\bm F}^{\dagger}{\bm S}_u^{(b)\dagger}\cdot\nonumber\\ &&\hspace{-0.8cm}\left(\sigma^2{\bm I}+ \rho_{u}^{(b)}{\bm S}_u^{(b)}{\bm F}{\bm P}_{u}^{(b,b)}{\bm F}^{\dagger}{\bm S}_u^{(b)\dagger}+\Delta_1+\Delta_2\right)^{-1}{\bm y}_m^{(b)}, \end{eqnarray} where $$\Delta_1:=\sum_{k=1,k\neq u}^{K}\rho_{k}^{(b)}{\bm S}_k^{(b)}{\bm F}{\bm P}_{k}^{(b,b)}{\bm F}^{\dagger}{\bm S}_k^{(b)\dagger}$$ contains the interference from the intra-cell users, and $$\Delta_2:=\sum_{l=1,l\neq b}^{B}\sum_{k=1}^{K}\rho_{k}^{(l)}{\bm S}_k^{(l)}{\bm F}{\bm P}_{k}^{(l,b)}{\bm F}^{\dagger}{\bm S}_k^{(l)\dagger}$$ includes the inter-cell interference. Note in the above derivation, we have assumed the channels among different users are independent. Defining the channel estimation error as ${\bm \epsilon}:={{\bm h}}_{u,m}^{(b,b)}-\hat{{\bm h}}_{u,m}^{(b,b)}$, we can obtain its covariance as follows: \begin{eqnarray} {\sf E}[{\bm \epsilon}{\bm \epsilon}^{\dagger}]\hspace{-0.3cm}&=&\hspace{-0.3cm} {\bm P}_u^{(b,b)}-{\rho_{u}^{(b)}}{\bm P}_{u}^{(b,b)}{\bm F}^{\dagger}{\bm S}_u^{(b)\dagger}\cdot\nonumber\\ &&\hspace{-0.3cm}\left( \sigma^2{\bm I}+ \rho_{u}^{(b)}{\bm S}_u^{(b)}{\bm F}{\bm P}_{u}^{(b,b)}{\bm F}^{\dagger}{\bm S}_u^{(b)\dagger}+\Delta_1+\Delta_2\right)^{-1}\cdot\nonumber\\ &&\hspace{-0.3cm}{\bm S}_u^{(b)}{\bm F}{\bm P}_{u}^{(b,b)}. \label{MSE} \end{eqnarray} Without loss of generality, we let ${\bm S}_u^{(b)}{\bm S}_u^{(b)\dagger}={\bm I}$, i.e. the pilot sequence enjoys constant unit modulus. 
Then, we can rewrite (\ref{MSE}) as \begin{eqnarray} {\sf E}[{\bm \epsilon}{\bm \epsilon}^{\dagger}]\hspace{-0.3cm}&=&\hspace{-0.3cm} {\bm P}_u^{(b,b)}-{\rho_{u}^{(b)}}{\bm P}_{u}^{(b,b)}\cdot\nonumber\\ &&\hspace{-0.3cm}\left( \sigma^2{\bm I}+ \rho_{u}^{(b)}{\bm P}_{u}^{(b,b)}+\tilde{\Delta}_1+\tilde{\Delta}_2\right)^{-1}{\bm P}_{u}^{(b,b)}, \label{MSE2} \end{eqnarray} where $\tilde{\bm \Delta}_1={\bm F}^{\dagger}{\bm S}_u^{(b)\dagger}{\bm \Delta}_1{\bm S}_u^{(b)}{\bm F}$ and $\tilde{\bm \Delta}_2={\bm F}^{\dagger}{\bm S}_u^{(b)\dagger}{\bm \Delta}_2{\bm S}_u^{(b)}{\bm F}$. In the absence of interference, the corresponding MSE covariance can be computed accordingly as \begin{equation} {\sf E}[{\bm \epsilon_0}{\bm \epsilon}_0^{\dagger}]= {\bm P}_u^{(b,b)}-{\rho_{u}^{(b)}}{\bm P}_{u}^{(b,b)} \left(\sigma^2{\bm I}+ \rho_{u}^{(b)}{\bm P}_{u}^{(b,b)}\right)^{-1}{\bm P}_{u}^{(b,b)}.\label{MSE3_clean} \end{equation} To achieve the orthogonality between the received pilots from the interfering user-$(l,k)$ and the targeted user-$(b,u)$, from (\ref{MSE2}) and (\ref{MSE3_clean}), we need to ensure ${\bm S}_u^{(b)}$ and ${\bm S}_k^{(l)}$ satisfy the following condition: \begin{eqnarray} &&\hspace{-1.0cm}{\bm P}_{u}^{(b,b)}\left(\sigma^2{\bm I}+ \rho_{u}^{(b)}{\bm P}_{u}^{(b,b)}\right)^{-1}{\bm P}_{u}^{(b,b)}=\nonumber\\ &&\hspace{-1.0cm}{\bm P}_{u}^{(b,b)} \left(\sigma^2{\bm I}+ \rho_{u}^{(b)}{\bm P}_{u}^{(b,b)} +\rho_{k}^{(l)}{\bm \Theta}{\bm P}_{k}^{(l,b)}{\bm \Theta}^{\dagger}\right)^{-1}{\bm P}_{u}^{(b,b)},\label{OrthoCond1} \end{eqnarray} where ${\bm \Theta}:={\bm F}^{\dagger}{\bm S}_u^{(b)\dagger}{\bm S}_k^{(l)}{\bm F}$. 
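The step from (\ref{MSE}) to (\ref{MSE2}) only uses the unitarity of ${\bm S}_u^{(b)}$ and ${\bm F}$. A small numerical check (hypothetical sizes, powers, and a random unit-modulus pilot, not the paper's setup) confirms the two forms agree for the interference-free part:

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma2, rho = 16, 0.2, 1.0   # illustrative sizes and powers
F = np.exp(-2j*np.pi*np.outer(np.arange(N), np.arange(N))/N) / np.sqrt(N)
S = np.diag(np.exp(1j*rng.uniform(0, 2*np.pi, N)))  # random unit-modulus pilot
P = np.diag(rng.uniform(0.1, 1.0, N))               # diagonal PDP

# (MSE) with Δ1 = Δ2 = 0: frequency-domain form
A  = sigma2*np.eye(N) + rho * S @ F @ P @ F.conj().T @ S.conj().T
E1 = P - rho * P @ F.conj().T @ S.conj().T @ np.linalg.solve(A, S @ F @ P)
# (MSE3_clean): time-domain form after absorbing the unitary product S F
E2 = P - rho * P @ np.linalg.solve(sigma2*np.eye(N) + rho*P, P)
print(np.linalg.norm(E1 - E2) < 1e-10)   # True
```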
From (\ref{OrthoCond1}), we can establish the following requirement on the pilot sequences:\\ \noindent{\bf Proposition 1:} {\it To achieve orthogonality between the UL pilots from user-$(l,k)$ and user-$(b,u)$ at each receive antenna in the BS, the constant unit-modulus pilot sequences need to satisfy the following condition: \begin{eqnarray} {\bm P}_{u}^{(b,b)}{\bm \Theta}{\bm P}_{k}^{(l,b)}{\bm \Theta}^{\dagger}={\bm 0}.\label{OrthoCond2} \end{eqnarray} } In the following discussions, we adopt cyclic-shift pilot sequences as in the LTE UL \cite{LTEBook}: \begin{equation} {\bm S}_k^{(l)} = {\sf Diag}\left\{1,e^{j\frac{2\pi\tau_{l,k}}{N}},...,e^{j\frac{2\pi\tau_{l,k}(N-1)}{N}} \right\}\cdot{\bm S}_0, \label{CyclicShift} \end{equation} where $\tau_{l,k}$ is the amount of cyclic time shift and ${\bm S}_0$ is the base unshifted sequence with constant modulus. With the above sequence designs, the matrix ${\bm \Theta}$ becomes unitary and circulant with the first column vector taking the following form: \begin{eqnarray} {\bm \Theta}(:,1)^T=[\underbrace{0,...,0}_{N-\Delta\tau},1,\underbrace{0,...,0}_{\Delta\tau-1}],\label{FirstColumn} \end{eqnarray} where $\Delta\tau:=\tau_{l,k}-\tau_{b,u}$ refers to the amount of relative cyclic shift between the two users. From (\ref{OrthoCond2}), we obtain the corresponding requirement on $\Delta\tau$ as: \begin{equation} {\bm P}_{u}^{(b,b)}\tilde{\bm P}_{k}^{(l,b)}={\bm 0},\label{OrthoCond3} \end{equation} where $\tilde{\bm P}_{k}^{(l,b)}:={\bm \Theta}{\bm P}_{k}^{(l,b)}{\bm \Theta}^{\dagger}$ is the result of cyclically shifting the diagonals of ${\bm P}_{k}^{(l,b)}$ by $-\Delta\tau$. The orthogonality condition in (\ref{OrthoCond3}) simply states that, to ensure orthogonal pilots between two users, the relative cyclic shift between the two users should be chosen such that the supports of the shifted PDPs are non-overlapping. 
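Both the structure of ${\bm \Theta}$ in (\ref{FirstColumn}) and the support condition (\ref{OrthoCond3}) can be verified directly in a few lines. The sketch below uses toy sizes and invented sparse PDPs (the taps, powers, and $\Delta\tau$ are illustrative only):

```python
import numpy as np

N, dtau = 16, 5   # toy FFT size and relative cyclic shift Δτ
F = np.exp(-2j*np.pi*np.outer(np.arange(N), np.arange(N))/N) / np.sqrt(N)
# for unit-modulus pilots, S_u^† S_k reduces to the phase ramp Diag{e^{j2π Δτ n/N}}
Theta = F.conj().T @ np.diag(np.exp(2j*np.pi*dtau*np.arange(N)/N)) @ F

# first column: N-Δτ zeros, a single 1, then Δτ-1 zeros, as in (FirstColumn)
print(int(np.argmax(np.abs(Theta[:, 0]))))        # 11 = N - Δτ (zero-based)

# Θ acts on a time-domain vector as a cyclic shift by Δτ taps
h = np.arange(N, dtype=float)
print(np.allclose(Theta @ h, np.roll(h, -dtau)))  # True

# (OrthoCond3): the shifted PDP supports must be disjoint (toy sparse PDPs)
p_u = np.zeros(N); p_u[[0, 2]] = [1.0, 0.5]       # PDP diagonal of user-(b,u)
p_k = np.zeros(N); p_k[[0, 3]] = [0.8, 0.4]       # PDP diagonal of user-(l,k)
disjoint = lambda shift: bool(np.all(p_u * np.roll(p_k, shift) == 0))
print(disjoint(4), disjoint(2))                   # True False
```

Note that a relative shift of $4$ already suffices for these sparse PDPs, well below a full $N_{cp}$-sized guard, which is the point made after (\ref{OrthoCond3}).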
\begin{figure} \centering \epsfig{file=PDPAlignment.eps,width=0.45\textwidth} \caption{Orthogonality via PDP alignment.} \label{PDPAlign1} \vspace{-0.4cm} \end{figure} Conventional designs assume that the PDPs of all users are confined within the first $N_{cp}$ taps \cite{LTEBook}. To ensure the orthogonality in (\ref{OrthoCond3}), we see that up to $\frac{N}{N_{cp}}$ users\footnote{Here we assume $N_{cp}$ divides $N$. Otherwise, we take the floor: $\lfloor \frac{N}{N_{cp}}\rfloor$.} can transmit pilots in the same OFDM symbol, and the relative cyclic shift values among users are $\Delta\tau=kN_{cp},k=0,1,...,N/N_{cp}-1$. In fact, from the general condition specified in (\ref{OrthoCond3}), we can easily see that the amount of cyclic shift required can be well below $N_{cp}$ when the PDPs are sparse in time (see also Fig. \ref{PDPAlign1}). Even though we can exploit the channel sparsity and the proposed PDP alignment as illustrated in Fig. \ref{PDPAlign1}, it is clear that there are only a limited number of orthogonal pilot sequences available within the channel coherence time. Pilot contamination refers to the fact that, as $B\cdot K$ becomes large, we cannot provide each user with an orthogonal pilot sequence. Instead, a typical solution is to orthogonalize the users served by a common BS while allowing non-orthogonal pilot sequences among users served by different BSs. In the next section, we will examine a novel scheme applicable in the case of massive MIMO to reduce the inter-cell interference. \section{PDP Alignment for Interference Reduction} \label{IntfReduction} Assume the users $\{1,...,K_l\}$ served by BS-$l$ are assigned the cyclic shifts $\{\tau_{l,1},...,\tau_{l,K_l}\}$ in (\ref{CyclicShift}) and the orthogonality condition in (\ref{OrthoCond3}) is met with these cyclic shifts. Since all the orthogonal cyclic shift resources are used to enable intra-cell orthogonality, the users served by different BSs have to reuse the same set of pilot sequences. 
From (\ref{SysModel2}), we can have the following signal model at the $m$-th antenna of BS-$b$: \begin{eqnarray} {\bm y}_m^{(b)}&=&\sum_{l=1}^{B}\sum_{k=1}^{K_{l}}\sqrt{\rho_k^{(l)}}{\bm S}_k^{(l)}{\bm F}{\bm h}_{k,m}^{(l,b)}+{\bm w}_m^{(b)}.\label{SysModel3} \end{eqnarray} Then, we can obtain \begin{eqnarray} {\bm z}_m^{(b)}&:=&{\bm F}^{\dagger}{\bm S}_0^{\dagger}{\bm y}_m^{(b)}=\sum_{l=1}^{B}\sum_{k=1}^{K_l}\sqrt{\rho_k^{(l)}}{\bm \Theta}_k^{(l)}{\bm h}_{k,m}^{(l,b)}+{\bm \omega}_m^{(b)}\nonumber\\ &=&\sum_{l=1}^{B}\breve{\bm h}_{m}^{(l,b)}+{\bm \omega}_m^{(b)},\label{SysModel4} \end{eqnarray} where $\breve{\bm h}_{m}^{(l,b)}:=\sum_{k=1}^{K_{l}}\sqrt{\rho_k^{(l)}}{\bm \Theta}_k^{(l)}{\bm h}_{k,m}^{(l,b)}$ is the aggregate channel of all those $K_l$ non-interfering intra-cell orthogonal users served by BS-$l$, ${\bm \Theta}_k^{(l)}:={\bm F}^{\dagger}{\bm S}_0^{\dagger}{\bm S}_k^{(l)}{\bm F}$ is a circulant matrix with the first column vector defined as in (\ref{FirstColumn}) with $\Delta\tau=\tau_{l,k}$, and the noise term ${\bm\omega}_m^{(b)}$ has the same covariance as ${\bm w}_m^{(b)}$. Stacking the $n$-th taps of $\{{\bm z}_m^{(b)}\}_{m=1}^M$ into an $M\times1$ vector as: ${\bm g}_n^{(b)}:=[{z}_1^{(b)}(n),...,{z}_M^{(b)}(n)]^T $, we can get \begin{eqnarray} {\bm g}_n^{(b)}={\bm h}_n^{(b,b)}+ \sum_{l=1,l\neq b}^{B}{\bm h}_{n}^{(l,b)}+{\bm \omega}^{(b)}, \label{SysModel5} \end{eqnarray} where ${\bm h}_n^{(l,b)}$ denotes the vector of the $n$-th taps in the aggregate channels from the orthogonal users served by BS-$l$ to BS-$b$, i.e. ${\bm h}_n^{(l,b)}:=[\breve{h}_{1}^{(l,b)}(n),...,\breve{h}_{M}^{(l,b)}(n)]^T$ and ${\bm \omega}^{(b)}:=[{\omega}_{1}^{(b)}(n),...,{\omega}_{M}^{(b)}(n)]^T $. 
Since the circulant matrix ${\bm \Theta}_k^{(l)}$ carries out cyclic shift operation, we have \begin{eqnarray} \breve{h}_{m}^{(l,b)}(n)=\sum_{k=1}^{K_l}\sqrt{\rho_k^{(l)}} {h}_{k,m}^{(l,b)}(n+\tau_{l,k}).\label{aggTap} \end{eqnarray} Note that for channels of limited delay spread, only a few users will have non-zero contribution toward the aggregated channel tap in (\ref{aggTap}). Denoting the spatial covariance matrix of the $n$-th taps in the aggregate channels from users in BS-$l$ to BS-$b$ across the $M$ BS antennas as ${\bm C}_{n}^{(l,b)}$, i.e. ${\bm C}_{n}^{(l,b)}:={\sf E}\left[{\bm h}_{n}^{(l,b)}{\bm h}_{n}^{(l,b)\dagger}\right]$, with the signal model in (\ref{SysModel5}), we can derive the MMSE estimate of the desired channel ${\bm h}_n^{(b,b)}$ as: \begin{eqnarray} &&\hspace{-0.9cm}\hat{\bm h}_n^{(b,b)}={\sf E}[{\bm h}_n^{(b)}{\bm g}_n^{(b)\dagger}]({\sf E}[{\bm g}_n^{(b)}{\bm g}_n^{(b)\dagger}])^{-1}{\bm g}_n^{(b)}\nonumber\\ &&\hspace{-0.9cm}={\bm C}_{n}^{(b,b)}\cdot\left(\sigma^2{\bm I}_{M}+ \sum_{l=1}^{B}{\bm C}_{n}^{(l,b)} \right)^{-1}\hspace{-0.3cm}{\bm g}_n^{(b)}. \end{eqnarray} The covariance of the estimation error vector: ${\bm \epsilon}_{n}^{(b,b)}:={\bm h}_n^{(b,b)}-\hat{\bm h}_n^{(b,b)}$ can be found as follows: \begin{eqnarray} &&\hspace{-0.8cm}{\sf E}[{\bm \epsilon}_{n}^{(b,b)}{\bm \epsilon}_{n}^{(b,b)\dagger}] = {\bm C}_{n}^{(b,b)}- \nonumber\\ &&\hspace{-0.8cm}{\bm C}_{n}^{(b,b)}\left(\sigma^2{\bm I}_{M}+{\bm C}_{n}^{(b,b)}+\sum_{l=1,l\neq b}^B{\bm C}_{n}^{(l,b)} \right)^{-1}\hspace{-0.1cm}{\bm C}_{n}^{(b,b)}. \label{TapMSE} \end{eqnarray} Let $\sum_{l=1,l\neq b}^B{\bm C}_{n}^{(l,b)}={\bm U}{\bm \Sigma}{\bm U}^{\dagger}$ and ${\bm C}_{n}^{(b,b)}={\bm V}{\bm \Lambda}{\bm V}^{\dagger}$, where $\bm U$ ($\bm V$) is an $M\times r$ ($M\times r'$) matrix consisting of $r$ ($r'$) eigenvectors and $\bm \Sigma$ ($\bm \Lambda$) is an $r\times r$ ($r'\times r'$) diagonal matrix consisting of $r$ ($r'$) non-zero eigenvalues. 
From (\ref{TapMSE}), we get \begin{eqnarray} &&\hspace{-1cm}{\sf E}[{\bm \epsilon}_{n}^{(b,b)}{\bm \epsilon}_{n}^{(b,b)\dagger}] = {\bm C}_{n}^{(b,b)}-\nonumber\\ &&\hspace{-0.9cm}{\bm C}_{n}^{(b,b)}\left(\sigma^2{\bm I}_{M}+{\bm C}_{n}^{(b,b)}\right)^{-1}\hspace{-0.1cm}{\bm C}_{n}^{(b,b)} + {\bm R}_{n}^{(b)}, \label{TapMSE2} \end{eqnarray} where the residual matrix ${\bm R}_{n}^{(b)}$ is of the following form \begin{eqnarray} {\bm R}_{n}^{(b)}&=&\left(\sigma^2{\bm I}_{M}+{\bm C}_{n}^{(b,b)}\right)^{-1}{\bm V}{\bm \Lambda}{\bm V}^{\dagger}{\bm U}\cdot\nonumber\\ &&\left({\bm \Sigma}^{-1}+{\bm U}^{\dagger} \left(\sigma^2{\bm I}_{M}+{\bm C}_{n}^{(b,b)}\right)^{-1} {\bm U} \right)^{-1}\cdot\nonumber\\ &&{\bm U}^{\dagger}{\bm V}{\bm \Lambda}{\bm V}^{\dagger} \left(\sigma^2{\bm I}_{M}+{\bm C}_{n}^{(b,b)}\right)^{-1}. \label{ResidualR} \end{eqnarray} To obtain a reliable estimate of ${\bm h}_n^{(b,b)}$, we would like to make the subspaces spanned by the interference term and the signal term as orthogonal as possible. When ${\sf span}\{\bm U\}\perp {\sf span}\{\bm V\}$, we have ${\bm R}_{n}^{(b)}={\bm 0}$ and achieve the interference-free estimation performance. Channel taps (a.k.a. paths) of different time delays are originating from different scattering clusters. Similar to (\ref{AsympOrtho}), when the angles of arrival (AoA) of two paths do not overlap, it has been shown that the associated covariance matrices span orthogonal subspaces asymptotically as $M\rightarrow\infty$ \cite{yin2013jsac,yin2014jstsp,adhikary13tit}. Notice that the estimation error due to non-orthogonal pilots in (\ref{ResidualR}) depends on the set of cyclic shifts: $\{\tau_{l,k}\}$. Through exploiting the diversity in the covariance matrices of different paths, we can judiciously choose the set of cyclic shift values: $\{\tau_{l,k}\}$ in (\ref{CyclicShift}) to minimize the amount of extra estimation error due to inter-cell interference. 
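Both effects, the vanishing of ${\bm R}_n^{(b)}$ for orthogonal subspaces and the shift-dependent residual error, can be illustrated numerically. The sketch below is a hypothetical toy setup (the array size, subspace ranks, AoAs, and noise power are invented for illustration and are not the paper's simulation parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
M, r, sigma2 = 40, 3, 0.05   # illustrative array size, ranks, noise power

def err_cov(C, Ci):
    # (TapMSE): C - C (σ² I_M + C + Ci)^{-1} C for one channel tap
    R = sigma2*np.eye(M) + C + Ci
    return C - C @ np.linalg.solve(R, C)

# 1) R_n^{(b)} = 0 when span(U) ⟂ span(V): split one orthonormal frame
Q, _ = np.linalg.qr(rng.standard_normal((M, 2*r)) + 1j*rng.standard_normal((M, 2*r)))
V, U = Q[:, :r], Q[:, r:]
C  = V @ np.diag(rng.uniform(0.5, 2.0, r)) @ V.conj().T  # desired C_n^{(b,b)}
Ci = U @ np.diag(rng.uniform(0.5, 2.0, r)) @ U.conj().T  # inter-cell interference
gap = np.linalg.norm(err_cov(C, Ci) - err_cov(C, np.zeros_like(Ci)))
print(gap < 1e-10)   # True: interference-free performance

# 2) choosing the cyclic shift: put the interferer under the desired tap
#    whose AoA is well separated from its own
def steering(theta):
    return np.exp(-2j*np.pi*0.5*np.arange(M)*np.cos(theta))

C_tap = {n: np.outer(steering(th), steering(th).conj())
         for n, th in [(0, 0.60), (1, 1.80)]}            # desired taps' AoAs
C_int = np.outer(steering(0.62), steering(0.62).conj())  # interferer AoA ≈ tap 0

def extra_error(tau):
    # total Tr(R_n) when the relative shift lands the interferer on tap τ
    return sum(np.trace(err_cov(C, C_int if n == tau else np.zeros((M, M)))
                        - err_cov(C, np.zeros((M, M)))).real
               for n, C in C_tap.items())

best_tau = min((0, 1), key=extra_error)
print(best_tau)   # 1: the shift that avoids the overlapping-AoA tap
```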
This set of optimal cyclic shift values will align the PDPs of the users with non-orthogonal pilots in a way that mitigates the inter-cell pilot contamination. In Fig. \ref{ChModelFig}, the composite AoAs of user-$(1,1)$ and user-$(2,1)$ at BS-$2$ are similar and the existing approaches in \cite{yin2013jsac,yin2015spawc} will not be able to separate them well. In other words, without alignment, there will be strong pilot contamination between the two users. However, after aligning path-$1$ to path-$b$ and path-$2$ to path-$a$, we can expect near interference-free performance in estimating the channels from user-$(2,1)$ to BS-$2$. To optimize the overall system performance, we need to solve the following optimal PDP alignment problem: \begin{eqnarray} \underset{\{\tau_{l,k}\}}{\textrm{minimize}} & \sum_{b=1}^{B}\sum_{n=1}^N{\tt Tr}({\bm R}_{n}^{(b)}) \label{OptAlignment}\\ \textrm{subject to} & {\bm P}_{u}^{(b,b)}\tilde{\bm P}_{k}^{(b,b)}={\bm 0} , \nonumber\\ &b\in[1,B], u\neq k\in[1,K_b], \label{Constraint} \end{eqnarray} where the constraint in (\ref{Constraint}) comes from the intra-cell orthogonality requirement in (\ref{OrthoCond3}). The PDP alignment problem in (\ref{OptAlignment}) requires an exhaustive search over all possible cyclic time shifts of all served users, which becomes too complex to implement in practice as the number of served users grows large. Low-complexity designs are worthwhile and will be discussed in Section \ref{Protocol}. In summary, after aligning different users' PDPs appropriately, we can achieve the following two benefits simultaneously: \begin{itemize} \item 1). For sparse PDPs, more users served by a common BS can transmit orthogonal pilot sequences within one OFDM symbol; \item 2). When the aligned interfering paths have non-overlapping AoAs with the desired path, asymptotic inter-cell interference-free estimation performance can be achieved as the size of the massive antenna array grows large, i.e. $M\rightarrow\infty$. 
\end{itemize} \section{Low-Complexity Designs} \label{Protocol} The optimal solution of the optimization problem in (\ref{OptAlignment}) is hard to find due to the complex structure of the objective function. Instead, we can make some simplifications and try to solve easier problems. In the following discussions, we will assume that the delay spread of users' channels is $N_{cp}$ chips. \subsection{Pilot Sequence Length: $N$}\label{Scheme1} In this case, user-$(l,k)$ will employ the pilot sequence defined in (\ref{CyclicShift}) with $\tau_{l,k}=\tau_l+(k-1)N_{cp}, k=1,...,N/N_{cp}$: \begin{equation} {\bm S}_k^{(l)} = {\sf Diag}\left\{1,e^{j\frac{2\pi\tau_{l,k}}{N}},...,e^{j\frac{2\pi\tau_{l,k}(N-1)}{N}}\right\}\cdot{\bm S}_0. \label{CyclicShift2} \end{equation} This design, as illustrated in Fig. \ref{LengthN}, will ensure the intra-cell orthogonality constraint in (\ref{Constraint}) and the optimization problem in (\ref{OptAlignment}) reduces to: \begin{eqnarray} \underset{\{\tau_{l}\}_{l=1}^B}{\textrm{minimize}} & \sum_{l=1}^{B}\sum_{n=1}^N{\tt Tr}({\bm R}_{ n}^{(l)}) \label{OptAlignment2}\\ \textrm{subject to} & \tau_l\in[0,N-1], l=1,...,B. \label{Constraint2} \end{eqnarray} In this simplified problem, we only need to optimize the objective over $B$ variables: $\{\tau_l\}_{l=1}^B$. Meanwhile, the assignment of the $N/N_{cp}$ orthogonal cyclic shifts to the users served by one BS can also be optimized. \subsection{Pilot Sequence Length: $N_{cp}$}\label{Scheme2} In this case, user-$(l,k)$ transmits pilots on $N_{cp}$ equally spaced tones: ${\cal G}_k=\{k-1+nN/N_{cp}, n=0,...,N_{cp}-1\}$, $k=1,...,N/N_{cp}$. The pilot sequence is defined as in (\ref{CyclicShift}) but of a shorter length $N_{cp}$: \begin{equation} {\bm S}_k^{(l)} = {\sf Diag}\left\{1,e^{j\frac{2\pi\tau_{l,k}}{N_{cp}}},..., e^{j\frac{2\pi\tau_{l,k}(N_{cp}-1)}{N_{cp}}}\right\}\cdot\tilde{\bm S}_0, \label{CyclicShift3} \end{equation} where $\tilde{\bm S}_0$ denotes the length-$N_{cp}$ base sequence. 
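With the paper's numerology ($N=128$, $N_{cp}=8$, introduced in Section \ref{Performance}), the tone groups ${\cal G}_k$ of Section \ref{Scheme2} form $N/N_{cp}=16$ interleaved combs of $N_{cp}=8$ equally spaced tones that partition all $N$ tones, which a short sketch confirms:

```python
N, N_cp = 128, 8        # the paper's OFDM numerology
step = N // N_cp        # tone spacing inside a group: N/N_cp = 16
# G_k = {k-1 + n*N/N_cp : n = 0,...,N_cp-1}, k = 1,...,N/N_cp (zero-based tones)
groups = {k: [(k - 1) + n*step for n in range(N_cp)] for k in range(1, step + 1)}

all_tones = sorted(t for g in groups.values() for t in g)
print(len(groups), all_tones == list(range(N)))   # 16 True
```

Since the groups are disjoint, users assigned to different groups never collide, which is exactly the intra-cell orthogonality claimed for this design.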
It is clear that this design will ensure the intra-cell orthogonality since the users served by one common BS transmit pilots on different sets of tones (see also Fig. \ref{LengthNoverNcp}). Under the pilot designs in (\ref{CyclicShift3}), the optimization problem in (\ref{OptAlignment}) can be decomposed into $N/N_{cp}$ parallel PDP aligning problems: \\ \noindent {\it Sub-problem for ${\cal G}_k$, $k\in[1,N/N_{cp}]$:} \begin{eqnarray} \underset{\{\tau_{l,k}\}_{l=1}^B}{\textrm{minimize}} & \sum_{l=1}^{B}\sum_{n=1}^{N_{cp}}{\tt Tr}({\bm R}_{n}^{(l)}) \label{OptAlignment3}\\ \textrm{subject to} & \tau_{l,k}\in[0,N_{cp}-1], l=1,...,B. \label{Constraint3} \end{eqnarray} In each simplified sub-problem for tone group ${\cal G}_k$, we only need to optimize the objective over $B$ variables: $\{\tau_{l,k}\}_{l=1}^B$. The optimal cyclic shifts for the interfering users belonging to different tone groups can be derived independently. Additionally, the allocation of the users served by one BS to the $N/N_{cp}$ tone groups: $\{{\cal G}_k\}_{k=1}^{N/N_{cp}}$ can also be optimized. \begin{figure}[t] \centering \epsfig{file=PilotScheme1.eps,width=0.45\textwidth} \caption{Low-complexity pilot designs with sequence length equal to $N$.} \label{LengthN} \vspace{-0.3cm} \end{figure} \begin{figure}[t] \centering \epsfig{file=PilotScheme2.eps,width=0.45\textwidth} \caption{Low-complexity pilot designs with sequence length equal to $N_{cp}$.} \label{LengthNoverNcp} \vspace{-0.3cm} \end{figure} \section{Simulated Performance}\label{Performance} In this section, we simulate the system as illustrated in Fig. \ref{ChModelFig}, where two neighboring cell-edge users create strong inter-cell interference. 
Other simulation parameters and assumptions are listed as follows: \begin{enumerate} \item OFDM configurations\footnote{We follow the numerology in LTE \cite{LTEBook}.}: $T=66.67\mu$s ($1/T=15$kHz), $N=128$, $N_{cp}=8$; \item Each BS is equipped with a ULA of size $M=50$ and antenna spacing $D=\lambda/2$. Each path toward the BS is generated according to the SCM defined in \cite{SCM}; \item Both uniform and exponential PDPs are simulated. For the uniform PDP, we have $P_n=P_0,\forall n\in[1,N_{cp}]$. For the exponential PDP, we have $P_n=P_0e^{-0.6(n-1)}$, $\forall n\in[1,N_{cp}]$. \item User-$(1,1)$ and user-$(2,1)$ are close to each other but are served by different BSs. They share the same scatterers toward each BS, i.e. their visibility regions (VR) are overlapping \cite{cost2100}; \item User-$(1,1)$ and user-$(2,1)$ are assigned to the same tone group as defined in Section \ref{Scheme2} and the optimal PDP alignment is found by minimizing the cost function in (\ref{OptAlignment3}). \end{enumerate} In Fig. \ref{DiffPDP}, we show the cumulative distribution functions (CDF) of the normalized mean-square error (MSE)\footnote{ Normalized MSE is defined as: ${\tt NMSE}:= \frac{\sum_{l=1}^2\sum_{n=1}^{N_{cp}}\|\hat{\bm h}_{n}^{(l,l)}- {\bm h}_{n}^{(l,l)}\|^2}{\sum_{l=1}^2\sum_{n=1}^{N_{cp}}\|{\bm h}_{n}^{(l,l)}\|^2}$. } of the estimated channels with and without PDP alignment. In each one of the $1000$ Monte-Carlo runs, the second-order statistics of the channel taps are randomly generated according to the SCM. The PDP alignment in (\ref{OptAlignment3}) is based on the second-order statistics to mitigate the pilot contamination. From Fig. \ref{DiffPDP}, we see the exponential PDP can provide better decontamination than the uniform PDP. Compared with the case without alignment, optimal PDP aligning can bring about $13$dB improvement in UL channel estimation at the median point, i.e. $50\%$ in the CDF curve. In Fig. 
\ref{DiffAS}, we examine the PDP alignment performance for different AS values. From the plotted curves, we see the PDP alignment favors a scattering environment with a small AS. In Fig. \ref{SumRateCDF}, we compare the sum DL spectral efficiency of the two users in Fig. \ref{ChModelFig} when the BSs form the DL matched-filter precoding vectors with the estimated UL channels, assuming TDD channel reciprocity. It can be observed that the achieved spectral efficiency after PDP aligning is close to that without UL pilot contamination. \begin{figure}[t] \centering \epsfig{file=Diff_PDP.eps,width=0.48\textwidth} \caption{PDP alignment performance with different PDPs (BA: Best Alignment; NA: No Alignment; NI: No Interference).} \label{DiffPDP} \vspace{-0.5cm} \end{figure} \begin{figure}[t] \centering \epsfig{file=diff_angle_spread.eps,width=0.48\textwidth} \caption{PDP alignment performance with different ASs.} \label{DiffAS} \vspace{-0.5cm} \end{figure} \begin{figure}[t] \centering \epsfig{file=SumRateCDF.eps,width=0.48\textwidth} \caption{Achievable sum spectral efficiency with PDP alignment.} \label{SumRateCDF} \vspace{-0.5cm} \end{figure} \vspace{-0.15cm} \section{Conclusion}\label{conclusion} \vspace{-0.15cm} In this paper, relying on a realistic massive MIMO OFDM system model, we have addressed the issue of pilot contamination and proposed to rely on PDP aligning to mitigate this type of inter-cell pilot interference. On one hand, we have shown that the UL pilots from the users served by one common BS can be made orthogonal through PDP aligning. On the other hand, due to the large number of antennas at the BS, we have shown that PDP alignment can also help to alleviate the pilot contamination thanks to the fact that different paths in time originate from different AoAs. Numerical simulations validate that the pilot contamination can be significantly reduced through aligning the PDPs of the users served by different BSs appropriately. 
The proposed PDP alignment can serve as a new baseline design philosophy for the UL pilots in massive MIMO. \bibliographystyle{IEEE} \vspace{-0.15cm}
\section{Introduction} The interplay of strong correlation physics and magnetic behavior in itinerant electronic systems has been a fascinating subject for many years. At low temperature it is often possible to describe the response of such systems in terms of the low energy excitations and quasiparticle properties, as in a Fermi liquid picture. The ratio of the spin susceptibility of the interacting system $\chi_s$ and that of the non-interacting system $\chi_s^0$ is then given by the expression \begin{equation} \label{flsusc} \frac{\chi_s}{\chi_s^0}=\frac{m^*/m_0}{1+F_0^a}, \end{equation} where $m^*/m_0$ is the ratio of effective and bare electronic mass, and $F_0^a$ is the lowest order antisymmetric Landau parameter, which accounts for quasiparticle interactions. A special kind of response is metamagnetism, which we define here as the existence of a regime where the system's differential susceptibility $\chi_s={\rm d}{M}/{\rm d} H$ increases with magnetic field $H$, i.e. ${\rm d}\chi_s/{\rm d} H>0$, for $H\in[H_1,H_2]$ with $H_1>0$. The subject of this paper is the analysis of the metamagnetic response in correlated electron systems in terms of the Fermi liquid description (\ref{flsusc}). To this end we calculate the effective mass and the term due to quasiparticle interactions from a microscopic model. This allows us to understand what drives the magnetic response, which can be relevant for the interpretation of experiments on itinerant metamagnets where the magnetic response is measured simultaneously with the field dependence of the specific heat. In a naive single-electron picture itinerant metamagnetism is not intuitive, since the magnetic response usually decreases with increasing polarization. For instance, in weakly interacting systems with a featureless concave density of states, such as a Hubbard model with small $U$, metamagnetic behavior does not occur.
RPA-based calculations yield a susceptibility that decreases with increasing field, as spin fluctuations are suppressed. On the other hand, a convex density of states, i.e. with positive curvature at the Fermi energy, such as in the theory of Wohlfarth and Rhodes \cite{WR62}, can lead to metamagnetic behavior. This is exploited in a number of works, where the Hubbard model with such a convex density of states is analyzed \cite{NH97,SO98}. Metamagnetic behavior has also been shown to occur in situations where the Fermi energy lies close to a van Hove singularity \cite{BS04,Hon05}, or where a Pomeranchuk Fermi surface deformation instability occurs \cite{YK07}. It has been shown by calculations based on the Gutzwiller approximation by Vollhardt\cite{Vol84} and Spalek and coworkers \cite{SG90,KSWA95,SKW97} that for a generic concave density of states metamagnetic behavior is also found in the intermediate coupling regime of the Hubbard model. The metamagnetic scenario is then that of correlated electrons, with a (Mott) localization tendency due to the interaction. Our calculations are based on the half filled single band Hubbard model, which has been used frequently to describe itinerant metamagnetism for correlated electrons \cite{LGK94,Tri95,KSWA95,SKW97,NH97,SO98,BS04,Hon05} due to its relative formal simplicity. We employ the dynamical mean field theory (DMFT) \cite{LGK94,GKKR96} combined with the numerical renormalization group (NRG) \cite{KWW80a,BCP08} to solve the effective impurity problem. We focus on the case of zero temperature, where sharp features are most clearly visible. We follow these earlier approaches here and restrict ourselves to the response of the paramagnetic solutions of the Hubbard model, which is possible for mean-field-like approaches. The half filled Hubbard model in a magnetic field has already been investigated in detailed DMFT studies by Laloux et al. \cite{LGK94} and Bauer and Hewson \cite{BH07b}.
Laloux et al. investigated low temperature magnetization curves and field induced metal insulator transitions, and their calculations confirmed the metamagnetic response based on correlated electron physics seen in the Gutzwiller approach. Our analysis extends previous work\cite{LGK94} as we investigate the $T=0$ magnetic response with a Fermi liquid interpretation based on the field dependent renormalized parameter approach\cite{HOM04,BH07b,HBK06,BH07a}. This, together with results for the spectral functions, allows us to identify what gives rise to the magnetic response in the system. The paper is organized as follows. In a brief section II we give details about the model and method. The Fermi liquid interpretation and the relation between Fermi liquid parameters and the field dependent renormalized parameters are described in section III. Section IV reports the results for magnetization, susceptibilities and the interpretation in terms of effective mass and quasiparticle interactions. We conclude by putting our results in the context of itinerant metamagnetism studied experimentally. \section{Model and Method} The basis for our calculations is the Hubbard Hamiltonian in a magnetic field, which in the grand canonical formulation reads \begin{equation} H_{\mu}=\sum_{i,j,\sigma}(t_{ij}\elcre {i}{\sigma}\elann {j}{\sigma}+\mathrm{h.c.})-\sum_{i\sigma}\mu_{\sigma} n_{i\sigma}+U\sum_in_{i,\uparrow}n_{i,\downarrow}\label{hubm}. \end{equation} $\elcre {i}{\sigma}$ creates an electron at site $i$ with spin $\sigma$, and $n_{i,\sigma}=\elcre {i}{\sigma}\elann {i}{\sigma}$. The hopping amplitude is $t_{ij}=-t$ for nearest neighbors, and $U$ is the on-site interaction; $\mu_{\sigma}=\mu+\sigma h$, where $\mu$ is the chemical potential of the interacting system, and the Zeeman splitting with external magnetic field $H$ is given by $h=g\mu_{\rm B} H/2$ with the Bohr magneton $\mu_{\rm B}$.
In the DMFT approach the proper self-energy is a function of $\omega$ only \cite{MV89,Mue89}. In this case the local lattice Green's function $ G_{\sigma}^{\mathrm{loc}}(\omega)$ can be expressed in the form \begin{equation} G_{\sigma}^{\mathrm{loc}}(\omega) = \integral{\epsilon}{}{}\frac{\rho_0(\epsilon)} {\omega+\mu_\sigma -\Sigma_{\sigma}(\omega)-\epsilon}, \label{gloc} \end{equation} where $\rho_0(\epsilon)$ is the density of states for the non-interacting model ($U=0$). It is possible to convert this lattice problem into an effective impurity problem \cite{GKKR96} by introducing the dynamical Weiss field ${\cal G}_{0,\sigma}^{-1}(\omega)$. The DMFT self-consistency condition reads \begin{equation} {\cal G}_{0,\sigma}^{-1}(\omega)=G_{\sigma}^{\mathrm{loc}}(\omega)^{-1} +\Sigma_{\sigma}(\omega). \label{tgf} \end{equation} The Green's function $ G_{\sigma}^{\mathrm{loc}}(\omega)$ can be identified with the Green's function $ G_{\sigma}(\omega)$ of an effective Anderson model, and ${\cal G}_{0,\sigma}^{-1}(\omega)$ can be expressed as \begin{equation} {\cal G}_{0,\sigma}^{-1}(\omega)=\omega+\mu_{\sigma}-K_{\sigma}(\omega). \label{thgfK} \end{equation} The function $K_\sigma(\omega)$ plays the role of a dynamical mean field describing the effective medium surrounding the impurity. $K_\sigma(\omega)$ and $\Sigma_\sigma(\omega)$ have to be calculated self-consistently using equations (\ref{gloc})-(\ref{thgfK}). We employ the NRG \cite{KWW80a,BCP08} to solve the effective impurity problem. As in earlier work\cite{BH07b} we calculate spectral functions from a complete basis set\cite{PPA06,WD07} and use higher Green's functions to obtain the self-energy \cite{BHP98}. For the numerical calculations within the DMFT-NRG approach we take the semi-elliptical form for the non-interacting density of states, $\rho^{\rm sem}_0(\epsilon)={2}\sqrt{D^2-\epsilon^2}/{\pi D^2}$, where $W=2D$ is the band width with $D=2t$ for the Hubbard model.
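For the semi-elliptical density of states the self-consistency (\ref{gloc})-(\ref{thgfK}) closes in the well-known Bethe-lattice form $K_{\sigma}(\omega)=(D/2)^2 G_{\sigma}^{\mathrm{loc}}(\omega)$. As a minimal illustration of the self-consistency cycle (not the actual DMFT-NRG calculation, which requires the NRG self-energy at every iteration), the following sketch iterates this closed loop for the trivial case $\Sigma_{\sigma}=0$, $h=0$, where the fixed point must reproduce $\rho^{\rm sem}_0$ itself:

```python
import numpy as np

# Sketch of the DMFT self-consistency loop for the semi-elliptical DOS,
# where the effective medium closes as K(w) = (D/2)^2 * G_loc(w)
# (Bethe-lattice form). Here Sigma = 0 (U = 0, h = 0); in the interacting
# calculation the NRG self-energy would enter the denominator each cycle.
D = 2.0                              # half bandwidth, D = 2t with t = 1
w = np.linspace(-3.0, 3.0, 3001)
z = w + 1j * 0.02                    # small imaginary broadening
G = np.zeros_like(z)
for _ in range(500):                 # iterate G = 1/(z - K) with K = (D/2)^2 G
    G = 1.0 / (z - (D / 2.0) ** 2 * G)

rho = -G.imag / np.pi                # converged local spectral function
# rho matches rho_0^sem = 2*sqrt(D^2 - w^2)/(pi*D^2) inside the band |w| < D
```

In the interacting case the update inside the loop becomes $G = 1/(z + \mu_\sigma - \Sigma_\sigma(\omega) - (D/2)^2 G)$, with $\Sigma_\sigma(\omega)$ supplied by the impurity solver.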
$t=1$ sets the energy scale in the following. \section{Field dependent renormalized parameters and Fermi liquid theory} The response of a metallic system of correlated electrons can often be described in terms of Fermi liquid theory. The ratio of the spin susceptibility of the interacting system $\chi_s$ and that of the non-interacting system $\chi_s^0$ is given in equation (\ref{flsusc}). Thus, when strongly interacting fermions have a large paramagnetic susceptibility, it can be interpreted as due to quasiparticles with large effective masses. It is, however, also possible that the susceptibility is additionally enhanced due to the quasiparticle interaction term $1/[1+F_0^a]$, which is for instance the case in liquid ${}^3\rm He$, where $m^*/m_0\simeq 5$ but $\chi_s/\chi_s^0\simeq 20$.\cite{BFSWRPW98} This is usually described by the dimensionless Sommerfeld or Wilson ratio $R$ of the magnetic susceptibility and the linear specific heat coefficient $\gamma$. We will use it in the form $R=({\chi_s}/{\chi_s^0})/(\gamma/\gamma_0)$, where $\gamma/\gamma_0=m^*/m_0$. Here we are interested in analyzing the behavior in finite field, and it is possible to calculate corrections of higher order in $H$ to equation (\ref{flsusc}).\cite{Mis71} We will, however, follow a different approach here, and assume that expression (\ref{flsusc}) remains valid for finite field with field dependent effective mass $m^*(H)$ and Landau parameter $F_0^a(H)$. This is in the spirit of the field dependent quasiparticle parameters introduced in earlier work \cite{BH07b,HBK06,BH07a}. Notice that for the case considered the field dependence of $\chi_s^0$, which is given by the non-interacting density of states, varies very little in the relevant field range. In this picture with field dependent parameters, metamagnetism can occur when the effective mass increases with the magnetic field. Generally, however, also the field dependence of the quasiparticle interaction plays a role. 
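The liquid ${}^3$He numbers quoted above make the role of the quasiparticle interaction explicit; inverting equation (\ref{flsusc}) yields $F_0^a$ and the Wilson ratio directly. A one-line worked example:

```python
# Invert Eq. (flsusc), chi_s/chi_s^0 = (m*/m0)/(1 + F_0^a), for the Landau
# parameter, using the liquid-3He values quoted in the text as input.
mass_ratio = 5.0                     # m*/m0
chi_ratio = 20.0                     # chi_s/chi_s^0

F0a = mass_ratio / chi_ratio - 1.0   # F_0^a = (m*/m0)/(chi_s/chi_s^0) - 1
R = chi_ratio / mass_ratio           # Wilson ratio R = (chi_s/chi_s^0)/(gamma/gamma0)
# F0a = -0.75 < 0 and R = 4: the quasiparticle interaction enhances chi_s
# well beyond the effective mass enhancement alone
```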
One hypothesis, tested in this paper, is that itinerant metamagnetic behavior is always accompanied by a field induced localization and a sharp increase of the effective mass near the metamagnetic transition. In order to calculate the microscopic Fermi liquid parameters, we expand $\Sigma_\sigma(\omega)$ in powers of $\omega$ for small $\omega$, and retain terms to first order in $\omega$ only. This is used to define renormalized parameters\cite{BH07b} \begin{equation} \tilde\mu_{0,\sigma}=z_{\sigma}[\mu_{\sigma}-\Sigma_{\sigma}(0)],\quad{\rm and}\quad z_{\sigma}=1/[1-\Sigma'_{\sigma}(0)]. \label{nrgqp} \end{equation} and from (\ref{gloc}) a normalized quasiparticle propagator, \begin{equation} \tilde G_{0,\sigma}^{\mathrm{loc}}(\omega) =\frac1{z_{\sigma}}\integral{\epsilon}{}{}\frac{\rho_0(\epsilon/z_{\sigma})} {\omega+\tilde\mu_{0,\sigma} -\epsilon}. \label{gqploc} \end{equation} Note that this $\omega$-expansion can also be carried out in finite magnetic field. Then the renormalized parameters become field dependent, $z_{\sigma}=z_{\sigma}(h)$ and $\tilde\mu_{0,\sigma}=\tilde\mu_{0,\sigma}(h)$. The density of states $\tilde \rho_{0,\sigma}(\epsilon)$ derived from (\ref{gqploc}), $\tilde \rho_{0,\sigma}(\epsilon)=-{\rm Im}\tilde G_{0,\sigma}(\epsilon+i\delta)/\pi=\rho_0[(\epsilon+\tilde\mu_{0,\sigma})/z_{\sigma}]/z_{\sigma}$, is referred to as the free quasiparticle density of states. $z_\sigma$ is interpreted as the weight of the quasiparticle resonance and $\tilde\mu_{0,\sigma}$ gives the position of the quasiparticle band. All energies are measured from the chemical potential $\mu$. To obtain the renormalized parameters $z_\sigma$ and $\tilde\mu_{0,\sigma}$, we use two different methods based on the NRG approach. The first method is a direct one where we use the self-energy $\Sigma_\sigma(\omega)$ determined by NRG and the chemical potential $\mu_\sigma$, and then substitute into equation (\ref{nrgqp}) for $z_\sigma$ and $\tilde\mu_{0,\sigma}$. 
The second method is indirect, and it is based on the quasiparticle interpretation of the NRG low energy fixed point of the effective impurity.\cite{HOM04} This approach has been used earlier for the Hubbard model \cite{BH07b,BH07c} and for the Anderson impurity model in a magnetic field \cite{HBK06,BH07a}. As shown before, the results of both methods usually agree within a few percent, and we use an average value of both methods for the numerical results presented later. It is important to calculate these parameters accurately, since their derivatives are also needed for the following results. We can calculate static expectation values and response functions in terms of the renormalized parameters. The quasiparticle occupation number $ \tilde n^0_{\sigma}$ is given by integrating the quasiparticle density of states up to the Fermi level, \begin{equation} \tilde n^0_{\sigma}=\integral{\epsilon}{-\infty}{0}\tilde \rho_{0,\sigma}(\epsilon)=\integral{\epsilon}{-\infty}{\infty}\rho_{0}(\epsilon) \theta(\mu_{\sigma}-\Sigma_{\sigma}(0)-\epsilon). \label{qpocc} \end{equation} Luttinger's theorem \cite{Lut60} holds for each spin component for the Hubbard model in a magnetic field\cite{BH07b}, hence we have $\tilde n^0_{\sigma}= n_{\sigma}$, where $n_{\sigma}$ is the value of the occupation number in the interacting system at $T=0$. To calculate the magnetic response we focus for the rest of this paper on the case with particle-hole symmetry, where $\mu=U/2$, and we can write $\Sigma_{\sigma}(0,h)=U/2-\sigma\eta(h)$. We can calculate $\eta(h)$ directly from the self-energy, e.g. $\eta(h)=[\Sigma_{\downarrow}(0)- \Sigma_{\uparrow}(0)]/2$, or from the renormalized parameters, $\eta(h)=\tilde\mu_0(h)/z(h)-h$. At half filling we have $z_{\uparrow}=z_{\downarrow}\equiv z$ and $\tilde\mu_{0,\uparrow}=-\tilde\mu_{0,\downarrow}\equiv \tilde\mu_{0}$.
We define the function \begin{equation} g(h):=h+\eta(h)=\tilde\mu_0(h)/z(h)=\tilde\mu_0(h)m^*(h)/m_0, \end{equation} as $m^*/m_0=z^{-1}$ in DMFT. In terms of the quasiparticles it is the product of the effective mass enhancement $m^*/m_0$ and the shift of the quasiparticle band $\tilde\mu_0$. With the applicability of Luttinger's theorem the magnetization is then given by \begin{equation} \label{mag} m(h)=\frac12(n_{\uparrow}-n_{\downarrow})=\integral{\epsilon}{-\infty}{\infty} \rho_0(\epsilon)\theta[g(h)-\epsilon] -\frac12. \end{equation} For a local self-energy this is an exact expression for the magnetization, which only depends on the field dependent renormalized parameters via $g(h)$. For certain bare densities of states, for instance, for the semi-elliptical density of states $\rho^{\rm sem}_0(\epsilon)$, it can be evaluated analytically, \begin{equation} m(h)= \frac12 g(h)\rho^{\rm sem}_0(g(h))+\frac1{\pi}\arcsin(g(h)/D). \label{mag2} \end{equation} Differentiating (\ref{mag}) with respect to $h$ yields the local static spin susceptibility \begin{equation} \label{suscrp} \chi_s=\frac{{\rm d}m}{{\rm d}h} =g'(h)\rho_0(g(h)), \end{equation} where here and in the following primes indicate derivatives with respect to $h$. A similar expression was already derived by Luttinger \cite{Lut60}. The metamagnetic condition $\chi_s'(h)>0$ is then \begin{equation} g''(h)\rho_0(g(h))+\rho_0'(g(h))g'(h)^2>0. \label{metamagcond} \end{equation} The occurrence of metamagnetic behavior can be analyzed depending on the functional form of $g(h)$ and $\rho_0(\epsilon)$. For a simple analysis let us assume $h>0$ and the power law form $g(h)=c\,h^{\alpha}$ with $c>0$. The first term in (\ref{metamagcond}) is then positive if $\alpha> 1$. For a convex density of states, $\rho_0''(\epsilon)>0$, the second term is also positive and metamagnetic behavior occurs, as mentioned earlier. For a concave density of states, $\rho_0''(\epsilon)<0$, the two terms in (\ref{metamagcond}) compete.
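To see equations (\ref{mag}), (\ref{mag2}) and (\ref{suscrp}) at work, the following sketch evaluates $m(h)$ and $\chi_s(h)$ for the semi-elliptical density of states with a hypothetical power-law ansatz $g(h)=c\,h^{\alpha}$, $\alpha>1$; in the actual calculation $g(h)$ follows from the field dependent renormalized parameters, and $c$, $\alpha$ here are purely illustrative choices:

```python
import numpy as np

# Evaluate m(h) from the closed form of Eq. (mag) for the semi-elliptical
# DOS and chi_s = dm/dh, for an illustrative power-law ansatz g(h).
D = 2.0
c, alpha = 1.0, 1.5                  # hypothetical g(h) = c*h^alpha, alpha > 1

def rho0(e):
    """Semi-elliptical bare density of states."""
    return 2.0 * np.sqrt(np.clip(D**2 - e**2, 0.0, None)) / (np.pi * D**2)

def g(h):
    return np.minimum(c * h**alpha, D)   # band fully polarized once g(h) = D

def m(h):
    return 0.5 * g(h) * rho0(g(h)) + np.arcsin(g(h) / D) / np.pi

h = np.linspace(1e-4, 1.0, 2000)
chi = np.gradient(m(h), h)           # chi_s(h) = g'(h) * rho0(g(h))
# for alpha > 1 chi grows with h over an extended range: d(chi)/dh > 0,
# i.e. the metamagnetic condition is fulfilled
```

For $\alpha>1$ the resulting $\chi_s$ increases with the field over an extended range, consistent with the analysis of condition (\ref{metamagcond}) below.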
If we also assume a power law form for the density of states, $\rho_0(\epsilon)=r_0-d\,\epsilon^{\gamma}$ (e.g., expanding $\rho^{\rm sem}_0$ for small $\epsilon$ one has $r_0=2/\pi D$, $d=r_0/2D^2$ and $\gamma=2$), condition (\ref{metamagcond}) becomes \begin{equation} \frac{r_0}{c^{\gamma}d}\frac{\alpha-1}{\alpha(1+\gamma)-1}>h^{\alpha\gamma}. \end{equation} Since the right hand side is positive, we can infer that for $\alpha>1$ and $\gamma>(1-\alpha)/\alpha$, such that both factors on the left hand side are positive, metamagnetic behavior occurs at small enough fields. The actual field dependence of $g(h)$ can be calculated from the renormalized parameters and it depends on the interaction strength. As we will see, for the half filled Hubbard model and intermediate $U$, $g(h)$ grows faster than linearly with $h$, i.e. $\alpha>1$. In the limit of zero field the ratio of the susceptibility of the interacting and non-interacting system has a simplified expression in terms of the renormalized parameters, \begin{equation} \label{suscrph0} \frac{\chi_s}{\chi_s^0} =g'(0) =\frac{m^*(0)}{m_0}\tilde\mu_0'(0), \end{equation} since $\tilde\mu_0(0)=0$. Comparing with the Fermi liquid expression (\ref{flsusc}) we can identify $1/(1+F_0^a)=\tilde\mu_0'$. This quantity corresponds to the Wilson ratio $R$. In the general case, the field dependent enhancement due to the quasiparticle interactions reads \begin{equation} R(h)=\frac1{1+F_0^a(h)}=\Big(\tilde\mu_0'+\tilde\mu_0\frac{m^*{}'}{m^*}\Big) \frac{\rho_0(\tilde\mu_0\frac{m^*}{m_0})}{\rho_0(h)}. \label{flqpint} \end{equation} So far the considerations have been independent of our DMFT-NRG approach. In the following section we will compare results for the magnetic susceptibility obtained from static expectation values and from integrating the spectral functions with the results based on the field dependent parameters. We determine the latter as described above. Alternatively, they can be calculated by other methods, such as the Gutzwiller (GW) approach, and we will make comparisons where appropriate. GW results are obtained as in Ref.
\onlinecite{Vol84}, where the critical interaction for the metal insulator transition is $U^{\rm GW}_c=16 W/3\pi\approx 6.79$ for $\rho^{\rm sem}_0(\epsilon)$ with $W=4$. \section{Results} \subsection{Magnetization and metamagnetic transition} For a first overview we present results for the magnetization $m(h)$ as a function of field $h$ in Fig. \ref{maghdifU} for various values of $U$. The magnetization $m(h)$ was computed from the static NRG expectation value (EV) for the occupation number as well as from integrating the spectral function to the Fermi level, both of which agree very well. The results for $m(h)$ based on the field dependent renormalized parameters (RP) and equation (\ref{mag2}) are also in good agreement, but not included in the figure. \begin{figure}[!htbp] \centering \includegraphics[width=0.45\textwidth]{figures_metmag/magoverhplotdifU.eps} \caption{(Color online) The local magnetization $m(h)$ as a function of the magnetic field $h$ for different values of $U$. We can see that a metamagnetic curvature sets in at $U=3$. Inset: Hysteresis curve for $U=4$ (triangle up increasing $h$, triangle down decreasing $h$).} \label{maghdifU} \end{figure} \noindent The plot gives a clear picture of the field strength $h_{\rm pol}$ necessary to polarize the metal completely to $m=1/2$. For weak coupling it can be related to the rigid band shift and a large field $h\sim D$ is needed, but for larger interaction strength $h_{\rm pol}$ is reduced substantially. For $U\ge 3$ a metamagnetic curvature in the magnetization can be observed, and we see that in the Hubbard model at zero temperature the metamagnetic transition field \cite{hm} $h_{\rm m}$ coincides with $h_{\rm pol}$, which is not necessarily the case for $T>0$. Laloux et al.\cite{LGK94} have compared results from low temperature DMFT calculations with the Gutzwiller approximation and it was found that the occurrence of metamagnetic behavior is overestimated by the Gutzwiller approximation (see also Fig. 
\ref{chihdepU4.5}). Earlier work \cite{LGK94} showed that the transition is a discontinuous first order one at low temperature. Our results show jumps in the magnetization curve at the transition field $h_{\rm m}$, e.g. for $U=3$ and $U=4$ in Fig. \ref{maghdifU}, however, we can not exclude a very steep continuous increase which can not be resolved numerically. We have also found hysteresis, shown for $U=4$ as an inset in Fig. \ref{maghdifU} (triangle up increasing $h$, triangle down decreasing $h$). This suggests that the transition is also of first order for zero temperature. For larger interaction $U\ge 4.5$ there exists a small field range near $h_{\rm m}$, where we have not found unique, well converged DMFT solutions, so no definite statement can be made. The half filled repulsive Hubbard model in magnetic field can be mapped to the attractive one \cite{MRR90}, in which the chemical potential is related to the field in the original model, $\mu=U/2+h$. The attractive model has been studied by the DMFT in situations, where superconducting order was not allowed for \cite{KMS01,CCG02}. A first order transition from a metallic to a pairing state for fixed density was found at a critical interaction. The occurrence of the transition can be related to the metamagnetic transition here. A nearly polarized system corresponds to a low density limit, and to estimate when the transition sets in, one can analyze the two-body problem in the attractive model and calculate the critical $U_c$ for bound state formation. For a three dimensional cubic lattice the result is $U_c\approx 0.659W$ \cite{MRR90}. With the given bandwidth this corresponds to a value of $U_c\approx 2.64$, which is a reasonable estimate for the interaction strengths, where the metamagnetic behavior is found here. \subsection{Magnetic susceptibilities and quasiparticle properties} From the initial slope of the magnetization curves in Fig. 
\ref{maghdifU} we observe an increase of the magnetic susceptibility with the interaction strength $U$. This increase can also be seen in the following Fig. \ref{chiUdep}, where we show the ratio of the zero field susceptibility to the non-interacting value $\chi_s^0$ as a function of $U$, deduced from differentiating the EV for $m(h)$ in the limit $h\to 0$. \begin{figure}[!htbp] \centering \includegraphics[width=0.45\textwidth]{figures_metmag/chi_mag_cgw_varU_h0.eps} \caption{(Color online) The $U$-dependence of the magnetic susceptibility $\chi_s$. We compare results deduced from the EV of $m(h)$ with ones obtained from the RP and from the Gutzwiller (GW) approximation. The inset shows the effective mass $m^*(U)/m_0$ and the Wilson ratio $R(U)$ as a function of $U$.} \label{chiUdep} \end{figure} \noindent For comparison we have also included the susceptibility calculated from equation (\ref{suscrph0}) with the renormalized parameters (RP) and their derivatives, as well as the results obtained from the Gutzwiller (GW) approximation. EV and RP results agree very well, confirming the applicability of Fermi liquid results in this metallic regime. The GW results follow a similar trend but overestimate the value of the susceptibility, which becomes more pronounced for larger $U$. The inset plot shows the $U$-dependence of the effective mass and the Wilson ratio. In terms of Fermi liquid theory and the expression (\ref{flsusc}), the increase of $\chi_s$ with $U$ can be understood from the behavior of the effective mass and the progressive localization tendency, which increasingly brings out the spin degrees of freedom of the electrons. We can see, however, that the effective mass ratio is larger than the susceptibility ratio. This difference can be attributed to the factor $R=\tilde\mu_0'=[1+F_0^a]^{-1}$, which is due to the quasiparticle interaction. This factor is larger than one for smaller values of $U$, but decreases to values below one for stronger interaction.
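The GW curves in Fig. \ref{chiUdep} derive from the Brinkman-Rice form of the quasiparticle weight. A minimal sketch of the mass enhancement entering that comparison, assuming the standard result $z=1-(U/U_c^{\rm GW})^2$ with the critical interaction $U_c^{\rm GW}=16W/3\pi$ quoted above, reads:

```python
import numpy as np

# Gutzwiller (Brinkman-Rice) effective mass enhancement at half filling,
# m*/m0 = 1/z with z = 1 - (U/Uc)^2; Uc is the GW critical interaction
# for the semi-elliptical DOS quoted in the text, Uc^GW = 16*W/(3*pi), W = 4.
W = 4.0
Uc = 16.0 * W / (3.0 * np.pi)        # ~6.79

def mass_gw(U):
    z = 1.0 - (U / Uc) ** 2          # quasiparticle weight
    return 1.0 / z                   # diverges at the metal-insulator transition

U_vals = np.array([0.0, 2.0, 3.0, 4.5, 5.0])
enhancement = mass_gw(U_vals)
# the enhancement grows monotonically with U and diverges as U -> Uc
```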
The decrease of this factor below one indicates a sign change of the parameter $F_0^a$ from negative to positive. The comparison of the corresponding quantities calculated in the GW approximation shows a qualitatively similar behavior for both $m^*/m_0$ and $R$, when $U$ is small. For larger values of $U$ in Fig. \ref{chiUdep}, however, the effective mass enhancement in the GW approach, $m^*/m_0=[1-(U/U_c^{\rm GW})^2]^{-1}$, is much smaller and $R$ increases with $U$, in contrast to the DMFT result. We return to the finite field response and focus on the metamagnetic behavior which is found for intermediate values of $U$. Results for the ratio of the magnetic susceptibility in finite and zero field, deduced from differentiating the magnetization (EV), are compared to the ones obtained from the quasiparticle parameters (RP) and equation (\ref{suscrp}). For completeness, we have also included results from the GW approximation. This is shown in Fig. \ref{chihdepU4.5} for $U=3$ in the upper panel and $U=4.5$ in the lower panel. \begin{figure}[!htbp] \centering \includegraphics[width=0.45\textwidth]{figures_metmag/chi_magcgw3.eps} \includegraphics[width=0.45\textwidth]{figures_metmag/chi_magcgw4.5.eps} \caption{(Color online) The $h$-dependence of the ratio of the finite and zero field magnetic susceptibility $\chi_s$ for $U=3$ (upper panel) and $U=4.5$ (lower panel). We compare results deduced from the EV for $m(h)$ with ones obtained from the RP and the ones from the GW approach. The inset shows the ratio of finite and zero field effective mass $m^*(h)/m^*(0)$ and the Wilson ratio $R(h)/R(0)$ as a function of $h$.} \label{chihdepU4.5} \end{figure} \noindent We can see that also in finite field the results for the susceptibility calculated from the EV for $m(h)$ and the field dependent RP agree fairly well, with a deviation of less than 5$\%$. For the case $U=3$ (upper panel) the results for $\chi(h)$ based on the field dependent RP are always smaller.
In both cases we first find a regime where the susceptibility is nearly constant, before it starts to increase rapidly as $h$ approaches $h_{\rm m}$. For $U=3$ the values obtained from the RP initially decrease slightly with the field, which is, however, incorrect and comes about through numerical inaccuracies in determining the parameters and in the numerical differentiation. As $h_{\rm m}=h_{\rm pol}$, the magnetic susceptibility is zero for $h>h_{\rm m}$. At finite temperature a susceptibility maximum is expected. The results for $\chi_s$ from the GW approximation generally show a similar trend, but as mentioned earlier the metamagnetic behavior sets in at lower field strengths. A difference in the behavior between the two cases is visible in the two insets, where the ratios of the field dependent effective masses to their zero field values and the field dependent Wilson ratios $R(h)/R(0)$ are plotted. For the $U=3$ case the effective mass decreases with the field, which is typical behavior in the weak coupling regime. It can be understood from RPA-type approximations, where spin fluctuations, which give an effective mass enhancement, are suppressed in finite field. The metamagnetic increase of the susceptibility, however, cannot be explained by this. In terms of Fermi liquid theory it is related to the magnetic field dependence of the quasiparticle interaction rather than to the localization tendency encoded in the effective mass. Indeed, $R(h)/R(0)$ increases sharply close to $h_{\rm m}$. In equation (\ref{flqpint}) we have two competing terms for this enhancement factor, as $m^*{}'/m^*<0$, but one finds $\tilde \mu_0'>|\tilde \mu_0\,m^*{}'/m^*|$, which leads to the observed enhancement. The metamagnetic behavior is therefore driven by the shift of the quasiparticle band away from the Fermi level with increasing field. This contrasts with the weak coupling situation, such as $U=2$, where $R(h)$ decreases with the field strength and no metamagnetic response is observed.
The effective mass in the case of $U=4.5$ (lower panel in Fig. \ref{chihdepU4.5}) shows different behavior. We can see a sharp increase with the field. However, the increase of the ratio $m^*/m_0$ is smaller than that of the susceptibility. The difference can again be related to the Fermi liquid factor $R=1/[1+F_0^a]$, which is larger than one and increases with $h$, as can be seen in the inset of the lower panel in Fig. \ref{chihdepU4.5}. In this case the second term in equation (\ref{flqpint}) is positive and the first term negative, but $|\tilde \mu_0'|<|\tilde \mu_0\,m^*{}'/m^*|$. The results from the GW approach for the effective mass and $R$ are in line with the DMFT calculations for the case $U=3$; for $U=4.5$, however, the GW result for the effective mass increases only very little with the field, whereas $R(h)$ increases sharply to yield the metamagnetic response. For larger interactions than the ones discussed here ($5<U<U_c$), one can encounter difficulties in reaching convergence in the DMFT calculations with finite field, as discussed in earlier work\cite{BH07b}. The results indicate, however, that there is a strong field dependent enhancement of the effective mass, which is the main drive for the metamagnetic response. The ratio $R(h)/R(0)$ varies little with $h$ or even decreases for larger fields. Such a behavior is also found within the GW approach for larger $U$ near the metal insulator transition. \subsection{Spectral functions} The behavior of the quasiparticle band can be seen directly in the local spectral function. For the cases with smaller coupling the field dependent response shows a continuous shift of spectral weight to lower energies for the majority spin (see Fig. \ref{dosU2varh} for $U=2$).
\begin{figure}[!htbp] \centering \includegraphics[width=0.45\textwidth]{figures_metmag/downDOS_diffh_U2.eps} \caption{(Color online) The majority spin density of states for $U=2$ and various field strengths in comparison.} \label{dosU2varh} \end{figure} \noindent Note that the minority spin density of states $\rho_{\downarrow}(\omega)$ is given by $\rho_{\uparrow}(-\omega)$ at half filling. To illustrate the behavior of the quasiparticle peak for the stronger interacting case with $U=4.5$ in more detail, we plot the local spectral function for the majority spin $\rho_{\uparrow}(\omega)$ in Fig. \ref{dosU4.5varh}. \begin{figure}[!htbp] \centering \includegraphics[width=0.45\textwidth]{figures_metmag/downDOS_diffh_U4.5.eps} \includegraphics[width=0.45\textwidth]{figures_metmag/downDOS_zoom_diffh_U4.5.eps} \caption{(Color online) The majority spin density of states for $U=4.5$ and various field strengths in comparison: upper panel full frequency range, lower panel low frequency behavior.} \label{dosU4.5varh} \end{figure} \noindent In the upper panel we can see how the lower Hubbard peak in the spectral density acquires weight when the field, and hence the magnetization, is increased, whilst the upper Hubbard peak loses spectral weight. The behavior at low energy is seen more clearly in the lower panel. At first sight the overall picture is reminiscent of the particle-hole symmetric Anderson impurity model in the Kondo regime in a magnetic field \cite{HBK06}, as far as the high energy behavior is concerned. The quasiparticle resonance in the locally correlated system broadens and departs from the Fermi level. This behavior occurs in an analogous fashion in the weak coupling regime of the Hubbard model with $\tilde\mu_0'(h)>0$. In the strongly correlated case, however, we find a significant narrowing of the quasiparticle peak in the field, which is accompanied by the field induced metal insulator transition and metamagnetic behavior.
The quasiparticle resonance first departs from the Fermi energy, but for larger fields is driven back to it. These features are visible in the field dependence of the renormalized parameter $\tilde\mu_0$ with $\tilde\mu_0'<0$, as discussed above. \section{Relation to experiments and conclusions} It is of interest to see whether the described behavior bears any resemblance to what is observed experimentally in strongly correlated itinerant electron systems. Metamagnetic behavior is observed, for instance, in the heavy fermion compounds CeRu${}_2$Si${}_2$ \cite{PLPHLTF90,FHRAK02}, UPt${}_3$ \cite{MTVFPAK90} or Sr${}_3$Ru${}_2$O${}_7$ \cite{LTBW90,GPSCJLIMM01,FHRAK02,PTKSIM05}, and in Co-based metallic compounds such as Y(Co${}_{1-x}$Al${}_x$)${}_2$,\cite{SGYF90,GKSMFM94} sometimes called nearly ferromagnetic metals. The microscopic origin of the effect in these compounds can be manifold, and is sometimes still controversial. In many cases antiferromagnetic exchange and the system's closeness to a magnetic instability are thought to be important. For generic features, we attempt to compare our microscopic Fermi liquid description with experimental studies of itinerant metamagnetic behavior in heavy fermion compounds. It is important, however, to be aware that our results, based on the paramagnetic solutions of the half filled single band Hubbard model, are not appropriate to make quantitative predictions for those complex systems. Organic conductors are thought to behave like simple Mott-Hubbard systems and have been shown by resistance measurements to display a magnetic field induced localization transition with hysteresis.\cite{KIMK04} The author is, however, not aware of any published field dependent magnetization or specific heat data to compare to.
In materials such as CeRu${}_2$Si${}_2$, UPt${}_3$ or Sr${}_3$Ru${}_2$O${}_7$ the magnetic field dependence of the linear specific heat coefficient $\gamma$ was measured near the metamagnetic transition \cite{PLPHLTF90,MTVFPAK90,FHRAK02,PTKSIM05}. It is worth noting that, as can be shown from a thermodynamic identity, the field dependence of $\gamma$ can also be extracted from the $T^2$-coefficient of the magnetization \cite{PLPHLTF90}. In the experiments $\gamma$ increases with the magnetic field and possesses a maximum at the metamagnetic transition $h=h_{\rm m}$. This is comparable with the Fermi liquid results for stronger coupling, e.g. the case $U=4.5$ (Fig. \ref{chihdepU4.5} lower panel), where the effective mass increases with the magnetic field. In the case of CeRu${}_2$Si${}_2$ \cite{FHRAK02} one can see that the susceptibility increases with the magnetic field to about 8.5 times the zero-field value, whereas in the same regime the specific heat coefficient only shows an enhancement by a factor of 1.6. In our Fermi liquid interpretation this signals that the quasiparticle interaction plays an important role in the susceptibility enhancement and the metamagnetic behavior. The relevance of this has been emphasized in the recent experimental work on Yb${}_3$Pt${}_4$.\cite{BSKJYGA08pre} A more careful quantitative comparison would be possible based on the periodic Anderson model, for instance. The presented approach can be extended to this situation, but other techniques are also available \cite{MN01,SI96,Ono98,EG97}. To summarize, we have analyzed the metamagnetic response of the half filled Hubbard model in terms of renormalized quasiparticle parameters and Fermi liquid theory. The renormalized parameters can be calculated accurately with methods based on the NRG, and they have a clear physical meaning.
It is shown that the field dependent metamagnetic behavior can have part of its origin in field induced effective mass enhancements, but is not fully explained by this. This is most clearly pointed out in Fig. \ref{chihdepU4.5}, where metamagnetic behavior for smaller $U$ is accompanied by an effective mass reduction in the field, whereas for larger interaction the opposite is the case. The comparison with results obtained from the Gutzwiller approximation gives similar trends, but shows quantitative deviations. The hypothesis that the metamagnetic behavior in itinerant systems is always driven by field induced mass enhancement is therefore found not to be valid. In the intermediate coupling regime it is also shown that the effective mass enhancement alone is not sufficient to explain the metamagnetic enhancement, and, based on Fermi liquid theory arguments, the quasiparticle interaction has to account for the difference. As a generic feature, the corresponding term, described by the Wilson ratio $R$, increases near the metamagnetic transition. The opposite happens in the weak (no metamagnetic response) and strong coupling situation. The observation that only a part of the susceptibility enhancement is based on the effective mass is found to be qualitatively in agreement with experimental observations in heavy fermion systems. \par \noindent{\bf Acknowledgment}\par \noindent I wish to thank K. Held, A.C. Hewson, P. Jakubczyk, W. Metzner, A. Toschi, D. Vollhardt, and H. Yamase for helpful discussions, W. Koller and D. Meyer for their earlier contributions to the development of the NRG programs, and A. Toschi for critically reading the manuscript. I would like to acknowledge many fruitful discussions with A.C. Hewson during early stages of this work and thank the Gottlieb Daimler and Karl Benz Foundation, the German Academic Exchange Service (DAAD) and the EPSRC for financial support during this period.
\section{Introduction} \label{sec:introduction} Many well-studied problems in combinatorics concern characterising discrete structures that~satisfy certain `local' constraints. For example, the~celebrated theorem of~Szemer\'edi~\cite{Sz75} gives an~upper bound on the~maximum size of~a~subset of~the~first $n$ integers which does not contain an~arithmetic progression of~a~fixed length~$k$. To~give another example, the~archetypal problem studied in~extremal graph theory, dating back to~the~work of Mantel~\cite{Ma07} and Tur\'an~\cite{Tu41}, is that of~characterising graphs which do not contain a~fixed graph $H$ as~a~subgraph. Problems of this type fall into the following general framework. We are given a~finite set~$V$ and a~collection~$\mathcal{H}$ of~subsets of~$V$. What can be said about sets~$I \subseteq V$ that do not contain any~member of~$\mathcal{H}$? Such a~collection~$\mathcal{H}$ is often called a~\emph{hypergraph} with vertex set~$V$, members of~$\mathcal{H}$ are termed \emph{edges}, and any set~$I \subseteq V$ that contains no edge is called an~\emph{independent set}. In view of this, one might say that a~large part of~combinatorics is concerned with studying independent sets in various hypergraphs. For instance, in the first example from the previous paragraph, $V$ is the set $\{1, \ldots, n\}$ and $\mathcal{H}$ is the collection of all $k$-term arithmetic progressions contained in $V$; stated in~this language, Szemer\'edi's theorem says that for every positive constant $\delta$, every independent set in $\mathcal{H}$ has fewer than $\delta n$ elements, provided that $n$ is sufficiently large. 
In the second example, $V$ is the edge set of a complete graph on a given set of $n$ vertices and $\mathcal{H}$ is the family of all $\binom{n}{|V(H)|}$~sets of $|E(H)|$ edges that form a copy of $H$ in the complete graph; in this notation, if $H$ is a clique with $k+1$ vertices, then Tur\'an's theorem says that the largest independent sets in $\mathcal{H}$ are precisely the edge sets of the complete balanced $k$-partite subgraphs of the complete graph with edge set $V$ and the well-known theorem of Kolaitis, Pr\"omel, and Rothschild~\cite{KoPrRo87} states that almost all independent sets of $\mathcal{H}$ are $k$-partite, that is, the number $i^*(\mathcal{H})$ of independent sets in $\mathcal{H}$ that are not the edge sets of $k$-partite subgraphs of the complete graph with edge set $V$ satisfies $i^*(\mathcal{H}) / i(\mathcal{H}) \to 0$ as $n \to \infty$. For a~hypergraph~$\mathcal{H}$, let~$\mathcal{I}(\mathcal{H})$ denote the~family of all independent sets in~$\mathcal{H}$, let $i(\mathcal{H}) = |\mathcal{I}(\mathcal{H})|$, and let $\alpha(\mathcal{H})$ be the largest cardinality of an element of~$\mathcal{I}(\mathcal{H})$, usually called the~\emph{independence number} of~$\mathcal{H}$. There are two natural problems that one usually poses about a specific hypergraph~$\mathcal{H}$: \begin{enumerate}[(i)] \item\label{item:q1} Determine $\alpha(\mathcal{H})$ and describe all $I \in \mathcal{I}(\mathcal{H})$ with $\alpha(\mathcal{H})$ elements. \item\label{item:q2} Estimate $i(\mathcal{H})$ and describe a `typical' member of $\mathcal{I}(\mathcal{H})$. \end{enumerate} Let us remark here that providing a precise characterisation of a typical element of $\mathcal{I}(\mathcal{H})$ usually yields a~precise estimate for~$i(\mathcal{H})$. 
An~apparent connection between problems~(\ref{item:q1}) and~(\ref{item:q2}) may be easily observed in the following two inequalities, which are trivial consequences of the above definitions and the fact that the family~$\mathcal{I}(\mathcal{H})$ is closed under taking subsets: \begin{equation} \label{eq:alpha-i} 2^{\alpha(\mathcal{H})} \le i(\mathcal{H}) \le \sum_{m = 0}^{\alpha(\mathcal{H})} \binom{|V(\mathcal{H})|}{m}. \end{equation} Note that, unless $\alpha(\mathcal{H})$ is very close to~$|V(\mathcal{H})|$, the~lower and upper bounds on~$i(\mathcal{H})$ given in~\eqref{eq:alpha-i} are quite far apart. Since for many interesting hypergraphs $\mathcal{H}$ this naive lower bound is actually fairly close to being best possible, the~efforts of~many researchers have been focused on improving the upper bound. In this short survey article, we present an elementary, yet very powerful, method for~proving stronger upper bounds in the~case when all edges of~$\mathcal{H}$ have size two, that is, when $\mathcal{H}$ is a~graph. This method was first described more than three decades ago by~Kleitman and Winston, who used it to obtain upper bounds on the number of lattices\footnote{A lattice is a partially ordered set in which every two elements have a supremum and an infimum.}~\cite{KlWi80} and graphs without cycles of~length four~\cite{KlWi82}. Variations of this method were subsequently rediscovered by several researchers, most notably by Sapozhenko, in the context of enumerating independent sets in regular graphs~\cite{Al91, Sa01} and sum-free sets in abelian groups~\cite{Al91, LeLuSc01, Sa02}. We shall illustrate our presentation of this method with several applications of it to `real-life' combinatorial problems. We would like to stress here that none of the results or proof techniques presented here are new, but we hope that there is some value in seeing them next to one another. 
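A toy computation illustrates how far apart the two bounds in~\eqref{eq:alpha-i} can be. The following Python sketch (ours; the choice of the path on $10$ vertices is purely illustrative) enumerates all independent sets of a small graph, viewed as a hypergraph whose edges have size two, by brute force:

```python
from itertools import combinations
from math import comb

# Path on n vertices: a hypergraph whose edges {i, i+1} all have size two.
n = 10
edges = [(i, i + 1) for i in range(n - 1)]

def is_independent(s):
    """A vertex set is independent if it contains no edge of the hypergraph."""
    return not any(u in s and v in s for u, v in edges)

independent = [set(s) for m in range(n + 1)
               for s in combinations(range(n), m) if is_independent(s)]

i_H = len(independent)                    # i(H); for a path this is a Fibonacci number
alpha = max(len(s) for s in independent)  # alpha(H)
lower = 2 ** alpha
upper = sum(comb(n, m) for m in range(alpha + 1))

print(alpha, lower, i_H, upper)  # 5 32 144 638
```

Here $i(\mathcal{H}) = 144$ sits well inside the interval $[32, 638]$ given by~\eqref{eq:alpha-i}, so neither bound is close to the truth even for this tiny example.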
\section{The Kleitman--Winston algorithm} \label{sec:KW} Suppose that we are given an arbitrary graph $G$ with $n$ vertices. Our goal is to give an upper bound on $i(G)$, the number of independent sets in $G$. The idea of Kleitman and Winston was to devise an algorithm that, given a particular independent set $I \in \mathcal{I}(G)$, would encode $I$ in an invertible way. Crucially, the encoding should be performed in a way which makes it easy to estimate the total number of outputs of the algorithm. Since for every invertible encoding, the total number of outputs is precisely $i(G)$, in this way one could derive an upper bound on this quantity. The crucial idea of Kleitman and Winston was to consider the vertices of $G$ ordered according to their degrees and encode each independent set $I$ as a sequence of positions of the elements of $I$ in that ordering. We make this precise below. \begin{dfn} Let $G$ be a graph and fix an arbitrary total order on $V(G)$. For every $A \subseteq V(G)$, the \emph{max-degree ordering} of $A$ is the ordering $(v_1, \ldots, v_{|A|})$ of all elements of $A$, where for each $j \in \{1, \ldots, |A|\}$, $v_j$ is the maximum-degree vertex in the subgraph of $G$ induced by $A \setminus \{v_1, \ldots, v_{j-1}\}$; ties are broken by giving preference to vertices that come earlier in the fixed total order on $V(G)$. \end{dfn} \begin{alg} Suppose that a graph $G$, an $I \in \mathcal{I}(G)$, and an integer $q \le |I|$ are given. Set~$A = V(G)$ and $S = \emptyset$. For $s = 1, \ldots, q$, do the following: \begin{enumerate}[(a)] \item Let $(v_1, \ldots, v_{|A|})$ be the max-degree ordering of $A$. \item Let $j_s$ be the minimal index $j$ such that $v_j \in I$. \item\label{item:v} Move $v_{j_s}$ from $A$ to $S$. \item\label{item:before-v} Delete $v_1, \ldots, v_{j_s-1}$ from $A$. \item\label{item:Nv} Delete $N_G(v_{j_s}) \cap A$ from $A$. \end{enumerate} Output $(j_1, \ldots, j_q)$ and $A \cap I$. 
\end{alg} For each output sequence $(j_1, \ldots, j_q)$ and every $s \in \{1, \ldots, q\}$, denote by $A(j_1, \ldots, j_s)$ and $S(j_1, \ldots, j_s)$ the sets $A$ and $S$ at the end of the $s$th iteration of the algorithm (run on some input~$I$ that produces this particular sequence $(j_1, \ldots, j_q)$), respectively. Observe that these definitions do not depend on the choice of $I$ as the sequence $(j_1, \ldots, j_q)$ uniquely determines how the sets $S$ and $A$ evolve throughout the algorithm. More precisely, if running the algorithm on two inputs $I, I' \in \mathcal{I}(G)$ produces the same sequence $(j_1, \ldots, j_q)$, then both these executions will also yield the same sets $S$ and $A$. Indeed, all the modifications of the sets $S$ and $A$ in the~$s$th iteration of the algorithm depend solely on $j_s$. Note crucially that $S(j_1, \ldots, j_s) \subseteq I$ and $I \setminus S(j_1, \ldots, j_s) \subseteq A(j_1, \ldots, j_s)$ for every $s$. Indeed, by the minimality of $j_s$ and the assumption that $I$ is independent, the only vertices of $I$ that are deleted from $A$ are moved to~$S$. It follows that one may recover the set $I$ from the output of the algorithm, as $I = S(j_1, \ldots, j_q) \cup (A(j_1, \ldots, j_q) \cap I)$. We also note for future reference that the sequence $(j_1, \ldots, j_q)$ can be recovered from the set $S(j_1, \ldots, j_q)$. Indeed, if running the algorithm on some input $I \in \mathcal{I}(G)$ produces a sequence $(j_1, \ldots, j_q)$ and $S = S(j_1, \ldots, j_q)$, then the same sequence will be produced by running the algorithm with $I$ replaced by~$S$. Finally, let us observe that $j_1 + \ldots + j_q \le |V(G)| - |A(j_1, \ldots, j_q)|$, as in steps (\ref{item:v}) and (\ref{item:before-v}) of the $s$th iteration of the main loop, we removed from $A$ some $j_s$ vertices. Let $i(G,m)$ be the number of independent sets in $G$ that have precisely $m$ elements. 
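A direct transcription of the algorithm makes the invertibility claim above concrete. The following Python sketch is ours (all function names, and the choice of the $7$-cycle with $q = 2$ as the test graph, are illustrative assumptions, not part of the original presentation); it encodes every independent set with at least $q$ elements and checks that decoding recovers it:

```python
from itertools import combinations

def max_degree_ordering(adj, A, order):
    """Max-degree ordering of A: v_j has maximum degree in G[A \\ {v_1,...,v_{j-1}}];
    ties go to vertices that come earlier in the fixed total order."""
    rest, ordering = set(A), []
    while rest:
        v = max(rest, key=lambda u: (len(adj[u] & rest), -order.index(u)))
        ordering.append(v)
        rest.remove(v)
    return ordering

def kw_encode(adj, I, q, order):
    """Encode an independent set I (with |I| >= q) as (j_1,...,j_q) and A ∩ I."""
    A, js = set(adj), []
    for _ in range(q):
        ordering = max_degree_ordering(adj, A, order)
        j = next(k for k, v in enumerate(ordering, start=1) if v in I)
        js.append(j)
        A -= set(ordering[:j])       # steps (c) and (d): remove v_1, ..., v_{j_s}
        A -= adj[ordering[j - 1]]    # step (e): remove N(v_{j_s}) ∩ A
    return js, A & set(I)

def kw_decode(adj, js, leftover, order):
    """Replay the algorithm from (j_1,...,j_q) to rebuild S, then add A ∩ I."""
    A, S = set(adj), set()
    for j in js:
        ordering = max_degree_ordering(adj, A, order)
        S.add(ordering[j - 1])
        A -= set(ordering[:j])
        A -= adj[ordering[j - 1]]
    return S | leftover

# Check invertibility on the 7-cycle with q = 2.
n, q = 7, 2
adj = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
order = list(range(n))
indep = [set(s) for m in range(n + 1) for s in combinations(range(n), m)
         if not any(u in adj[v] for u, v in combinations(s, 2))]
recovered = all(kw_decode(adj, *kw_encode(adj, I, q, order), order) == I
                for I in indep if len(I) >= q)
print(recovered)  # True
```

The decoder works precisely because, as noted above, the sequence $(j_1, \ldots, j_q)$ determines the evolution of the sets $S$ and $A$ deterministically.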
The above observations readily imply that for every $m$ and $q$ with $m \ge q$, \begin{equation} \label{eq:iG-q-m} i(G,m) \le \sum_{(j_s)} i\big(G[A(j_1, \ldots, j_q)], m-q\big) \le \sum_{(j_s)} \binom{|A(j_1, \ldots, j_q)|}{m-q}, \end{equation} where the above sums range over all output sequences $(j_1, \ldots, j_q)$. In particular, letting $n = |V(G)|$, \begin{equation} \label{eq:iG-q} i(G) \le \sum_{m=0}^{q-1} \binom{n}{m} + \sum_{(j_s)} i\big(G[A(j_1, \ldots, j_q)]\big) \le \sum_{m=0}^{q-1} \binom{n}{m} + \sum_{(j_s)} 2^{|A(j_1, \ldots, j_q)|}. \end{equation} In view of~\eqref{eq:iG-q-m} and~\eqref{eq:iG-q}, it is in our interest to make the set $A(j_1, \ldots, j_q)$ as small as possible, uniformly for all values of $(j_1, \ldots, j_q)$. This is why we consider the vertices of $A$ listed according to the max-degree ordering. (An attentive reader might have already noticed that this particular ordering maximises $\deg_G(v_{j_s}, A)$ in each iteration of the algorithm.) Suppose that we are at the $s$th iteration of the main loop of the algorithm and let $A' = A \setminus \{v_1, \ldots, v_{j_s-1}\}$, where $A$ is as at the start of this iteration, that is, $A = A(j_1, \ldots, j_{s-1})$. By the definition of the max-degree ordering, \[ |N_G(v_{j_s}) \cap A'| = \max_{v \in A'} \deg_G(v,A') \ge \frac{2e_G(A')}{|A'|}. \] In particular, if $e_G(A') = \beta \binom{|A'|}{2}$, then the right-hand side of the above inequality is $\beta (|A'| - 1)$. Consequently, the number of vertices that are removed from $A$ during the $s$th iteration of the main loop of the algorithm is at least $j_s + \beta (|A'|-1)$, which is at least $\beta |A|$, as $|A'| - 1 = |A| - j_s$ and~$\beta \le 1$. In other words, as long as the density of the subgraph induced by the set $A$ exceeds some~$\beta$, each iteration of the main loop of the algorithm shrinks $A$ by a factor of at most $1 - \beta$. 
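The shrinking claim is easy to check empirically. In this sketch (ours; the random graph, its density, and the seed are arbitrary assumptions) we follow the case $j_s = 1$, i.e. the max-degree vertex of $A$ is always the selected one, and verify that every iteration deletes at least a $\beta$-fraction of $A$, where $\beta$ is the current edge density of the subgraph induced by $A$:

```python
import random
from itertools import combinations

random.seed(0)                      # an arbitrary dense random graph G(n, p)
n, p = 60, 0.5
adj = {v: set() for v in range(n)}
for u, v in combinations(range(n), 2):
    if random.random() < p:
        adj[u].add(v)
        adj[v].add(u)

A, shrink_ok = set(range(n)), True
while len(A) >= 2:
    e_A = sum(len(adj[v] & A) for v in A) // 2
    beta = e_A / (len(A) * (len(A) - 1) / 2)     # edge density of G[A]
    v = max(A, key=lambda u: len(adj[u] & A))    # j_s = 1: the max-degree vertex
    removed = 1 + len(adj[v] & A)                # v itself plus N(v) ∩ A
    shrink_ok = shrink_ok and removed >= beta * len(A)
    A -= {v} | adj[v]
print(shrink_ok)  # True
```

Indeed, the max-degree vertex has degree at least $2e_G(A)/|A|$ in $G[A]$, so at least $1 + 2e_G(A)/|A| \ge \beta|A|$ vertices disappear in each round, exactly as in the calculation above.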
The following two lemmas, which are both implicit in the work of Kleitman and Winston, summarise the above discussion. The first lemma gives a simple bound on the number of independent sets of a given size in a graph which satisfies a certain local density condition. The exact statement of this lemma is taken from~\cite{KoLeRoSa14}. The second lemma characterises the family of all independent sets in such a locally dense graph. The statement of this lemma is inspired by the statement of the~main result of~\cite{BaMoSa14}. \begin{lemma} \label{lemma:KW-basic} Let $G$ be a graph on $n$ vertices and assume that an integer $q$ and reals $R$ and $\beta \in [0,1]$ satisfy \begin{equation} \label{eq:beta-q-R} R \ge e^{-\beta q} n. \end{equation} Suppose that the number of edges induced in $G$ by every set $U \subseteq V(G)$ with $|U| \ge R$ satisfies \begin{equation} \label{eq:eGU} e_G(U) \ge \beta \binom{|U|}{2}. \end{equation} Then, for every integer $m$ with $m \ge q$, \begin{equation} \label{eq:iG-m-bound} i(G,m) \le \binom{n}{q} \binom{R}{m-q}. \end{equation} \end{lemma} \begin{proof} Since there are exactly $\binom{n}{q}$ sequences $(j_1, \ldots, j_q)$ satisfying $j_1 + \ldots + j_q \le n$ and $j_s \ge 1$ for each $s$, the sum in the right-hand side of~\eqref{eq:iG-q-m} has at most $\binom{n}{q}$ terms. Therefore, it suffices to show that for each sequence $(j_1, \ldots, j_q)$ that is outputted by the algorithm, the set $A(j_1, \ldots, j_q)$ has at most $R$ elements. If this were not the case, then there would be some sequence $(j_1, \ldots, j_q)$ such that for each $s \in \{1, \ldots, q\}$, the set $A \setminus \{v_1, \ldots, v_{j_s-1}\}$ in the $s$th iteration of the main loop of the algorithm (run on some input that results in this particular sequence) would have more than $R$ elements and therefore induce in $G$ a subgraph with edge density at least $\beta$. 
It follows from our discussion that each of the $q$ iterations would shrink the set~$A$ by a~factor of~at~most $1-\beta$. Since $|A| = |V(G)| = n$ at the~start of the~algorithm, then, by~\eqref{eq:beta-q-R}, \[ |A(j_1, \ldots, j_q)| \le (1-\beta)^q n \le e^{-\beta q}n \le R, \] a contradiction. \end{proof} \begin{lemma} \label{lemma:KW-containers} Let $G$ be a graph on $n$ vertices and assume that an integer $q$ and reals $R$ and $D$ satisfy \begin{equation} \label{eq:D-q-R} R + q D \ge n. \end{equation} Suppose that the number of edges induced in $G$ by every set $U \subseteq V(G)$ with $|U| \ge R$ satisfies \begin{equation} \label{eq:eGU-D} 2e_G(U) \ge D|U|. \end{equation} Then there exists a collection $\mathcal{S}$ of $q$-element subsets of $V(G)$ and two mappings $g \colon \mathcal{I}(G) \to \mathcal{S}$ and $f\colon \mathcal{S} \to \mathcal{P}(V(G))$ such that $|f(S)| \le R$ for each $S \in \mathcal{S}$ and $g(I) \subseteq I \subseteq f(g(I)) \cup g(I)$ for every $I \in \mathcal{I}(G)$ with at least $q$ elements. \end{lemma} \begin{proof} We define the mappings $f$ and $g$ and the family $\mathcal{S}$ as follows. We simply run the algorithm with input $I$ for each $I \in \mathcal{I}(G)$ with at least $q$ elements and let $g(I)$ and $f(g(I))$ be the final sets $S$ and $A$, respectively. Moreover, we let $\mathcal{S}$ be the family of all such $S$, that is, the set of values taken by $g$. The discussion in the paragraph following the description of the algorithm should convince us that this is a valid definition of $f$, that $g(I) \subseteq I \subseteq f(g(I)) \cup g(I)$ for each $I$ as above, and that $\mathcal{S}$ consists solely of $q$-element subsets of $V(G)$. It suffices to check that $|f(g(I))| \le R$ for each such $I$. 
If this were not the case, then there would be some sequence $(j_1, \ldots, j_q)$ such that for each $s \in \{1, \ldots, q\}$, the set $A \setminus \{v_1, \ldots, v_{j_s-1}\}$ in the $s$th iteration of the main loop of the algorithm (run on an input $I$ that generates this sequence) would have more than $R$ elements and therefore induce in $G$ a subgraph with average degree at least $D$. But then, each of the $q$ iterations would remove from $A$ at least $D+1$ vertices. Since $|A| = |V(G)| = n$ at the start of the algorithm, then by~\eqref{eq:D-q-R}, \[ |A(j_1, \ldots, j_q)| \le n - D q \le R, \] a contradiction. \end{proof} Before we close this section, let us make several final remarks. First, the conclusion of Lemma~\ref{lemma:KW-containers} is stronger than the conclusion of Lemma~\ref{lemma:KW-basic}. This is simply because the existence of $f$ and $g$ as in the statement of the second lemma implies the bound on $i(G,m)$ asserted by the first lemma. Moreover, it should be clear from the proofs that the assumptions of the two lemmas are `interchangeable' in the following sense. If a graph $G$ satisfies the assumptions of Lemma~\ref{lemma:KW-basic} with some $q$, $R$, and $\beta$, then the conclusion of Lemma~\ref{lemma:KW-containers} holds for $G$ with the same $q$ and $R$; and vice-versa, if a graph $G$ satisfies the assumptions of Lemma~\ref{lemma:KW-containers} with some $q$, $R$, and $D$, then the conclusion of Lemma~\ref{lemma:KW-basic} holds for~$G$ with the same $q$ and $R$. (The latter statement is redundant because, as we have already noted above, the conclusion of Lemma~\ref{lemma:KW-containers} is stronger than the conclusion of Lemma~\ref{lemma:KW-basic}.)
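To spell out the first remark, note that each $I \in \mathcal{I}(G)$ with $|I| = m \ge q$ is determined by the pair $\big(g(I), I \setminus g(I)\big)$: here $g(I)$ is one of the at most $\binom{n}{q}$ members of $\mathcal{S}$, and $I \setminus g(I)$ is an $(m-q)$-element subset of $f(g(I))$, a set with at most $R$ elements. Hence

```latex
% Counting pairs (g(I), I \setminus g(I)) recovers the bound of Lemma 1:
\[
  i(G,m) \;\le\; \sum_{S \in \mathcal{S}} \binom{|f(S)|}{m-q}
         \;\le\; |\mathcal{S}| \binom{R}{m-q}
         \;\le\; \binom{n}{q} \binom{R}{m-q},
\]
```

which is exactly the bound~\eqref{eq:iG-m-bound} of Lemma~\ref{lemma:KW-basic}, since $\mathcal{S}$ consists solely of $q$-element subsets of $V(G)$.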
\section{Applications} \label{sec:applications} \subsection{Independent sets in regular graphs} \label{sec:indep-sets-reg-graphs} During a number theory conference at Banff in~1988, Granville conjectured (see~\cite{Al91}) that an $n$-vertex $d$-regular graph can have no more than $2^{(1+o(1))\frac{n}{2}}$ independent sets, where $o(1)$ is some function that tends to $0$ as $d \to \infty$. A few years later, this was shown to be true by Alon~\cite{Al91}, who proved that in fact \[ i(G) \le 2^{(1+O(d^{-0.1}))\frac{n}{2}} \] for every $n$-vertex $d$-regular graph $G$. As our first application of Lemma~\ref{lemma:KW-basic}, we derive a somewhat stronger estimate, which was obtained several years later by Sapozhenko~\cite{Sa01}, using arguments very similar to those presented in Section~\ref{sec:KW}. \begin{thm}[{\cite{Sa01}}] \label{thm:Sapozhenko} There is an absolute constant $C$ such that every $n$-vertex $d$-regular graph $G$ satisfies \[ i(G) \le 2^{\left(1 + C \sqrt{\frac{\log d}{d}}\right)\frac{n}{2}}. \] \end{thm} Alon~\cite{Al91} speculated that when $n$ is divisible by $2d$, then the disjoint union of $\frac{n}{2d}$ complete bipartite graphs $K_{d,d}$ has the maximum number of independent sets among all $d$-regular graphs with $n$~vertices. A slightly stronger statement (Theorem~\ref{thm:Kahn-Zhao} below) was later conjectured by Kahn~\cite{Ka01}, who proved it under the additional assumption that $G$ is bipartite, using a beautiful entropy argument. This assumption was recently shown to be unnecessary by Zhao~\cite{Zh10}, who gave a short and elegant argument showing that for every $n$-vertex $d$-regular graph $G$, there exists a $2n$-vertex $d$-regular bipartite graph $G'$ such that $i(G) \le i(G')^{1/2}$. The results of Kahn and Zhao yield the following. \begin{thm}[{\cite{Ka01,Zh10}}] \label{thm:Kahn-Zhao} For every $n$-vertex $d$-regular graph $G$, \[ i(G) \le i(K_{d,d})^{\frac{n}{2d}} = \left(2^{d+1}-1\right)^{\frac{n}{2d}}. 
\] \end{thm} We now derive Theorem~\ref{thm:Sapozhenko} from Lemma~\ref{lemma:KW-basic}. \begin{proof}[Proof of Theorem~\ref{thm:Sapozhenko}] Let $G$ be an $n$-vertex $d$-regular graph. We shall in fact estimate $i(G,m)$ for each $m$ and deduce the claimed bound on $i(G)$ by summing over all $m$. Since $i(G) \le 2^n$ and $C$ is an arbitrary constant, we may assume that $d$ is sufficiently large (and therefore $n$ is sufficiently large). We consider two cases. First, if $m \le n/10$, then we simply note that \begin{equation} \label{eq:ind-set-reg-1} i(G,m) \le \binom{n}{\frac{n}{10}} \le (10e)^{\frac{n}{10}} \le 2^{0.48n}, \end{equation} where we used the well-known inequality $\binom{a}{b} \le (ea/b)^b$ valid for all $a$ and $b$. In the complementary case, $m > n/10$, we shall apply Lemma~\ref{lemma:KW-basic}. To this end, let $B \subseteq V(G)$ and note that \begin{equation} \label{eq:degree-sums} d|B| = \sum_{v \in B} \deg_G(v) = 2e(B) + e(B,B^c) \le 2e(B) + \sum_{v \in B^c} \deg_G(v) = 2e(B) + d(n-|B|). \end{equation} Fix an arbitrary $\beta$, let $R = \frac{n}{2} + \frac{\beta n^2}{2d}$, and observe that if $|B| \ge R$, then~\eqref{eq:degree-sums} yields \begin{equation} \label{eq:eB-bound} e(B) \ge \frac{d}{2}(2|B| - n) \ge \frac{d}{2}(2R - n) \ge \frac{\beta n^2}{2} \ge \beta\binom{|B|}{2}. \end{equation} Assume that $\beta > 10/n$ and let $q = \lceil 1/\beta \rceil$. By Lemma~\ref{lemma:KW-basic}, since \[ e^{-\beta q} n \le e^{-1}n \le R, \] then for every $m$ with $m \ge \lceil n/10 \rceil \ge q$, \begin{equation} \label{eq:ind-set-reg-2} i(G,m) \le \binom{n}{q} \binom{\frac{n}{2} + \frac{\beta n^2}{2d}}{m-q} \le \left(\frac{en}{q}\right)^q \binom{\frac{n}{2} + \frac{\beta n^2}{2d}}{m-q} \le (e\beta n)^{\lceil 1/\beta \rceil} \cdot \binom{\frac{n}{2} + \frac{\beta n^2}{2d}}{m-q}. 
\end{equation} Summing~\eqref{eq:ind-set-reg-1} and~\eqref{eq:ind-set-reg-2} over all $m$ yields \[ i(G) \le 2^{0.49n} + 2^{\frac{n}{2} + \frac{\beta n^2}{2d} + \lceil 1/\beta \rceil \log_2(e\beta n)} \] We obtain the claimed bound by letting $\beta = \frac{\sqrt{d \log d}}{n}$; we note that $\sqrt{d \log d} > 10$ as we assumed that $d$ is large. \end{proof} We ought to indicate here that one may significantly improve the upper bound given by Theorem~\ref{thm:Sapozhenko} by a somewhat more careful analysis of the execution of the Kleitman--Winston algorithm than the one given in the proof of Lemma~\ref{lemma:KW-basic}. The main reason why one should expect such an~improvement to be possible is the crudeness of the second inequality in~\eqref{eq:eB-bound} in the case when $|B| - n/2$ is much larger than $R - n/2$. The proof of Lemma~\ref{lemma:KW-basic} uses~\eqref{eq:eB-bound} to show that in each step of the~algorithm, the set $A$ loses at least $\beta |A|$ elements whereas in reality $A$ will lose many more elements as long as $|A|$ is not very close to $n/2 + \beta n^2 / (2d)$. By considering the `evolution' of $|A|$ partitioned into `dyadic' intervals $\big(n/2 + n/2^{i+1}, n/2 + n/2^i\big]$, where $1 \le i \le \log_2 d - \log_2 \log_2 d$, one may prove that there is an absolute constant $C$ such that every $n$-vertex $d$-regular graph $G$ satisfies \[ i(G) \le 2^{\left(1 + C\frac{(\log d)^2}{d}\right)\frac{n}{2}}. \] One rigorous way of tracking this `evolution' of $|A|$ is to repeatedly invoke Lemma~\ref{lemma:KW-containers} with $R_i = n/2 + n/2^{i+1}$ and $D_i = d/2^i$ for $i = 1, \ldots, \log_2 d - \log_2 \log_2 d$. We leave filling in the details as an~exercise for the reader. \subsection{Sum-free sets} The conjecture of Granville mentioned in the previous section was motivated by a problem posed by Cameron and Erd\H{o}s at the same number theory conference. 
A set $A$ of elements of an abelian group is called \emph{sum-free} if there are no $x,y,z \in A$ satisfying $x + y = z$. Let $[n]$ denote the set $\{1, \ldots, n\} \subseteq \mathbb{Z}$. Cameron and Erd\H{o}s raised the question of determining the~number $\mathrm{SF}([n])$ of sum-free sets contained in the set $[n]$. They noted that any set containing either only odd integers or only integers greater than $n/2$ is sum-free, and that it is unlikely that there is another large collection of sum-free sets that are not essentially of one of the above two types. In view of this, they conjectured that $\mathrm{SF}([n]) = O(2^{n/2})$. Soon afterwards, Alon~\cite{Al91} showed that the aforementioned conjecture of Granville implies the following weaker estimate on $\mathrm{SF}([n])$, which will serve as a second example application of Lemma~\ref{lemma:KW-basic}. \begin{thm}[{\cite{Al91}}] \label{thm:CE-weak} The set $\{1, \ldots, n\}$ has at most $2^{(1/2+o(1))n}$ sum-free subsets. \end{thm} The Cameron--Erd\H{o}s conjecture was solved some fifteen years later by Green~\cite{Gr04} and, independently, by Sapozhenko~\cite{Sa03}. The solution due to Sapozhenko uses a method akin to the Kleitman--Winston algorithm presented in Section~\ref{sec:KW}, while the one due to Green uses discrete Fourier analysis.\footnote{However, one might still argue that the general `philosophy' behind Green's proof is similar.} We do not discuss either of their arguments here, but instead refer the interested reader to the original papers. Finally, we mention that strong estimates on the number of sum-free subsets of $[n]$ with a given number of elements, which imply the conjecture, were recently obtained in~\cite{AlBaMoSa14-CE}; the~proof there employs the ideas presented in Section~\ref{sec:KW}. 
\begin{proof}[Proof of Theorem~\ref{thm:CE-weak}] Observe first that the number of all subsets of $[n]$ which contain fewer than $n^{2/3}$ elements from $\{1, \ldots, \lceil n/2 \rceil - 1\}$ is at most $(n/2)^{n^{2/3}} 2^{n/2+1}$. Therefore, we may restrict our attention to sum-free sets that contain at least $n^{2/3}$ elements strictly smaller than $n/2$. For each such set $A$, let $S_A$ be the set of $\lfloor n^{2/3} \rfloor$ smallest elements of $A$. Given a set $S \subseteq \{1, \ldots, \lceil n/2 \rceil - 1\}$, define an auxiliary graph $G_S$ with vertex set $[n]$ by letting \[ E(G_S) = \{xy \colon \text{$x + s \equiv y \pmod n$ for some $s \in S \cup (-S)$}\} \] and note that $G_S$ is $2|S|$-regular, as $n - (\lceil n/2 \rceil - 1) > \lceil n/2 \rceil -1$ and hence $S$ and $-S$ contain different residues modulo $n$. The~crucial observation is that for every sum-free $A$ as above, the set $A \setminus S_A$ is an independent set in the graph $G_{S_A}$. Indeed, otherwise there would be $x, y \in A \setminus S_A$ and an $s \in S_A \cup (-S_A)$ with $x + s \equiv y \pmod n$; since $1 \le |s| < x, y \le n$, this is only possible when $x + s = y$. In particular, for a given $S \subseteq \{1, \ldots, \lceil n / 2 \rceil -1\}$, there are at most $i(G_S)$ sum-free sets~$A$ satisfying $S = S_A$. By Theorem~\ref{thm:Sapozhenko}, we conclude that \[ \mathrm{SF}([n]) \le (n/2)^{n^{2/3}} 2^{n/2+1} + \binom{n/2}{n^{2/3}} \cdot 2^{\left(1 + O(n^{-1/3}\sqrt{\log n})\right)\frac{n}{2}} \le 2^{\left(1/2 + O(n^{-1/3} \log n)\right)n}.\qedhere \] \end{proof} Before closing this section, we remark that the paper of Alon~\cite{Al91} started a very successful line of inquiry into the closely related problem of determining the number of sum-free sets contained in an arbitrary finite abelian group; see, e.g., \cite{AlBaMoSa14-AG, GrRu04, GrRu05, LeLuSc01, Sa02}. In many of these works, variations of the ideas presented in Section~\ref{sec:KW} play a prominent role. 
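For small $n$ the count $\mathrm{SF}([n])$ can be computed exhaustively and compared with the $2^{n/2}$ benchmark; recall that each of the two canonical families (the odd numbers and the integers above $n/2$) alone contributes $2^{\lceil n/2 \rceil}$ sum-free sets. The following brute-force sketch is ours and purely illustrative:

```python
from itertools import combinations
from math import ceil

def sumfree_count(n):
    """Count sum-free subsets of {1, ..., n} by exhaustive search."""
    count = 0
    for m in range(n + 1):
        for s in combinations(range(1, n + 1), m):
            ss = set(s)
            # Sum-free: no x, y, z in the set with x + y = z (x = y is allowed,
            # so a set containing both x and 2x is not sum-free).
            if not any(x + y in ss for x in ss for y in ss):
                count += 1
    return count

for n in (4, 8, 12):
    print(n, sumfree_count(n), 2 ** ceil(n / 2), round(2 ** (n / 2), 1))
```

Already for such small $n$ one sees $\mathrm{SF}([n])$ tracking a constant multiple of $2^{n/2}$, consistent with the Cameron--Erd\H{o}s conjecture, although of course no finite computation says anything about the asymptotics.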
\subsection{Independent sets in regular graphs without small eigenvalues} Since every $n$-vertex bipartite graph $G$ satisfies $\alpha(G) \ge n/2$ and hence it contains at least $2^{n/2}$ independent sets, the upper bounds for $i(G)$ proved in Section~\ref{sec:indep-sets-reg-graphs} are essentially best possible whenever $G$ is bipartite. It is natural to ask whether these bounds can be improved when one assumes that $G$ is `far' from being bipartite. An affirmative answer to this question was given by Alon and R\"odl~\cite{AlRo05}. Recall that the adjacency matrix of an $n$-vertex graph $G$ is a real-valued symmetric $n \times n$ matrix and therefore it has $n$ real eigenvalues. Denote these eigenvalues by $\lambda_1, \ldots, \lambda_n$, where $\lambda_1 \ge \ldots \ge \lambda_n$. It is well known that the quantity $\max\{|\lambda_2|, |\lambda_n|\}$, called the \emph{second eigenvalue} of $G$, is closely tied with, among other parameters, the expansion properties of $G$. We shall be interested only in the smallest eigenvalue $\lambda_n$ of $G$, which we denote by $\lambda(G)$. It was first proved by Hoffman~\cite{Ho70} that every $d$-regular $n$-vertex graph $G$ satisfies $\alpha(G) \le \frac{-\lambda(G)}{d-\lambda(G)}n$. This was later significantly strengthened\footnote{In particular, Lemma~\ref{lemma:Alon-Chung} implies that $e_G(A) > 0$ for every $A$ with more than $\frac{-\lambda(G)}{d-\lambda(G)}n$ vertices.} by Alon and Chung~\cite{AlCh88}, who established the following relation between~$\lambda(G)$ and the number of edges induced by large sets of vertices in $G$, cf.\ the expander mixing lemma (see, e.g.,~\cite{HoLiWi06}). \begin{lemma}[{\cite{AlCh88}}] \label{lemma:Alon-Chung} Let $G$ be an $n$-vertex $d$-regular graph. For all $A \subseteq V(G)$, \[ 2e_G(A) \ge \frac{d}{n}|A|^2 + \frac{\lambda(G)}{n}|A|\big(n-|A|\big). 
\] \end{lemma} Alon and R\"odl~\cite{AlRo05} were the first to prove that if $\lambda(G)$ is much larger than $-d$, then each such $G$ has far fewer than $2^{n/2}$ independent sets. As our next application of Lemma~\ref{lemma:KW-basic}, we derive a similar estimate, originally proved in~\cite{AlBaMoSa14-AG}. \begin{thm}[{\cite{AlBaMoSa14-AG}}] \label{thm:eigenvalue} For every $\varepsilon > 0$, there exists a constant $C$ such that the following holds. If $G$ is an~$n$-vertex $d$-regular graph with $\lambda(G) \ge -\lambda$, then \[ i(G,m) \le \binom{\left( \frac{\lambda}{d+\lambda} + \varepsilon \right) n}{m}, \] provided that $m \ge Cn/d$. \end{thm} \begin{proof}[Proof of Theorem~\ref{thm:eigenvalue}] Fix some $\varepsilon > 0$, let $G$ be an $n$-vertex $d$-regular graph, and let $\lambda = -\lambda(G)$. We may assume that $\frac{\lambda}{d+\lambda} + \varepsilon < 1$ as otherwise there is nothing to prove. Let $U \subseteq V(G)$ be an arbitrary set with $|U| \ge \left(\frac{\lambda}{d+\lambda} + \frac{\varepsilon}{2}\right)n$. Lemma~\ref{lemma:Alon-Chung} implies that \[ 2e_G(U) \ge \frac{d}{n} |U|^2 - \frac{\lambda}{n}|U|\big(n-|U|\big) = \frac{|U|}{n} \big((d+\lambda)|U| - \lambda n \big) \ge \frac{\varepsilon d}{2}|U| \ge \frac{\varepsilon d}{n} \binom{|U|}{2}. \] Let $\beta = \frac{\varepsilon d}{n}$, $q = \left\lceil \frac{\log (2/\varepsilon)}{\varepsilon} \cdot \frac{n}{d}\right\rceil$, and $R = \left(\frac{\lambda}{d+\lambda} + \frac{\varepsilon}{2}\right)n$ and observe that $R \ge e^{-\beta q}n$. It follows from Lemma~\ref{lemma:KW-basic} that for every $m$ with $m \ge q$, \begin{equation} \label{eq:eigenvalue-iGm-bound} i(G,m) \le \binom{n}{q} \binom{R}{m-q}. \end{equation} Let $r(t)$ denote the right-hand side of~\eqref{eq:eigenvalue-iGm-bound} with $q$ replaced by $t$. We may clearly assume that $m \le \alpha(G) \le \frac{\lambda}{d+\lambda}n$, as otherwise $i(G,m) = 0$.
An elementary calculation shows that \[ \frac{r(t+1)}{r(t)} = \frac{n-t}{t+1} \cdot \frac{m-t}{R - m + t + 1} \le \frac{nm}{(t+1)(R-m)} \le \frac{2m}{\varepsilon(t+1)} \] and hence \[ i(G,m) \le r(q) = \prod_{t=0}^{q-1} \frac{r(t+1)}{r(t)} \cdot r(0) \le \frac{(2m)^q}{\varepsilon^q q!} \cdot \binom{R}{m} \le \left(\frac{2em}{\varepsilon q}\right)^q \cdot \left(\frac{R}{R + \varepsilon n /2}\right)^m \binom{R + \varepsilon n/ 2}{m}, \] where we used the inequalities $a! > (a/e)^a$ and $\binom{a}{c} \ge (a/b)^c \binom{b}{c}$ valid whenever $a \ge b \ge c \ge 0$. Finally, if $K$ is sufficiently large (as a function of $\varepsilon$) and $C \ge K \cdot \left\lceil \frac{\log(2/\varepsilon)}{\varepsilon} \right\rceil$, then for every $m$ with $m \ge Cn/d \ge Kq$, \[ \left( \frac{2em}{\varepsilon q} \right)^{q/m} \cdot \frac{R}{R + \varepsilon n /2} \le \left(\frac{2Ke}{\varepsilon}\right)^{1/K} \cdot \left(1-\frac{\varepsilon}{2}\right) \le 1, \] which completes the proof of the theorem. \end{proof} We close this section with several remarks. First, the constant $\frac{\lambda}{d+\lambda}$ in the assertion of the theorem is optimal as for many values of $n$, $d$, and $\alpha$, there are $n$-vertex $d$-regular graphs with $\alpha(G) = \frac{-\lambda(G)}{d-\lambda(G)} n = \alpha n$. Second, the assumption that $m \ge Cn/d$ cannot be relaxed as for every~$\varepsilon > 0$, every $n$-vertex $d$-regular graph $G$ satisfies $i(G,m) \ge \binom{(1-\varepsilon)n}{m}$ whenever $m \le \varepsilon n/(d+1)$. (To~see this, consider the greedy process of constructing an independent set which repeatedly picks an~arbitrary vertex of $G$ and removes it and all of its neighbours from $G$.) Third, the above theorem implies the conjecture of Granville stated in Section~\ref{sec:indep-sets-reg-graphs} as $\lambda(G) \ge -d$ for every $d$-regular graph $G$.
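As an illustrative aside (not part of the original text), both Lemma~\ref{lemma:Alon-Chung} and the tightness discussed in the first remark can be checked by brute force on the Petersen graph, using its well-known smallest adjacency eigenvalue $\lambda(G) = -2$:

```python
import itertools

# Petersen graph: outer 5-cycle, inner pentagram, five spokes (n = 10, d = 3).
edges  = [(i, (i + 1) % 5) for i in range(5)]
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
edges += [(i, 5 + i) for i in range(5)]
edge_set = {frozenset(e) for e in edges}

n, d, lam = 10, 3, -2   # lam: the smallest adjacency eigenvalue (known fact)

def e_G(S):
    """Number of edges induced by the vertex set S."""
    return sum(1 for p in itertools.combinations(S, 2) if frozenset(p) in edge_set)

# Alon-Chung: 2 e_G(S) >= (d/n)|S|^2 + (lam/n)|S|(n - |S|) for every S.
for k in range(n + 1):
    for S in itertools.combinations(range(n), k):
        assert 2 * e_G(S) >= d / n * k * k + lam / n * k * (n - k) - 1e-9

# Independence number alpha(G) by brute force; Hoffman: alpha <= -lam/(d-lam) * n.
alpha = max(k for k in range(n + 1)
            for S in itertools.combinations(range(n), k) if e_G(S) == 0)
print(alpha, -lam / (d - lam) * n)   # 4 4.0
```

Here the Hoffman bound $\frac{-\lambda(G)}{d-\lambda(G)}n = \frac{2}{5}\cdot 10 = 4$ is attained by the independence number of the Petersen graph.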
Finally, we refer the interested reader to~\cite{AlBaMoSa14-AG} and~\cite{AlRo05}, where Theorem~\ref{thm:eigenvalue} was used to obtain upper bounds on the number of sum-free sets in abelian groups of even order and lower bounds on some multicolor Ramsey numbers, respectively. \subsection{The number of $C_4$-free graphs} \label{sec:number-C4-free-graphs} As our next example, we present the main result from one of the papers of Kleitman and Winston~\cite{KlWi82} which introduced the methods described in Section~\ref{sec:KW}. Call a graph \emph{$C_4$-free} if it does not contain a cycle of length four and let $\mathrm{ex}(n,C_4)$ denote the maximum number of edges in a $C_4$-free graph with $n$ vertices. A classical result of K\H{o}v\'ari, S\'os, and Tur\'an~\cite{KoSoTu54} together with a construction due to Brown~\cite{Br66} and Erd\H{o}s, R\'enyi, and S\'os~\cite{ErReSo66} imply that \[ \mathrm{ex}(n,C_4) = \left(\frac{1}{2} + o(1)\right) n^{3/2}. \] Let $f_n(C_4)$ be the number of (labeled) $C_4$-free graphs on the vertex set $\{1, \ldots, n\}$. Since each subgraph of a $C_4$-free graph is itself $C_4$-free, we have \[ 2^{\mathrm{ex}(n,C_4)} \le f_n(C_4) \le \sum_{m=0}^{\mathrm{ex}(n,C_4)} \binom{\binom{n}{2}}{m} = 2^{\Theta(\mathrm{ex}(n,C_4) \log n)}, \] which yields \begin{equation} \label{eq:fnC4-trivial} \mathrm{ex}(n,C_4) \le \log_2 f_n(C_4) \le O\big(\mathrm{ex}(n,C_4) \log n\big). \end{equation} Answering a question of Erd\H{o}s, Kleitman and Winston~\cite{KlWi82} showed that the lower bound in~\eqref{eq:fnC4-trivial} is tight up to a constant factor. \begin{thm}[{\cite{KlWi82}}] \label{thm:KlWi-C4} There is a positive constant $C$ such that \[ \log_2 f_n(C_4) \le Cn^{3/2}. \] \end{thm} Before we continue with the proof of the theorem, let us make a few comments. In fact, Erd\H{o}s asked whether $\log_2 f_n(H) = (1+o(1))\mathrm{ex}(n,H)$ for an arbitrary $H$ that contains a cycle. 
This was shown to be the case by Erd\H{o}s, Frankl, and R\"odl~\cite{ErFrRo86} under the assumption that $\chi(H) \ge 3$. Very recently, Morris and Saxton~\cite{MoSa14} proved that $\log_2 f_n(C_6) \ge 1.0007 \cdot \mathrm{ex}(n,C_6)$ for infinitely many~$n$. But the notoriously difficult problem of determining whether or not $\log_2 f_n(H) = O(\mathrm{ex}(n,H))$ for every bipartite $H$ that is not a forest remains unsolved, apart from the following two special cases: $H$ is a cycle of length four~\cite{KlWi82}, six~\cite{KlWi96}, or ten~\cite{MoSa14}, or $H$ is an unbalanced complete bipartite graph~\cite{BaSa11-Kmm,BaSa11-Kst}. More precisely, it is proved in~\cite{BaSa11-Kst} and~\cite{MoSa14} that $\log_2 f_n(K_{s,t}) = O(n^{2-1/s})$ whenever $2 \le s \le t$ and that $\log_2 f_n(C_{2\ell}) = O(n^{1+1/\ell})$ for every $\ell \ge 2$, respectively. As it is commonly believed that $\mathrm{ex}(n, K_{s,t}) = \Omega(n^{2-1/s})$ whenever $s \le t$ and that $\mathrm{ex}(n,C_{2\ell}) = \Omega(n^{1+1/\ell})$, both these results are most likely best possible. Finally, we mention that the proofs of most of the results mentioned in this paragraph use either a variant of Lemma~\ref{lemma:KW-basic} or extensions of the ideas presented in Section~\ref{sec:KW} to hypergraphs; see Section~\ref{sec:extensions-to-hypergraphs}. \begin{proof}[{Proof of Theorem~\ref{thm:KlWi-C4}}] Note that one can order the vertices of every $n$-vertex graph $G$ as $v_1, \ldots, v_n$ in such a way that for every $i \in \{2, \ldots, n\}$, letting $G_i = G[\{v_1, \ldots, v_i\}]$, \[ \delta(G_{i-1}) \ge \deg_{G_i}(v_i) - 1. \] Indeed, one may obtain such an ordering by iteratively letting $v_i$ be a minimum-degree vertex of $G - \{v_{i+1}, \ldots, v_n\}$ for $i = n, \ldots, 2$. In particular, every labeled $n$-vertex graph $G$ can be constructed in the following way. First, choose an ordering $v_1, \ldots, v_n$ of the vertices and let $G_1$ be the empty graph with vertex set~$\{v_1\}$.
Second, for each $i \in \{2, \ldots, n\}$, build a graph $G_i$ by adding to the graph $G_{i-1}$ a vertex labeled $v_i$ in such a way that its degree $d_i$ (in $G_i$) satisfies $d_i \le \delta(G_{i-1})+1$. Finally, we let $G = G_n$. Observe that $G$ is $C_4$-free if and only if $G_i$ is $C_4$-free for each $i$. Now, given integers~$d$ and $i$ with $d \le i$, let $g_i(d)$ denote the maximum number of ways to attach a vertex of degree $d$ to an $i$-vertex $C_4$-free graph with minimum degree at least $d-1$ in such a way that the resulting graph remains $C_4$-free. This number is well defined as clearly $g_i(d) \le \binom{i}{d}$. Moreover, let $g_i = \max \{g_i(d) \colon d \le i\}$. The argument given in the previous paragraph proves that \begin{equation} \label{eq:fnC4-crude} f_n(C_4) \le n! \cdot n! \cdot \prod_{i=2}^n g_{i-1}. \end{equation} Indeed, there are $n!$ ways to order $[n]$ as $v_1, \ldots, v_n$ and for each such ordering, there are at most $n!$ choices for the sequence $d_2, \ldots, d_n$ of degrees. In view of~\eqref{eq:fnC4-crude}, the following claim easily implies the assertion of the theorem. \begin{claim} There exists a constant $C$ such that $g_n \le \exp(C\sqrt{n})$ for all $n$. \end{claim} Without loss of generality, we may assume that $n$ is large. Thus, if $d \le \sqrt{n} / \log n$, then \[ g_n(d) \le \binom{n}{d} \le \binom{n}{\frac{\sqrt{n}}{\log n}} \le \left(e\sqrt{n}\log n\right)^{\frac{\sqrt{n}}{\log n}} \le \exp(\sqrt{n}). \] Therefore, we shall from now on assume that $d > \sqrt{n} / \log n$. Let $G$ be a $C_4$-free graph on $n$ vertices with $\delta(G) \ge d-1$. Let $H$ be the square of $G$, that is, the graph with $V(H) = V(G)$ and \[ E(H) = \{xy \colon xz, yz \in E(G) \text{ for some $z \in V(G)$}\}. \] Crucially, observe that adding $v$ to $G$ will result in a $C_4$-free graph if and only if the neighbourhood of $v$ is an independent set in $H$.
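This equivalence between $C_4$-freeness of the extension and independence in the square graph can be verified directly on a small example; the following sketch (an illustrative aside, not part of the original text) checks it exhaustively for the $5$-cycle:

```python
import itertools

# 5-cycle: a C4-free graph (its girth is 5).
n = 5
G = {frozenset((i, (i + 1) % n)) for i in range(n)}

def has_C4(edges, verts):
    """Detect a 4-cycle by brute force over ordered 4-tuples of vertices."""
    for a, b, c, d in itertools.permutations(verts, 4):
        if all(frozenset(p) in edges for p in [(a, b), (b, c), (c, d), (d, a)]):
            return True
    return False

# Square graph H: pairs of vertices with a common neighbour in G.
H = {frozenset((x, y))
     for x, y in itertools.combinations(range(n), 2)
     if any(frozenset((x, z)) in G and frozenset((y, z)) in G for z in range(n))}

# Adding a new vertex v with neighbourhood N keeps the graph C4-free
# if and only if N is an independent set in H.
v = n
for k in range(n + 1):
    for N in itertools.combinations(range(n), k):
        extended = G | {frozenset((v, x)) for x in N}
        stays_C4_free = not has_C4(extended, range(n + 1))
        N_indep_in_H = all(frozenset(p) not in H
                           for p in itertools.combinations(N, 2))
        assert stays_C4_free == N_indep_in_H
print("verified on C5 for all", 2 ** n, "neighbourhoods")
```

For $C_5$, the square $H$ is the pentagram on the same vertex set, and the check runs over all $2^5$ candidate neighbourhoods.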
Hence, $i(H,d)$ is an upper bound on the number of $C_4$-free extensions of $G$ by a vertex of degree $d$. We shall estimate $i(H, d)$ using Lemma~\ref{lemma:KW-basic}. To this end, we show that subgraphs of $H$ induced by large subsets of $V(H)$ have reasonably high density. Since $G$ is $C_4$-free, every edge $xy$ of $H$ corresponds to a unique vertex $z \in V(G)$ such that $xz$ and $yz$ are edges of $G$. Therefore, for each $B \subseteq V(H)$, \[ e_H(B) = \sum_{z \in V(G)} \binom{\deg_G(z,B)}{2} \ge n \cdot \binom{\sum_z \deg(z,B) / n}{2}, \] where the last inequality is Jensen's inequality applied to the convex function $x \mapsto \binom{x}{2}$. Since \[ \sum_{z \in V(G)}\deg_G(z,B) = \sum_{x \in B}\deg_G(x) \ge |B| \cdot \delta(G) \ge (d-1)|B|, \] the assumption $|B| \ge \frac{2n}{d-1}$ implies that \[ e_H(B) \ge n \cdot \frac{(d-1)|B|}{2n} \left(\frac{(d-1)|B|}{n} - 1\right) \ge \frac{(d-1)^2}{2n}\binom{|B|}{2}. \] Finally, let $R = \frac{2n}{d-1}$, $\beta = \frac{(d-1)^2}{2n}$, and $q = \lceil 3(\log n)^3\rceil$. Since $d > \sqrt{n}/\log n$ and $n$ is large, we have $\beta q \ge \log n$ and therefore $e^{-\beta q} n \le 1 \le R$. It follows from Lemma~\ref{lemma:KW-basic} that \[ i(H, d) \le \binom{n}{q} \binom{\frac{2n}{d-1}}{d-q} \le e^{4\log^4n} \cdot \left(\frac{2en}{(d-q)^2}\right)^{d-q} \le \sup_{k > 0} \left(\frac{e\sqrt{n}}{k}\right)^{2k} = e^{2\sqrt{n}}, \] where we used the assumption that $n$ is large and the fact that $\sup\left\{\left(\frac{e}{x}\right)^x \colon x > 0\right\} = e$. \end{proof} \subsection{Roth's theorem in random sets} As our final example, we present a short proof of a well-known result of Kohayakawa, \L uczak, and R\"odl~\cite{KoLuRo96}. Recall that $[n]$ denotes the set $\{1, \ldots, n\}$.
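As a computational aside (not part of the original text), the $3$-term AP condition and the associated counting quantities used throughout this subsection can be checked by brute force for very small $n$:

```python
import itertools

def contains_3ap(s):
    """True if the set s contains a non-trivial 3-term arithmetic progression."""
    s = set(s)
    return any((a + c) % 2 == 0 and (a + c) // 2 in s
               for a, c in itertools.combinations(s, 2))

def count_3ap_free(B, m):
    """Number of m-element subsets of B containing no 3-term AP."""
    return sum(1 for A in itertools.combinations(B, m)
               if not contains_3ap(A))

assert contains_3ap({1, 2, 3})          # 1, 2, 3 is a 3-term AP
assert not contains_3ap({1, 2, 4, 5})   # no 3-term AP here
print(count_3ap_free(range(1, 11), 4))  # 3-AP-free 4-subsets of [10]
```

Such exhaustive counts are of course only feasible for tiny $n$; the theorems below control these quantities asymptotically.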
A famous theorem of Roth~\cite{Ro53} asserts that for every positive $\delta$, any set of at least $\delta n$ integers from~$[n]$ contains a $3$-term arithmetic progression ($3$-term AP), provided that $n$ is sufficiently large (as a~function of $\delta$ only). Given a positive $\delta$, we shall say that a set $A \subseteq \mathbb{Z}$ is \emph{$\delta$-Roth} if each $B \subseteq A$ satisfying $|B| \ge \delta |A|$ contains a $3$-term AP. We may now restate Roth's theorem as follows. For every positive $\delta$, there exists an $n_0$ such that the set $[n]$ is $\delta$-Roth whenever $n \ge n_0$. With the aim of showing that there exist `smaller' and `sparser' $\delta$-Roth sets, Kohayakawa, \L uczak, and R\"odl~\cite{KoLuRo96} proved the following result. \begin{thm}[{\cite{KoLuRo96}}] \label{thm:KoLuRo} For every positive $\delta$, there exists a constant $C$ such that if $C\sqrt{n} \le m \le n$, then the probability that a uniformly chosen random $m$-element subset of $\{1, \ldots, n\}$ is $\delta$-Roth tends to~$1$ as $n \to \infty$. \end{thm} We shall deduce Theorem~\ref{thm:KoLuRo} as an easy corollary of the following upper bound for the number of subsets of $[n]$ of a given cardinality that do not contain a $3$-term AP, originally proved in~\cite{BaMoSa14} and~\cite{SaTh14} in a much more general form. This upper bound will be derived from Roth's theorem using Lemma~\ref{lemma:KW-containers} with one additional twist which was previously considered in~\cite{AlBaMoSa14-AG}. \begin{thm} \label{thm:3AP-free-count} For every positive $\varepsilon$, there exists a constant $D$ such that if $D\sqrt{n} \le m \le n$, \[ \left|\big\{A \subseteq [n] \colon \text{$|A| = m$ and $A$ contains no $3$-term AP}\big\}\right| \le \binom{\varepsilon n}{m}. \] \end{thm} \begin{proof}[Proof of Theorem~\ref{thm:KoLuRo}] Fix a positive $\delta$, let $\varepsilon = \delta / 6$, and let $D$ be the constant from the statement of Theorem~\ref{thm:3AP-free-count}.
Let $C = D / \delta$ and suppose that $C \sqrt{n} \le m \le n$. Since $\lceil \delta m \rceil \ge D \sqrt{n}$, Theorem~\ref{thm:3AP-free-count} implies that the set $\mathcal{A}$ defined by \[ \mathcal{A} = \big\{ A \subseteq [n] \colon \text{$|A| = \lceil \delta m \rceil$ and $A$ contains no $3$-term AP}\big\} \] has at most $\binom{\varepsilon n}{\lceil \delta m \rceil}$ elements. Now, let $R$ be an $m$-element subset of $[n]$ chosen uniformly at random. Clearly, \[ \begin{split} \Pr\big(\text{$R$ is not $\delta$-Roth}\big) & = \Pr\big(\text{$R \supseteq A$ for some $A \in \mathcal{A}$}\big) \le \sum_{A \in \mathcal{A}} \Pr(R \supseteq A) \le \sum_{A \in \mathcal{A}} \left(\frac{m}{n}\right)^{|A|} \\ & = |\mathcal{A}| \cdot \left(\frac{m}{n}\right)^{\lceil \delta m\rceil} \le \binom{\varepsilon n}{\lceil \delta m \rceil} \cdot \left(\frac{m}{n}\right)^{\lceil \delta m\rceil} \le \left(\frac{\varepsilon e n}{\lceil \delta m \rceil} \cdot \frac{m}{n} \right)^{\lceil \delta m \rceil} \le 2^{-\delta m}.\qedhere \end{split} \] \end{proof} Our proof of Theorem~\ref{thm:3AP-free-count} will use the following simple consequence of Roth's theorem, observed first by Varnavides~\cite{Va59}, as a `black box'. \begin{prop}[{\cite{Ro53,Va59}}] \label{prop:Varnavides} For every positive $\delta$, there exist an integer $n_0$ and a positive $\beta$ such that if $n \ge n_0$, then every set of at least $\delta n$ integers from $\{1, \ldots, n\}$ contains at least $\beta n^2$ $3$-term APs. \end{prop} \begin{proof}[Proof of Theorem~\ref{thm:3AP-free-count}] Fix a positive $\varepsilon$, let $n_0$ and $\beta$ be the constants from the statement of Proposition~\ref{prop:Varnavides} invoked with $\delta = \varepsilon / 2$, and suppose that $n \ge n_0$. 
Given an arbitrary set $B \subseteq [n]$ and integers~$m$ and $n'$, let \begin{align*} a(B,m) & = \left|\big\{ I \subseteq B \colon \text{$|I| = m$ and $I$ contains no $3$-term AP}\big\}\right|, \\ a(n',m) & = \max \big\{ a(B,m) \colon \text{$B \subseteq [n]$ with $|B| = n'$} \big\}. \end{align*} Our aim is to show that $ a([n],m) = a(n,m)\le \binom{\varepsilon n}{m}$, provided that $m \ge D \sqrt{n}$ for some constant $D$ which depends only on $\varepsilon$. This inequality will follow from the trivial observation that $a(n',m) \le \binom{n'}{m}$ for all $n'$ and $m$ and the following claim. \begin{claim} If $n' \ge \varepsilon n / 2$ and $m \ge 2\lfloor\sqrt{n}\rfloor$, then $a(n',m) \le 2 \binom{n}{\lfloor\sqrt{n}\rfloor}^2 \cdot a\big(n' - \lceil \beta n/12 \rceil, m-2\lfloor\sqrt{n}\rfloor\big)$. \end{claim} Let $\mathcal{H}$ be the $3$-uniform hypergraph with vertex set $[n]$ whose edges are all triples of numbers which form a $3$-term AP. Let $B$ be an arbitrary $n'$-element subset of $[n]$. By Proposition~\ref{prop:Varnavides}, $e_{\mathcal{H}}(B) \ge \beta n^2$. Let $Z \subseteq B$ be the set of all vertices of $\mathcal{H}[B]$, the subhypergraph of $\mathcal{H}$ induced by~$B$, whose degree is at least $\beta n$. In other words, $Z$ is the set of all numbers in $B$ that belong to at least $\beta n$ three-term APs contained in $B$. Since the maximum degree of $\mathcal{H}$ is at most $2n$, we have $|Z| \ge \beta n$. We first estimate the number of $m$-element subsets of $B$ with no $3$-term AP that contain fewer than $\sqrt{n}$ elements of $Z$. Since each such set $A$ may be partitioned into $A_1$ and $A_2$, where $|A_1| = \lfloor \sqrt{n} \rfloor$ and $A_2 \subseteq B \setminus Z$, there are at most $\binom{n}{\lfloor \sqrt{n} \rfloor} \cdot a(n' - \lceil \beta n \rceil, m - \lfloor \sqrt{n} \rfloor)$ such sets. We may therefore focus on counting subsets of $B$ that contain at least $\sqrt{n}$ elements of $Z$.
We shall obtain a suitable upper bound for their number using Lemma~\ref{lemma:KW-containers}. Let $W$ be an arbitrary subset of $Z$ and consider the auxiliary graph $G_W$ with vertex set $B$ whose edges are all pairs $\{x,y\}$ such that $\{x,y,z\} \in \mathcal{H}$ for some $z \in W$. Since for a given pair $\{x, y\} \subseteq [n]$, there are at most three different $z$ such that $\{x,y,z\} \in \mathcal{H}$, it follows that $e(G_W) \ge |W| \beta n/3$ and the maximum degree of $G_W$ is no more than $3|W|$. It follows that for an arbitrary subset $U$ of $B$ with at least $n' - \beta n /12$ elements, \begin{equation} \label{eq:eGW-U} e_{G_W}(U) \ge e(G_W) - |B \setminus U| \cdot \Delta(G_W) \ge \frac{\beta n |W|}{3} - \frac{\beta n}{12} \cdot 3|W| = \frac{\beta n |W|}{12}. \end{equation} Observe crucially that if some set $I \cup W$ contains no $3$-term APs, then $I$ is an independent set in the graph $G_W$. Let $w = \lfloor \sqrt{n} \rfloor$ and fix some $W \subseteq Z$ with $|W| = w$. We shall prove an upper bound on the~number of ways one can extend $W$ to an $m$-element subset of $B$ that contains no $3$-term APs. By our above discussion, if $I \cup W$ is such a set, then $I$ is an independent set of $G_W$ with $m-w$ elements. Let $\mathcal{S}$ be the family of sets and let $f$ and $g$ be the maps whose existence is postulated by Lemma~\ref{lemma:KW-containers} with $G = G_W$, $q = \lfloor \sqrt{n} \rfloor$, $R = n' - \lceil \beta n /12 \rceil$, and $D = \beta w / 6$. Note that the assumptions of the lemma are satisfied by our discussion above, see~\eqref{eq:eGW-U}. Since clearly for each extension $I$ of $W$ to an $m$-element subset of $B$ with no $3$-term APs, $I \cap f(g(I))$ contains no $3$-term APs, the~number $E_W$ of extensions of $W$ satisfies \[ E_W \le \sum_{S \in \mathcal{S}} a\big(f(S), m-w-q\big) \le \binom{n}{q} \cdot a\big(R, m-w-q\big). 
\] We conclude that \[ \begin{split} a(B,m) & \le \binom{n}{\lfloor\sqrt{n}\rfloor} \cdot a\big(n' - \lceil \beta n \rceil, m - \lfloor\sqrt{n}\rfloor\big) + \sum_{W \subseteq Z \colon |W| = w} E_W \\ & \le \binom{n}{\lfloor\sqrt{n}\rfloor}^2 \cdot a\big(n' - \lceil \beta n \rceil, m - 2\lfloor\sqrt{n}\rfloor\big) + \binom{n}{w}\binom{n}{q} \cdot a\big(n'-\lceil \beta n /12 \rceil, m-2\lfloor\sqrt{n}\rfloor\big) \\ & \le 2 \binom{n}{\lfloor\sqrt{n}\rfloor}^2 \cdot a\big(n' - \lceil \beta n /12 \rceil, m - 2\lfloor \sqrt{n} \rfloor\big), \end{split} \] which, since $B$ was an arbitrary $n'$-element subset of $[n]$, proves the claim. \medskip Let $K = \lceil (12-6\varepsilon)/\beta \rceil$ and suppose that $m \ge \sqrt{n}$. We recursively invoke the claim $K$ times to obtain \begin{equation} \label{eq:anm-unprocessed} a(n,m) \le 2^K \binom{n}{\lfloor \sqrt{n} \rfloor}^{2K} \binom{\varepsilon n/2}{m-2K\lfloor \sqrt{n} \rfloor} \le 2^K \binom{2Kn}{2K\lfloor \sqrt{n} \rfloor} \binom{\varepsilon n/2}{m-2K\lfloor \sqrt{n} \rfloor}. \end{equation} As in the proof of Theorem~\ref{thm:eigenvalue}, denote by $r(t)$ the right-hand side of~\eqref{eq:anm-unprocessed} with $2K \lfloor \sqrt{n} \rfloor$ replaced by~$t$. We may clearly assume that $m < \varepsilon n/4$ as otherwise $a(n,m) = 0$ by Roth's theorem (we may assume that $n$ is sufficiently large). An~elementary calculation shows that \[ \frac{r(t+1)}{r(t)} = \frac{2Kn-t}{t+1} \cdot \frac{m - t}{\varepsilon n /2 -m + t + 1} \le \frac{2Knm}{(t+1)(\varepsilon n / 2 - m)} \le \frac{8Km}{\varepsilon(t+1)} \] and hence, letting $T = 2K \lfloor \sqrt{n} \rfloor$, \[ a(n,m) \le r(T) \le 2^K \cdot \frac{(8Km)^T}{\varepsilon^T T!} \cdot \binom{\varepsilon n/2}{m} \le 2^K \cdot \left(\frac{8eKm}{\varepsilon T}\right)^T \cdot \left(\frac{1}{2}\right)^m \binom{\varepsilon n}{m}. 
\] Finally, if $D$ is sufficiently large as a function of $K$ and $\varepsilon$, then for every $m$ with $m \ge D\sqrt{n} \ge D/(2K) \cdot T$, we have \[ 2^{K/m} \cdot \left(\frac{8eKm}{\varepsilon T}\right)^{T/m} \le 2, \] which completes the proof of the theorem. \end{proof} \section{Concluding remarks and further reading} \subsection{Other applications of the Kleitman--Winston method} There have been quite a few successful applications of the Kleitman--Winston method other than the ones presented in Section~\ref{sec:applications}. In particular, variants of Lemma~\ref{lemma:KW-basic} were used in the following works: Kleitman and Wilson~\cite{KlWi96} proved that the number of $n$-vertex graphs with girth larger than $2\ell$ is $2^{O(n^{1+1/\ell})}$; Dellamonica, Kohayakawa, Lee, R\"odl, and the author~\cite{DeKoLeRoSa14-B3, DeKoLeRoSa14-Bh, KoLeRoSa14} proved sharp bounds on the number of subsets of $[n]$ with a given cardinality which contain no non-trivial solutions to the equation $a_1 + \ldots + a_h = b_1 + \ldots + b_h$ for every $h \ge 2$; Balogh, Das, Delcourt, Liu, and Sharifzadeh~\cite{BaDaDeLiSh14} and Gauy, H\`an, and Oliveira~\cite{GaHaOl14} proved sharp bounds for the number of intersecting families of $k$-element subsets of $[n]$ with a given cardinality and for the typical size of the largest intersecting subfamily contained in a random collection of $k$-element subsets of $[n]$. \subsection{Extensions of the Kleitman--Winston method to hypergraphs} \label{sec:extensions-to-hypergraphs} It seems natural to seek a generalisation of the Kleitman--Winston method that would yield non-trivial upper bounds for the number of independent sets in hypergraphs of higher uniformity. Perhaps somewhat surprisingly, such generalisations were considered only fairly recently.
To the best of our knowledge this was first done in~\cite{BaSa11-Kmm,BaSa11-Kst}, where sharp upper bounds for the number of $n$-vertex graphs which do not contain a copy of a fixed complete bipartite subgraph were proved using a generalisation of the argument presented in Section~\ref{sec:number-C4-free-graphs}. Around the same time, similar ideas were developed by Saxton and Thomason, who used them to establish lower bounds for the list chromatic number of regular uniform hypergraphs~\cite{SaTh12}. Inspired by the groundbreaking work of Conlon and Gowers~\cite{CoGo14} and Schacht~\cite{Sc14}, these efforts culminated in far-reaching generalisations of the Kleitman--Winston method to arbitrary uniform hypergraphs, obtained independently by Saxton and Thomason~\cite{SaTh14}, and by Balogh, Morris, and the author~\cite{BaMoSa14}. For further details, we refer the interested reader to~\cite{BaMoSa14, Co14, CoGo14, Sa14, SaTh14, Sc14}. \bigskip \noindent \textbf{Acknowledgments.} I would like to thank Noga Alon, J\'ozsi Balogh, Domingos Dellamonica, Yoshi Kohayakawa, Sang June Lee, Rob Morris, and Vojta R\"odl for many interesting discussions on the topics of independent sets in graphs and the Kleitman--Winston method and its applications over the past several years. These discussions have greatly influenced the content of this paper. I would also like to thank David Conlon, Asaf Ferber, and Rob Morris for their careful reading of an earlier version of this manuscript and many valuable comments which helped me improve the exposition and saved me from making several embarrassing mistakes. Finally, special thanks to Jarik Ne\v{s}et\v{r}il for his encouragement to write this survey. \bibliographystyle{amsplain}
\section{Boundary entropy of topological loop-gasses}\label{sec:boundaryentropy} We now turn to the computation of the boundary diagnostics from \cref{sec:entropydiagnostics}. As before, we begin with Levin-Wen models. \subsection{Levin-Wen models} \begin{thm}[Topological entropy of (2+1)D Levin-Wen models at a boundary]\ \\ Consider the regions shown in \cref{fig:LWregionsbnd}. The Levin-Wen model defined by a unitary spherical fusion category $\mathcal{C}$, with boundary specified by an indecomposable, strongly separable, special Frobenius algebra $A\in\mathcal{C}$, has boundary entropy \begin{align} \Gamma & =\log\mathcal{D}^2, \end{align} where $\mathcal{D}$ is the total quantum dimension of $\mathcal{C}$. \label{thm:LWbnd} \end{thm} \begin{examples*} Recall the examples from \cref{sec:examples}. As discussed in \cref{sec:examples_phys}, these label two distinct loop-gas models in (2+1)-dimensions: the toric code and the double semion. The toric code has two possible boundary conditions, while the double semion only allows for the trivial boundary. All boundaries have $\Gamma=\log 2$. \end{examples*} Recall that a boundary for a Levin-Wen model defined by $\mathcal{C}$ is specified by an algebra object $A\in\mathcal{C}$. The algebra encodes the strings that can terminate on the boundary. This interpretation leads us to the following lemma. \begin{lemma}[Entropy of (union of) simply connected regions, with boundary]\label{lem:bndLW} On a region $R$ consisting of the disjoint union of simply connected sub-regions, the entropy is \begin{align} S_R & =nS[\mathcal{C}]+\frac{b_1}{2}\log d_A-b_0\log\mathcal{D}^2,\label{eqn:WWtrivialent} \end{align} where $b_0$ is the number of disjoint interface components of $R$, $b_1$ is the number of points where the entanglement surface intersects the physical boundary, and $n$ is the number of links crossing the entanglement interface.
\begin{proof} Consider a ball $R$ with $n$ sites along the interface, which is in contact with the boundary. Recall that in the bulk, the fusion of the strings crossing the boundary was required to be $1$. In the presence of the boundary, this conservation rule is modified, since loops can terminate. All that is now required is that the fusion is in $A$ \begin{align} \begin{array}{c} \includeTikz{tree_bnd}{ \begin{tikzpicture} \draw(0,0)--(1.75,1.75); \draw[dotted](1.75,1.75)--(2,2); \draw(2,2)--(3,3); \draw[blue!20] (2.755,2.755)--(3,3); \begin{scope} \clip (0,0)--(3,3)--(6,3)--(6,0)--(0,0); % \draw(1,0)--(0,1); \draw(2,0)--(0,2); \draw(3,0)--(0,3); \draw(4.5,0)--(0,4.5); \draw(5.5,0)--(0,5.5); \end{scope}; \node[below] at (0,0) {$x_1$}; \node[below] at (1,0) {$x_2$}; \node[below] at (2,0) {$x_3$}; \node[below] at (3,0) {$x_4$}; \node[below] at (4.5,0) {$x_{n-1}$}; \node[below] at (5.5,0) {$x_{n}$}; \node[above right,blue!50] at (3,3) {$a\in A$}; \node[above left] at (.75,.75) {$y_1$}; \node[above left] at (1.25,1.25) {$y_2$}; \node[above left] at (2.5,2.5) {$y_{n-2}$}; \node[below] at (.5,.5) {\tiny{$\mu_1$}}; \node[below] at (1,1) {\tiny{$\mu_2$}}; \node[below] at (1.5,1.5) {\tiny{$\mu_3$}}; \node[right] at (2.25,2.25) {\tiny{$\mu_{n-2}$}}; \end{tikzpicture} } \end{array},\label{eqn:treeA_bnd} \end{align} The ground state can be decomposed as \begin{align} \ket{\psi} & =\sum_{\substack{\vec{x},\vec{y},\vec{\mu} \\ a\in A}} \Phi_{\vec{x},\vec{y},\vec{\mu},a} \ket{ \psi_R^{\vec{x},\vec{y},\vec{\mu},a} }\ket{ \psi_{\comp{R}}^{\vec{x},\vec{y},\vec{\mu},a} }.\label{eqn:simpleregionpartition_bnd} \end{align} As before, the state $\ket{ \psi_R^{\vec{x},\vec{y},\vec{\mu},a} }$ includes any state that can be reached from \cref{eqn:treeA_bnd} by acting only on $R$. 
The reduced state on $R$ is \begin{align} \rho_R & =\sum_{\substack{\vec{x},\vec{y},\vec{\mu} \\a\in A}} |\Phi_{\vec{x},\vec{y},\vec{\mu},a}|^2 \ketbra{ \psi_R^{\vec{x},\vec{y},\vec{\mu},a} }\\ & =\sum_{\substack{\vec{x},\vec{y},\vec{\mu} \\a\in A}} \Pr[\vec{x},\vec{y},\vec{\mu}|a]\Pr[a] \ketbra{ \psi_R^{\vec{x},\vec{y},\vec{\mu},a} }, \end{align} where $\Pr[\vec{x},\vec{y},\vec{\mu}|a]$ is the probability of the labeled tree, given that $\vec{x}$ fuses to $a$, and $\Pr[a\in A]=d_a/d_A$. Therefore, \begin{align} \rho_R & =\sum_{\substack{\vec{x},\vec{y},\vec{\mu} \\a\in A}} \frac{\prod_{j\leq n} d_{x_j}}{\mathcal{D}^{2(n-1)}d_A} \ketbra{ \psi_R^{\vec{x},\vec{y},\vec{\mu},a} }. \end{align} Applying \cref{lem:summingds,lem:sumlog} completes the proof for this region. It is straightforward to check that this holds on each sub-region of $R$, where \cref{lem:bulkLW} is used for any bulk sub-region. \end{proof} \end{lemma} Applying \cref{lem:bndLW} to the regions in \cref{fig:LWregionsbnd} completes the proof of \cref{thm:LWbnd}. We can make sense of this halving of the entropy by considering folding the plane. Suppose we fold the model in \cref{fig:LWregionsblk} so that it resembles \cref{fig:LWregionsbnd}. This turns the bulk of a model defined by $\mathcal{C}$ to a boundary of a model labeled by $\mathcal{C}\boxtimes\mathcal{C}^{\rev}$. The quantum dimension of the folded theory is $\mathcal{D}_{\mathcal{C}\boxtimes\mathcal{C}^{\rev}}=\mathcal{D}_{\mathcal{C}}^2$, so the bulk diagnostic for $\mathcal{C}$ matches the boundary diagnostic computed for this folded theory. \subsection{Walker-Wang models} In 3D, just like in 2D, strings can terminate at the boundary. In addition, loops can interlock as discussed in \cref{ss:WWbulkpf}. In the vicinity of the boundary, these two effects can occur simultaneously as depicted in \cref{fig:bound_torus_A}. 
For simply connected regions in contact with a boundary, we can apply \cref{lem:bndLW}, replacing $b_1/2$ with the number of lines where the region contacts the physical boundary. By applying the results so far, it is straightforward to check that the two diagnostics \cref{eqn:WWptdef} and \cref{eqn:WWloopdef} are related by \begin{align} \Delta_{\circ} & =\Delta_{\bullet}+\log d_A^2-\log\mathcal{D}^2, \end{align} so we only need to consider $\Delta_{\bullet}$. We are currently unable to compute this in general; however, in this section we prove the following results: \begin{thm}\label{thm:wwbnd} For a Walker-Wang model defined by a unitary premodular category $\mathcal{C}$, the entropy diagnostic $\Delta_{\bullet}$ for a boundary labeled by an indecomposable, strongly separable, commutative Frobenius algebra $A$ is given by \begin{align}\hspace*{-2mm} \Delta_{\bullet} & = \begin{cases} \log\mathcal{D}^2 & A=1, \\ \log\mathcal{D}^2-\log d_A & \mathcal{C}\text{ symmetric}, \\ \log\mathcal{D}^2-2\log d_A & \mathcal{C}\text{ pointed and }\mug{\mathcal{C}}\cap A=\{1\}.\text{ In particular }\mathcal{C}\text{ modular.} \\ \log\mathcal{D}^2-\log d_A & \mathcal{C}\text{ pointed and }\mug{\mathcal{C}}\cap A=A. \end{cases} \end{align} \end{thm} \begin{examples*} Recall the examples from \cref{sec:examples}. As discussed in \cref{sec:examples_phys}, these label four distinct loop-gas models in (3+1)-dimensions: the bosonic and fermionic toric code models, and two semion models. All four input categories are pointed, so we can apply \cref{thm:wwbnd} to obtain the boundary TEE. The bosonic toric code is compatible with two distinct gapped boundary conditions, labeled by $A_0$ and $A_1$ (see \cref{sec:examples}), with $d_{A_0} = d_1 = 1$, and $d_{A_1} = d_1+d_x=2$. Since the input category is symmetric, $\mug{\mathcal{C}}\cap A_i=A_i$, so the entropy diagnostics are $\Delta_{\bullet}(A_0) = \log 2 - \log 1=\log2$ and $\Delta_{\bullet}(A_1) = \log 2 - \log 2=0$.
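These example values follow from elementary arithmetic; the following sketch (an illustrative aside, not part of the original text) encodes the symmetric case of \cref{thm:wwbnd} together with the relation between $\Delta_{\circ}$ and $\Delta_{\bullet}$ stated above:

```python
from math import log, isclose

# Bosonic toric code input: two simple objects {1, x}, both of quantum
# dimension 1, so the total quantum dimension satisfies D^2 = 1 + 1 = 2.
D2 = 2

def delta_bullet(d_A):
    """Symmetric case of the theorem: Delta_bullet = log D^2 - log d_A."""
    return log(D2) - log(d_A)

def delta_circ(d_A):
    """Relation stated above: Delta_circ = Delta_bullet + log d_A^2 - log D^2."""
    return delta_bullet(d_A) + 2 * log(d_A) - log(D2)

# Boundary algebras A_0 (d_A = 1) and A_1 = 1 + x (d_A = 2).
assert isclose(delta_bullet(1), log(2)) and isclose(delta_bullet(2), 0.0)
assert isclose(delta_circ(1), 0.0) and isclose(delta_circ(2), log(2))
```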
For the remaining examples, only the boundary labeled by $A_0$ is compatible, and $\Delta_{\bullet} = \log2$ in all cases. \end{examples*} \begin{figure} \centering $\begin{array}{c} \includeTikz{bound_torus_A}{ \begin{tikzpicture}[scale=.75] \draw[black!10] (0,0,0) -- (0,0,2) -- (2,0,2) -- (2,0,0) -- cycle; \draw[black!10] (2,0,2) -- (2,-1.5,2)-- (2,-1.5,0) -- (2,0,0); \draw[thick, dotted, blue!80] (3.5,-0.75,-0.5) -- (3.5,-3.5,-0.5) -- (3.5,-3.5,2.5); \draw[draw=black!10,fill=white] (2,-1.5,0) -- (5,-1.5,0) -- (5,-1.5,2) -- (2,-1.5,2) -- cycle; \draw[draw=black!10,fill=white] (0,0,2) -- (0,-2.75,2)-- (7,-2.75,2) -- (7,0,2) -- (5,0,2) -- (5,-1.5,2) -- (2,-1.5,2) -- (2,0,2) -- cycle; \draw[black!10] (7,-2.75,2) -- (7,-2.75,0) -- (7,0,0); \draw[black!10] (5,0,0) -- (5,0,2) -- (7,0,2) -- (7,0,0) -- cycle; \draw[thick, dashed, darkred] (1,0,1) -- (1,-2.5,1)-- (6,-2.5,1)-- (6,0,1); \draw[fill=blue,opacity=.1] (5,0,0) -- (5,0,2) -- (7,0,2) -- (7,0,0) -- cycle; \draw[fill=blue, opacity = 0.1] (0,0,0) -- (0,0,2) -- (2,0,2) -- (2,0,0) -- cycle; \draw[thick, dotted, blue!80] (3.5,-0.75,-0.5) -- (3.5,-0.75,2.5) -- (3.5,-3.5,2.5); \draw[thick,dotted,orange] (3.5,-3.5,1)--(3.5,-2.5,1); \draw[thick,dotted,orange] (3.5,0,1)--(3.5,-0.75,1); \end{tikzpicture} }\end{array}$ \caption{When the region $R$ is in non-simple contact with a boundary on which strings can terminate, the computation of entropy is more subtle. There is additional entanglement in the system due to intersecting loops that cannot be created in $R$ (red, dashed) or $\comp{R}$ (blue, dotted) separately. The red (internal) strings can terminate on the boundary. Also, the blue loop can emit a string which can terminate on the boundary. 
} \label{fig:bound_torus_A} \end{figure} \begin{proof} To capture configurations like the one in \cref{fig:bound_torus_A}, we need new, boundary $\S$-matrices resembling \begin{align} \frac{1}{\mathcal{D}} \begin{array}{c} \includeTikz{linkedSmatrixalgebra_1}{ \begin{tikzpicture}[scale=.75] \centerarc[draw=white,double=black,ultra thick](-.75,0)(0:180:1); \centerarc[draw=white,double=black,ultra thick](.75,0)(180-60:360+45:1) \centerarc[draw=white,double=black,ultra thick](-.75,0)(180:360:1) \node[anchor=west,inner sep=.5] at(1.75,0) {\strut$a_0$}; \node[anchor=west,inner sep=.5] at(.25,0) {\strut$b_0$}; \node[anchor=east,inner sep=.5] at(-.25,0) {\strut$a_1$}; \node[anchor=east,inner sep=.5] at(-1.75,0) {\strut$b_1$}; \draw[thick] (-.75,-1)to[out=270,in=270](.75,-1); \node[anchor=east,inner sep=.5] at(-.75,1.15) {\strut$d$}; \node[anchor=east,inner sep=.5] at(-.75,-1.25) {\strut$c$}; \node[anchor=west,inner sep=.5] at(.75,-1.25) {\strut$\dual{c}$}; \draw[thick] (-.75,1)--(-.75,1.5); \fill[blue!20] (-.75,1.5) circle (.1); \fill[blue!20] ($(.75,0)+({cos(180-60)},{sin(180-60)})$) circle (.1); \fill[blue!20] ($(.75,0)+({cos(360+45)},{sin(360+45)})$) circle (.1); \end{tikzpicture} } \end{array},\label{eqn:bndSmatrix1} \end{align} where the dots indicate where a string meets the boundary. We use boundary retriangulation invariance, as defined in \onlinecites{1706.00650,1706.03329} to evaluate this diagram on the ground space. 
Using this, we define the new $\S$-matrix elements as \begin{align} \left[\S_{c,d}\right]_{(b_0,b_1),(a_0,a_1)}:= & \frac{m_{a_1,a_0}^{\dual{d}}}{(d_{a_0}d_{a_1}d_{d})^{1/4}d_A\mathcal{D}} \begin{array}{c} \includeTikz{linkedSmatrixalgebra_2}{ \begin{tikzpicture}[scale=.75] \centerarc[draw=white,double=black,ultra thick](-.75,0)(0:180:1); \centerarc[draw=white,double=black,ultra thick](.75,0)(90:360+90:1) \centerarc[draw=white,double=black,ultra thick](-.75,0)(180:360:1) \node[anchor=west,inner sep=.5] at(1.75,0) {\strut$a_0$}; \node[anchor=west,inner sep=.5] at(.25,0) {\strut$b_0$}; \node[anchor=east,inner sep=.5] at(-.25,0) {\strut$a_1$}; \node[anchor=east,inner sep=.5] at(-1.75,0) {\strut$b_1$}; \draw[thick] (-.75,-1)to[out=270,in=270](.75,-1); \draw[thick] (-.75,1)to[out=90,in=90](.75,1); \node[anchor=east,inner sep=.5] at(-.75,1.25) {\strut$d$}; \node[anchor=west,inner sep=.5] at(.75,1.25) {\strut$\dual{d}$}; \node[anchor=east,inner sep=.5] at(-.75,-1.25) {\strut$c$}; \node[anchor=west,inner sep=.5] at(.75,-1.25) {\strut$\dual{c}$}; \end{tikzpicture} } \end{array},\label{eqn:bndSmatrix2} \end{align} where $a_0,a_1,d\in A$ and $b_0,b_1,c\in \mathcal{C}$. With this, the ground state can be written \begin{align} \ket{\psi}=\sum_{ \substack{\vec{x},\vec{y},\vec{\mu} \\ c,b_0,b_1\in\mathcal{C} \\ d,a_0,a_1\in A}} & \Phi_{\vec{x},\vec{y},\vec{\mu},c} \frac{\left[\S_{c,d}\right]_{(b_0,b_1),(a_0,a_1)}}{N_A} \ket{ \psi_R^{\vec{x},\vec{y},\vec{\mu},c,a_0,a_1} }\ket{ \psi_{\comp{R}}^{\vec{x},\vec{y},\vec{\mu},c,b_0,b_1,d} }, \end{align} where $N_A$ is a normalizing factor. The reduced state on $R$ is \begin{align} \rho_R=\sum_{ \substack{\vec{x},\vec{y},\vec{\mu} \\ c\in\mathcal{C} \\ d,a_0,a_1,a_2,a_3\in A}} & \frac{\prod_{j\leq n} d_{x_j}\left[\S_{c,d}^\dagger\S_{c,d}\right]_{(a_2,a_3),(a_0,a_1)}}{d_c\mathcal{D}^{2(n-1)}N_A}\ketbra{\psi_R^{\vec{x},\vec{y},\vec{\mu},c,a_0,a_1}}{\psi_R^{\vec{x},\vec{y},\vec{\mu},c,a_2,a_3}}. 
\end{align} \subsubsection{\texorpdfstring{$A=1$}{A=1}} When the algebra is trivial, no strings can terminate. In that case, $\S_{1,1}^\dagger\S_{1,1}=1$, so the reduced state is \begin{align} \rho_R & =\sum_{ \vec{x},\vec{y},\vec{\mu}} \frac{\prod_{j\leq n} d_{x_j}}{\mathcal{D}^{2(n-1)}} \ketbra{\psi_R^{\vec{x},\vec{y},\vec{\mu}}}{\psi_R^{\vec{x},\vec{y},\vec{\mu}}}, \end{align} which is diagonal and has entropy \begin{align} S_R & =n S[\mathcal{C}]-\log\mathcal{D}^2. \end{align} \subsubsection{\texorpdfstring{$\mathcal{C}$}{C} symmetric} When $\mathcal{C}$ is symmetric, the rings in \cref{eqn:bndSmatrix2} separate, so \begin{align} \left[\S_{c,d}\right]_{(b_0,b_1),(a_0,a_1)}= & \indicator{c=d}\sqrt{d_{b_0}d_{b_1}} \frac{(d_{a_0}d_{a_1})^{1/4}m_{a_1,a_0}^{\dual{d}}}{d_{d}^{1/4}d_A^2\mathcal{D}}, \\ \left[\S_{c,d}^\dagger\S_{c,d}\right]_{(a_2,a_3),(a_0,a_1)}= & \indicator{c=d}\sum_{b_0,b_1}N_{b_0b_1}^{d}\frac{d_{b_0}d_{b_1}}{\sqrt{d_d}} (d_{a_0}d_{a_1}d_{a_2}d_{a_3})^{1/4}\frac{m_{a_1,a_0}^{\dual{d}}\left(m_{a_3,a_2}^{\dual{d}}\right)^*}{d_A^4\mathcal{D}^2} \\ = & \indicator{c=d} (d_{a_0}d_{a_1}d_{a_2}d_{a_3})^{1/4}\sqrt{d_d} \frac{m_{a_1,a_0}^{\dual{d}}\left(m_{a_3,a_2}^{\dual{d}}\right)^*}{d_A^2}. \end{align} It can readily be verified that this matrix is rank 1. The eigenvalue can be found using \cref{eqn:algnorm}, giving $\lambda=d_d/d_A$. We can therefore write the state on $R$ as \begin{align} \rho_R & =\sum_{\substack{\vec{x},\vec{y},\vec{\mu} \\c\in\mathcal{C}\\d\in A}} \indicator{c=d} \frac{\prod_{j\leq n} d_{x_j}}{\mathcal{D}^{2(n-1)}d_A} \ketbra{\psi_R^{\vec{x},\vec{y},\vec{\mu},c}}{\psi_R^{\vec{x},\vec{y},\vec{\mu},c}} \\ & =\sum_{ \substack{\vec{x},\vec{y},\vec{\mu} \\c\in A}} \frac{\prod_{j\leq n} d_{x_j}}{\mathcal{D}^{2(n-1)}d_A} \ketbra{\psi_R^{\vec{x},\vec{y},\vec{\mu},c}}{\psi_R^{\vec{x},\vec{y},\vec{\mu},c}}. \end{align} Using \cref{lem:summingds,lem:sumlog}, the entropy of this state is \begin{align} S_R & =nS[\mathcal{C}]-\log\mathcal{D}^2+\log d_A.
\end{align} \subsubsection{\texorpdfstring{$\mathcal{C}$}{C} pointed} When all quantum dimensions are equal to 1, the boundary $\S$-matrix is \begin{align} \left[\S_{c,d}\right]_{(b_0,b_1),(a_0,a_1)}= & \indicator{c=d} \frac{m_{a_1,a_0}^{\dual{d}}}{d_A}\left[\S\right]_{b_0,\dual{a}_1}, \end{align} where $\left[\S\right]_{b_0,a_1}$ is the $\S$-matrix from \cref{eqn:Smatrix}. This gives \begin{align} \left[\S_{c,d}^\dagger\S_{c,d}\right]_{(a_2,a_3),(a_0,a_1)} & =\indicator{c=d} \frac{m_{a_1,a_0}^{\dual{d}}(m_{a_3,a_2}^{\dual{d}})^*}{d_A^2}\sum_{b_0} \left[\S\right]_{b_0,\dual{a}_3}^*\left[\S\right]_{b_0,\dual{a}_1}. \end{align} Using \cref{lem:productS,lem:sumS}, this can be simplified to \begin{align} \left[\S_{c,d}^\dagger\S_{c,d}\right]_{(a_2,a_3),(a_0,a_1)} & =\indicator{c=d} \frac{m_{a_1,a_0}^{\dual{d}}(m_{a_3,a_2}^{\dual{d}})^*}{d_A^2}\indicator{a_3\otimes\dual{a}_1\in\mug{\mathcal{C}}}. \end{align} Pointed braided categories have fusion rules given by an Abelian group $G$~\cite{Joyal_1993,MR3242743}, and algebras are twisted group algebras~\cite{0202130,bullivant2020} of subgroups of $G$. Moreover, $\mug{\mathcal{C}}$ also has fusion rules given by a subgroup of $G$. Since $a_3\otimes\dual{a}_1\in A$ and $a_3\otimes\dual{a}_1\in \mug{\mathcal{C}}$, there must be some $h\in\mug{\mathcal{C}}\cap A$ so that $a_3 = a_1 h$. We can then write \begin{align} \rho_R=\sum_{ \substack{ \vec{x} \\ d,a\in A \\ h\in \mug{\mathcal{C}}\cap A }} & \frac{ m_{a,(ad)^{-1}}^{d^{-1}}(m_{a h,(ahd)^{-1}}^{d^{-1}})^*}{ d_A^2\mathcal{D}^{2(n-1)}N_A} \ketbra{\psi_R^{\vec{x},d,a}}{\psi_R^{\vec{x},d,ah}}.\label{eqn:rho_ptd} \end{align} In the case that $\mug{\mathcal{C}}\cap A=A$, this reduces to the symmetric case, since summing over $h\in A$ is the same as summing over $A\ni h^\prime = ah$.
When $\mug{\mathcal{C}}\cap A=\{1\}$ contains only the unit, the state on $R$ simplifies to \begin{align} \rho_R=\sum_{ \substack{ \vec{x} \\ d,a\in A }} & \frac{ m_{a,(ad)^{-1}}^{d^{-1}}(m_{a,(ad)^{-1}}^{d^{-1}})^*}{ d_A^2\mathcal{D}^{2(n-1)}N_A} \ketbra{\psi_R^{\vec{x},d,a}}{\psi_R^{\vec{x},d,a}}. \end{align} This state is diagonal, and has entropy \begin{align} S_R & =-\sum_{ \substack{\vec{x} \\ d,a\in A}} \frac{|m_{a,(ad)^{-1}}^{d^{-1}}|^2}{\mathcal{D}^{2(n-1)}d_A^2} \log(\frac{|m_{a,(ad)^{-1}}^{d^{-1}}|^2}{\mathcal{D}^{2(n-1)}d_A^2}) \\ & =-\sum_{ \substack{\vec{x} \\ d,a\in A}} \frac{|m_{a,(ad)^{-1}}^{d^{-1}}|^2}{\mathcal{D}^{2(n-1)}d_A^2} \log(|m_{a,(ad)^{-1}}^{d^{-1}}|^2)+\log(\mathcal{D}^{2(n-1)}d_A^2), \end{align} where we have made use of \cref{eqn:algnorm}. Since $A$ is a twisted group algebra, we may assume $|m_{ab}^c|\in\{0,1\}$. Finally, this gives \begin{align} S_R & =\log(\mathcal{D}^{2(n-1)}d_A^2) \\ & =nS[\mathcal{C}]-\log\mathcal{D}^2+2\log d_A, \end{align} where the last equality uses $S[\mathcal{C}]=\log\mathcal{D}^2$ for a pointed category, completing the proof. \end{proof} \section{Bulk entropy of topological loop-gases}\label{sec:bulkentropy} We now show how the entanglement entropy of ground states of Levin-Wen models is computed far from any boundary, before moving on to Walker-Wang models. To make the calculation, we take the Schmidt decomposition of the ground state \begin{align} \ket{\psi} & =\sum_{\lambda=1}^{r}\Phi_{\lambda} \ket{\psi_R^\lambda}\ket{\psi_{\comp{R}}^\lambda}, \end{align} for regions $R$, where the sets $\{\ket{\psi_R^\lambda}\}$ and $\{\ket{\psi_{\comp{R}}^\lambda}\}$ are orthonormal, and $r$ is the Schmidt rank of the state $\ket{\psi}$. This allows us to compute the reduced state $\rho_R$ on $R$. Diagonalizing this matrix yields the entanglement entropy. For the following, we will need several results concerning fusion trees. Consider fusing $n$ strings labeled $\vec{x}:=(x_1,x_2,\ldots,x_n)$ to a fixed object $a$.
Using $F$-moves, we can bring the fusion tree for this process into the canonical form \begin{align} \begin{array}{c} \includeTikz{treeA}{ \begin{tikzpicture} \draw(0,0)--(1.75,1.75); \draw[dotted](1.75,1.75)--(2,2); \draw(2,2)--(3,3); \begin{scope} \clip(0,0)--(3,3)--(6,3)--(6,0)--(0,0); \draw(1,0)--(0,1); \draw(2,0)--(0,2); \draw(3,0)--(0,3); \draw(4.5,0)--(0,4.5); \draw(5.5,0)--(0,5.5); \end{scope} \node[below] at (0,0) {$x_1$}; \node[below] at (1,0) {$x_2$}; \node[below] at (2,0) {$x_3$}; \node[below] at (3,0) {$x_4$}; \node[below] at (4.5,0) {$x_{n-1}$}; \node[below] at (5.5,0) {$x_{n}$}; \node[above right] at (3,3) {$a$}; \node[above left] at (.75,.75) {$y_1$}; \node[above left] at (1.25,1.25) {$y_2$}; \node[above left] at (2.5,2.5) {$y_{n-2}$}; \node[below] at (.5,.5) {\tiny{$\mu_1$}}; \node[below] at (1,1) {\tiny{$\mu_2$}}; \node[below] at (1.5,1.5) {\tiny{$\mu_3$}}; \node[right] at (2.25,2.25) {\tiny{$\mu_{n-2}$}}; \end{tikzpicture} } \end{array},\label{eqn:treeA} \end{align} where $1\leq\mu\leq N_{a,b}^{c}$ parameterizes the distinct fusion channels $a\otimes b\to c$. In the following, sums over $x_i,\,y_i$ are over all simple objects in $\mathcal{C}$. First, we need two results concerning summing over trees. \begin{lemma}\label{lem:summingds} Let $\mathcal{C}$ be a unitary fusion category. Then, for a fixed simple fusion outcome $a$, \begin{align} \sum_{\vec{x},\vec{y}}N_{x_1x_2}^{y_1}N_{y_1x_3}^{y_2}\ldots N_{y_{n-2}x_n}^{a}\prod_{j\leq n}d_{x_j} & =d_a\mathcal{D}^{2(n-1)}, \end{align} where $\mathcal{D}=\sqrt{\sum_i d_i^2}$ is the total quantum dimension of $\mathcal{C}$. \begin{proof} Provided in Appendix~\hyperref[lem:summingds_pf]{\ref*{app:SN_results}}.
\end{proof} \end{lemma} \begin{lemma}\label{lem:sumlog} Let $\mathcal{C}$ be a unitary fusion category. Then, for a fixed simple fusion outcome $a$, \begin{align} \sum_{\vec{x},\vec{y}} N_{x_1x_2}^{y_1}N_{y_1x_3}^{y_2}\ldots N_{y_{n-2}x_n}^{a}\frac{\prod_{j\leq n} d_{x_j}}{\mathcal{D}^{2(n-1)}}\log\prod_{k\leq n} d_{x_k} & = n d_a \sum_x \frac{d_x^2\log d_x}{\mathcal{D}^2}. \end{align} \begin{proof} Provided in Appendix~\hyperref[lem:sumlog_pf]{\ref*{app:SN_results}}. \end{proof} \end{lemma} Finally, we need the probability of a given fusion tree in a topological loop-gas model. \begin{lemma}[Probability of trees]\label{lem:prtree} Let $\mathcal{C}$ be a unitary fusion category. Given a fusion outcome $a$ on $n$ edges, the probability of the tree in \cref{eqn:treeA} is \begin{align} \Pr[\vec{x},\vec{y},\vec{\mu}|a] & =\frac{\prod_{j\leq n} d_{x_j}}{d_{a}\mathcal{D}^{2(n-1)}}. \end{align} \begin{proof} Provided in Appendix~\hyperref[lem:prtree_pf]{\ref*{app:SN_results}}. \end{proof} \end{lemma} Throughout the remainder of this section, we use the following condensed notation \begin{align} \sum_{\vec{x},\vec{y},\vec{\mu}}:= & \sum_{x_1,\ldots,x_n}\sum_{y_1,\ldots,y_{n-2}}\sum_{\mu_1,\ldots,\mu_{n-2}} \\ = & \sum_{x_1,\ldots,x_n}\sum_{y_1,\ldots,y_{n-2}} N_{x_1x_2}^{y_1}N_{y_1x_3}^{y_2}\ldots N_{y_{n-2}x_n}^{a}, \end{align} where we frequently leave the fusion outcome $a$ implicit.
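As an aside, these counting lemmas are easy to check numerically. The following Python sketch does so for the Fibonacci category, used here purely as an illustrative unitary fusion category (it is not one of the examples above); the object labels, fusion table, and function names are ours, not taken from the text.

```python
import itertools
from math import sqrt

# Illustrative input: the Fibonacci fusion category, with simple objects
# 0 ("1") and 1 ("tau"), fusion rule tau x tau = 1 + tau, and d_tau = phi.
phi = (1 + sqrt(5)) / 2
d = [1.0, phi]                 # quantum dimensions of the simple objects
D2 = sum(x * x for x in d)     # total quantum dimension squared, D^2

def N(a, b, c):
    """Fusion multiplicity N_{ab}^c."""
    if a == 0:
        return 1 if b == c else 0
    if b == 0:
        return 1 if a == c else 0
    return 1                   # tau x tau = 1 + tau: both channels allowed

def tree_sum(n, a):
    """Sum over labelled canonical trees of prod_j d_{x_j}.

    The first lemma above asserts this equals d_a * D^{2(n-1)}.
    """
    total = 0.0
    for xs in itertools.product(range(len(d)), repeat=n):
        # count internal labellings (the y's) fusing x_1 ... x_n to a
        weight = {xs[0]: 1}
        for x in xs[1:]:
            new = {}
            for y, w in weight.items():
                for z in range(len(d)):
                    if N(y, x, z):
                        new[z] = new.get(z, 0) + w * N(y, x, z)
            weight = new
        prod_d = 1.0
        for x in xs:
            prod_d *= d[x]
        total += weight.get(a, 0) * prod_d
    return total

n = 4
for a in (0, 1):
    assert abs(tree_sum(n, a) - d[a] * D2 ** (n - 1)) < 1e-9
```

Since the sum equals $d_a\mathcal{D}^{2(n-1)}$, the tree probabilities $\prod_j d_{x_j}/(d_a\mathcal{D}^{2(n-1)})$ of the probability lemma sum to one, as they must.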
\subsection{Levin-Wen models} \begin{thm}[Topological entropy of (2+1)D Levin-Wen models in the bulk~\cite{levin2006detecting,kitaev2006topological,LevinThesis}]\ \\ Consider the regions shown in \cref{fig:LWregionsblk}. Then the Levin-Wen model defined by a unitary spherical fusion category $\mathcal{C}$, with total dimension $\mathcal{D}$, has topological entropy \begin{align} \gamma & =2\log\mathcal{D}^2=\log\mathcal{D}^2_{\drinfeld{\mathcal{C}}}, \end{align} where $\drinfeld{\mathcal{C}}$ is the modular category called the \define{Drinfeld center}~\cite{MR3242743} of $\mathcal{C}$, which describes the anyons of the theory. \label{thm:LWbulk} \end{thm} \begin{examples*} Recall the examples from \cref{sec:examples}. As discussed in \cref{sec:examples_phys}, these label two distinct loop-gas models in (2+1) dimensions, the toric code and double semion models. Since all the input categories for these examples have $\mathcal{D}^2=2$, the TEE is $\gamma=2\log2$ for both. \end{examples*} \begin{lemma}[Entropy of (union of) simply connected bulk regions~\cite{levin2006detecting,LevinThesis,bullivant2016entropic}]\label{lem:bulkLW} On a region $R$ in the bulk consisting of the disjoint union of simply connected sub-regions, the entropy is \begin{align} S_R & =nS[\mathcal{C}]-b_0\log\mathcal{D}^2,\label{eqn:LWsimpentropy} \end{align} where $b_0$ is the number of disjoint interface components of $R$, $n$ is the number of links crossing the entanglement interface, and \begin{align} S[\mathcal{C}]:=\log\mathcal{D}^2-\sum_{x}\frac{d_x^2\log d_x}{\mathcal{D}^2}. \end{align} \begin{proof} Consider a ball $R$ with $n$ sites along the interface, in the configuration $\vec{x}=x_1,x_2,\ldots, x_n$. Since any configuration must be created by inserting closed loops into the empty state, the total `charge' crossing the interface must be $1$.
For a fixed $\vec{x}$, there are now many ways for this to happen, parameterized by trees depicted in \cref{eqn:treeA} with fusion outcome $a=1$. Trees with distinct labelings (in $\vec{x}$, $\vec{y}$ or $\vec{\mu}$) are orthogonal. This means that if the tree \cref{eqn:treeA} occurs adjacent to the interface within $R$, it must also occur on the other side of the interface \begin{align} \begin{array}{c} \includeTikz{tree_interface_blk}{ \begin{tikzpicture}[scale=.9] \begin{scope} \draw(0,0)--(1.75,1.75); \draw[dotted](1.75,1.75)--(2,2); \draw(2,2)--(3,3); \begin{scope} \clip(0,0)--(3,3)--(6,3)--(6,0)--(0,0); \draw(1,0)--(0,1); \draw(2,0)--(0,2); \draw(3,0)--(0,3); \draw(4.5,0)--(0,4.5); \draw(5.5,0)--(0,5.5); \end{scope} \node[below left] at (0,0) {$x_1$}; \node[below right,inner sep=.75] at (1,0) {$x_2$}; \node[below right,inner sep=.75] at (2,0) {$x_3$}; \node[below right,inner sep=.75] at (3,0) {$x_4$}; \node[below right,inner sep=.75] at (4.5,0) {$x_{n-1}$}; \node[below right,inner sep=.75] at (5.5,0) {$x_{n}$}; \node[above right] at (3,3) {$c$}; \node[above left] at (.75,.75) {$y_1$}; \node[above left] at (1.25,1.25) {$y_2$}; \node[above left] at (2.5,2.5) {$y_{n-2}$}; \node[below] at (.5,.5) {\tiny{$\mu_1$}}; \node[below] at (1,1) {\tiny{$\mu_2$}}; \node[below] at (1.5,1.5) {\tiny{$\mu_3$}}; \node[right] at (2.25,2.25) {\tiny{$\mu_{n-2}$}}; \node[right] at (2.75,2.75) {\tiny{$\mu_{n-1}$}}; \end{scope} \begin{scope}[yscale=-1] \draw(0,0)--(1.75,1.75); \draw[dotted](1.75,1.75)--(2,2); \draw(2,2)--(3,3); \begin{scope} \clip(0,0)--(3,3)--(6,3)--(6,0)--(0,0); \draw(1,0)--(0,1); \draw(2,0)--(0,2); \draw(3,0)--(0,3); \draw(4.5,0)--(0,4.5); \draw(5.5,0)--(0,5.5); \end{scope} \node[below right] at (3,3) {$d$}; \node[below left] at (.75,.75) {$z_1$}; \node[below left] at (1.25,1.25) {$z_2$}; \node[below left] at (2.5,2.5) {$z_{n-2}$}; \node[above] at (.5,.5) {\tiny{$\nu_1$}}; \node[above] at (1,1) {\tiny{$\nu_2$}}; \node[above] at (1.5,1.5) {\tiny{$\nu_3$}}; 
\node[right] at (2.25,2.25) {\tiny{$\nu_{n-2}$}}; \node[right] at (2.75,2.75) {\tiny{$\nu_{n-1}$}}; \end{scope} \draw[black!20](-.5,0)--(6,0); \node at (-1,2) {$R$}; \node at (-1,-2) {$\comp{R}$}; \end{tikzpicture} } \end{array}\propto \indicator{\vec{z}=\vec{y}}\indicator{\vec{\nu}=\vec{\mu}}\indicator{d=c}.\label{eqn:treeB_blk} \end{align} If the trees on either side of the cut had different branching structures, we could use local moves on either side of the cut to bring them to this standard form. We take the Schmidt decomposition of the ground state as follows \begin{align} \ket{\psi} & =\sum_{\vec{x},\vec{y},\vec{\mu}} \Phi_{\vec{x},\vec{y},\vec{\mu}} \ket{ \psi_R^{\vec{x},\vec{y},\vec{\mu}} }\ket{ \psi_{\comp{R}}^{\vec{x},\vec{y},\vec{\mu}} },\label{eqn:simpleregionpartition} \end{align} where the notation $\vec{x},\,\vec{y},\,\vec{\mu}$ indicates the labeling of a valid tree as in \cref{eqn:treeA}. The state $\ket{ \psi_R^{\vec{x},\vec{y},\vec{\mu}} }$ includes any state that can be reached from \cref{eqn:treeA} (with $a=1$) by acting only on $R$. The reduced state on $R$ is \begin{align} \rho_R & =\sum_{\vec{x},\vec{y},\vec{\mu}} |\Phi_{\vec{x},\vec{y},\vec{\mu}}|^2 \ketbra{ \psi_R^{\vec{x},\vec{y},\vec{\mu}} } \\ & =\sum_{\vec{x},\vec{y},\vec{\mu}} \Pr[\vec{x},\vec{y},\vec{\mu}|1] \ketbra{ \psi_R^{\vec{x},\vec{y},\vec{\mu}} }, \end{align} where $\Pr[\vec{x},\vec{y},\vec{\mu}|1]$ is the probability of the labeled tree, given that $\vec{x}$ fuses to 1. From \cref{lem:prtree}, the reduced state is \begin{align} \rho_R & =\sum_{\vec{x},\vec{y},\vec{\mu}} \frac{\prod_{j\leq n} d_{x_j}}{\mathcal{D}^{2(n-1)}} \ketbra{ \psi_R^{\vec{x},\vec{y},\vec{\mu}} }. 
\end{align} The von Neumann entropy of $\rho_R$ is therefore \begin{align} S_R := & -\tr\rho_R\log\rho_R \\ = & -\sum_{\vec{x},\vec{y},\vec{\mu}} \frac{\prod_{j\leq n} d_{x_j}}{\mathcal{D}^{2(n-1)}} \log \frac{\prod_{k\leq n} d_{x_k}}{\mathcal{D}^{2(n-1)}} \\ = & \frac{ \log \mathcal{D}^{2(n-1)}}{\mathcal{D}^{2(n-1)}} \sum_{\vec{x},\vec{y},\vec{\mu}} \prod_{j\leq n} d_{x_j} - \sum_{\vec{x},\vec{y},\vec{\mu}} \frac{ \prod_{j\leq n} d_{x_j} }{\mathcal{D}^{2(n-1)}} \log \prod_{k\leq n} d_{x_k} \label{EqnLine:ApplyLemmas} \\ = & n(\log\mathcal{D}^2-\sum_x \frac{d_x^2\log{d_x}}{\mathcal{D}^2})-\log\mathcal{D}^2 \\ = & n S[\mathcal{C}]-\log\mathcal{D}^2 \end{align} where \cref{lem:summingds,lem:sumlog} are applied to the left and right terms of line~(\ref{EqnLine:ApplyLemmas}), respectively, and \begin{align} S[\mathcal{C}]:=\log\mathcal{D}^2-\sum_x \frac{d_x^2\log{d_x}}{\mathcal{D}^2}. \end{align} It is straightforward to check that this holds on each sub-region of $R$. \end{proof} \end{lemma} Applying \cref{lem:bulkLW} to the regions in \cref{fig:LWregionsblk} completes the proof of \cref{thm:LWbulk}. \subsection{Walker-Wang models}\label{ss:WWbulkpf} In this section, we prove the following result for the bulk diagnostic of Walker-Wang models. The essential arguments in this section were made in \onlinecite{bullivant2016entropic}; however, we use slightly different language that allows the result to be applied more generally. \begin{thm} For a Walker-Wang model defined by a unitary premodular category $\mathcal{C}$, the topological entanglement entropy (defined using the regions in \cref{fig:WWregionsblk}) in the bulk is given by \begin{align} \delta & =\sum_{c,\lambda_c}\frac{\lambda_c}{\mathcal{D}^2}\log \frac{\lambda_c}{d_c}, \end{align} where $\{\lambda_c\}$ are the eigenvalues of $\S_c^\dagger \S_c$, and $\S_c$ is the connected $\S$-matrix (\cref{def:commectedS}). \end{thm} \begin{proof} In simply connected regions, the arguments from \cref{lem:bulkLW} still hold.
The other type of region in \cref{fig:WWregionsblk} is a torus. In this case, we cannot simply decompose the ground state as in \cref{eqn:simpleregionpartition}, with the sum over configurations on the interface. Recall that the reason we could do this for a simple region was that ground states are created by inserting closed loops, and all closed loops except those crossing the interface can be added entirely within either $R$ or $\comp{R}$. This is not the case for a toroidal region. Consider, for example, the configuration depicted in \cref{fig:toroid_config}. The closed string inside $R$ (red, dashed) cannot be altered by acting entirely within $R$, so it contributes additional entanglement to the ground state, which is not witnessed by the interface configuration. Additionally, the two loops may be connected by a string, such that the global charge is trivial. Therefore, unlike for simply connected regions, the net charge crossing the boundary is not necessarily trivial. With these considerations, we can decompose the ground state as \begin{align} \ket{\psi}=\sum_{ \substack{\vec{x},\vec{y},\vec{\mu} \\ c,a,\alpha,b,\beta}} \Phi_{\vec{x},\vec{y},\vec{\mu},c} & \frac{\left[\S_c\right]_{(b,\beta)(a,\alpha)}}{\mathcal{D}} \ket{ \psi_R^{\vec{x},\vec{y},\vec{\mu},c,a,\alpha} }\ket{ \psi_{\comp{R}}^{\vec{x},\vec{y},\vec{\mu},c,b,\beta} }, \end{align} where $\S_c$ is the connected $\S$-matrix defined in \cref{eqn:connectedS}. The indices $\vec{x}, \vec{y}, \vec{\mu}$ are as in \cref{eqn:simpleregionpartition}, $b$ labels the loop encircling $R$, while $a$ is the loop within $R$, and $c$ is the total charge crossing the boundary (the top label in \cref{eqn:treeA}).
The reduced state on $R$ is \begin{alignat}{2} \rho_R & =\sum_{ \substack{\vec{x},\vec{y},\vec{\mu} \\ a_1,\alpha_1,a_2,\alpha_2,c}} & \frac{\Pr[\vec{x},\vec{y},\vec{\mu}|c]}{\mathcal{D}^2} \left[\S_c^\dagger\S_c\right]_{(a_2,\alpha_2)(a_1,\alpha_1)} \ketbra{\psi_R^{\vec{x},\vec{y},\vec{\mu},a_1,\alpha_1,c}}{\psi_R^{\vec{x},\vec{y},\vec{\mu},a_2,\alpha_2,c}} \\ & =\sum_{ \substack{\vec{x},\vec{y},\vec{\mu} \\ a_1,\alpha_1,a_2,\alpha_2,c}} & \frac{\prod_{j\leq n} d_{x_j}}{d_{c}\mathcal{D}^{2n}} \left[\S_c^\dagger\S_c\right]_{(a_2,\alpha_2)(a_1,\alpha_1)} \ketbra{\psi_R^{\vec{x},\vec{y},\vec{\mu},a_1,\alpha_1,c}}{\psi_R^{\vec{x},\vec{y},\vec{\mu},a_2,\alpha_2,c}}. \end{alignat} To compute the entropy of this state, it is convenient to diagonalize it. Denote the eigenvalues of $\S_c^\dagger\S_c$ by $\{\lambda_c\}$. By a unitary change of basis, we have \begin{align} U\rho_R U^\dagger & = \sum_{\substack{\vec{x},\vec{y},\vec{\mu}, \\ c,\lambda_c}} \frac{\prod_{j\leq n} d_{x_j}}{d_{c}\mathcal{D}^{2n}} \lambda_c\ketbra{\varphi^{\vec{x},\vec{y},\vec{\mu},c}_{\lambda_c}}, \end{align} with von Neumann entropy \begin{align} S_R & =nS[\mathcal{C}]-\sum_{c,\lambda_c}\frac{\lambda_c}{\mathcal{D}^2}\log\frac{\lambda_c}{d_c}, \end{align} where \cref{lem:summingds,lem:sumlog,lem:TrScSc} are used. Combining with \cref{lem:bulkLW} completes the proof. 
\end{proof} \begin{figure} \centering $\begin{array}{c} \includeTikz{torus}{ \begin{tikzpicture}[scale=.75] \draw[gray!30] (6,-1.25,0) -- (0,-1.25,0) -- (0,-1.25,6); \draw[thick, dashed, darkred] (1,-0.625,1) -- (5,-0.625,1) -- (5,-0.625,3); \draw[fill=gray!20,draw=gray!40](3.75,-0.008,2.25) -- (3.75,-1.25,2.25) -- (3.75,-1.25,3.75) -- (3.75,-0.008,3.75); \draw[] (0,0,6) -- (0,-1.25,6) -- (6,-1.25,6) -- (6,-1.25,0) -- (6,0,0); \draw[] (6,-1.25,6) -- (6,0,6); \draw[] (0,-1.25,6) -- (0,0,6); \draw[] (6,-1.25,0) -- (6,0,0); \draw[gray!40] (0,-1.25,0) -- (0,0,0); \draw[fill=white] (2.25,0,2.25) -- (2.25,0,3.75) -- (3.75,0,3.75) -- (3.75,0,2.25) -- cycle; \draw[] (2.25,0,2.25) -- (2.25,-1,2.25); \draw[thick, dotted, blue!80] (-.75,-2.25,3) -- (-.75,1,3) -- (3,1,3)-- (3,-2.25,3) -- cycle; \draw[fill=white, draw=gray!40] (2.25,-0.008,3.75) -- (2.25,-1.25,3.75) -- (3.75,-1.25,3.75) -- (3.75,-0.008,3.75); \draw[] (0,0,0) -- (0,0,6) -- (6,0,6) -- (6,0,0) -- cycle; \draw[thick, dashed, darkred] (1,-0.625,1) -- (1,-0.625,5) -- (5,-0.625,5) -- (5,-0.625,3); \draw[thick,dotted,orange] (1,-0.625,3)--(-.75,-0.625,3); \end{tikzpicture} } \end{array}$ \caption{When the region $R$ is not simply connected, the computation of entropy is more subtle. There is additional entanglement in the system due to intersecting loops that cannot be created in $R$ or $\comp{R}$ separately. This is not witnessed by the configuration of strings on the interface.} \label{fig:toroid_config} \end{figure} \begin{conjecture} Let $\mathcal{C}$ be a unitary premodular category, and define the connected $\S$-matrix via its matrix elements \begin{align} \left[\S_c\right]_{(a,\alpha),(b,\beta)} & =\frac{1}{\mathcal{D}} \begin{array}{c} \includeTikz{ConnectedSmatrix}{} \end{array}.
\end{align} The connected $\S$-matrix obeys \begin{align} \sum_{c,\lambda_c}\frac{\lambda_c}{\mathcal{D}^2}\log \frac{\lambda_c}{d_c} & =\log\mathcal{D}_{\mug{\mathcal{C}}}^2,\label{eqn:conjecture} \end{align} where $\{\lambda_c\}$ are the eigenvalues of $\S_c^\dagger \S_c$, and $\mug{\mathcal{C}}$ is the M\"uger center of $\mathcal{C}$. \end{conjecture} We conjecture that \cref{eqn:conjecture} holds in general; however, we are currently unable to compute the spectrum of $\S_c$ beyond the families outlined in \cref{thm:WWentropyexamples}. \begin{thm}\label{thm:WWentropyexamples} Consider a Walker-Wang model defined by a unitary premodular category of one of the following types: \begin{itemize} \item $\mathcal{C}=\cat{A}\boxtimes\cat{B}$, where $\cat{A}$ is symmetric and $\cat{B}$ is modular~\cite{bullivant2016entropic}, \item $\mathcal{C}$ pointed, \item $\rk{\mathcal{C}}<6$ and multiplicity free, \item $\rk{\mathcal{C}}=\rk{\mug{\mathcal{C}}}+1$ and $d_x=\mathcal{D}_{\mug{\mathcal{C}}}$, where $x$ is the additional object (as a special case, $\mathcal{C}$ is a Tambara-Yamagami category~\cite{Tambara_1998,Siehler}). \end{itemize} Then \cref{eqn:conjecture} holds. As a consequence, the topological entanglement entropy (defined using the regions in \cref{fig:WWregionsblk}) in the bulk is given by \begin{align} \delta & =\log\mathcal{D}_{\mug{\mathcal{C}}}^2,\label{eqn:BodyWalkerWangBulkTEE} \end{align} where $\mug{\mathcal{C}}$ is the M\"uger center of $\mathcal{C}$. As special cases, this includes \begin{align} \delta_{\text{modular}} & =0, \\ \delta_{\text{symmetric}} & =\log \mathcal{D}^2. \end{align} We conjecture that \cref{eqn:BodyWalkerWangBulkTEE} holds in full generality. Physically, this is seen by noting that the particle content of the bulk Walker-Wang model is given by the M\"uger center $\mug{\cdot}$~\cite{ZWang}. \begin{proof} Provided in Appendix~\hyperref[thm:WWentropyexamples_pf]{\ref*{app:SN_results}}.
\end{proof} \end{thm} \begin{examples*} Recall the examples from \cref{sec:examples}. As discussed in \cref{sec:examples_phys}, these label four distinct loop-gas models in (3+1) dimensions, the bosonic and fermionic toric code models, and two semion models. All four input categories are pointed, so we can apply \cref{thm:WWentropyexamples} to obtain the TEE. The inputs to the first two models are symmetric, so $\delta=\log 2$ for both. The inputs to the semion models are modular, so the bulk is trivial~\cite{von2013three}. In this case $\delta=0$. \end{examples*} \section{Entropy diagnostics}\label{sec:entropydiagnostics} In what follows we describe the universal correction to the area law that we expect for topological phases. We then define two diagnostics that can be used to probe the properties of the excitations at the boundary of three-dimensional topological phases. \subsection{The universal correction to the area law} The ground states of topological phases of matter demonstrate robust long-range entanglement that is not present in trivial phases~\cite{hamma2005bipartite, kitaev2006topological, levin2006detecting}. Typically, we expect the entanglement entropy between a subsystem of a gapped ground state and the rest of the system to respect an area law, i.e., the entanglement will scale with the surface area of the subsystem. The long-range entanglement manifests as a universal correction to the area law. More precisely, we expect that if we partition the ground state of a system into two subsystems, $R$ and its complement $\comp{R}$, the entanglement entropy, $S_R$, will satisfy \begin{equation} S_R=\alpha|\partial R|-b_R \gamma.
\label{eqn:arealaw} \end{equation} Here $\alpha$ is a non-universal coefficient that depends on the microscopic details of the system, $|\partial R|$ is the surface area of the interface between the partitioned regions, $b_R$ is the number of disjoint components of the interface between $R$ and $\comp{R}$, and $\gamma$ is a universal constant commonly known as the topological entanglement entropy (TEE). We have assumed that $R$ is large compared to the correlation length of the system, and its shape has no irregular features. \subsection{Two-dimensional models} \begin{figure} \includeTikz{TwoDimEntropyRegions}{ \begin{tikzpicture} \begin{scope} \node at (-3,1.5) {a)}; \draw (-3,-.5) rectangle (-2,.5);\draw (3,-.5) rectangle (2,.5); \draw (-2,-1.5) rectangle (2,1.5); \draw (-2,-.5)--(2,-.5) (-2,.5)--(2,.5) (-1,-.5)--(-1,.5) (1,-.5)--(1,.5); \node at (0,0) {$C$};\node at (-1.5,0) {$B$};\node at (1.5,0) {$B$}; \node at (0,1) {$D$};\node at (0,-1) {$D$}; \node at (-2.5,0) {$P$};\node at (2.5,0) {$P$}; \end{scope} \begin{scope}[shift = {(9,0)}] \node at (-3,1.5) {b)}; \begin{scope} \clip (-3,-2) rectangle (3,00); \draw (-3,-.5) rectangle (-2,.5);\draw (3,-.5) rectangle (2,.5); \draw (-2,-1.5) rectangle (2,1.5); \draw (-2,-.5)--(2,-.5) (-2,.5)--(2,.5) (-1,-.5)--(-1,.5) (1,-.5)--(1,.5); \node at (0,-.25) {$C$};\node at (-1.5,-.25) {$B$};\node at (1.5,-.25) {$B$}; \node at (0,1) {$D$};\node at (0,-1) {$D$}; \node at (-2.5,-.25) {$P$};\node at (2.5,-.25) {$P$}; \end{scope} \draw[blue!20,ultra thick](-3,0)--(3,0); \draw[blue!20,ultra thick,dashed](-4,0)--(4,0); \end{scope} \end{tikzpicture}} {\phantomsubcaption\label{fig:LWregionsblk}} {\phantomsubcaption\label{fig:LWregionsbnd}} \caption{Example of subsystems that can be used to find topological entropies in 2D. The region $A$ is the complement of $BCD$. The regions \subref*{fig:LWregionsblk}) are used to find the bulk entropy $\gamma$, and the regions \subref*{fig:LWregionsbnd}) are used for the boundary entropy $\Gamma$. 
}\label{fig:LWregions} \end{figure} Intimately connected to the long-range entanglement of a topological phase are the properties of its low-energy excitations. A large class of topological models in two dimensions are the Levin-Wen (LW) string-net models~\cite{levin2005string}. These models support topological point-like excitations that can be braided to change the state of the system. Throughout this work we will be interested in the boundaries of topological phases. Importantly, topological particles can behave differently in the vicinity of the boundary of a phase. For instance, topological particles found in the bulk may become trivial particles close to certain boundaries. This is because topological particles can condense at the boundary. We therefore see that the nature of some particles can change depending on whether they are in the vicinity of a boundary. As the physics of quasi-particles of a topological phase can change close to its boundary, so too do we expect the nature of its long-range entanglement to change. In \onlinecite{kim2015ground}, several topological entanglement entropy diagnostics were introduced to probe the long-range entanglement of a model, both in the bulk and near a boundary. The first is the bulk topological entanglement entropy \begin{align} \gamma:=S_{BC}+S_{CD}-S_B-S_D, \end{align} where the regions are depicted in \cref{fig:LWregionsblk}, and $XY:=X\cup Y$. If $\gamma=0$, all point-like excitations can be created on the distinct parts of $P$ with a creation operator that has no support on $ACD$, where $A$ is the region complementary to those shown in the figure. In this case, we declare them trivial. Conversely, if there are non-trivial topological excitations, for example created with string-like operators, $\gamma$ is necessarily non-zero. In the presence of a gapped boundary, the excitations may differ.
If a bulk topological excitation can be discarded or `condensed' on the boundary, it is possible to locally create such an excitation near the boundary. This is detected using the diagnostic \begin{align} \Gamma:=S_{BC}+S_{CD}-S_B-S_D,\label{eqn:LWbndent} \end{align} where the regions are depicted in \cref{fig:LWregionsbnd}. If $\Gamma=0$, all point-like excitations on $P$ can be created with an operator that has no support on $ACD$, while non-trivial excitations require non-zero $\Gamma$. \subsection{Three-dimensional models} A natural generalization of Levin-Wen models to three dimensions is the class called Walker-Wang (WW) models~\cite{Walker2012TQFT}. These models give rise to both point- and line-like topological particles in the bulk, in addition to boundary excitations. Unlike Levin-Wen models, in some instances topological particles are only found at the boundary. Since there are two kinds of topological excitations in 3D, we might expect that there are two bulk diagnostics generalizing $\gamma$. However, as has been shown~\cite{grover2011entanglement,bullivant2016entropic}, these coincide. We define the bulk topological entanglement entropy \begin{align} \delta:=S_{BC}+S_{CD}-S_B-S_D, \end{align} where the regions are depicted in \cref{fig:WWregionsblk}. We obtained this choice of regions following intuition given in Ref.~\cite{kim2015ground}, where we consider creating point excitations at the distinct parts of the region $P$ using a string operator supported on $ACD$. We find that $\delta$ is zero only if all the excitations can be created using an operator with local support. The boundary diagnostics that we describe next are obtained by bisecting the regions shown in \cref{fig:WWregionsblk} along different planes where the boundary lies.
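To see concretely how such a diagnostic isolates the universal term, the following Python sketch evaluates $\gamma$ for a two-dimensional arrangement like that of \cref{fig:LWregionsblk}, using the bulk entropy formula $S_R=nS[\mathcal{C}]-b_0\log\mathcal{D}^2$ of \cref{lem:bulkLW}. The rectangle dimensions and the resulting link counts are hypothetical stand-ins for the figure's geometry (C a $w\times h$ rectangle, B two $a\times h$ side rectangles, D two $(2a+w)\times c$ bars above and below, measured in lattice links); the function names are ours.

```python
from math import log

def S_bulk(n_links, b0, S_C, D2):
    # Entropy of a bulk region (lemma above): S_R = n*S[C] - b0*log(D^2),
    # with n interface links and b0 interface components.
    return n_links * S_C - b0 * log(D2)

def gamma_2d(D2, S_C, a=1, w=2, h=1, c=1):
    """gamma = S_BC + S_CD - S_B - S_D for the hypothetical 2D layout:
    C is w-by-h, B is two a-by-h side pieces, D is two (2a+w)-by-c bars."""
    n_B,  b_B  = 4 * (a + h),              2  # two rectangles
    n_D,  b_D  = 4 * (2 * a + w + c),      2  # two bars
    n_BC, b_BC = 2 * (2 * a + w + h),      1  # one horizontal bar
    n_CD, b_CD = 8 * a + 2 * w + 2 * h + 4 * c, 1  # I-beam perimeter
    return (S_bulk(n_BC, b_BC, S_C, D2) + S_bulk(n_CD, b_CD, S_C, D2)
            - S_bulk(n_B, b_B, S_C, D2) - S_bulk(n_D, b_D, S_C, D2))

# Toric code / double semion input: D^2 = 2 and all d_x = 1, so S[C] = log 2.
gamma = gamma_2d(2.0, log(2.0))
assert abs(gamma - 2 * log(2.0)) < 1e-12
```

The interface-link counts satisfy $n_{BC}+n_{CD}=n_B+n_D$, so the non-universal terms cancel and the result is $\gamma=2\log\mathcal{D}^2$ independent of the region sizes and of $S[\mathcal{C}]$, in agreement with \cref{thm:LWbulk}.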
\begin{figure} \centering \includeTikz{bulk_point_A1}{ \begin{tikzpicture}[scale=0.65] \begin{scope} \draw[black!20,dashed] (4.5,-.75,-.75)--(-4.5,-.75,-.75); \draw[black!20,dashed] (3,-.75,.75)--(-4.5,-.75,.75); \draw[black!20,dashed] (3,.75,-.75)--(-4.5,.75,-.75); \draw[black!20,dashed] (3,.75,.75)--(-4.5,.75,.75); \draw (4.5,-.75,.75)--(3,-.75,.75); \draw (4.5,.75,-.75)--(3,.75,-.75); \draw (4.5,.75,.75)--(3,.75,.75); % \draw (-3.475,-.75,.75)--(-4.5,-.75,.75); \draw (-4.05,.75,-.75)--(-4.5,.75,-.75); \draw (-3.475,.75,.75)--(-4.5,.75,.75); % \draw[black!20,dashed] (-4.5,-.75,.75)--(-4.5,.75,.75)--(-4.5,.75,-.75)--(-4.5,-.75,-.75)--cycle; \draw (4.5,-.75,.75)--(4.5,.75,.75)--(4.5,.75,-.75)--(4.5,-.75,-.75)--cycle; \draw[black!20,dashed] (1.5,-.75,.75)--(1.5,.75,.75)--(1.5,.75,-.75)--(1.5,-.75,-.75)--cycle; \draw[black!20,dashed] (-1.5,-.75,.75)--(-1.5,.75,.75)--(-1.5,.75,-.75)--(-1.5,-.75,-.75)--cycle; \draw (-4.5,-.75,.75)--(-4.5,.75,.75)--(-4.5,.75,-.75); \node[circle,inner sep=.01pt] at (0,0,.75) {$C$}; \node[inner sep=0pt] at (-2.25,0,.75) {$D$};\node[inner sep=0] at (2.25,0,.75) {$D$}; \node[circle,inner sep=.01pt] at (-4,0,.75) {$P$};\node[circle,inner sep=.01pt] at (3.75,0,.75) {$P$}; \end{scope} \draw (-3,-2,2)--(-3,2,2)--(-3,2,-2); \draw (-3,-2,2)--(3,-2,2); \draw (-3,2,2)--(3,2,2); \draw (-3,2,-2)--(3,2,-2); \draw[black!20,dashed] (3,-.75,.75)--(-3,-.75,.75); \draw[black!20,dashed] (3,.75,.75)--(-3,.75,.75); \draw[black!20,dashed] (3,.75,-.75)--(-3,.75,-.75); \draw[black!20,dashed] (-3,-.75,-.75)--(-3,.75,-.75)--(-3,.75,.75)--(-3,-.75,.75)--cycle; \draw[black!20,dashed] (3,.75,-.75)--(3,-.75,-.75)--(3,-.75,.75); \draw (3,-.75,.75)--(3,.75,.75)--(3,.75,-.75)--(4.5,.75,-.75); % \draw[] (3,.275,-2)--(3,2,-2)--(3,2,2)--(3,-2,2)--(3,-2,-2)--(3,-1.8,-2); \node[circle,inner sep=.1pt,fill=white] at (-.5,-1.75,0) {$B$}; \node[circle,inner sep=.1pt,fill=white] at (6.4,2.5,5) {\color{white}.}; \end{tikzpicture} } \caption{Partitioning of the lattice for detecting 
excitations in the bulk. $B$ encircles $CD$, and $A$ is the complement of $BCD$. If $\delta$ is small, excitations on $P$ can be created by only acting on $PD$ and so have trivial statistics.} \label{fig:WWregionsblk} \end{figure} In Ref.~\cite{kim2015ground} two topological entanglement entropy diagnostics were found to probe long-range entanglement of a model near to a boundary. The first boundary diagnostic is an indicator that point-like topological particles can be created at the boundary of the system, and the second indicates that the boundary supports extended one-dimensional `loop-like' topological particles. Unlike in the bulk, these diagnostics do not necessarily coincide. The first \begin{align} \Delta_{\bullet}:=S_{BC}+S_{CD}-S_B-S_D,\label{eqn:WWptdef} \end{align} defined using the regions in \cref{fig:WWregionsbnd_pt}, is non-zero if non-trivial point-like excitations can be created near the boundary. If $\Delta_{\bullet}=0$, all point-like excitations on $P$ can be created with a local operator, so they are necessarily trivial. Conversely, if there are non-trivial point-like particles near the boundary, $\Delta_{\bullet}>0$. 
\begin{figure} \centering \includeTikz{Boundary_point_A}{ \begin{tikzpicture}[scale=0.65] \draw (-3,-2,2)--(-3,0,2)--(-3,0,.75) (-3,0,-.75)--(-3,0,-2); \draw (-3,0,-.75)--(-3,-.6,-.75); \draw (3,-.75,-.75)--(-3,-.75,-.75); % \fill[blue!20] (-3,0,2)--(3,0,2)--(3,0,.75)--(-3,0,.75)--cycle; \fill[blue!20] (-3,0,-2)--(3,0,-2)--(3,0,-.75)--(-3,0,-.75)--cycle; \filldraw[fill=white] (3,-2,2)--(3,0,2)--(3,0,.75)--(3,-.75,.75)--(3,-.75,-.75)--(3,0,-.75)--(3,0,-2)--(3,-2,-2)--cycle; % \draw[black!20,dashed] (2.4,-.75,-.75)--(-3,-.75,-.75); \draw[black!20,dashed] (3,-.75,.75)--(-3,-.75,.75)--(-3,0,.75) (-3,-.75,.75)--(-3,-.75,-.75)--(-3,-.54,-.75); % \draw (-3,-2,2)--(3,-2,2); \draw (-3,0,2)--(3,0,2); \draw (-3,0,-2)--(3,0,-2); \draw (-3,0,.75)--(3,0,.75); \draw (-3,0,-.75)--(3,0,-.75); % \draw[] (3,-2,2)--(3,0,2)--(3,0,.75)--(3,-.75,.75)--(3,-.75,-.75)--(3,0,-.75)--(3,0,-2)--(3,-2,-2)--cycle; \node[circle,inner sep=.1pt,fill=white] at (-.5,-1.75,0) {$B$}; \node[circle,inner sep=.1pt,fill=white] at (6.4,2.5,5) {\color{white}.}; \end{tikzpicture} } \hspace{1.5cm} \includeTikz{Boundary_point_B}{ \begin{tikzpicture}[scale=0.65] \begin{scope}[shift={(-7.5,0,0)}] \filldraw[fill=white] (3,-.75,-.75)--(4.5,-.75,-.75)--(4.5,0,-.75)--(3,0,-.75)--cycle; \filldraw[fill=white] (3,-.75,.75)--(4.5,-.75,.75)--(4.5,0,.75)--(3,0,.75)--cycle; \filldraw[fill=white] (4.5,-.75,-.75)--(4.5,-.75,.75)--(4.5,0,.75)--(4.5,0,-.75)--cycle; \filldraw[fill=blue!20] (3,0,-.75)--(4.5,0,-.75)--(4.5,0,.75)--(3,0,.75)--cycle; \end{scope} \filldraw[fill=white] (-3,0,2)--(3,0,2)--(3,0,.75)--(-3,0,.75)--cycle; \fill[blue!20] (-3,0,-2)--(3,0,-2)--(3,0,2)--(-3,0,2)--cycle; \draw (-1.5,0,-.75)--(-1.5,0,.75);\draw (1.5,0,-.75)--(1.5,0,.75); \draw (-3,0,-.75)--(-3,0,.75);\draw (3,0,-.75)--(3,0,.75); \draw (-3,-2,2)--(-3,0,2)--(-3,0,.75)--(-3,0,-.75)--(-3,0,-2); \fill[white] (-3,-2,2)--(3,-2,2)--(3,0,2)--(-3,0,2)--cycle; \draw (-3,-2,2)--(3,-2,2);\draw (-3,0,2)--(3,0,2);\draw (-3,0,-2)--(3,0,-2); \draw 
(-3,0,.75)--(3,0,.75);\draw (-3,0,-.75)--(3,0,-.75); \draw (3,-2,2)--(3,0,2)--(3,0,.75)--(3,0,-.75)--(3,0,-2)--(3,-2,-2)--cycle; \begin{scope} \filldraw[fill=white] (3,-.75,-.75)--(4.5,-.75,-.75)--(4.5,0,-.75)--(3,0,-.75)--cycle; \filldraw[fill=white] (3,-.75,.75)--(4.5,-.75,.75)--(4.5,0,.75)--(3,0,.75)--cycle; \filldraw[fill=white] (4.5,-.75,-.75)--(4.5,-.75,.75)--(4.5,0,.75)--(4.5,0,-.75)--cycle; \filldraw[fill=blue!20] (3,0,-.75)--(4.5,0,-.75)--(4.5,0,.75)--(3,0,.75)--cycle; \end{scope} \draw[] (-3,-2,2)--(3,-2,2)--(3,0,2)--(-3,0,2)--cycle; % \node[circle,inner sep=.01pt,fill=white] at (-.5,-1.75,0) {$B$}; \node[circle,inner sep=.01pt,fill=blue!20] at (0,0,0) {$C$}; \node[inner sep=0pt,fill=blue!20] at (-2.25,0,0) {$D$};\node[inner sep=0,fill=blue!20] at (2.25,0,0) {$D$}; \node[circle,inner sep=.01pt,fill=blue!20] at (-3.75,0,0) {$P$};\node[circle,inner sep=.01pt,fill=blue!20] at (3.75,0,0) {$P$}; \end{tikzpicture} } \caption{Partitioning of the lattice for detecting point-like excitations on the boundary. The top (blue) surface is on the physical boundary of the lattice. If $\Delta_{\bullet}$ is small, excitations on $P$ can be created by only acting on $PD$ and so have trivial statistics.} \label{fig:WWregionsbnd_pt} \end{figure} The final diagnostic is designed to detect nontrivial loop-like excitations. Using the regions depicted in \cref{fig:WWregionsbnd_lp}, this diagnostic is \begin{align} \Delta_{\circ}:=S_{BC}+S_{CD}-S_B-S_D.\label{eqn:WWloopdef} \end{align} Similarly to the other diagnostics, if $\Delta_{\circ}$ is zero, then line-like excitations must be trivial. Conversely, $\Delta_{\circ}$ must be nonzero if non-trivial loop excitations can be created at the boundary. 
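To see how the two boundary diagnostics can disagree, consider the closed forms from \cref{tab:resultsummary} for a symmetric input category: $\Delta_{\bullet}=\log\mathcal{D}^2-\log d_A$ and $\Delta_{\circ}=\log d_A$. The following Python sketch evaluates these for the 3D toric code ($\mathcal{C}=\mathbb{Z}_2$ with trivial braiding, $\mathcal{D}^2=2$) with the trivial algebra $A=1$ and with a $d_A=2$ algebra at which the point charge condenses; the identification of these algebras with the two standard toric code boundaries is an assumption of the illustration:

```python
import math

def boundary_tee_symmetric(D_sq, d_A):
    """Boundary diagnostics for a Walker-Wang model with symmetric input
    (closed forms from the summary table):
      Delta_point = log D^2 - log d_A,   Delta_loop = log d_A."""
    return math.log(D_sq) - math.log(d_A), math.log(d_A)

D_sq = 2  # C = Z_2 with trivial braiding (3D toric code), D^2 = 1^2 + 1^2

# Trivial algebra A = 1: the point diagnostic fires, the loop diagnostic does not.
pt_a1, lp_a1 = boundary_tee_symmetric(D_sq, d_A=1)   # (log 2, 0)

# d_A = 2 algebra (the point charge condenses): the roles swap.
pt_a2, lp_a2 = boundary_tee_symmetric(D_sq, d_A=2)   # (0, log 2)
print(pt_a1, lp_a1, pt_a2, lp_a2)
```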
\begin{figure} \centering \includeTikz{Boundary_line_A}{ \begin{tikzpicture}[scale=0.65] \draw[fill=blue!20] (-2,0,-2)--(-2,0,2)--(2,0,2)--(2,0,-2)--cycle; \draw[fill=blue!20] (-1,0,-1)--(-1,0,1)--(1,0,1)--(1,0,-1)--cycle; \draw (2,-2,-2)--(2,0,-2) (2,-2,2)--(2,0,2) (-2,-2,2)--(-2,0,2); \draw(-2,-2,2)--(2,-2,2)--(2,-2,-2); \draw(-2,-1,2)--(2,-1,2)--(2,-1,-2); \node[] at (0,0,0) {$C$}; \node[] at (0,-.5,2) {$B$}; \node[] at (0,-1.5,2) {$D$}; \end{tikzpicture} } \hspace{1.5cm} \includeTikz{Boundary_line_B}{ \begin{tikzpicture}[scale=0.65] \draw[fill=blue!20] (-2,-1,-2)--(-2,-1,2)--(2,-1,2)--(2,-1,-2)--cycle; \draw[fill=white] (-1,-1,-1)--(-1,-1,1)--(1,-1,1)--(1,-1,-1)--cycle; \draw (2,-2,-2)--(2,-1,-2) (2,-2,2)--(2,-1,2) (-2,-2,2)--(-2,-1,2); \draw(-2,-1,-2)--(-2,-1,2)--(2,-1,2)--(2,-1,-2)--cycle; \draw (-2,-2,2)--(2,-2,2)--(2,-2,-2); \begin{scope} \clip (-1,-1,-1)--(-1,-1,1)--(1,-1,1)--(1,-1,-1)--cycle; \draw (-1,-1,-1)--(-1,-2,-1); \end{scope} \node[] at (0,-1.5,2) {$B$}; \end{tikzpicture} } \hspace{1.5cm} \includeTikz{Boundary_line_C}{ \begin{tikzpicture}[scale=0.65] \draw (2,-2,-2)--(2,-1,-2) (2,-2,2)--(2,-1,2) (-2,-2,2)--(-2,-1,2); \draw(-2,-1,-2)--(-2,-1,2)--(2,-1,2)--(2,-1,-2)--cycle; \draw (-2,-2,2)--(2,-2,2)--(2,-2,-2); \node[] at (0,-1.5,2) {$D$}; \end{tikzpicture} } \caption{Partitioning of the lattice for detecting line-like excitations on the boundary. The top (blue) surface is on the boundary of the lattice. If $\Delta_{\circ}$ is small, excitations on $B$ can be created without acting on $C$ and so have trivial statistics.}\label{fig:WWregionsbnd_lp} \end{figure} The diagnostics presented in \onlinecite{kim2015ground} were found using generic arguments about the support of deformable operators that are used to create excitations. As such, it was shown rigorously that the null outcome is obtained only if a boundary does not give rise to topological particles. 
Conversely, a boundary that gives rise to topological excitations must give a positive reading for these diagnostics. However, due to spurious contributions~\cite{BravyiUnpublished,Cano15, Zou16, Williamson19, Kato19}, the generic arguments cannot guarantee that the diagnostics do not give false positives and, moreover, the work gives no interpretation for the magnitude of a positive reading. In our work, we restrict to loop-gas models. In that setting, for a large class of models, we obtain expressions for the topological entanglement entropy near the boundary. \section{Introduction}\label{sec:introduction} The classification of topological phases is central to the study of modern condensed matter physics~\cite{haldane1984periodic,Wen1989vacuum, wen1990ground, wen2004quantum}. Moreover, topological phases have properties that may be valuable for the robust storage and manipulation of quantum information~\cite{kitaev1997fault, Brown2016Quantum}. Their characteristics include a stable gap at zero temperature and quasiparticle excitations with non-trivial braid statistics~\cite{wilczek1982magnetic,wilczek1982quantum}. An important class of topological phases is represented by topological loop-gas models~\cite{levin2005string,Walker2012TQFT}. These models can be defined in terms of an input unitary fusion category, and their ground states are given by superpositions of string diagrams labeled by objects from the category. The categorical framework provides a collection of local relations that ensure topological invariance of the ground states. In (2+1)-dimensions, these models are called Levin-Wen (LW) models~\cite{levin2005string}. LW models have point-like excitations, commonly called anyons, with non-trivial fusion rules and braid statistics. In (3+1)-dimensions, the input category must be equipped with a premodular braiding, leading to a Walker-Wang (WW) model~\cite{Walker2012TQFT}. Generically, WW models support point-like and loop-like excitations.
In contrast to LW models, the excitations in the bulk of a WW model may be trivial, specifically if the input category is modular. Loop-gas models can be defined on manifolds with boundaries by modifying the local relations governing the strings in the vicinity of the boundary. One way to define a boundary to a topological loop-gas is to allow some strings to terminate on the boundary. This is captured in the current work using particular objects called algebras~\cite{1706.03329,1706.00650}. Despite their trivial bulk excitations, WW models may have highly non-trivial boundary excitations. In particular, the boundary excitations may be described by a LW model. Intimately connected to the topological properties of these phases is the long-range entanglement present in the ground state of the Hamiltonians describing these phases~\cite{chen2010local, wang2017twisted}. The long-range quantum correlations found in the ground states of topological phases can be measured using the topological entanglement entropy~\cite{hamma2005bipartite, kitaev2006topological, levin2006detecting}. We typically expect the entanglement entropy shared between two subsystems of the ground state of a gapped many-body system to respect an area law~\cite{eisert2010area}. However, for a sensible choice of bipartition, the entanglement entropy of the ground state of a topological phase has a constant universal correction~\cite{hamma2005bipartite}. In (2+1)-dimensions, it is known that this correction relates to the total quantum dimension of the quasiparticle excitations supported by the phase~\cite{kitaev2006topological, levin2006detecting}. We can also evaluate the quantum dimensions of individual excitations~\cite{dong2008topological} and defects~\cite{brown2013topological, Bonderson2017Anyonic} of a phase using topological entanglement entropy.
Other work has shown we can use the topological entanglement entropy to calculate the fusion rules~\cite{Shi2020} and braid statistics~\cite{zhang2012quasi} of (2+1)-dimensional phases. Generalizations of topological entanglement entropy diagnostics have been found~\cite{castelnovo2008topological, grover2011entanglement} for three-dimensional phases with bulk topological order. These diagnostics were first demonstrated using the three-dimensional toric code model~\cite{hamma2005string} as an example. This phase gives rise to one species of bosonic excitation that braids non-trivially with a loop-like excitation in the bulk of the system. In contrast, particular classes of Walker-Wang models~\cite{Walker2012TQFT} have been shown to behave differently using the same diagnostics. Modular examples of these models demonstrate zero bulk topological entanglement entropy~\cite{von2013three, bullivant2016entropic}, even though, at their boundary, they realize quasiparticle excitations with non-trivial braid statistics~\cite{von2013three}. In \onlinecite{kim2015ground}, two new diagnostics were found to interrogate the long-range entanglement at the boundary of a three-dimensional topological phase. The behavior of the diagnostics was determined by quite generic considerations of the support of creation operators for topological excitations, without assuming any knowledge of the underlying particle theory of the phase. It was shown that the diagnostics will show a null outcome only if all the particles that can be created at the boundary have trivial braid statistics. Conversely, boundary topological order must necessarily show positive topological entanglement entropy if quasi-particles that demonstrate non-trivial braid statistics can be created. In that work, the diagnostics were tested at the different boundaries of the three-dimensional toric code, where null outcomes were obtained at boundaries at which the appropriate types of particles condense.
However, a limitation of the diagnostics presented in that paper is that the meaning of a positive outcome is not well understood. From the input fusion category perspective, the topological entanglement entropies can be understood as arising from constraints on the `string flux' passing through a surface. In (3+1)-dimensions, there are also additional corrections due to braiding. Allowing strings to terminate in the vicinity of a physical boundary alters the flux (and braiding) constraints there, thereby altering the topological entropy. In this work, we obtain closed-form expressions for bulk and boundary topological entanglement entropy diagnostics for topological loop-gas models. We obtain our results by evaluating the entanglement entropy of various regions of ground states of Levin-Wen and Walker-Wang models. This requires careful analysis of various string diagrams, such as the generalized $\S$-matrices, which encode the braiding properties of the input category. Additionally, we examine how the inclusion of boundaries, via algebra objects, alters these diagrams, and hence the topological entropy. In all cases, we find that the entropy can be expressed in terms of the quantum dimension of the input category and the quantum dimension of the algebra object. In the bulk of (3+1)-dimensional models, we conjecture, and prove in many cases, that the entropy is the logarithm of the total quantum dimension of the particle content of the theory, extending the results of \onlinecite{bullivant2016entropic}. \subsubsection*{Overview} Following a brief summary of our results, the remainder of this paper is structured as follows. In \cref{sec:preliminaries}, we provide some mathematical definitions and minor results that are required for the remainder of the paper. In \cref{sec:models}, we briefly review the models of interest, and discuss the class of boundaries we consider.
In \cref{sec:entropydiagnostics}, we explain the origin and meaning of topological entanglement entropy, and define the diagnostics used to detect boundary topological entanglement entropy. In \cref{sec:bulkentropy}, we compute the entropy of bulk regions for Levin-Wen models. These computations are required for the Walker-Wang models, and provide a good warm-up. We then discuss the additional considerations for Walker-Wang models, and extend the computations to these. In \cref{sec:boundaryentropy}, we compute the boundary entropy diagnostics for Levin-Wen models with boundary, followed by some classes of Walker-Wang models with boundary. We summarize in \cref{sec:remarks}. We include two appendices. In \cref{app:FC_pfs}, we provide proofs of some lemmas concerning (generalized) $\S$-matrices. In \cref{app:SN_results}, we provide proofs of some results concerning loop-gas models, and their entropies. \subsection{Summary of results} \begin{table}\renewcommand{\arraystretch}{1.2}\setlength{\tabcolsep}{15pt} \centering \begin{threeparttable} \begin{tabular}{!{\vrule width 1pt}>{\columncolor[gray]{.9}[\tabcolsep]}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}} \toprule[1pt] \rowcolor[gray]{.9}[\tabcolsep] Model & Bulk strings & Boundary algebra object & TEE \\ \toprule[1pt] & & & $\gamma=\log\mathcal{D}^2_{\drinfeld{\mathcal{C}}}$ \\ \greycline{3-4} \multirow{-2}{*}{Levin-Wen} & \multirow{-2}{*}{$\mathcal{C}$ fusion} & $A$ & $\Gamma=\log\mathcal{D}^2$ \\ \toprule[1pt] & & & $\delta=\log\mathcal{D}^2_{\mug{\mathcal{C}}}$\tnote{a} \\ \greycline{3-4} & & & $\Delta_{\bullet}=?$\tnote{b} \\ & & \multirow{-2}{*}{$A$} & $\Delta_{\circ}=\log d_A^2-\log\mathcal{D}^2+\Delta_{\bullet}$ \\ \greycline{3-4} & & & $\Delta_{\bullet}=\log\mathcal{D}^2$ \\ \multirow{-5}{*}{Walker-Wang} & \multirow{-5}{*}{$\mathcal{C}$ premodular} & \multirow{-2}{*}{$A=1$} & $\Delta_{\circ}=0$ \\ \toprule[1pt] & & & $\delta=\log\mathcal{D}^2_{\mug{\mathcal{C}}}$ \\ 
\greycline{3-4} & & & $\Delta_{\bullet}=\log\mathcal{D}^2-\log d_A$ \\ \multirow{-3}{*}{Walker-Wang} & \multirow{-3}{*}{$\mathcal{C}$ symmetric} & \multirow{-2}{*}{$A$} & $\Delta_{\circ}=\log d_A$ \\ \toprule[1pt] & & & $\delta=\log\mathcal{D}^2_{\mug{\mathcal{C}}}$ \\ \greycline{3-4} & & & $\Delta_{\bullet}=\log\mathcal{D}^2-2\log d_A$ \\ & & \multirow{-2}{*}{$A$ such that $A\cap\mug{\mathcal{C}}=\{1\}$\tnote{c}} & $\Delta_{\circ}=0$ \\ \greycline{3-4} & & & $\Delta_{\bullet}=\log\mathcal{D}^2-\log d_A$ \\ \multirow{-5}{*}{Walker-Wang} & \multirow{-5}{*}{$\mathcal{C}$ pointed} & \multirow{-2}{*}{$A$ such that $A\cap\mug{\mathcal{C}}=A$} & $\Delta_{\circ}=\log d_A$ \\ \toprule[1pt] \end{tabular} \footnotesize \begin{tablenotes} \item[a] Conjectured, proven in many cases. \item[b] We do not have a general form at present. \item[c] Includes the case $\mathcal{C}$ modular. \end{tablenotes} \end{threeparttable} \caption{Summary of results; technical terms are defined in \cref{sec:preliminaries}. The bulk strings are labeled by a unitary fusion category $\mathcal{C}$, possibly with extra structure. $\mathcal{D}$ denotes the total quantum dimension of $\mathcal{C}$, $A$ is an algebra object (with extra structure, see \cref{sec:preliminaries}) of dimension $d_A$, $\drinfeld{\mathcal{C}}$ and $\mug{\mathcal{C}}$ are the Drinfeld and M\"uger centers of $\mathcal{C}$ respectively.} \label{tab:resultsummary} \end{table} In \cref{tab:resultsummary}, we summarize our main results. Technical terms used in this summary are defined in \cref{sec:preliminaries}. The models we discuss will be introduced in the following sections, followed by the proofs of these results. We note that many of these results were previously known, for example, the bulk Levin-Wen result appears in \onlinecites{kitaev2006topological,levin2006detecting}. When the Levin-Wen model is defined by the fusion category $\vvec{G}$, the boundary LW results appear in \onlinecite{1801.01519}.
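The rows of \cref{tab:resultsummary} are mutually consistent. In particular, the general premodular relation $\Delta_{\circ}=\log d_A^2-\log\mathcal{D}^2+\Delta_{\bullet}$ reduces to the symmetric-input row and to the trivial-algebra row upon substituting the corresponding $\Delta_{\bullet}$. A short Python sketch makes this cross-check explicit (illustrative only; no data beyond the table is assumed):

```python
import math

def delta_loop_general(d_A, D_sq, delta_point):
    """General premodular relation from the summary table:
    Delta_loop = log d_A^2 - log D^2 + Delta_point."""
    return math.log(d_A ** 2) - math.log(D_sq) + delta_point

# Symmetric-input row: Delta_point = log D^2 - log d_A should give
# Delta_loop = log d_A via the general relation.
for D_sq in (2.0, 4.0, 9.0):
    for d_A in (1.0, 2.0, 3.0):
        delta_point = math.log(D_sq) - math.log(d_A)
        assert abs(delta_loop_general(d_A, D_sq, delta_point) - math.log(d_A)) < 1e-12

# Trivial-algebra row (A = 1, d_A = 1): Delta_point = log D^2 gives Delta_loop = 0.
for D_sq in (2.0, 4.0, 9.0):
    assert abs(delta_loop_general(1.0, D_sq, math.log(D_sq))) < 1e-12
print("table rows consistent")
```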
The bulk results for symmetric and modular Walker-Wang models appear in \onlinecite{bullivant2016entropic}. We extend this to include all pointed inputs (all quantum dimensions equal to 1), as well as all input categories up to rank 5. This allows us to conjecture a general result. To the best of our knowledge, there are no results concerning boundary entropies of WW models beyond the 3D toric code~\cite{kim2015ground}. \section{Properties of the connected \texorpdfstring{$\S$}{S}-matrix}\label{app:FC_pfs} \begin{lemma_rep}[\ref{lem:productS}]\label{lem:productS_pf} Let $\mathcal{C}$ be a unitary premodular category. The matrix elements of $\S$ obey \begin{align} \frac{\mathcal{D}}{d_c}\S_{a,c}\S_{b,c} & =\S_{a\otimes b,c}=\sum_x N_{a,b}^x \S_{x,c}. \end{align} \begin{proof} We prove the second equality first. \begin{align} \S_{a\otimes b,c} & :=\frac{1}{\mathcal{D}}\begin{array}{c} \includeTikz{productS_1}{ \begin{tikzpicture}[scale=.75] \centerarc[draw=white,double=black,ultra thick](-.75,0)(0:180:1); \centerarc[draw=white,double=black,ultra thick](-.75,0)(0:180:.8); \draw[draw=white,double=black,ultra thick] (.75,0) circle (1); \centerarc[draw=white,double=black,ultra thick](-.75,0)(180:360:1); \centerarc[draw=white,double=black,ultra thick](-.75,0)(180:360:.8); \node[anchor=west,inner sep=.5] at(1.75,0) {\strut$c$}; \node[anchor=west,inner sep=.5] at(.25,0) {\strut$b$}; \node[anchor=east,inner sep=.5] at(-1.75,0) {\strut$\dual{b}$}; \node[anchor=west,inner sep=1] at(-1.55,0) {\strut$\dual{a}$}; \end{tikzpicture} } \end{array} =\frac{1}{\mathcal{D}}\sum_{x}\sum_{\mu=1}^{N_{ab}^x}\sqrt{\frac{d_x}{d_ad_b}} \begin{array}{c} \includeTikz{productS_2}{ \begin{tikzpicture}[scale=.75] \centerarc[draw=white,double=black,ultra thick](-.75,0)(30:180:1); \centerarc[draw=white,double=black,ultra thick](-.75,0)(30:180:.8); \draw[draw=white,double=black,ultra thick] (.75,0) circle (1); \centerarc[draw=white,double=black,ultra thick](-.75,0)(180:330:1); 
\centerarc[draw=white,double=black,ultra thick](-.75,0)(180:330:.8); \draw[semithick] ($(30:1)-(.75,0)$) to[out = 300,in=70] (.15,.15); \draw[semithick] ($(30:.8)-(.75,0)$) to[out = 300,in=110] (.15,.15); \draw[semithick] ($(330:1)-(.75,0)$) to[out = 60,in=300] (.15,-.15); \draw[semithick] ($(330:.8)-(.75,0)$) to[out = 60,in=240] (.15,-.15) -- (.15,.15); \node[anchor=west,inner sep=.5] at(1.75,0) {\strut$c$}; \node[anchor=east,inner sep=.5] at(-1.75,0) {\strut$\dual{b}$}; \node[anchor=west,inner sep=1] at(-1.55,0) {\strut$\dual{a}$}; \node[anchor=west,inner sep=.5] at(.15,0) {\strut$x$}; \node[anchor=south west,inner sep=.5] at(.15,.15) {\strut\scriptsize$\mu$}; \node[anchor=north west,inner sep=.5] at(.15,-.15) {\strut\scriptsize$\mu$}; \end{tikzpicture} } \end{array} \\ & =\frac{1}{\mathcal{D}}\sum_{x,\mu}\sqrt{\frac{d_x}{d_ad_b}} \begin{array}{c} \includeTikz{productS_3}{ \begin{tikzpicture}[scale=.75] \centerarc[draw=white,double=black,ultra thick](-.75,0)(0:150:1); \draw[draw=white,double=black,ultra thick] (.75,0) circle (1); \centerarc[draw=white,double=black,ultra thick](-.75,0)(210:360:1); \draw[semithick] ($(150:1)-(.75,0)$) to[out = 240,in=120] ($(210:1)-(.75,0)$); \draw[semithick] ($(150:1)-(.75,0)$) to[out = 300,in=60] ($(210:1)-(.75,0)$); \node[anchor=west,inner sep=.5] at(1.75,0) {\strut$c$}; \node[anchor=west,inner sep=.5] at(.25,0) {\strut$x$}; \node[anchor=east,inner sep=.5] at(-1.75,0) {\strut$\dual{b}$}; \node[anchor=west,inner sep=1] at(-1.5,0) {\strut$\dual{a}$}; \end{tikzpicture} } \end{array}=\sum_{x}N_{a,b}^{x} \S_{x,c}. \end{align} And the first equality. 
\begin{align} \frac{1}{\mathcal{D}} \begin{array}{c} \includeTikz{productS_1}{} \end{array} & =\frac{1}{\mathcal{D}}\sum_{x,\mu}\sqrt{\frac{d_x}{d_ad_c}}\frac{\theta_x}{\theta_a\theta_{\dual{c}}} \begin{array}{c} \includeTikz{productS_3a}{ \begin{tikzpicture}[scale=.75] \centerarc[draw=white,double=black,ultra thick](-.75,0)(0:180:1); \draw[draw=white,double=black,ultra thick] (.75,0) circle (1); \centerarc[draw=white,double=black,ultra thick](-.75,0)(180:360:1); \draw[shift={(.75,0)}] (160:1)to[out=100,in=0] (-1.25,.75)to[out=180,in=180](-1.25,-.75)to[out=0,in=260](200:1); \node[anchor=west,inner sep=.5] at(1.75,0) {\strut$c$}; \node[anchor=west,inner sep=.5] at(.25,0) {\strut$b$}; \node[anchor=west,inner sep=.5] at(-.25,0) {\strut$x$}; \node[anchor=east,inner sep=.5] at(-1,0) {\strut$\dual{a}$}; \end{tikzpicture} } \end{array} =\frac{1}{\mathcal{D}}\sum_{x,\mu}\sqrt{\frac{d_x}{d_ad_c}}\frac{\theta_x}{\theta_a\theta_{\dual{c}}} \begin{array}{c} \includeTikz{productS_3b}{ \begin{tikzpicture}[scale=.75] \centerarc[draw=white,double=black,ultra thick](-.75,0)(0:180:1); \draw[draw=white,double=black,ultra thick] (.75,0) circle (1); \centerarc[draw=white,double=black,ultra thick](-.75,0)(180:360:1); \draw[shift={(.75,0)}] (160:1)to[out=240,in=90](-1.25,0)to[out=270,in=120](200:1); \node[anchor=west,inner sep=.5] at(1.75,0) {\strut$c$}; \node[anchor=west,inner sep=.5] at(.25,0) {\strut$b$}; \node[anchor=west,inner sep=.5] at(-.25,0) {\strut$x$}; \node[anchor=east,inner sep=.5] at(-.5,0) {\strut$\dual{a}$}; \end{tikzpicture} } \end{array} \\% & =\left(\frac{1}{\mathcal{D} d_c}\sum_{x}\frac{\theta_x}{\theta_a\theta_{\dual{c}}}d_x\right) \begin{array}{c} \includeTikz{productS_4}{ \begin{tikzpicture}[scale=.75] \centerarc[draw=white,double=black,ultra thick](-.75,0)(0:180:1); \draw[draw=white,double=black,ultra thick] (.75,0) circle (1); \centerarc[draw=white,double=black,ultra thick](-.75,0)(180:360:1); \node[anchor=west,inner sep=.5] at(1.75,0) {\strut$c$}; 
\node[anchor=west,inner sep=.5] at(.25,0) {\strut$b$}; \end{tikzpicture} } \end{array} =\frac{\S_{a,c}}{d_c} \begin{array}{c} \includeTikz{productS_4}{} \end{array}=\frac{\mathcal{D}\S_{a,c}\S_{b,c}}{d_c}. \end{align} \end{proof} \end{lemma_rep} \begin{lemma_rep}[\ref{lem:sumS}]\label{lem:sumS_pf} Let $\mathcal{C}$ be a unitary premodular category, then \begin{align} \sum_b d_b \S_{a,b} & =\indicator{a\in\mug{\mathcal{C}}} d_a \mathcal{D}, \end{align} where $\indicator{W}=1 \iff W$ is true, and $\mug{\mathcal{C}}$ is the M\"uger center. \begin{proof} If $a\in \mug{\mathcal{C}}$, then \begin{align} \sum_b d_b\S_{a,b} & =\frac{1}{\mathcal{D}}\sum_{x,b}N_{a,\dual{b}}^x d_x d_b \\ & =\frac{1}{\mathcal{D}}\sum_{b}d_a d_b^2 \\ & =\mathcal{D} d_a \end{align} Otherwise, we have \begin{align} \frac{\mathcal{D}}{d_a}\S_{a,c}\sum_b d_b \S_{a,b} & =\sum_{x,b}N_{c,b}^x \S_{a,x}d_b=\sum_x \S_{a,x}d_xd_c, \end{align} so after relabeling the RHS summation variable \begin{align} \sum_b d_b \S_{a,b}\left(\S_{a,c}-\frac{d_a d_c}{\mathcal{D}}\right) & =0, \end{align} for all $c$. Unless $\mathcal{C}=\mug{\mathcal{C}}$ (in which case, $a\in \mug{\mathcal{C}}$), this implies the result. \end{proof} \end{lemma_rep} \begin{lemma_rep}[\ref{lem:TrScSc}]\label{lem:TrScSc_pf} Let $\mathcal{C}$ be a unitary premodular category, then \begin{align} \sum_{c\in\mathcal{C}}\Tr\S_c^\dagger\S_c & =\mathcal{D}^2, \end{align} where $\S_c$ is the connected $\S$-matrix and $\mathcal{D}$ is the total dimension of $\mathcal{C}$. 
\begin{proof} Using \cref{eqn:Scaction}, we have \begin{align} \sum_c\Tr\S_c^\dagger\S_c & =\sum_{a,\alpha,c}\left[\S_c^\dagger\S_c\right]_{(a,\alpha),(a,\alpha)} \\ & =\sum_{a,\alpha,c}\frac{\sqrt{d_c}}{\mathcal{D}^2}\sum_x d_x \begin{array}{c} \includeTikz{STrProof1}{ \begin{tikzpicture}[scale=.75] \centerarc[draw=white,double=black,ultra thick](0,0)(0:180:1); \draw[draw=white,double=black,ultra thick] (1.5,0) circle (1); \draw[draw=white,double=black,ultra thick] (-1.5,0) circle (1); \centerarc[draw=white,double=black,ultra thick](0,0)(180:360:1) \node[anchor=west,inner sep=.5] at(2.5,0) {\strut$a$}; \node[anchor=east,inner sep=.5] at(-2.5,0) {\strut$\dual{a}$}; \node[anchor=west,inner sep=.5] at(1,0) {\strut$x$}; \draw[thick] (1.5,-1)--(1.5,-1.2);\draw[thick] (-1.5,1)--(-1.5,1.2); \draw[thick] (1.5,-1.2)to[out=270,in=270] (3,0)to[out=90, in=90] (-1.5,1.2); \node[anchor=south,inner sep=.5] at(1.5,-1) {\strut$\alpha$}; \node[anchor=north,inner sep=.75] at(-1.5,1) {\strut$\alpha$}; \node[anchor=east,inner sep=.75] at(-1.5,1.25) {\strut$c$}; \node[anchor=west,inner sep=.75] at(3,0) {\strut$\dual{c}$}; \end{tikzpicture} } \end{array} \\ & =\sum_{a,\alpha,c}\frac{\sqrt{d_c}}{\mathcal{D}^2}\sum_x d_x \begin{array}{c} \includeTikz{STrProof2}{ \begin{tikzpicture}[scale=.75] \coordinate (X) at ($(-1.5,0)+({cos(130)},{sin(130)})$); \coordinate (Y) at ($(-1.5,0)+({cos(410)},{sin(410)})$); \centerarc[draw=white,double=black,ultra thick](0,0)(0:180:1); \draw[draw=white,double=black,ultra thick] (1.5,0) circle (1); \centerarc[draw=white,double=black,ultra thick](-1.5,0)(130:410:1); \centerarc[draw=white,double=black,ultra thick](0,0)(180:360:1) \node[anchor=west,inner sep=.5] at(2.5,0) {\strut$a$}; \node[anchor=east,inner sep=.5] at(-2.5,0) {\strut$\dual{a}$}; \node[anchor=west,inner sep=.5] at(1,0) {\strut$x$}; \draw[thick] (1.5,-1)--(1.5,-1.5); \draw[draw=white,double=black,ultra thick] (X)to[out=40,in=180] 
(0,2)to[out=0,in=90](3.25,0)to[out=-90,in=90](3,-1.5)to[out=-90,in=220](1.25,-1.75)--(1.5,-1.5); \draw[draw=white,double=black,ultra thick] (Y)to[out=130,in=180] (0,1.5)to[out=0,in=90](3,0)to[out=-90,in=305](1.75,-1.75)--(1.5,-1.5); \draw[thick] (1.5,-1.5)--(1.25,-1.75) (1.5,-1.5)--(1.75,-1.75) ; % \node[anchor=south,inner sep=.5] at(1.5,-1) {\strut$\alpha$}; \node[anchor=north,inner sep=.75] at(1.5,-1.6) {\strut$\alpha$}; \node[anchor=east,inner sep=.75] at(1.5,-1.25) {\strut$c$}; \end{tikzpicture} } \end{array}, \end{align} using the properties of the trace. Applying the premodular trap (\cref{cor:premodulartrap}), this gives \begin{align} \sum_c\Tr\S_c^\dagger\S_c & =\sum_{\substack{a,\alpha,c, \\z\in\mug{\mathcal{C}},\mu}} \sqrt{\frac{d_cd_z}{d_a^2}} \begin{array}{c} \includeTikz{STrProof3}{ \begin{tikzpicture}[scale=.75] \draw (-.5,-.75)--(0,-.5)--(0,.5)--(-.5,.75) (0,.5)--(.5,.75) (0,-.5)--(.5,-.75); % \draw[shift={(1,-1.5)}] (-.5,-.75)--(0,-.5)--(0,.5)--(-.5,.75) (0,.5)--(.5,.75) (0,-.5)--(.5,-.75); \draw (.5,.75) to[out=30,in=30] (1.5,-.75); \draw (-.5,.75) to[out=120,in=120] (1.5,1) to[out=300,in=-30] (1.5,-2.25); \draw (-.5,-.75) to[out=210,in=270] (-1,-.5)to[out=90,in=90](-1.5,-.5)to[out=270,in=210](.5,-2.25); \node[anchor=east,inner sep=.75] at(0,0) {$z$}; \node[anchor=north,inner sep=.75] at(0,-.5) {\strut\footnotesize$\mu$}; \node[anchor=south,inner sep=.75] at(0,.5) {\strut\footnotesize$\mu$}; \begin{scope}[shift={(1,-1.5)}] \node[anchor=east,inner sep=.75] at(0,0) {\strut$c$}; \node[anchor=north,inner sep=.75] at(0,-.5) {\strut\footnotesize$\alpha$}; \node[anchor=south,inner sep=.75] at(0,.5) {\strut\footnotesize$\alpha$}; \end{scope} \node[anchor=south,inner sep=.75] at(.5,-.75) {\strut$\dual{a}$}; \node[anchor=south,inner sep=.75] at(.5,-2.25) {\strut$\dual{a}$}; \node[anchor=west,inner sep=.75] at(1.5,-.75) {\strut$a$}; \node[anchor=south west,inner sep=.75] at(1.5,-2.25) {\strut$a$}; \end{tikzpicture} } \end{array} \\ & 
=\sum_{\substack{a,z\in\mug{\mathcal{C}},\mu}}\sqrt{d_z} \begin{array}{c} \includeTikz{STrProof4}{ \begin{tikzpicture}[scale=.75] \draw (-.5,-.75)--(0,-.5)--(0,.5)--(-.5,.75) (0,.5)--(.5,.75) (0,-.5)--(.5,-.75); \draw (-.5,-.75) to[out=210,in=270] (-1,-.5)to[out=90,in=90](-1.5,-.5)to[out=270,in=-45](.5,-.75); \draw[scale=-1] (-.5,-.75) to[out=210,in=270] (-1,-.5)to[out=90,in=90](-1.5,-.5)to[out=270,in=-45](.5,-.75); \node[anchor=east,inner sep=.75] at(0,0) {\strut$z$}; \node[anchor=north,inner sep=.75] at(0,-.5) {\strut\footnotesize$\mu$}; \node[anchor=south,inner sep=.75] at(0,.5) {\strut\footnotesize$\mu$}; \end{tikzpicture} } \end{array} \\ & =\sum_{\substack{a,z\in\mug{\mathcal{C}},\mu}}|\varkappa_a|^2\indicator{z=1}d_a^2 \\ & =\mathcal{D}^2, \end{align} where $\varkappa_a$ is the Frobenius-Schur indicator~\cite{kitaev2006anyons}. \end{proof} \end{lemma_rep} \section{Loop-gas models in two and three dimensions}\label{sec:models} In this work, we study loop-gas models. In their most general form, these models have ground states described by superpositions of string diagrams subject to a collection of rules; for example, a diagram may be declared invalid in a ground state superposition if it contains an `open string'. We focus on topological loop-gas models. In this case, the rules are a collection of local manipulations or moves that states must be invariant under. These are designed to ensure invariance under diffeomorphisms. Given a triangulation of the manifold, which provides a lattice structure on which a condensed matter model can be defined~\footnote{The lattice is the dual of the triangulation.}, the local moves ensure retriangulation invariance. Levin-Wen~\cite{wen2004quantum,levin2005string} (LW) and Walker-Wang~\cite{Walker2012TQFT,williamson2017hamiltonian,crane1993categorical,crane1997state} (WW) models are, respectively, two- and three-dimensional Hamiltonian models that give rise to topological loop-gas states as their ground states.
Hamiltonians for these models are given in \onlinecite{levin2005string} and \onlinecite{Walker2012TQFT} for LW and WW models respectively. Our results do not depend on the particular form of the Hamiltonian, but rather on universal properties of the ground states in the associated topological phase. Quasiparticle excitations in these models are defects in the ground state, corresponding to a local change in the rules. Far from the excitations, the excited state remains invariant under the original moves but, for example, a string may be allowed to terminate at the location of the excitation. \subsection{Bulk} Given a unitary spherical fusion category $\mathcal{C}$ (\cref{def:UFC}), and a lattice embedded in a 2-dimensional manifold, the ground states of LW models are superpositions of closed diagrams from the category. Strings lie along the edges of the lattice, and closed means they cannot terminate. On the lattice, the string types correspond to states of a qudit with dimension equal to the rank of $\mathcal{C}$. At the vertices, the fusion rules of the category dictate which strings can fuse. If a given configuration occurs in a particular ground state, then any other configuration that is obtainable by local moves (i.e.\ $F$- or loop-moves) also occurs in that ground state. The relative coefficients are dictated by the $F$-symbols, and consistency is ensured by the pentagon equation. Since the allowed moves are all local, there may be multiple ground states on manifolds with nontrivial genus. For example, a loop enclosing a cycle of the torus cannot be removed with the local loop move. The collection of excitations (anyons) resulting from the Levin-Wen construction is called the Drinfeld center, denoted $\drinfeld{\mathcal{C}}$. We refer to \onlinecite{MR3242743} for a formal definition. In (3+1)-dimensions, for WW models, the diagrams can also include crossings.
This requires additional data to be added to the category, in particular a braiding (\cref{def:BFC}). If we picture these diagrams embedded in 3-dimensional space, there is an ambiguity involved in these crossings. For example, if we look at a crossing from `the side', there is no crossing. This ambiguity can be resolved by widening the strings into ribbons. This is implemented by insisting that the braided category is premodular (\cref{def:Premodular}). Given a premodular category $\mathcal{C}$, and a lattice embedded in a 3D manifold, a WW model is defined in essentially the same way as a LW model. In addition to the $F$- and loop-moves, $R$-moves and insertions of links (or knots), such as the $\S$-matrix, are allowed. Again, given any closed string configuration, any other configuration that can be reached via these rules is included in the ground state superposition. Within the collection of string types, the subset that can be `unlinked' from any other string is called the M\"uger center of $\mathcal{C}$, denoted $\mug{\mathcal{C}}$ (\cref{def:mugC}). The M\"uger center labels the particle excitations of the Walker-Wang model~\cite{ZWang}.
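For pointed premodular categories, the M\"uger center can be computed directly from the twists: in the standard quadratic-form description of a pointed braided category over a finite abelian group, the full monodromy of $a$ and $b$ is $\theta_{a+b}/(\theta_a\theta_b)$, and the center consists of the objects with trivial monodromy against every object. The following minimal Python sketch illustrates this for $\ZZ{2}$ twist data; the function names are ours, and the quadratic-form formula is a standard fact assumed here rather than derived in the text.

```python
# Sketch: Mueger center of a pointed braided category over Z_n,
# assuming the standard quadratic-form description in which the
# full monodromy of a and b is theta_{a+b} / (theta_a * theta_b).
# Function names are illustrative, not from the manuscript.

def monodromy(a, b, twists):
    """Full braiding scalar of simple objects a and b, from the twists."""
    n = len(twists)
    return twists[(a + b) % n] / (twists[a] * twists[b])

def mueger_center(twists):
    """Objects that `unlink' from everything: trivial monodromy with all b."""
    n = len(twists)
    return [a for a in range(n)
            if all(abs(monodromy(a, b, twists) - 1) < 1e-9 for b in range(n))]

# Bosonic / fermionic toric-code input data: theta_x = +1 / -1.
print(mueger_center([1, 1]))    # symmetric: the whole category, [0, 1]
print(mueger_center([1, -1]))   # also symmetric: [0, 1]
# Semion input data: theta_x = i (or -i).
print(mueger_center([1, 1j]))   # modular: trivial center, [0]
```

Consistent with the discussion above, both $\ZZ{2}$ theories with $\theta_x=\pm1$ are symmetric (the center is everything), while the semion theories are modular (trivial center).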
\subsection{Boundaries} \begin{figure} \includeTikz{2DAlgebraLattice}{ \begin{tikzpicture} \foreach \x in {-2,...,1} \draw[ultra thick,draw=blue!20] (\x+1/2,1)--(\x+1/2,0) node [pos=0,above,inner sep=.1] {\strut$A$}; \begin{scope}[shift={(0,0)}] \draw (-2,0)--(2,0);\draw[dashed] (2,0)--(3,0);\draw[dashed] (-2,0)--(-3,0); \foreach \x in {-2,...,2} \draw (\x,0)--(\x,-1); \end{scope} \begin{scope}[shift={(0,-1)}] \draw (-2,0)--(2,0);\draw[dashed] (2,0)--(3,0);\draw[dashed] (-2,0)--(-3,0); \foreach \x in {-2,...,1} \draw (\x+1/2,0)--(\x+1/2,-1); \end{scope} \begin{scope}[shift={(0,-2)}] \draw (-2,0)--(2,0);\draw[dashed] (2,0)--(3,0);\draw[dashed] (-2,0)--(-3,0); \foreach \x in {-2,...,2} {\draw (\x,0)--(\x,-.5);\draw[dashed] (\x,-.5)--(\x,-1);}; \end{scope} \end{tikzpicture} } \caption{An algebra specifies a boundary for a Levin-Wen model on a `comb lattice'. Dashed lines indicate the lattice continues. The top, thick blue lines are labeled by an algebra $A$ that defines a physical boundary to the lattice.}\label{fig:Alattice2D} \end{figure} To include a physical boundary in a loop-gas model, the rules must be modified. For the topological loop-gas models, these rules are again defined by local moves in the vicinity of the boundary. These must be compatible with the bulk moves, and ensure topological/retriangulation invariance at the boundary. We restrict our attention to gapped boundaries. There are various equivalent classifications for the gapped boundaries of Levin-Wen models~\cite{Fuchs2002,kitaev2012models,Fuchs2013,Fuchs2015,1706.03329,1706.00650}. In this work, we use an internal classification. In this framework, gapped boundary conditions for Levin-Wen models are labeled by \define{indecomposable strongly separable, Frobenius algebra objects} (\cref{def:FrobAlg}) in $\mathcal{C}$ up to Morita equivalence~\cite{1706.03329,1706.00650}. We restrict to multiplicity free algebras for simplicity. 
These algebra objects are (not necessarily simple) objects in $\mathcal{C}$, and their simple subobjects are roughly the string types that are allowed to terminate on the boundary. On the comb lattice (\cref{fig:Alattice2D}), for example, the dangling edges only take values in the chosen algebra. Far from the boundary, the ground states look just like those with no boundary. Near the boundary, loops are no longer required to be closed; rather, they can terminate on the boundary if their label occurs within the algebra. We refer to \onlinecites{1706.00650,1706.03329} for more details, including an explicit Hamiltonian. Just as in the bulk, when we move to (3+1)-dimensions, the braiding must be taken into account. A general classification of gapped boundaries for Walker-Wang models has not been established, so we proceed with a class of boundaries generalizing those for Levin-Wen introduced above. As before, a boundary is labeled by an algebra object. Since the bulk is braided, an additional compatibility condition is required, namely that the algebra is commutative (\cref{eqn:commutativealg}). Finally, in this work, an \define{indecomposable, strongly separable, commutative, Frobenius algebra object} labels a gapped boundary condition of a Walker-Wang model~\footnote{Private communications with David Aasen}. \subsection{Examples}\label{sec:examples_phys} Recall the examples from \cref{sec:examples}. In (2+1)-dimensions, $\vvectwist{\ZZ{2}}{1,\pm 1}$ lead to the same loop-gas model, since the LW construction does not make use of the braiding. This model is the equally weighted superposition of all loop diagrams (with no branching due to the fusion rules). This is the ground state of the toric code model~\cite{kitaev1997fault}. Likewise, $\vvectwist{-1}{\pm i}$ correspond to the same loop-gas model.
Due to the nontrivial Frobenius-Schur indicator~\cite{kitaev2006anyons}, it is convenient to associate $-1$ to a loop rather than $+1$ (otherwise we must take extra care when bending lines). The ground state is therefore a superposition of loops, but weighted by $(-1)^{\text{number of loops}}$. This is commonly called the double-semion model. There are two possible (gapped) boundaries for the toric code, the `smooth' boundary, corresponding to the algebra $A_0$, and the `rough' boundary, corresponding to $A_1$. We refer to \onlinecite{brayvi1998quantum} for more details. The double-semion model only allows for one kind of (gapped) boundary, labeled by $A_0$. In (3+1)-dimensions, each of these categories labels a distinct WW model. The models $\vvectwist{\ZZ{2}}{1,1},\,\vvectwist{\ZZ{2}}{1,-1}$ are commonly called the \define{bosonic-} and \define{fermionic-} toric code models, respectively~\cite{hamma2005string,von2015walker}. Since these categories both have $\mug{\mathcal{C}}=\{1,x\}$, they both have a single particle excitation, in addition to the trivial excitation, whose self-statistics lead to the names of the models. The two models $\vvectwist{-1}{\pm i}$ are both referred to as semion models. In (3+1)-dimensions, the bosonic toric code still has two kinds of boundaries, but the remaining models are only compatible with the trivial boundary labeled by $A_0$. \section{Loop-gas results}\label{app:SN_results} In this section, given a fusion category $\mathcal{C}$, an $n$-tuple of simple objects $\vec{x}_n:=(x_1,x_2,\ldots,x_n)$, and a fixed simple object $a$, we use the notation \begin{align} N_{a}(\vec{x}_n):=\sum_{\vec{y}_{n-2}}N_{x_1,x_2}^{y_1}N_{y_1,x_3}^{y_2}\ldots N_{y_{n-2},x_n}^{a}, \end{align} where $\vec{y}_{n-2}:=(y_1,y_2,\ldots,y_{n-2})$, and the sum is over all tuples of simple objects in $\mathcal{C}$. $N_{a}(\vec{x}_n)$ counts the number of ways $\vec{x}_{n}$ can fuse to $a$. When it can easily be inferred, we omit the subscript on the tuple $\vec{x}$.
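In practice, $N_a(\vec{x}_n)$ is an iterated product of fusion matrices, $(N_x)_{y}^{\;z}=N_{y,x}^{z}$, applied to the unit. The following Python sketch computes it for the $\ZZ{2}$ fusion rules underlying the toric-code and double-semion examples above; the Fibonacci rules are included purely as an extra illustration with a nontrivial count and are not one of the manuscript's examples.

```python
# Sketch: counting fusion paths N_a(x_1,...,x_n) by iterated matrix
# multiplication, with N[x][y][z] = N_{y,x}^z. Z2 rules are those of the
# toric-code examples in the text; the Fibonacci rules are our own
# illustrative addition.

def fusion_paths(N, xs, a, unit=0):
    """Number of ways x_1 (x) x_2 (x) ... (x) x_n can fuse to a."""
    rank = len(N)
    vec = [1 if y == unit else 0 for y in range(rank)]  # start from the unit
    for x in xs:
        vec = [sum(vec[y] * N[x][y][z] for y in range(rank))
               for z in range(rank)]
    return vec[a]

# Z2 fusion rules: objects {1, x} with x (x) x = 1.
Z2 = [[[1, 0], [0, 1]],   # fusing with 1
      [[0, 1], [1, 0]]]   # fusing with x
# Fibonacci fusion rules: objects {1, t} with t (x) t = 1 + t.
FIB = [[[1, 0], [0, 1]],
       [[0, 1], [1, 1]]]

print(fusion_paths(Z2, [1, 1, 1, 1], 0))   # even number of x strings -> 1
print(fusion_paths(Z2, [1, 1, 1], 0))      # odd number of x strings -> 0
print(fusion_paths(FIB, [1, 1, 1, 1], 0))  # four taus to the unit -> 2
```

For Fibonacci fusion the counts grow as Fibonacci numbers, which makes this a convenient sanity check of the matrix-product formulation.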
\begin{lemma_rep}[\ref{lem:summingds}]\label{lem:summingds_pf} Let $\mathcal{C}$ be a unitary fusion category. Then \begin{align} \sum_{\vec{x}_n}N_a(\vec{x}_n)\prod_{j\leq n}d_{x_j} & =d_a\mathcal{D}^{2(n-1)},\label{eqn:sumd_1} \end{align} where $\mathcal{D}=\sqrt{\sum_a d_a^2}$ is the total quantum dimension of $\mathcal{C}$. \begin{proof} We proceed inductively. When $n=1$, $N_a(x)=\indicator{x=a}=N_{1,x}^{a}$, and \cref{eqn:sumd_1} reduces to $d_a=d_a$. Assume \cref{eqn:sumd_1} holds for the fusion of $n$ objects. Recall that for any fusion category, we have \begin{align} d_ad_b & =\sum_c N_{bc}^{a}d_c, \label{eqn:TheOneAbove} \end{align} and this holds for any cyclic permutation of the indices on $N_{bc}^{a}$. We now obtain \begin{align} \sum_{\vec{x}_{n+1}}N_a(\vec{x}_{n+1})\prod_{j\leq n+1}d_{x_j} & = \sum_{\vec{x}_n,y_{n-1}}N_{y_{n-1}}(\vec{x}_n)\prod_{j\leq n}d_{x_j}\sum_{x_{n+1}}N_{y_{n-1},x_{n+1}}^{a}d_{x_{n+1}} \\ & =\mathcal{D}^{2(n-1)}\sum_{x_{n+1},y_{n-1}}N_{y_{n-1},x_{n+1}}^{a}d_{y_{n-1}}d_{x_{n+1}} \\ & =\mathcal{D}^{2(n-1)}d_a\sum_{y_{n-1}}d_{y_{n-1}}^2 \\ & =d_a\mathcal{D}^{2n}, \end{align} where in the second line we used the induction assumption (\cref{eqn:sumd_1}), and in the third line we used \cref{eqn:TheOneAbove}. \end{proof} \end{lemma_rep} \begin{lemma_rep}[\ref{lem:sumlog}]\label{lem:sumlog_pf} Let $\mathcal{C}$ be a unitary fusion category. For the fusion of $n$ objects $\vec{x}=(x_1,x_2,\ldots,x_n)$, with $n>1$, we have \begin{align} \sum_{\vec{x}_n} N_a(\vec{x}_n)\frac{\prod_{j\leq n} d_{x_j}}{\mathcal{D}^{2(n-1)}}\log\prod_{k\leq n} d_{x_k} & =n d_a \sum_x \frac{d_x^2\log d_x}{\mathcal{D}^2}.\label{eqn:sumlog_1} \end{align} \begin{proof} We prove the claim inductively. The base case is when $n=2$.
\begin{align} \sum_{x_1,x_2} N_{x_1,x_2}^{a}\frac{d_{x_1}d_{x_2}}{\mathcal{D}^{2}}(\log d_{x_1}+\log d_{x_2}) & =\sum_{x_1,x_2} N_{x_1,x_2}^{a}\frac{d_{x_1}d_{x_2}}{\mathcal{D}^{2}}\log d_{x_1}+\sum_{x_1,x_2} N_{x_1,x_2}^{a}\frac{d_{x_1}d_{x_2}}{\mathcal{D}^{2}}\log d_{x_2} \\ & =d_a\sum_{x_1} \frac{d_{x_1}^2}{\mathcal{D}^{2}}\log d_{x_1}+d_a\sum_{x_2} \frac{d_{x_2}^2}{\mathcal{D}^{2}}\log d_{x_2} \\ & =2 d_a \sum_x \frac{d_x^2\log d_x}{\mathcal{D}^2}. \end{align} Assume \cref{eqn:sumlog_1} holds for $n$-tuples, then \begin{align} \sum_{\vec{x}_n,x_{n+1}} & N_a(\vec{x}_{n+1})\frac{\prod_{j\leq n+1} d_{x_j}}{\mathcal{D}^{2n}}\log\prod_{k\leq n+1} d_{x_k} = \!\!\sum_{\substack{\vec{x}_n \\y_{n-1},x_{n+1}}} \!\!N_{y_{n-1}}(\vec{x}_{n})N_{y_{n-1},x_{n+1}}^{a}\frac{\prod_{j\leq n} d_{x_j}}{\mathcal{D}^{2n}}d_{x_{n+1}}\log(\prod_{k\leq n} d_{x_k}d_{x_{n+1}})\\ & =\sum_{ \substack{\vec{x}_n \\y_{n-1},x_{n+1}}} N_{y_{n-1}}(\vec{x}_{n})\frac{\prod_{j\leq n}d_{x_j}}{\mathcal{D}^{2n}}N_{y_{n-1},x_{n+1}}^{a}d_{x_{n+1}}\left( \log\prod_{k\leq n} d_{x_k}+\log d_{x_{n+1}}\right) \\ & =n\sum_{x}\frac{d_x^2\log d_x}{\mathcal{D}^2}\sum_{y_{n-1},x_{n+1}}\frac{N_{y_{n-1},x_{n+1}}^{a}d_{y_{n-1}}d_{x_{n+1}}}{\mathcal{D}^2} +\sum_{y_{n-1},x_{n+1}} N_{y_{n-1},x_{n+1}}^{a} \frac{d_{y_{n-1}}d_{x_{n+1}}\log d_{x_{n+1}}}{\mathcal{D}^{2}} \\ & =nd_a\sum_{x}\frac{d_x^2\log d_x}{\mathcal{D}^2}\sum_{x_{n+1}}\frac{d_{x_{n+1}}^2}{\mathcal{D}^2} +d_a\sum_{x_{n+1}} \frac{d_{x_{n+1}}^2\log d_{x_{n+1}}}{\mathcal{D}^{2}} \\ & =(n+1) d_a \sum_x \frac{d_x^2\log d_x}{\mathcal{D}^2}. \end{align} \end{proof} \end{lemma_rep} \begin{lemma_rep}[\ref{lem:prtree_pf}]\label{lem:prtree_pf} Let $\mathcal{C}$ be a unitary fusion category.
Given a fixed fusion outcome $a$ on $n$ simple objects, the probability of the tree \begin{align} \begin{array}{c} \includeTikz{treeextraA}{ \begin{tikzpicture} \draw(0,0)--(1.75,1.75); \draw[dotted](1.75,1.75)--(2,2); \draw(2,2)--(3,3); \begin{scope} \clip(0,0)--(3,3)--(6,3)--(6,0)--(0,0); \draw(1,0)--(0,1); \draw(2,0)--(0,2); \draw(3,0)--(0,3); \draw(4.5,0)--(0,4.5); \draw(5.5,0)--(0,5.5); \end{scope} \node[below] at (0,0) {$x_1$}; \node[below] at (1,0) {$x_2$}; \node[below] at (2,0) {$x_3$}; \node[below] at (3,0) {$x_4$}; \node[below] at (4.5,0) {$x_{n-1}$}; \node[below] at (5.5,0) {$x_{n}$}; \node[above right] at (3,3) {$a$}; \node[above left] at (.75,.75) {$y_1$}; \node[above left] at (1.25,1.25) {$y_2$}; \node[above left] at (2.5,2.5) {$y_{n-2}$}; \node[below] at (.5,.5) {\tiny{$\mu_1$}}; \node[below] at (1,1) {\tiny{$\mu_2$}}; \node[below] at (1.5,1.5) {\tiny{$\mu_3$}}; \node[right] at (2.25,2.25) {\tiny{$\mu_{n-2}$}}; \end{tikzpicture} } \end{array},\label{eqn:treeextraA} \end{align} in the ground state of a topological loop-gas (Levin-Wen or Walker-Wang) model is \begin{align} \Pr[\vec{x},\vec{y},\vec{\mu}|a] & =\frac{\prod_{j\leq n}\Pr[x_j]}{\Pr[a]\prod_{k\leq n} d_{x_k}}d_a \\ & =\frac{\prod_{j\leq n} d_{x_j}}{d_{a}\mathcal{D}^{2(n-1)}}. \end{align} \begin{proof} Given a pair of objects $a,b$, the probability that they fuse to $c$ is given by~\cite{preskillnotes,bullivant2016entropic} \begin{align} \Pr[a\otimes b\to c]=\frac{N_{ab}^c d_c}{d_ad_b }, \end{align} so the probability that $x_1\otimes x_2\otimes\ldots\otimes x_n\to a$ is \begin{align} \Pr[x_1\otimes x_2\otimes x_3\otimes \ldots \otimes x_n\to a]= & \sum_{\vec{y}}\Pr[x_1\otimes x_2\to y_1]\Pr[y_1\otimes x_3\to y_2]\cdots \Pr[y_{n-2}\otimes x_n\to a] \\ = & \frac{N_{a}(\vec{x})}{\prod_{j\leq n} d_{x_j}}d_{a}, \end{align} where \begin{align} N_{a}(\vec{x}) := & \sum_{\vec{y}}N_{x_1x_2}^{y_1}N_{y_1x_3}^{y_2}\ldots N_{y_{n-2}x_n}^{a} \\ = & \sum_{\vec{y}}N_{a}(\vec{x},\vec{y}). 
\end{align} The probability of a configuration is \begin{align} \Pr[\vec{x},\vec{y}|a] & =\Pr[x_1\otimes x_2\otimes x_3\otimes \ldots \otimes x_n\to a]\frac{\prod_{j\leq n}\Pr[x_j]}{\Pr[a]} \\ & =\frac{N_{a}(\vec{x},\vec{y})d_a}{\prod_{k\leq n} d_{x_k}}\frac{\prod_{j\leq n}\Pr[x_j]}{\Pr[a]}, \end{align} where $\Pr[x_i]=d_{x_i}^2/\mathcal{D}^2$. For a fixed $\vec{x}$ and $\vec{y}$, all (allowed) $\vec{\mu}$ are equally likely, and there are $N_a(\vec{x},\vec{y})$ such configurations, so \begin{align} \Pr[\vec{x},\vec{y},\vec{\mu}|a] & =\frac{\prod_{j\leq n}\Pr[x_j]}{\Pr[a]\prod_{k\leq n} d_{x_k}}d_a \\ & =\frac{\prod_{j\leq n} d_{x_j}}{d_{a}\mathcal{D}^{2(n-1)}}. \end{align} Lemma~\ref{lem:summingds} can be used to show these are properly normalized. \end{proof} \end{lemma_rep} \begin{thm_rep}[\ref{thm:WWentropyexamples}]\label{thm:WWentropyexamples_pf} Suppose a Walker-Wang model is defined by a unitary premodular category of one of the following types: \begin{enumerate} \item $\mathcal{C}=\cat{A}\boxtimes\cat{B}$, where $\cat{A}$ is symmetric and $\cat{B}$ is modular~\cite{bullivant2016entropic}, \item $\mathcal{C}$ pointed, \item $\rk{\mathcal{C}}<6$ and multiplicity free, \item $\rk{\mathcal{C}}=\rk{\mug{\mathcal{C}}}+1$ and $d_x=\mathcal{D}_{\mug{\mathcal{C}}}$, where $x$ is the additional object. \end{enumerate} Then \cref{eqn:conjecture} holds.
As a consequence, the topological entanglement entropy (defined using the regions in \cref{fig:WWregionsblk}) in the bulk is given by \begin{align} \delta & =\log\mathcal{D}_{\mug{\mathcal{C}}}^2.\label{eqn:WalkerWangBulkTEE_app} \end{align} As special cases, this includes \begin{align} \delta_{\text{modular}} & =0 \\ \delta_{\text{symmetric}} & =\log \mathcal{D}^2 \end{align} \begin{proof}~ \subsection{Case 1} Using the premodular trap (\cref{cor:premodulartrap}), we have the matrix elements of $\S_c^\dagger\S_c$ \begin{align} \left[\S_c^\dagger \S_c\right]_{(a,\alpha),(b,\beta)} & =\sqrt{\frac{d_c}{d_ad_b}}\sum_{x\in\mug{\mathcal{C}},\mu}\sqrt{d_x} \begin{array}{c} \includeTikz{SdotS}{ \begin{tikzpicture}[scale=.75] \pgfmathsetmacro{\s}{sqrt(2)/2} \draw (0,-.5)--(0,.5)--(-\s,\s+.5) (0,.5)--(\s,\s+.5); \draw (0,-.5)--(-\s,-\s-.5) (0,-.5)--(\s,-\s-.5); \draw (-\s,\s+.5) to [out=180+45,in=180-45] (-\s,-\s-.5); \draw (\s,\s+.5) to [out=-45,in=45] (\s,-\s-.5); \draw[] (\s,-\s-.5)to[out=270,in=270] (2.5,0)to[out=90, in=90](-\s,\s+.5); \node[anchor=east,inner sep=.2] at (0,.5){\strut$\mu$}; \node[anchor=east,inner sep=.2] at (0,-.5){\strut$\mu$}; \node[anchor=west,inner sep=.5] at (0,0){\strut$x$}; \node[inner sep=.5] at (1,0){\strut$b$}; \node[inner sep=.5] at (-1,0){\strut$\dual{a}$}; \node[inner sep=.5,anchor=south east] at (-\s,\s+.5){\strut$\alpha$}; \node[inner sep=.5,anchor=north east] at (\s,-\s-.5){\strut$\beta$}; \node[anchor=east,inner sep=.5] at (2.5,0){\strut$\dual{c}$}; \node[anchor=south west,inner sep=.2] at (-\s/2,.5+\s/2) {\strut$a$}; \node[anchor=north east,inner sep=.2] at (\s/2,-.5-\s/2) {\strut$\dual{b}$}; \end{tikzpicture} } \end{array}\label{eqn:SdotS}. \end{align} If $\mathcal{C}$ is symmetric, $\mug{\mathcal{C}}=\mathcal{C}$, and \begin{align} \left[\S_c^\dagger \S_c\right]_{(a,\alpha),(b,\beta)} & =\indicator{c=1}d_ad_b. \end{align} This matrix is rank 1, with eigenvalue $\mathcal{D}^2$. 
If $\mathcal{C}$ is modular, $\mug{\mathcal{C}}=\vvec{}$, and \begin{align} \left[\S_c^\dagger \S_c\right]_{(a,\alpha),(b,\beta)} & =\indicator{a=b}\indicator{\alpha=\beta}\indicator{\dual{a}\otimes a=c}d_c. \end{align} For fixed $c$, this matrix is rank $\sum_a N_{\dual{a},a}^c$, with all eigenvalues equal to $d_c$. \subsection{Case 2} If $\mathcal{C}$ is pointed (every simple object has dimension 1), then $\S_c=0$ unless $c=1$. In this case, the fusion rules are given by a finite Abelian group $A$~\cite{Joyal_1993,MR3242743}, and $\mug{\mathcal{C}}=A^\prime$ has fusion rules given by a subgroup. From \cref{lem:productS,lem:sumS}, along with symmetries of the $\S_1$ matrix proven in \onlinecite{kitaev2006anyons}, we know that \begin{align} \left[\S_1^\dagger\S_1\right]_{ab}=\sum_{c\in\mug{\mathcal{C}}}N_{a\dual{b}}^cd_c,\label{eqn:S1S1} \end{align} so \begin{align} \left[\S_1^\dagger\S_1\right]_{ab}=1\iff a\in bA^\prime. \end{align} Therefore, $[\S_1^\dagger \S_1]$ is a block matrix, with $[A:A^\prime]=|A|/|A^\prime|$ blocks, labeled by the cosets of $A^\prime$, each full of ones. There are thus $[A:A^\prime]$ nonzero eigenvalues, each equal to $\mathcal{D}^2_{\mug{\mathcal{C}}}$. The entropy is given by \begin{align} \delta & =\log\mathcal{D}_{\mug{\mathcal{C}}}^2. \end{align} \subsection{Case 3} Case 3 is proven explicitly in the attached Mathematica file~\cite{premoddata}. Classification of the fusion rings for ranks 2-5 can be found in \onlinecite{rk2,rk3,Rowell2009On,rk4,rk5}, along with \onlinecite{Yu_2020}. Additionally, all multiplicity free fusion rings for ranks 1-6 can be found at \onlinecite{anyonwiki}. From this, explicit $F$ and $R$ data can be found. The list of categories, along with their properties, is included beginning on \cpageref{sec:case3}.
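The block structure in Case 2 is easy to check numerically: by \cref{eqn:S1S1}, for a pointed category the $(a,b)$ entry of $\S_1^\dagger\S_1$ is $1$ precisely when $a\in bA^\prime$. The Python sketch below does this for the illustrative choice $A=\ZZ{4}$ with $A^\prime$ its index-2 subgroup (our choice, not an example from the text), confirming that the coset indicator vectors are eigenvectors with eigenvalue $|A^\prime|=\mathcal{D}^2_{\mug{\mathcal{C}}}$.

```python
# Sketch: block structure of S1^dagger S1 for a pointed category.
# By eqn (S1S1), the (a,b) entry is 1 iff a - b lies in the subgroup
# A' = Mueger center. The choice A = Z4, A' = {0, 2} is purely
# illustrative (our assumption, not an example from the manuscript).

def s1_dagger_s1(n, subgroup):
    """0/1 matrix with M[a][b] = 1 iff (a - b) mod n lies in the subgroup."""
    return [[1 if (a - b) % n in subgroup else 0 for b in range(n)]
            for a in range(n)]

def is_eigenvector(M, v, lam):
    """Check M v == lam * v by direct multiplication."""
    n = len(M)
    Mv = [sum(M[a][b] * v[b] for b in range(n)) for a in range(n)]
    return Mv == [lam * x for x in v]

M = s1_dagger_s1(4, {0, 2})
coset_vectors = [[1, 0, 1, 0], [0, 1, 0, 1]]  # indicators of the two cosets
print(all(is_eigenvector(M, v, 2) for v in coset_vectors))  # True
print(len({tuple(row) for row in M}))  # [A:A'] = 2 distinct rows (blocks)
```

Each coset indicator carries eigenvalue $|A^\prime|=2$, and there are $[A:A^\prime]=2$ of them, matching the eigenvalue count used in the entropy computation above.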
\subsection{Case 4} It is straightforward to check that if $a$ or $b$ is in $\mug{\mathcal{C}}$, then \begin{align} \left[\S_c^\dagger \S_c\right]_{(a,\alpha),(b,\beta)} & =\indicator{c=1}d_ad_b\indicator{a\in\mug{\mathcal{C}}}\indicator{b\in\mug{\mathcal{C}}}, \end{align} so $\S_c^\dagger \S_c$ has the form \begin{align} [\S_c^\dagger \S_c]= \kbordermatrix{ & \mug{\mathcal{C}} & \\ \mug{\mathcal{C}} & d_ad_b\indicator{c=1} & 0 \\ & 0 & X_c } =U \kbordermatrix{ & \mug{\mathcal{C}} & \\ \mug{\mathcal{C}} & \begin{matrix} \mathcal{D}_{\mug{\mathcal{C}}}^2\indicator{c=1} & 0 & \cdots \\0&0&\cdots\\\vdots&\vdots&\ddots \end{matrix} & 0 \\ & 0 & \tilde{X}_c }U^\dagger. \end{align} From the top left block, we have an eigenvector of $\S_1^\dagger\S_1$ with entries $v_a=\indicator{a\in\mug{\mathcal{C}}}d_a$ with eigenvalue $\mathcal{D}^2_{\mug{\mathcal{C}}}$. From \cref{lem:productS,lem:sumS}, along with symmetries of the $\S_1$ matrix proven in \onlinecite{kitaev2006anyons}, we know that \begin{align} \left[\S_1^\dagger\S_1\right]_{ab}=\sum_{c\in\mug{\mathcal{C}}}N_{a\dual{b}}^cd_c. \end{align} The vector with entries $w_a=d_a$ is also an eigenvector with the same eigenvalue: \begin{align} \sum_{b\in \mathcal{C}}\sum_{c\in\mug{\mathcal{C}}}N_{a\dual{b}}^c d_cd_b & =\sum_{c\in\mug{\mathcal{C}}}d_ad_c^2 \\ & =\mathcal{D}^2_{\mug{\mathcal{C}}}d_a, \end{align} so we have an orthogonal eigenvector $w-v$ with eigenvalue $\mathcal{D}^2_{\mug{\mathcal{C}}}$. If $\rk{\mathcal{C}}=\rk{\mug{\mathcal{C}}}+1$ and the additional object has $d_x=\mathcal{D}_{\mug{\mathcal{C}}}$, then all other eigenvalues must be $0$ since $\Tr \S_1^\dagger \S_1=2\mathcal{D}^2_{\mug{\mathcal{C}}}$ and $\mathcal{D}^2=\mathcal{D}^2_{\mug{\mathcal{C}}}+d_x^2$. The entropy of the WW model in the bulk is \begin{align} \delta & =\log\mathcal{D}_{\mug{\mathcal{C}}}^2.
\end{align} \end{proof} \end{thm_rep} \section{Preliminaries}\label{sec:preliminaries} Each of the physical models of interest in this work is defined by a collection of algebraic data. In the case of the (2+1)-dimensional models, this data can be conveniently packaged into a \emph{fusion category}. In one higher dimension, additional data is required, so the package is a \emph{premodular category}. Boundaries of these models can be specified using particular objects, called \emph{algebra objects} in the input category. In this section, we provide some (standard) definitions of these constructions, and introduce the notation we will use in the remainder of the manuscript. Many of the definitions that appear in this section are adapted from \onlinecite{0111204,0804.3587}. Throughout this work, we find it helpful to define a generalized Kronecker delta function. For this purpose, we use the \define{indicator function} $\indicator{X}=1\iff X=\mathrm{true}$, and zero otherwise. \begin{definition}[Unitary spherical fusion category]\label{def:UFC} We sketch the definition of a unitary (skeletal) fusion category. For a more complete definition, we refer to \onlinecite{MR3242743}, or for the physically minded reader \onlinecites{kitaev2006anyons,Bondersonthesis}. For our purposes, a unitary fusion category $\mathcal{C}$ consists of: \begin{itemize} \item A finite set of simple objects $\{1,a_1,a_2,\ldots a_k\}$, where $1$ is the distinguished object called the unit. \item For each triple of simple objects $a,b,c\in \mathcal{C}$, a finite dimensional vector space $\mathcal{C}(a\otimes b,c)$, called a fusion space. The dimension of $\mathcal{C}(a\otimes b,c)$ defines the integer $N_{ab}^c$. For any simple objects $a$ and $b$, fusion with the unit obeys $N_{1,a}^{b}=N_{a,1}^{b}=\indicator{a=b}$. 
\item Associator isomorphisms $(a\otimes b)\otimes c\cong a\otimes(b\otimes c)$ \item To every object $a\in \mathcal{C}$, a dual object $\dual{a}$ so that $\mathcal{C}(a\otimes\dual{a},1)\cong\mathbb{C}\cong\mathcal{C}(\dual{a}\otimes a,1)$. \end{itemize} We refer to the integers $N_{ab}^c$ as fusion rules. If all $N_{a,b}^c\in\{0,1\}$, we call the category \define{multiplicity free}. The number of simple objects is called the \define{rank} of $\mathcal{C}$, denoted $\rk{\mathcal{C}}$. The unique positive solution to the set of equations \begin{align} d_ad_b & =\sum_c N_{ab}^c d_c \end{align} defines the \define{quantum dimensions} $d_a$ of the simple objects. The \define{total quantum dimension} of $\mathcal{C}$ is defined as \begin{align} \mathcal{D}^2:=\sum_c d_c^2. \end{align} If all $d_a=1$ (equivalently $\mathcal{D}^2=\rk{\mathcal{C}}$), the category is called \define{pointed}. We remark that in the physics literature, this property is commonly called Abelian. It is common to use a diagrammatic calculus of string diagrams to discuss fusion categories. Strings are labeled by objects from the category. If, in particular, a string is labeled by a simple object, it cannot change its labeling to another simple object except at a vertex, since there are no morphisms between distinct simple objects. Frequently, we neglect to draw strings labeled by the unit object for simplicity. In string diagrams, the quantum dimensions are assigned to loops \begin{align} \begin{array}{c} \includeTikz{bubbleA}{ \begin{tikzpicture}[scale=.5]; \draw (0,0) circle (.75); \node[anchor=east] at (-.75,0) {$a$}; \end{tikzpicture} } \end{array} & =d_a,\label{eqn:bubble} \end{align} and we refer to the insertion or removal of a loop as a \define{loop move}. 
Once we choose a basis for $\mathcal{C}(a\otimes b,c)$, we denote a basis vector $\mu\in\mathcal{C}(a\otimes b,c)$ using a trivalent vertex\footnote{We refrain from drawing arrows on the diagrams, instead using the convention that all lines are oriented upwards.} \begin{align} \begin{array}{c} \includeTikz{trivalent1}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,0)--(1,1) node[below,pos=0,inner sep=.1]{\strut$a$} (1,1)--(2,0)node[below,pos=1,inner sep=.1]{\strut$b$} (1,1)--(1,1+\s)node[above,pos=1,inner sep=.1]{\strut$c$}; \node[left] at (1,1) {\strut$\mu$}; \end{tikzpicture} } \end{array}. \end{align} With these bases fixed, the associators can be expressed as unitary matrices \begin{align} \begin{array}{c} \includeTikz{FLHS}{ \begin{tikzpicture}[scale=.5,yscale=-1]; \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,0)--(0,1)--(-1,2)--(-2,3) (-1,2)--(0,3) (0,1)--(2,3); \node[anchor=south,inner sep=.1] at (0,0) {\strut$d$}; \node[anchor=north,inner sep=.1] at (-2,3) {\strut$a$}; \node[anchor=north,inner sep=.1] at (0,3) {\strut$b$}; \node[anchor=north,inner sep=.1] at (2,3) {\strut$c$}; \node[anchor=south east,inner sep=.1] at (-.5,1.5) {\strut$e$}; \node[anchor=west] at (0,1) {\strut$\alpha$}; \node[anchor=west] at (-1,2) {\strut$\beta$}; \end{tikzpicture} } \end{array} & = \sum_{(\mu,f,\nu)}\bigg[F^d_{abc}\bigg]_{(\alpha,e,\beta), (\mu,f,\nu)} \begin{array}{c} \includeTikz{FRHS}{ \begin{tikzpicture}[scale=.5,xscale=-1,yscale=-1]; \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,0)--(0,1)--(-1,2)--(-2,3) (-1,2)--(0,3) (0,1)--(2,3); \node[anchor=south,inner sep=.1] at (0,0) {\strut$d$}; \node[anchor=north,inner sep=.1] at (-2,3) {\strut$c$}; \node[anchor=north,inner sep=.1] at (0,3) {\strut$b$}; \node[anchor=north,inner sep=.1] at (2,3) {\strut$a$}; \node[anchor=south west,inner sep=.1] at (-.5,1.5) {\strut$f$}; \node[anchor=east] at (0,1) {\strut$\mu$}; \node[anchor=east] at (-1,2) {\strut$\nu$}; \end{tikzpicture} } \end{array}, \end{align} with $F^d_{abc}$ a 
unitary matrix for each valid choice of labels. This re-association is commonly referred to as an \define{$F$-move}. These matrices must obey the pentagon equations \begin{align} \sum_\delta \bigg[F^e_{fcd}\bigg]_{(\alpha,g,\beta), (\delta,x,\rho)}\bigg[F^e_{abx}\bigg]_{(\delta,f,\gamma), (\mu,y,\nu)} & = \sum_{\substack{(\sigma,z,\tau) \\\epsilon}} \bigg[F^g_{abc}\bigg]_{(\beta,f,\gamma), (\sigma,z,\tau)}\bigg[F^e_{azd}\bigg]_{(\alpha,g,\sigma), (\mu,y,\epsilon)} \bigg[F^y_{bcd}\bigg]_{(\epsilon,z,\tau), (\nu,x,\rho)}. \end{align} Additionally, the unit obeys the triangle equation~\cite{MR3242743}, however we always, without loss of generality, choose the unit to be strict. The category $\mathcal{C}$ is equipped with a dagger structure, so we also have dual spaces to each fusion space. These are represented using upside-down vertices. We normalize the basis for $\mathcal{C}(a\otimes b,c)$ and $\mathcal{C}(c,a\otimes b)$ so that \begin{align} \begin{array}{c} \includeTikz{TVV_norm_LHS}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)} \begin{scope}[yscale=-1] \draw (1,1)--(0,0) node[pos=1,left,inner sep=.5] {\strut$a$} node[pos=0,left]{\strut$\mu$} (1,1)--(1,1+\s)node[pos=.9,right,inner sep=.5]{\strut$c$} (1,1)--(2,0) node[pos=1,right,inner sep=.5] {\strut$b$}; \end{scope} \draw (1,1)--(0,0) node[pos=0,left]{\strut$\nu$} (1,1)--(1,1+\s)node[pos=.9,right,inner sep=.5]{\strut$d$} (1,1)--(2,0); \end{tikzpicture} } \end{array} & =\indicator{c=d}\indicator{\mu=\nu} \sqrt{\frac{d_ad_b}{d_c}} \begin{array}{c} \includeTikz{TVV_norm_RHS}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,-1-\s)--(0,1+\s) node[right,pos=.5,inner sep=.5]{\strut$c$}; \end{tikzpicture} } \end{array}, \\ \begin{array}{c} \includeTikz{identitynormLHS}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,-1-\s)--(0,1+\s) node[left,pos=0,inner sep=.5]{\strut$a$};\node[left,inner sep=.5] at (0,1+\s){\phantom{\strut$a$}}; \draw (1,-1-\s)--(1,1+\s) 
node[right,pos=0,inner sep=.5]{\strut$b$};\node[right,inner sep=.5] at (1,1+\s){\phantom{\strut$b$}}; \end{tikzpicture} } \end{array} & =\sum_{c,\mu} \sqrt{\frac{d_c}{d_ad_b}} \begin{array}{c} \includeTikz{identitynormRHS}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)}; \draw (0,-1-\s)--(.5,-1)--(.5,1)--(0,1+\s) ; \draw (1,-1-\s)--(.5,-1)--(.5,1)--(1,1+\s); \node[left,inner sep=.5] at (0,-1-\s){\strut$a$};\node[left,inner sep=.5] at (0,1+\s){\strut$a$}; \node[right,inner sep=.5] at (1,-1-\s){\strut$b$};\node[right,inner sep=.5] at (1,1+\s){\strut$b$}; \node[right,inner sep=.5] at (.5,0){\strut$c$}; \node[left] at (.5,-1){\strut$\mu$};\node[left] at (.5,1){\strut$\mu$}; \end{tikzpicture} } \end{array}. \end{align} All diagrams behave as though they were drawn on the surface of a sphere, for example \begin{align} \begin{array}{c} \includeTikz{sphericalLHS}{ \begin{tikzpicture}[scale=1] \draw (0,.2) to[out=90,in=0] (-.25,.35)to[out=180,in=90] (-.5,0) to[out=270,in=180](-.25,-.35) to[out=0,in=270] (0,-.2); \filldraw[fill=white] (0,0) circle (.2);\node at (0,0) {$x$}; \end{tikzpicture} } \end{array} & = \begin{array}{c} \includeTikz{sphericalRHS}{ \begin{tikzpicture}[scale=1] \draw (0,.2) to[out=90,in=180] (.25,.35)to[out=0,in=90] (.5,0) to[out=270,in=0](.25,-.35) to[out=180,in=270] (0,-.2); \filldraw[fill=white] (0,0) circle (.2);\node at (0,0) {$x$}; \end{tikzpicture} } \end{array}, \end{align} where $x$ is any subdiagram. We refer to \onlinecites{kitaev2006anyons,Bondersonthesis} for a more detailed overview of these diagrams. \end{definition} \begin{definition}[Unitary braided fusion category]\label{def:BFC} Given a unitary fusion category $\mathcal{C}$, a braiding is a map $a\otimes b\cong b\otimes a$, which is compatible with the associator of $\mathcal{C}$. 
Graphically, the braiding is encoded in the unitary $R$ matrices \begin{align} \begin{array}{c} \includeTikz{braid_LHS}{ \begin{tikzpicture}[scale=.4] \pgfmathsetmacro{\s}{sqrt(2)} \draw[draw=white,double=black,ultra thick] (2,-2) to [out=135,in=225] (0,0)--(1,1)--(2,0) to [out=315,in=45] (0,-2); \draw[draw=white,double=black,ultra thick](2,0) to [out=315,in=45] (0,-2); \draw [thick](1,1)--(1,1+\s) node[pos=1,above,inner sep=.1] {\strut$c$}; \node[below,inner sep=.1] at (0,-2) {\strut$a$};\node[below,inner sep=.1] at (2,-2) {\strut$b$}; \node[left] at (1,1) {\strut$\mu$}; \end{tikzpicture} } \end{array} & =\sum_\nu\bigg[R_{ab}^c\bigg]_{\mu,\nu} \begin{array}{c} \includeTikz{braid_RHS}{ \begin{tikzpicture}[scale=.4] \pgfmathsetmacro{\s}{sqrt(2)} \draw [thick](1,1)--(1,1+\s)node[pos=1,above,inner sep=.1] {\strut$c$};; \draw [thick] (1,1)--(0,-2); \draw [thick] (1,1)--(2,-2); \node[below,inner sep=.1] at (0,-2) {\strut$a$};\node[below,inner sep=.1] at (2,-2) {\strut$b$}; \node[left] at (1,1) {\strut$\nu$}; \end{tikzpicture} } \end{array}. \end{align} Compatibility with the (given) $F$ matrices is encoded in the hexagon equations \begin{align} \sum_{\gamma,\delta} \bigg[R_{ac}^e\bigg]_{\beta,\gamma} \bigg[F^d_{acb}\bigg]_{(\alpha,e,\gamma), (\sigma,f,\delta)} \bigg[R_{bc}^f\bigg]_{\delta,\tau} & = \sum_{\substack{(\epsilon,g,\rho) \\\mu}} \bigg[F^d_{cab}\bigg]_{(\alpha,e,\beta), (\epsilon,g,\rho)} \bigg[R_{gc}^d\bigg]_{\epsilon,\mu} \bigg[F^d_{abc}\bigg]_{(\mu,g,\rho), (\sigma,f,\tau)} \\ \sum_{\gamma,\delta} \bigg[R_{ca}^e\bigg]_{\gamma,\beta}^* \bigg[F^d_{acb}\bigg]_{(\alpha,e,\gamma), (\sigma,f,\delta)} \bigg[R_{cb}^f\bigg]_{\tau,\delta}^* & = \sum_{\substack{(\epsilon,g,\rho) \\\mu}} \bigg[F^d_{cab}\bigg]_{(\alpha,e,\beta), (\epsilon,g,\rho)} \bigg[R_{cg}^d\bigg]_{\mu,\epsilon}^* \bigg[F^d_{abc}\bigg]_{(\mu,g,\rho), (\sigma,f,\tau)}. 
\end{align} \end{definition} \begin{definition}[Premodular category]\label{def:Premodular} In a braided fusion category, we can define twists $\theta_a$ by \begin{align} \begin{array}{c} \includeTikz{thetaop1}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)}; \draw[draw=white,double=black,ultra thick](0,0)to[out=180,in=270](-.5,1); \draw[draw=white,double=black,ultra thick](-.5,-1)to[out=90,in=180](0,.5)to[out=0,in=0](0,0); \node[anchor=north,inner sep=.1] at (-.5,-1) {\strut$a$}; \end{tikzpicture} } \end{array} & =\theta_a \begin{array}{c} \includeTikz{thetaop2}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)}; \draw[draw=white,double=black,ultra thick](-.5,-1)--(-.5,1); \node[anchor=north,inner sep=.1] at (-.5,-1) {\strut$a$}; \end{tikzpicture} } \end{array}. \end{align} If these twists obey the ribbon equations \begin{align} \sum_\nu \left[R^c_{ba}\right]_{\mu,\nu}\left[R^c_{ab}\right]_{\nu,\rho} & =\frac{\theta_c}{\theta_a\theta_b}\indicator{\mu=\rho}, \end{align} we call $\mathcal{C}$ premodular. 
In a premodular category, we can define the $\S$-matrix \begin{align} \S_{a,b}:=\frac{1}{\mathcal{D}}\tr B_{a,\dual{b}} & =\frac{1}{\mathcal{D}} \begin{array}{c} \includeTikz{Smatrixdef}{ \begin{tikzpicture}[scale=.75] \centerarc[draw=white,double=black,ultra thick](-.75,0)(0:180:1); \draw[draw=white,double=black,ultra thick] (.75,0) circle (1); \centerarc[draw=white,double=black,ultra thick](-.75,0)(180:360:1) \node[anchor=west,inner sep=.5] at(1.75,0) {\strut$b$}; \node[anchor=west,inner sep=.5] at(.25,0) {\strut$a$}; \node[anchor=east,inner sep=.5] at(-.25,0) {\strut$\dual{b}$}; \node[anchor=east,inner sep=.5] at(-1.75,0) {\strut$\dual{a}$}; \end{tikzpicture} } \end{array} \label{eqn:Smatrix} \\ & =\frac{1}{\mathcal{D}}\sum_x N_{a,\dual{b}}^x \frac{\theta_x}{\theta_a\theta_{\dual{b}}}d_x, \end{align} where \begin{align} B_{a,b}:= \begin{array}{c} \includeTikz{RoperatorLHS}{ \begin{tikzpicture}[scale=.75] \draw[draw=white,double=black,ultra thick] (.5,0) to [out=90,in=270] (-.5,1); \draw[draw=white,double=black,ultra thick] (-.5,0) to [out=90,in=270] (.5,1) to[out=90,in=270] (-.5,2); \draw[draw=white,double=black,ultra thick] (-.5,1) to [out=90,in=270] (.5,2); \node[anchor=north,inner sep=.1] at (-.5,0) {\strut$a$};\node[anchor=north,inner sep=.1] at (.5,0) {\strut$b$}; \node[anchor=south,inner sep=.1] at (-.5,2) {\phantom{\strut$a$}};\node[anchor=south,inner sep=.1] at (.5,2) {\phantom{\strut$b$}}; \end{tikzpicture} } \end{array} & = \sum_{x,\mu}\sqrt{\frac{d_x}{d_ad_b}}\frac{\theta_x}{\theta_a\theta_b} \begin{array}{c} \includeTikz{RoperatorRHS}{ \begin{tikzpicture}[scale=.75] \draw (-.5,0)--(0,.5) (.5,0)--(0,.5)--(0,1.5)--(-.5,2) (0,1.5)--(.5,2); \node[anchor=north,inner sep=.1] at (-.5,0) {\strut$a$};\node[anchor=north,inner sep=.1] at (.5,0) {\strut$b$}; \node[anchor=south,inner sep=.1] at (-.5,2) {\strut$a$};\node[anchor=south,inner sep=.1] at (.5,2) {\strut$b$}; \node[anchor=west,inner sep=.1] at (0,1) {$x$}; \node[left] at (0,.5) {\strut$\mu$}; 
\node[left] at (0,1.5) {\strut$\mu$}; \end{tikzpicture} } \end{array}. \end{align} \end{definition} For the results in this manuscript, it is important to understand which strings can be `uncrossed'. This is captured by the M\"uger center. \begin{definition}[M\"uger center~\cite{0804.3587}]\label{def:mugC} Let $\mathcal{C}$ be a unitary premodular category. The \define{symmetric} or \define{M\"uger} center of $\mathcal{C}$ is the full subcategory of $\mathcal{C}$ with objects \begin{align} \mug{\mathcal{C}}:=\{X\in\mathcal{C}|B_{X,Y}=\id_{X\otimes Y}\forall Y\in \mathcal{C}\}. \end{align} A premodular category is \define{symmetric} if $\mug{\mathcal{C}}=\mathcal{C}$, and \define{modular} if $\mug{\mathcal{C}}=\vvec{}$. We will refer to premodular categories which are neither symmetric nor modular as \define{properly premodular}. The $\S$-matrix acts as a witness for these properties. Symmetric categories have (matrix) rank 1 $\S$-matrices, while modular categories have invertible (unitary) $\S$-matrices. \end{definition} It will be convenient to define a slight generalization of the $\S$-matrix. \begin{definition}[Connected $\S$-matrix]\label{def:commectedS} Recall that the trivalent vertices define a vector space, with the $\mu$ labels indicating basis vectors. 
We can therefore define the operator $\S_c$ by its action on the fusion space~\cite{kitaev2006anyons} \begin{align} \S_c \begin{array}{c} \includeTikz{Scop1}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)}; \draw (0,-\s)--(0,0)--(-1,1) (0,0)--(1,1); \node[anchor=north,inner sep=.1] at (0,-\s) {\strut$c$}; \node[anchor=south,inner sep=.1] at (-1,1) {\strut$b$};\node[anchor=south,inner sep=.1] at (1,1) {\strut$\dual{b}$}; \node[anchor=north west,inner sep=.3] at(0,0) {\strut$\beta$}; \end{tikzpicture} } \end{array} & =\frac{\sqrt{d_c}}{\mathcal{D}}\sum_x d_x \begin{array}{c} \includeTikz{Scop2}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)}; \draw[draw=white,double=black,ultra thick] (-.25,0) to [out=90,in=270] (-1,2); \draw[draw=white,double=black,ultra thick] (0,0) circle (1); \draw[draw=white,double=black,ultra thick] (-2,2) to [out=270,in=180] (-1,-2) to [out=0,in=270] (-.25,0); \draw (0,-1)--(0,-3); \node[anchor= north west,inner sep=.5] at(0,-1) {\strut$\beta$}; \node[anchor=west,inner sep=.5] at(1,0) {\strut$\dual{b}$}; \node[anchor=west,inner sep=1] at(-1,2) {\strut$x$}; \node[anchor=east,inner sep=1] at(-2,2) {\strut$\dual{x}$}; \node[anchor=north,inner sep=.5] at(0,-3) {\strut$c$}; \end{tikzpicture} } \end{array}.\label{eqn:Scaction} \end{align} The matrix elements of this operator are \begin{align} \left[\S_c\right]_{(a,\alpha),(b,\beta)} & =\frac{1}{\mathcal{D}} \begin{array}{c} \includeTikz{ConnectedSmatrix}{ \begin{tikzpicture}[scale=.65] \centerarc[draw=white,double=black,ultra thick](-.75,0)(0:180:1); \draw[draw=white,double=black,ultra thick] (.75,0) circle (1); \centerarc[draw=white,double=black,ultra thick](-.75,0)(180:360:1); \node[anchor=west,inner sep=.5] at(1.75,0) {\strut$b$}; \node[anchor=west,inner sep=.5] at(.25,0) {\strut$a$}; \node[anchor=east,inner sep=.5] at(-.25,0) {\strut$\dual{b}$}; \node[anchor=east,inner sep=.5] at(-1.75,0) {\strut$\dual{a}$}; \draw[thick] (.75,-1)--(.75,-1.2);\draw[thick] 
(-.75,1)--(-.75,1.2); \draw[thick] (.75,-1.2)to[out=270,in=270] (2.5,0)to[out=90, in=90] (-.75,1.2); \node[anchor=south,inner sep=.5] at(.75,-1) {\strut$\beta$}; \node[anchor=north,inner sep=.75] at(-.75,1) {\strut$\alpha$}; \node[anchor=east,inner sep=.5] at(-.75,1.25) {\strut$c$}; \node[anchor=west,inner sep=.5] at(2.5,0) {\strut$\dual{c}$}; \end{tikzpicture} } \end{array}.\label{eqn:connectedS} \end{align} The usual $\S$-matrix is $\S_1$. The connected $\S$-matrix is closely related to the punctured $\S$-matrix of \onlinecite{Bonderson_2019}. \end{definition} The (connected) $\S$-matrix has a number of properties that we require. \begin{lemma}\label{lem:productS} Let $\mathcal{C}$ be a unitary premodular category. The matrix elements of $\S$ obey \begin{align} \frac{\mathcal{D}}{d_c}\S_{a,c}\S_{b,c} & =\S_{a\otimes b,c}=\sum_x N_{a,b}^x \S_{x,c}. \end{align} \begin{proof} Provided in Appendix~\hyperref[lem:productS_pf]{\ref*{app:FC_pfs}}. \end{proof} \end{lemma} \begin{lemma}\label{lem:sumS} Let $\mathcal{C}$ be a unitary premodular category, then \begin{align} \sum_b d_b \S_{a,b} & =\indicator{a\in\mug{\mathcal{C}}} d_a \mathcal{D}, \end{align} where $\mug{\mathcal{C}}$ is the M\"uger center. \begin{proof} Provided in Appendix~\hyperref[lem:sumS_pf]{\ref*{app:FC_pfs}}. 
\end{proof} \end{lemma} \begin{corollary}[Premodular trap]\label{cor:premodulartrap} For any premodular theory, we have \begin{align} \begin{array}{c} \includeTikz{trapLHS}{ \begin{tikzpicture} \draw[draw=white,double=black,ultra thick] (.25,-.75)--(.25,0); \draw[draw=white,double=black,ultra thick] (0,0) circle (.5); \draw[draw=white,double=black,ultra thick] (.25,0)--(.25,.75); \node[anchor=west] at (.5,0) {\strut$a$}; \node[anchor=north,inner sep=.1] at (.25,-.75) {\strut$b$}; \end{tikzpicture} } \end{array} & =\frac{\S_{a,\dual{b}}}{\S_{1,\dual{b}}} \begin{array}{c} \includeTikz{trapRHS}{ \begin{tikzpicture} \draw[draw=white,double=black,ultra thick] (.25,-.75)--(.25,0); \draw[draw=white,double=black,ultra thick] (.25,0)--(.25,.75); \node[anchor=north,inner sep=.1] at (.25,-.75) {\strut$b$}; \end{tikzpicture} } \end{array}=\frac{\mathcal{D} \S_{a,\dual{b}}}{d_b}. \end{align} Using \cref{lem:productS,lem:sumS}, this gives \begin{align} \frac{1}{\mathcal{D}^2}\sum_a d_a \begin{array}{c} \includeTikz{trapLHS2}{ \begin{tikzpicture} \draw[draw=white,double=black,ultra thick] (-.5,-.75)to[out=45,in=270](-.25,0) (.5,-.75)to[out=90+45,in=270](.25,0); \draw[draw=white,double=black,ultra thick] (0,0) circle (.5); \draw[draw=white,double=black,ultra thick] (-.25,0)to[out=90,in=270+45](-.5,.75) (.25,0)to[out=90,in=180+45](.5,.75); \node[anchor=west] at (.5,0) {\strut$a$}; \node[anchor=north,inner sep=.1] at (-.5,-.75) {\strut$x$};\node[anchor=north,inner sep=.1] at (.5,-.75) {\strut$y$}; \node[anchor=south,inner sep=.1] at (-.5,.75) {\phantom{\strut$x$}};\node[anchor=south,inner sep=.1] at (.5,.75) {\phantom{\strut$y$}}; \end{tikzpicture} } \end{array} & =\sum_{z\in\mug{\mathcal{C}},\mu} \sqrt{\frac{d_z}{d_xd_y}} \begin{array}{c} \includeTikz{trapRHS2}{ \begin{tikzpicture} \draw (-.5,-.75)--(0,-.5)--(0,.5)--(-.5,.75) (.5,-.75)--(0,-.5)--(0,.5)--(.5,.75); \node[anchor=north,inner sep=.1] at (-.5,-.75) {\strut$x$};\node[anchor=north,inner sep=.1] at (.5,-.75) {\strut$y$}; 
\node[anchor=south,inner sep=.1] at (-.5,.75) {\strut$x$};\node[anchor=south,inner sep=.1] at (.5,.75) {\strut$y$}; \node[anchor=west,inner sep=.1] at(0,0) {\strut$z$}; \node[anchor=south east,inner sep=.3] at(0,-.5) {\strut$\mu$};\node[anchor=north east,inner sep=.3] at(0,.5) {\strut$\mu$}; \end{tikzpicture} } \end{array}. \end{align} \end{corollary} \begin{lemma}\label{lem:TrScSc} Let $\mathcal{C}$ be a unitary premodular category, then \begin{align} \sum_{c\in\mathcal{C}}\Tr\S_c^\dagger\S_c & =\mathcal{D}^2, \end{align} where $\S_c$ is the connected $\S$-matrix and $\mathcal{D}$ is the total dimension of $\mathcal{C}$. \begin{proof} Provided in Appendix~\hyperref[lem:TrScSc_pf]{\ref*{app:FC_pfs}}. \end{proof} \end{lemma} \begin{definition}[Algebra object]\label{def:Alg} An algebra $(A,m,\eta)$ in a fusion category is an object $A=1\oplus a_1\oplus a_2\oplus \cdots$, along with morphisms $m:A\otimes A\to A$ and $\eta:1\to A$. For simplicity, we restrict $\mathcal{C}$ and $A$ to be multiplicity free, that is, each simple object $a_i$ occurs at most once. We represent the multiplication morphism $m$ as \begin{align} m= \begin{array}{c} \includeTikz{Algm_1}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,0)--(1,1) node[below,pos=0,inner sep=.1]{\strut$A$} (1,1)--(2,0)node[below,pos=1,inner sep=.1]{\strut$A$} (1,1)--(1,1+\s)node[above,pos=1,inner sep=.1]{\strut$A$}; \fill [shift={(1,1)}] (-.1,-.1) rectangle (.1,.1); \end{tikzpicture} } \end{array} & =\sum_{a,b,c\in A}m_{ab}^c \begin{array}{c} \includeTikz{Algm_2}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,0)--(1,1) node[below,pos=0,inner sep=.1]{\strut$a$} (1,1)--(2,0)node[below,pos=1,inner sep=.1]{\strut$b$} (1,1)--(1,1+\s)node[above,pos=1,inner sep=.1]{\strut$c$}; \end{tikzpicture} } \end{array}. \end{align} To simplify the notation, we define $m_{xy}^z=0$ whenever any of the labels do not occur in the decomposition of $A$.
This allows us to always sum over simple objects in $\mathcal{C}$. Additionally, we suppress the $A$ label. Any unlabeled lines carry an implicit $A$. The multiplication of the algebra should be associative (in $\mathcal{C}$) \begin{align} \begin{array}{c} \includeTikz{AAssociativeLHS}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,0)--(1,1) (1,1)--(2,0) (1,1)--(2,2)--(2,2+\s) (2,2)--(4,0); \fill [shift={(1,1)}] (-.1,-.1) rectangle (.1,.1); \fill [shift={(2,2)}] (-.1,-.1) rectangle (.1,.1); \end{tikzpicture} } \end{array} & = \begin{array}{c} \includeTikz{AAssociativeRHS}{ \begin{tikzpicture}[scale=.5,xscale=-1] \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,0)--(1,1) (1,1)--(2,0) (1,1)--(2,2)--(2,2+\s) (2,2)--(4,0); \fill [shift={(1,1)}] (-.1,-.1) rectangle (.1,.1); \fill [shift={(2,2)}] (-.1,-.1) rectangle (.1,.1); \end{tikzpicture} } \end{array}, \end{align} or in components \begin{align} \sum_x m_{ab}^xm_{xc}^d \begin{array}{c} \includeTikz{AAssociativeLHS_2}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,0)--(1,1)node[below,pos=0,inner sep=.1]{\strut$a$} (1,1)--(2,0)node[below,pos=1,inner sep=.1]{\strut$b$} (1,1)--(2,2)--(2,2+\s)node[above,pos=1,inner sep=.1]{\strut$d$} (2,2)--(4,0)node[below,pos=1,inner sep=.1]{\strut$c$}; \node[above left,inner sep=.1] at(1.5,1.5) {\strut$x$}; \end{tikzpicture} } \end{array} & = \sum_y m_{ay}^dm_{bc}^y \begin{array}{c} \includeTikz{AAssociativeRHS_2}{ \begin{tikzpicture}[scale=.5,xscale=-1] \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,0)--(1,1)node[below,pos=0,inner sep=.1]{\strut$c$} (1,1)--(2,0)node[below,pos=1,inner sep=.1]{\strut$b$} (1,1)--(2,2)--(2,2+\s)node[above,pos=1,inner sep=.1]{\strut$d$} (2,2)--(4,0)node[below,pos=1,inner sep=.1]{\strut$a$}; \node[above right,inner sep=.1] at(1.5,1.5) {\strut$y$}; \end{tikzpicture} } \end{array} = \sum_{x,z} m_{ab}^xm_{xc}^d \bigg[F_{abc}^{d}\bigg]_{xz} \begin{array}{c} \includeTikz{AAssociativeLHS_3}{ \begin{tikzpicture}[scale=.5,xscale=-1] 
\pgfmathsetmacro{\s}{sqrt(2)} \draw (0,0)--(1,1)node[below,pos=0,inner sep=.1]{\strut$c$} (1,1)--(2,0)node[below,pos=1,inner sep=.1]{\strut$b$} (1,1)--(2,2)--(2,2+\s)node[above,pos=1,inner sep=.1]{\strut$d$} (2,2)--(4,0)node[below,pos=1,inner sep=.1]{\strut$a$}; \node[above right,inner sep=.1] at(1.5,1.5) {\strut$z$}; \end{tikzpicture} } \end{array}, \end{align} where the final equality is obtained by using the $F$-move on the left-hand side. The components of $m$ therefore obey \begin{align} m_{ay}^{d}m_{bc}^y & =\sum_x m_{ab}^xm_{xc}^d \bigg[F_{abc}^{d}\bigg]_{xy}.\label{eqn:algCompatibility} \end{align} Multiplication by the unit obeys \begin{align} \begin{array}{c} \includeTikz{AUnitLHS}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,0)--(1,1) (1,1)--(3,-1) (1,1)--(1,1+\s); \fill [shift={(1,1)}] (-.1,-.1) rectangle (.1,.1); \fill [shift={(0,0)}] (0,0) circle (.1); \end{tikzpicture} } \end{array} & = \begin{array}{c} \includeTikz{AUnitMid}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,-1)--(0,1+\s); \end{tikzpicture} } \end{array} = \begin{array}{c} \includeTikz{AUnitRHS}{ \begin{tikzpicture}[scale=.5,xscale=-1] \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,0)--(1,1) (1,1)--(3,-1) (1,1)--(1,1+\s); \fill [shift={(1,1)}] (-.1,-.1) rectangle (.1,.1); \fill [shift={(0,0)}] (0,0) circle (.1); \end{tikzpicture} } \end{array}, \end{align} or in components \begin{align} \eta m_{1x}^{x} & =1=\eta m_{x1}^x. \end{align} A \define{coalgebra} is the same, with everything flipped upside down. Two algebras $(A,m,\eta)$ and $(A,n,\theta)$ are isomorphic if they can be related by \begin{align} m_{ab}^c & =n_{ab}^c\frac{\beta_c}{\beta_a\beta_b}, \\ \eta & =\frac{1}{\beta_1}\theta, \end{align} where $\beta_a$ are nonzero complex numbers. We can (and will) always use this to normalize $\eta=1$. When it does not cause confusion, we will indicate an algebra by its object, for example $A=1$, the `unit algebra'.
\end{definition} \begin{definition}[Frobenius algebra]\label{def:FrobAlg} A Frobenius algebra is a quintuple $(A,m,\eta,\mu,\epsilon)$, where $(A,m,\eta)$ is an algebra, $(A,\mu,\epsilon)$ is a coalgebra. The algebra and coalgebra maps obey \begin{align} \begin{array}{c} \includeTikz{Frob_1}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)} \clip (-2,-1-\s/2) rectangle (2,1+\s/2); \begin{scope}[shift={(0,-\s/2)}] \draw (0,0)--(-1,-1) (0,0)--(1,-1) (0,0)--(0,\s); \fill [] (-.1,-.1) rectangle (.1,.1); \end{scope} \begin{scope}[shift={(0,\s/2)},yscale=-1] \draw (0,0)--(-1,-1) (0,0)--(1,-1) (0,0)--(0,\s); \fill [] (-.1,-.1) rectangle (.1,.1); \end{scope} \end{tikzpicture} } \end{array} & = \begin{array}{c} \includeTikz{Frob_2}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)} \clip (-2,-1-\s/2) rectangle (2,1+\s/2); \begin{scope}[shift={(-.5,.5)}] \draw (0,0)--(-1,-1)--(-1,-2.5) (0,0)--(1,-1) (0,0)--(0,\s); \fill [] (-.1,-.1) rectangle (.1,.1); \end{scope} \begin{scope}[shift={(.5,-.5)},yscale=-1] \draw (0,0)--(-1,-1) (0,0)--(1,-1)--(1,-2.5) (0,0)--(0,\s); \fill [] (-.1,-.1) rectangle (.1,.1); \end{scope} \end{tikzpicture} } \end{array} = \begin{array}{c} \includeTikz{Frob_3}{ \begin{tikzpicture}[scale=.5,xscale=-1] \pgfmathsetmacro{\s}{sqrt(2)} \clip (-2,-1-\s/2) rectangle (2,1+\s/2); \begin{scope}[shift={(-.5,.5)}] \draw (0,0)--(-1,-1)--(-1,-2.5) (0,0)--(1,-1) (0,0)--(0,\s); \fill [] (-.1,-.1) rectangle (.1,.1); \end{scope} \begin{scope}[shift={(.5,-.5)},yscale=-1] \draw (0,0)--(-1,-1) (0,0)--(1,-1)--(1,-2.5) (0,0)--(0,\s); \fill [] (-.1,-.1) rectangle (.1,.1); \end{scope} \end{tikzpicture} } \end{array}. \end{align} Since we only consider unitary $\mathcal{C}$, we require $\mu^{ab}_c=\left(m_{ab}^c\right)^*$. 
\end{definition} A Frobenius algebra is said to be \define{strongly separable} if \begin{align} \begin{array}{c} \includeTikz{A_strsep_1}{ \begin{tikzpicture}[scale=.5] \draw (0,-1)--(0,1); \fill [shift={(0,-1)}] (0,0) circle (.1); \fill [shift={(0,1)}] (0,0) circle (.1); \end{tikzpicture} } \end{array} & =\alpha_A \\ \begin{array}{c} \includeTikz{AspecialLHS}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)} \begin{scope}[yscale=-1] \draw (1,1)--(0,0) (1,1)--(1,1+\s) (1,1)--(2,0); \fill [shift={(1,1)}] (-.1,-.1) rectangle (.1,.1); \end{scope} \draw (1,1)--(0,0) (1,1)--(1,1+\s) (1,1)--(2,0); \fill [shift={(1,1)}] (-.1,-.1) rectangle (.1,.1); \end{tikzpicture} } \end{array} & = \beta_A \begin{array}{c} \includeTikz{AspecialRHS}{ \begin{tikzpicture}[scale=.5] \pgfmathsetmacro{\s}{sqrt(2)} \draw (0,-1-\s)--(0,1+\s); \end{tikzpicture} } \end{array},\label{eqn:algnorm} \end{align} where $\alpha_A\beta_A\neq 0$. We normalize $\alpha_A=1$ and $\beta_A=d_A$, where $d_A=\sum_{a\in A}d_a$. 
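As a concrete instance of the component condition \cref{eqn:algCompatibility}, the following sketch (a numerical check, not part of the attached data files) tests the candidate algebra $A=1\oplus x$ in the $\ZZ{2}$-graded examples of \cref{sec:examples}. The assumed inputs are the standard $\ZZ{2}$ associator, whose only nontrivial entry is $[F^x_{xxx}]=\omega$, and structure constants $m_{ab}^{a+b}$ all set to $1$; labels are encoded as $0=1$, $1=x$, with fusion given by addition mod 2.

```python
# Sketch: check the component associativity condition (eqn:algCompatibility)
# for the candidate algebra A = 1 + x in a Z2-graded category.
# Assumption: the only nontrivial associator entry is [F^x_{xxx}] = omega.

def is_algebra(omega):
    F = lambda a, b, c: omega if (a, b, c) == (1, 1, 1) else 1.0
    m = lambda a, b: 1.0              # m_{ab}^{a XOR b}; labels 0 = unit, 1 = x
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                lhs = m(a, b ^ c) * m(b, c)               # y = b + c
                rhs = m(a, b) * m(a ^ b, c) * F(a, b, c)  # x = a + b
                if abs(lhs - rhs) > 1e-12:
                    return False
    return True

print(is_algebra(+1))   # True:  A = 1 + x is a valid algebra when omega = +1
print(is_algebra(-1))   # False: the a = b = c = x component forces omega = 1
```

For $\omega=-1$, the only failing component is $a=b=c=x$, reproducing the reduction $m_{x,x}^{1}\omega=m_{x,x}^{1}$ stated in \cref{sec:examples}.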
If the underlying category $\mathcal{C}$ is braided, we say the algebra $A$ is \define{commutative} if \begin{align} \begin{array}{c} \includeTikz{AcommLHS}{ \begin{tikzpicture}[scale=.35] \pgfmathsetmacro{\s}{sqrt(2)} \draw[draw=white,double=black,ultra thick] (2,-2) to [out=135,in=225] (0,0)--(1,1)--(2,0) to [out=315,in=45] (0,-2); \draw[draw=white,double=black,ultra thick](2,0) to [out=315,in=45] (0,-2); \draw [thick](1,1)--(1,1+\s); \fill [shift={(1,1)}] (-.1,-.1) rectangle (.1,.1); \end{tikzpicture} } \end{array} & = \begin{array}{c} \includeTikz{AcommMid}{ \begin{tikzpicture}[scale=.35] \pgfmathsetmacro{\s}{sqrt(2)} \draw [thick](1,1)--(1,1+\s); \draw [thick] (1,1)--(0,-2); \draw [thick] (1,1)--(2,-2); \fill [shift={(1,1)}] (-.1,-.1) rectangle (.1,.1); \end{tikzpicture} } \end{array} = \begin{array}{c} \includeTikz{AcommRHS}{ \begin{tikzpicture}[scale=.35,xscale=-1] \pgfmathsetmacro{\s}{sqrt(2)} \draw[draw=white,double=black,ultra thick] (2,-2) to [out=135,in=225] (0,0)--(1,1)--(2,0) to [out=315,in=45] (0,-2); \draw[draw=white,double=black,ultra thick](2,0) to [out=315,in=45] (0,-2); \draw [thick](1,1)--(1,1+\s); \fill [shift={(1,1)}] (-.1,-.1) rectangle (.1,.1); \end{tikzpicture} } \end{array},\label{eqn:commutativealg} \end{align} or in components \begin{align} R_{ab}^cm_{ba}^c=m_{ab}^c=m_{ba}^c\left(R_{ba}^c\right)^*. \end{align} \subsection{Examples}\label{sec:examples} To aid understanding, and illustrate our results, we will refer to the following examples throughout the remainder of the manuscript. The unitary fusion category $\vvectwist{\ZZ{2}}{\omega}$ is the category of finite dimensional $\ZZ{2}$-graded vector spaces. The simple objects are labeled by the group elements $\ZZ{2}:=\set{1,x}{x^2=1}$. 
Since we neglect to draw the unit object, corresponding to the identity group element, the only nonzero trivalent vertex is \begin{align} \begin{array}{c} \includeTikz{trivalentVecZ2_1}{ \begin{tikzpicture}[scale=.25] \draw[thick] (0,0)--(1,1) (1,1)--(2,0); \end{tikzpicture} } \end{array}. \end{align} These are pointed categories, and so $d_x=1$. The total quantum dimension is $\mathcal{D}^2=2$. It is straightforward to check that there are exactly two inequivalent associators compatible with this fusion rule, namely \begin{align} \begin{array}{c} \includeTikz{FVecZ2_LHS}{ \begin{tikzpicture}[scale=.3,yscale=-1]; \draw[thick] (0,0)--(0,1) (-1,2)--(-2,3) (-1,2)--(0,3) (0,1)--(2,3); \end{tikzpicture} } \end{array} & = \omega \begin{array}{c} \includeTikz{FVecZ2_RHS}{ \begin{tikzpicture}[scale=.3,xscale=-1,yscale=-1]; \draw[thick] (0,0)--(0,1) (-1,2)--(-2,3) (-1,2)--(0,3) (0,1)--(2,3); \end{tikzpicture} } \end{array}, \end{align} with $\omega=\pm 1$. With the associator fixed, there are two compatible braidings \begin{align} \begin{array}{c} \includeTikz{braidVecZ2_LHS}{ \begin{tikzpicture}[scale=.3] \draw[draw=white,double=black,ultra thick] (2,-2) to [out=135,in=225] (0,0)--(1,1)--(2,0) to [out=315,in=45] (0,-2); \draw[draw=white,double=black,ultra thick](2,0) to [out=315,in=45] (0,-2); \end{tikzpicture} } \end{array} & =\phi \begin{array}{c} \includeTikz{braidVecZ2_RHS}{ \begin{tikzpicture}[scale=.3] \draw [thick] (1,1)--(0,-2); \draw [thick] (1,1)--(2,-2); \end{tikzpicture} } \end{array}, \end{align} with $\phi^2 = \omega$. The twists and $\S$-matrices of these models are given by \begin{align} \theta_x & =\phi & & & \S & =\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\1&\omega \end{pmatrix}, \end{align} so the categories are modular when $\omega=-1$, and symmetric when $\omega=+1$. We denote by $\vvectwist{\ZZ{2}}{\omega,\phi}$ the braided category, with associator $\omega$ and braiding $\phi$. 
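The quoted twists and $\S$-matrices can be cross-checked against the component formula following \cref{eqn:Smatrix}, $\S_{a,b}=\frac{1}{\mathcal{D}}\sum_x N_{a,\dual{b}}^x \frac{\theta_x}{\theta_a\theta_{\dual{b}}}d_x$. A minimal numerical sketch (labels $0=1$, $1=x$; every object of $\ZZ{2}$ is self-dual, all $d_a=1$, and $\mathcal{D}=\sqrt{2}$):

```python
import math

# Sketch: reproduce S = (1/sqrt(2)) [[1, 1], [1, omega]] for the Z2 examples
# from S_{a,b} = (1/D) sum_x N_{a,bbar}^x (theta_x / (theta_a theta_bbar)) d_x.
# Labels: 0 = unit, 1 = x; fusion is XOR, bbar = b, all d_a = 1, D = sqrt(2).

def S_matrix(phi):
    theta = [1, phi]                      # theta_1 = 1, theta_x = phi
    D = math.sqrt(2)
    return [[theta[a ^ b] / (theta[a] * theta[b]) / D for b in range(2)]
            for a in range(2)]

for phi, omega in [(1, 1), (-1, 1), (1j, -1), (-1j, -1)]:
    assert abs(phi**2 - omega) < 1e-12                   # constraint phi^2 = omega
    S = S_matrix(phi)
    assert abs(S[1][1] - omega / math.sqrt(2)) < 1e-12   # matches quoted S
    assert abs(S[0][1] - 1 / math.sqrt(2)) < 1e-12
print("all four Z2 braidings reproduce the quoted S-matrix")
```

Since $\phi^2=\omega$, the braidings allowed for $\omega=-1$ are $\phi=\pm i$, and both give the same (unitary, hence modular) $\S$-matrix, consistent with the classification above.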
These four examples are included in the attached Mathematica file~\cite{premoddata} as $\vvectwist{\ZZ{2}}{1,1}=\FR{2}{0}{1}{0}$, $\vvectwist{\ZZ{2}}{1,-1}=\FR{2}{0}{1}{2}$, $\vvectwist{\ZZ{2}}{-1,i}=\FR{2}{0}{1}{1}$, and $\vvectwist{\ZZ{2}}{-1,-i}=\FR{2}{0}{1}{3}$. For these examples, there are two possible algebras, namely $A_0:=1$, with trivial $m$ morphism, and $A_1:=1\oplus x$. The algebra $A_0$ is always a valid commutative algebra. For $A_1$, compatibility as a Frobenius algebra (\cref{eqn:algCompatibility}) reduces to \begin{align} m_{x,x}^{1}\omega & = m_{x,x}^{1}, \end{align} so $A_1$ is only a valid algebra object when $\omega = 1$. In that case, $A_1$ is commutative when \begin{align} m_{x,x}^1 \phi = m_{x,x}^{1} \iff \phi = 1. \end{align} \section{Remarks}\label{sec:remarks} To summarize, we have evaluated the long-range entanglement in the bulk, and at the boundary, of (2+1)- and (3+1)-dimensional topological phases. In (2+1) dimensions, we found the entropy diagnostic $\Gamma=\log\mathcal{D}^2$ regardless of the choice of boundary algebra $A$. This is in contrast to the results for three dimensions, where a signature of the boundary, namely its dimension as an algebra, can be seen in the diagnostics $\Delta_{\bullet}$ and $\Delta_{\circ}$. The most natural boundary for these models is defined by the algebra $A=1$, which (uniquely) always exists. At this boundary, we found that the point-like diagnostic $\Delta_{\bullet}$ recovers the total dimension of the input category. In particular, when $\mathcal{C}$ is a (2+1)D anyon model, this is consistent with a boundary that supports the anyons. Conversely, the loop-like diagnostic $\Delta_{\circ}$ is zero at these boundaries, ruling out loop-like excitations in the vicinity. We have conjectured a general property of the connected $\S$-matrix which, if proven in general, allows computation of bulk WW topological entropy.
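For the small examples tabulated below, the conjectured relation between the topological entanglement entropy and $\log\mathcal{D}^2_{\mug{\mathcal{C}}}$ can be tested directly from the $\S$-matrix: by \cref{lem:sumS}, an object $a$ lies in the M\"uger center exactly when $\sum_b d_b\S_{a,b}\neq 0$. A minimal sketch for the rank-2 pointed examples, using the $\S$-matrices quoted in \cref{sec:examples}:

```python
import math

# Sketch: locate the Mueger center from the S-matrix via Lemma sumS
# (sum_b d_b S_{a,b} = d_a D iff a is in the center, else 0), then
# compute log D^2_{Mueger} for comparison with the tabulated TEE.

def mueger_log_D2(S, d):
    rank = len(d)
    center = [a for a in range(rank)
              if abs(sum(d[b] * S[a][b] for b in range(rank))) > 1e-9]
    return math.log(sum(d[a] ** 2 for a in center))

d = [1, 1]
s = 1 / math.sqrt(2)
S_symmetric = [[s, s], [s, s]]    # omega = +1: rank-1 S-matrix
S_modular = [[s, s], [s, -s]]     # omega = -1: unitary S-matrix

print(mueger_log_D2(S_symmetric, d) == math.log(2))  # True: TEE = log 2
print(mueger_log_D2(S_modular, d) == 0.0)            # True: TEE = 0
```

This reproduces the first two rows of the table: $\FR{2}{0}{1}{0}$ (symmetric, TEE $=\log 2$) and $\FR{2}{0}{1}{1}$ (modular, TEE $=0$).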
Such a proof may also be interesting for the classification of premodular categories in general. To the best of our knowledge, there is no complete classification of boundaries for Walker-Wang models. Such a classification is complicated by requiring, as a sub-classification, a complete understanding of (2+1)D theories. This goes beyond the scope of the current work, and we have therefore specialized to boundaries described by Frobenius algebras and to particular families of input fusion categories. Extending these results may provide a more complete understanding of the possible boundary excitations and their properties. \subsection{Small category data}\label{sec:case3} Here we tabulate data for small categories. ``Valid'' indicates that the pentagon, hexagon, and ribbon equations, along with unitarity, are satisfied. ``TY'' indicates that the category has the property defined in Case 4 of \cref{thm:WWentropyexamples}. Full data, including explicit $F$ and $R$ symbols, is provided in the attached Mathematica files, also available at \onlinecite{premoddata}. Note that these may take \emph{a very long time} to check, due to the complicated algebraic integers that occur, which Mathematica must simplify. Categories are named $\FR{a}{b}{c}{x}$ according to their fusion ring $\mathrm{FR}^{a,b}_{c}$ from \onlinecite{anyonwiki}, along with their categorification ID $x$. Highlighted categories do not fall within any of the other cases in \cref{thm:WWentropyexamples}. \vfill \centering\begin{tabular}{ !{\vrule width 1pt}>{\columncolor[gray]{.9}[\tabcolsep]}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}} \toprule[1pt] \rowcolor[gray]{.9}[\tabcolsep] Cat. ID & Rank & $\mathcal{D}^2$ & Valid & $\rk{\mug{\mathcal{C}}}$ & Premodular? & Pointed? & TY?
& TEE & $\log\mathcal{D}_{\mug{\mathcal{C}}}^2$ & Conjecture true? \\ \toprule[1pt] $\FR{2}{0}{1}{0}$ & 2 & 2 & \ding{51} & 2 & Symm. & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{2}{0}{1}{1}$ & 2 & 2 & \ding{51} & 1 & Mod. & \ding{51} & \ding{51} & 0 & 0 & \ding{51} \\ $\FR{2}{0}{1}{2}$ & 2 & 2 & \ding{51} & 2 & Symm. & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{2}{0}{1}{3}$ & 2 & 2 & \ding{51} & 1 & Mod. & \ding{51} & \ding{51} & 0 & 0 & \ding{51} \\ $\FR{2}{0}{2}{0}$ & 2 & $\frac{1}{2} \left(\sqrt{5}+5\right)$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{2}{0}{2}{1}$ & 2 & $\frac{1}{2} \left(\sqrt{5}+5\right)$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ \toprule[1pt] $\FR{3}{0}{1}{0}$ & 3 & 4 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{3}{0}{1}{1}$ & 3 & 4 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{3}{0}{1}{2}$ & 3 & 4 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{3}{0}{1}{3}$ & 3 & 4 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{3}{0}{1}{4}$ & 3 & 4 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{3}{0}{1}{5}$ & 3 & 4 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{3}{0}{1}{6}$ & 3 & 4 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{3}{0}{1}{7}$ & 3 & 4 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{3}{0}{2}{0}$ & 3 & 6 & \ding{51} & 3 & Symm. & & & $\log 6$ & $\log 6$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{3}{0}{2}{1}$ & 3 & 6 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{3}{0}{2}{2}$ & 3 & 6 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{3}{0}{3}{0}$ & 3 & $\sim9.30$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{3}{0}{3}{1}$ & 3 & $\sim9.30$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{3}{2}{1}{0}$ & 3 & 3 & \ding{51} & 3 & Symm. & \ding{51} & & $\log 3$ & $\log 3$ & \ding{51} \\ $\FR{3}{2}{1}{1}$ & 3 & 3 & \ding{51} & 1 & Mod. 
& \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{3}{2}{1}{2}$ & 3 & 3 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ \toprule[1pt] \end{tabular} \clearpage \newgeometry{left=17mm,right=17mm,top=8mm,bottom=8mm,ignoreall, noheadfoot} \centering\resizebox*{!}{\textheight}{ \begin{tabular}{ !{\vrule width 1pt}>{\columncolor[gray]{.9}[\tabcolsep]}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}} \toprule[1pt] \rowcolor[gray]{.9}[\tabcolsep] Cat. ID & Rank & $\mathcal{D}^2$ & Valid & $\rk{\mug{\mathcal{C}}}$ & Premodular? & Pointed? & TY? & TEE & $\log\mathcal{D}_{\mug{\mathcal{C}}}^2$ & Conjecture true? \\ \toprule[1pt] $\FR{4}{0}{1}{0}$ & 4 & 4 & \ding{51} & 4 & Symm. & \ding{51} & & $\log 4$ & $\log 4$ & \ding{51} \\ $\FR{4}{0}{1}{1}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{2}$ & 4 & 4 & \ding{51} & 4 & Symm. & \ding{51} & & $\log 4$ & $\log 4$ & \ding{51} \\ $\FR{4}{0}{1}{3}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{4}$ & 4 & 4 & \ding{51} & 2 & \ding{51} & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{1}{5}$ & 4 & 4 & \ding{51} & 2 & \ding{51} & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{1}{6}$ & 4 & 4 & \ding{51} & 2 & \ding{51} & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{1}{7}$ & 4 & 4 & \ding{51} & 2 & \ding{51} & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{1}{8}$ & 4 & 4 & \ding{51} & 4 & Symm. & \ding{51} & & $\log 4$ & $\log 4$ & \ding{51} \\ $\FR{4}{0}{1}{9}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{10}$ & 4 & 4 & \ding{51} & 4 & Symm. & \ding{51} & & $\log 4$ & $\log 4$ & \ding{51} \\ $\FR{4}{0}{1}{11}$ & 4 & 4 & \ding{51} & 1 & Mod. 
& \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{12}$ & 4 & 4 & \ding{51} & 2 & \ding{51} & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{1}{13}$ & 4 & 4 & \ding{51} & 2 & \ding{51} & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{1}{14}$ & 4 & 4 & \ding{51} & 2 & \ding{51} & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{1}{15}$ & 4 & 4 & \ding{51} & 2 & \ding{51} & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{1}{16}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{17}$ & 4 & 4 & \ding{51} & 2 & \ding{51} & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{1}{18}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{19}$ & 4 & 4 & \ding{51} & 2 & \ding{51} & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{1}{20}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{21}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{22}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{23}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{24}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{25}$ & 4 & 4 & \ding{51} & 2 & \ding{51} & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{1}{26}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{27}$ & 4 & 4 & \ding{51} & 2 & \ding{51} & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{1}{28}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{29}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{30}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{1}{31}$ & 4 & 4 & \ding{51} & 1 & Mod. 
& \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{2}{0}$ & 4 & $\sqrt{5}+5$ & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{2}{1}$ & 4 & $\sqrt{5}+5$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{2}{2}$ & 4 & $\sqrt{5}+5$ & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{2}{3}$ & 4 & $\sqrt{5}+5$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{2}{4}$ & 4 & $\sqrt{5}+5$ & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{2}{5}$ & 4 & $\sqrt{5}+5$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{2}{6}$ & 4 & $\sqrt{5}+5$ & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{2}{7}$ & 4 & $\sqrt{5}+5$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{3}{0}$ & 4 & 10 & \ding{51} & 4 & Symm. & & & $\log 10$ & $\log 10$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{4}{0}{3}{1}$ & 4 & 10 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{4}{0}{3}{2}$ & 4 & 10 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{4}{0}{3}{3}$ & 4 & 10 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{4}{0}{3}{4}$ & 4 & 10 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{4}{0}$ & 4 & 4 $\sqrt{2}+8$ & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{4}{1}$ & 4 & 4 $\sqrt{2}+8$ & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{0}{5}{0}$ & 4 & $\frac{1}{2} \left(5 \sqrt{5}+15\right)$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{5}{1}$ & 4 & $\frac{1}{2} \left(5 \sqrt{5}+15\right)$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{5}{2}$ & 4 & $\frac{1}{2} \left(5 \sqrt{5}+15\right)$ & \ding{51} & 1 & Mod. 
& & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{5}{3}$ & 4 & $\frac{1}{2} \left(5 \sqrt{5}+15\right)$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{6}{0}$ & 4 & $\sim19.24$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{4}{0}{6}{1}$ & 4 & $\sim19.24$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{4}{2}{1}{0}$ & 4 & 4 & \ding{51} & 4 & Symm. & \ding{51} & & $\log 4$ & $\log 4$ & \ding{51} \\ $\FR{4}{2}{1}{1}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{2}{1}{2}$ & 4 & 4 & \ding{51} & 2 & \ding{51} & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{2}{1}{3}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{2}{1}{4}$ & 4 & 4 & \ding{51} & 4 & Symm. & \ding{51} & & $\log 4$ & $\log 4$ & \ding{51} \\ $\FR{4}{2}{1}{5}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{4}{2}{1}{6}$ & 4 & 4 & \ding{51} & 2 & \ding{51} & \ding{51} & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{4}{2}{1}{7}$ & 4 & 4 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ \toprule[1pt] \end{tabular} } \centering\resizebox*{!}{\textheight}{ \begin{tabular}{ !{\vrule width 1pt}>{\columncolor[gray]{.9}[\tabcolsep]}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}} \toprule[1pt] \rowcolor[gray]{.9}[\tabcolsep] Cat. ID & Rank & $\mathcal{D}^2$ & Valid & $\rk{\mug{\mathcal{C}}}$ & Premodular? & Pointed? & TY? & TEE & $\log\mathcal{D}_{\mug{\mathcal{C}}}^2$ & Conjecture true? \\ \toprule[1pt] \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{0}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{1}{1}$ & 5 & 8 & \ding{51} & 5 & Symm. 
& & & $\log 8$ & $\log 8$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{2}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{1}{3}$ & 5 & 8 & \ding{51} & 5 & Symm. & & & $\log 8$ & $\log 8$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{4}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{1}{5}$ & 5 & 8 & \ding{51} & 4 & \ding{51} & & \ding{51} & $\log 4$ & $\log 4$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{6}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{1}{7}$ & 5 & 8 & \ding{51} & 4 & \ding{51} & & \ding{51} & $\log 4$ & $\log 4$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{8}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{1}{9}$ & 5 & 8 & \ding{51} & 4 & \ding{51} & & \ding{51} & $\log 4$ & $\log 4$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{10}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{1}{11}$ & 5 & 8 & \ding{51} & 4 & \ding{51} & & \ding{51} & $\log 4$ & $\log 4$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{12}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{1}{13}$ & 5 & 8 & \ding{51} & 5 & Symm. & & & $\log 8$ & $\log 8$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{14}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{1}{15}$ & 5 & 8 & \ding{51} & 5 & Symm. & & & $\log 8$ & $\log 8$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{16}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{1}{17}$ & 5 & 8 & \ding{51} & 5 & Symm. 
& & & $\log 8$ & $\log 8$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{18}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{19}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{1}{20}$ & 5 & 8 & \ding{51} & 4 & \ding{51} & & \ding{51} & $\log 4$ & $\log 4$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{21}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{22}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{1}{23}$ & 5 & 8 & \ding{51} & 4 & \ding{51} & & \ding{51} & $\log 4$ & $\log 4$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{24}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{25}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{1}{26}$ & 5 & 8 & \ding{51} & 5 & Symm. 
& & & $\log 8$ & $\log 8$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{27}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{28}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{29}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{30}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{31}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{32}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{33}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{34}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{35}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{36}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{37}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{38}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{39}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{40}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{41}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & 
$\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{42}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{43}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{44}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{45}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{46}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{47}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{48}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{49}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{50}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{51}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{52}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{1}{53}$ & 5 & 8 & \ding{51} & 5 & Symm. & & & $\log 8$ & $\log 8$ & \ding{51} \\ $\FR{5}{0}{1}{54}$ & 5 & 8 & \ding{51} & 4 & \ding{51} & & \ding{51} & $\log 4$ & $\log 4$ & \ding{51} \\ $\FR{5}{0}{1}{55}$ & 5 & 8 & \ding{51} & 4 & \ding{51} & & \ding{51} & $\log 4$ & $\log 4$ & \ding{51} \\ $\FR{5}{0}{1}{56}$ & 5 & 8 & \ding{51} & 5 & Symm. 
& & & $\log 8$ & $\log 8$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{57}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{58}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{59}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{60}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{61}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{62}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{1}{63}$ & 5 & 8 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{3}{0}$ & 5 & 12 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{5}{0}{3}{1}$ & 5 & 12 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{5}{0}{3}{2}$ & 5 & 12 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{5}{0}{3}{3}$ & 5 & 12 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{5}{0}{3}{4}$ & 5 & 12 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{5}{0}{3}{5}$ & 5 & 12 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{5}{0}{3}{6}$ & 5 & 12 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{5}{0}{3}{7}$ & 5 & 12 & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{5}{0}{4}{0}$ & 5 & 14 & \ding{51} & 5 & Symm. 
& & & $\log 14$ & $\log 14$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{4}{1}$ & 5 & 14 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{4}{2}$ & 5 & 14 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{4}{3}$ & 5 & 14 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{4}{4}$ & 5 & 14 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{4}{5}$ & 5 & 14 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{4}{6}$ & 5 & 14 & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{6}{0}$ & 5 & 24 & \ding{51} & 5 & Symm. & & & $\log 24$ & $\log 24$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{6}{1}$ & 5 & 24 & \ding{51} & 3 & \ding{51} & & & $\log 6$ & $\log 6$ & \ding{51} \\ $\FR{5}{0}{6}{2}$ & 5 & 24 & \ding{51} & 5 & Symm. & & & $\log 24$ & $\log 24$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{6}{3}$ & 5 & 24 & \ding{51} & 3 & \ding{51} & & & $\log 6$ & $\log 6$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{7}{0}$ & 5 & 5 $\sqrt{5}+15$ & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ \rowcolor{nicegreen!10}[\tabcolsep] $\FR{5}{0}{7}{1}$ & 5 & 5 $\sqrt{5}+15$ & \ding{51} & 2 & \ding{51} & & & $\log 2$ & $\log 2$ & \ding{51} \\ $\FR{5}{0}{10}{0}$ & 5 & $\sim34.65$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{5}{0}{10}{1}$ & 5 & $\sim34.65$ & \ding{51} & 1 & Mod. & & & 0 & 0 & \ding{51} \\ $\FR{5}{4}{1}{0}$ & 5 & 5 & \ding{51} & 5 & Symm. & \ding{51} & & $\log 5$ & $\log 5$ & \ding{51} \\ $\FR{5}{4}{1}{1}$ & 5 & 5 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{5}{4}{1}{2}$ & 5 & 5 & \ding{51} & 1 & Mod. 
& \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{5}{4}{1}{3}$ & 5 & 5 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ $\FR{5}{4}{1}{4}$ & 5 & 5 & \ding{51} & 1 & Mod. & \ding{51} & & 0 & 0 & \ding{51} \\ \toprule[1pt] \end{tabular} }
\section{Introduction}\label{intro} This article presents an experimental study of HD$^+$ in the vicinity of the H$^+$ + D(1s) and H(1s) + D$^+$ dissociation thresholds by pulsed-field-ionisation zero-kinetic-energy (PFI-ZEKE) photoelectron spectroscopy. We used a resonant multiphoton excitation sequence via the $v=11$-$13$ levels of the H$\bar{\rm H}$~$^1\Sigma_g^+$ and B$^{\prime\prime}\bar{\rm B}$~$^1\Sigma_u^+$ states. These levels are located in the outer ($\bar{\rm H}$ and $\bar{\rm B}$) potential wells, which have ion-pair character, and are thus ideally suited to efficiently access the high vibrational levels of HD$^+$ located just below, and the dissociation continua located above, these thresholds. The two states are almost degenerate and, in HD, the {\it gerade/ungerade (g/u)} symmetry breaking from the nuclear-mass asymmetry mixes them \cite{reinhold99a} and makes them both accessible by two-photon excitation from the ground state. The emphasis of the article is placed on (i) the structure and dynamics of HD in the H$\bar{\rm H}$~$^1\Sigma_g^+$ and B$^{\prime\prime}\bar{\rm B}$~$^1\Sigma_u^+$ states and of HD$^+$ near the dissociation threshold, and (ii) the effects of the {\it g/u}-symmetry breaking in the electronic states of HD and HD$^+$. The earlier theoretical work of L. Wolniewicz and his coworkers on these topics \cite{reinhold99a,wolniewicz80a,wolniewicz91a} was extremely useful in guiding our analysis, and so were numerous experimental and theoretical studies of isotopic effects near the dissociation thresholds of HD (see, e.g., Refs.~\cite{thorson71a,dabrowski76a,durup78a,delange00a,delange02a,grozdanov09a,wang18a,wang20a} and references therein). HD$^+$ is the simplest molecular system that possesses a permanent electric dipole moment.
The nonvanishing electric dipole moment results from a breakdown of the Born-Oppenheimer approximation that can be attributed to the displacement of the centre of mass relative to the geometric centre of the nuclei. The rovibronic molecular Hamiltonian can be expressed as \cite{wolniewicz80a,carrington84a} \begin{equation}\label{H2+_hamiltonian} \begin{split} \mathcal{H}_\text{rve} &= \underbrace{-\frac{\nabla^2_1}{2} + \frac{1}{R} - \frac{1}{r_{1\text{a}}} - \frac{1}{r_{1\text{b}}} }_{H_{\rm cn}} \\ & \underbrace{-\frac{\nabla^2_{R\Theta\Phi} }{2\mu}}_{H'_1} \underbrace{-\frac{\nabla^2_1 }{8\mu} }_{H'_2} \underbrace{-\frac{\nabla_{R\Theta\Phi}\cdot \nabla_1 }{2\mu_\alpha}}_{H'_3}. \end{split} \end{equation} In Eq.~(\ref{H2+_hamiltonian}), $H_{\rm cn}$ is the \emph{clamped-nuclei} Hamiltonian, $(R\Theta\Phi)$ specify the nuclear spatial arrangement in the laboratory-fixed frame, and \begin{equation} \mu = \frac{m_\text{a}m_\text{b}}{m_\text{a} + m_\text{b}} \end{equation} and \begin{equation} \mu_\alpha = \frac{m_\text{a}m_\text{b}}{m_\text{a}-m_\text{b}}, \end{equation} where $m_\text{a}$ and $m_\text{b}$ are the masses of nuclei a and b, respectively. All other symbols have their usual meaning and the labels of the particles are given in Fig.~\ref{fig0}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig1.pdf} \caption{Relevant distances in the HD$^+$ molecular ion. The small blue dot designates the geometric centre of the nuclei. \label{fig0}} \end{figure} The Hamiltonian of HD$^+$ differs from that of the homonuclear species H$_2^+$ and D$_2^+$ only through the additional term $H'_3$, which vanishes in homonuclear molecules and is the only term in Eq.~(\ref{H2+_hamiltonian}) that is not invariant to the symmetry operations of the D$_{\infty {\rm h}}$ point group. Specifically, $H'_3$ is not invariant with respect to the permutation of the nuclei a and b and has C$_{\infty {\rm v}}$ symmetry.
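To give a sense of the magnitudes involved, the two reduced masses entering Eq.~(\ref{H2+_hamiltonian}) can be evaluated for HD$^+$ from the CODATA proton and deuteron masses (in units of the electron mass). The short sketch below is illustrative only and is not part of the original analysis; it shows that the prefactor $1/(2\mu_\alpha)$ of $H'_3$ is finite for HD$^+$, whereas $\mu_\alpha$ diverges in the homonuclear limit $m_\text{a}=m_\text{b}$, so that $H'_3$ vanishes for H$_2^+$ and D$_2^+$.

```python
# Reduced masses entering H'_1 and H'_3 of the rovibronic Hamiltonian of HD+,
# evaluated with the CODATA proton and deuteron masses in units of m_e.
m_p = 1836.15267343   # proton mass / m_e
m_d = 3670.48296788   # deuteron mass / m_e

mu = m_p * m_d / (m_p + m_d)        # ordinary reduced mass (H'_1, H'_2)
# Heteronuclear reduced mass (H'_3); only the magnitude matters here,
# the sign depends on which nucleus is labelled a and which b.
mu_alpha = m_p * m_d / (m_d - m_p)

print(f"mu       = {mu:.3f} m_e")        # ~1223.899 m_e
print(f"mu_alpha = {mu_alpha:.3f} m_e")  # ~3674.13 m_e
# For m_a = m_b, mu_alpha diverges and the g/u-mixing term H'_3 vanishes.
```
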
This term causes a mixing of states of $g$ and $u$ symmetry (see, e.g., Refs.~\cite{wolniewicz80a,carrington84a}) and couples ionisation channels differing in core rotational-angular-momentum quantum number $N^+$ and electron-orbital-angular-momentum quantum number $l$ by $\pm 1$ \cite{merkt96a,sprecher14c}. Because of the mass ratio of H to D (and T), the nonadiabatic {\it g/u}-symmetry breaking in HD (and HT) is stronger than in any other isotopically substituted homonuclear diatomic molecule. The rovibronic {\it g/u}-symmetry-breaking interaction is very small in low rovibrational states, but becomes important in high vibrational states near the dissociation thresholds of HD$^+$, H$^+$ + D(1s) and D$^+$ + H(1s), that are separated by the different ionisation energies of the atoms, i.e., by 29.843~cm$^{-1}$. These two dissociation thresholds can be associated with the lower (X$^+$) and upper (A$^+$) electronic states, respectively, and the theoretical description is similar to that introduced for the hyperfine-induced {\it g/u} mixing in H$_2^+$ presented in Ref.~\cite{beyer18d}, although the rovibronic {\it g/u} mixing effect in HD$^+$ is approximately 900 times stronger. The nonzero electric dipole moment of HD$^+$ allows the measurement of rovibrational transitions in the infrared and microwave region and a variety of such transitions have been observed \cite{wing76a,carrington81a,carrington83a,carrington85a,carrington91a,carrington92a,alighanbari20a,patra20a}. In particular, Carrington and coworkers have studied the hyperfine structure of rovibrational levels of HD$^+$ in the vicinity of the first dissociation threshold and the effects of the asymmetric charge distribution in the centre-of-mass frame \cite{carrington91a}. 
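The 29.843~cm$^{-1}$ separation of the two thresholds is simply the difference between the mass-corrected Rydberg constants of D and H. As a consistency check (not part of the data analysis reported here), it can be reproduced from the CODATA mass ratios:

```python
# Separation of the H+ + D(1s) and D+ + H(1s) thresholds of HD+:
# difference of the 1s binding energies of D and H (in cm^-1),
# obtained from the reduced-mass-corrected Rydberg constants.
R_inf = 109737.31568             # Rydberg constant, cm^-1
me_over_mp = 1 / 1836.15267343   # m_e / m_p
me_over_md = 1 / 3670.48296788   # m_e / m_d

E_H = R_inf / (1 + me_over_mp)   # 1s ionisation energy of H
E_D = R_inf / (1 + me_over_md)   # 1s ionisation energy of D

print(f"E_D - E_H = {E_D - E_H:.3f} cm^-1")  # ~29.843 cm^-1
```
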
Above the first dissociation threshold of HD$^+$, the {\it g/u}-mixing term $H^\prime_3$ in the molecular Hamiltonian~(\ref{H2+_hamiltonian}) is also responsible for electronic predissociation (type I predissociation in Herzberg's classification \cite{herzberg89a}). The rovibrational levels of the A$^+$ state cannot predissociate in the cases of H$_2^+$ and D$_2^+$ because they are located below the dissociation threshold. In HD$^+$, they are all located above the H$^+$ + D(1s) dissociation threshold and are coupled to the H$^+$ + D(1s) continuum, which reduces the lifetimes to the picosecond range. Although the decay through electronic predissociation is allowed above the lowest dissociation threshold, it does not always contribute significantly, so that some resonances are better described by elastic-scattering resonances i.e., shape and orbiting resonances, as discussed, {\it e.g.}, in Refs.~\cite{davis78a,beyer16a,beyer18b}. All bound rovibronic states of HD$^+$ associated with the X$^+$ state were calculated by Moss using a transformed Hamiltonian in combination with the artificial-channels scattering method \cite{moss93b}. With this method, the effect of the {\it g/u} mixing rovibronic term is removed from the Hamiltonian using a unitary transformation that results in effective nuclear charges and effective reduced masses in the electronic Hamiltonian. For the lowest vibrational states of HD$^+$, variational calculations \cite{moss89a} and pre-Born-Oppenheimer variational results including relativistic and QED corrections up to high order \cite{korobov04a,korobov06a,korobov14a,korobov21a} were reported. 
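The statement that electronic predissociation reduces the lifetimes of the A$^+$ levels to the picosecond range translates directly into spectral linewidths via $\Gamma = \hbar/\tau$, i.e., $\Gamma/(hc) = (2\pi c\tau)^{-1}$ in wavenumber units. The following back-of-the-envelope sketch (with assumed, purely illustrative lifetimes) shows that a 1-ps lifetime corresponds to a width of about 5.3~cm$^{-1}$:

```python
import math

C_CM = 2.99792458e10  # speed of light in cm/s

def fwhm_wavenumber(tau_s: float) -> float:
    """Lorentzian FWHM in cm^-1 for a level with lifetime tau (in s)."""
    return 1.0 / (2.0 * math.pi * C_CM * tau_s)

print(fwhm_wavenumber(1e-12))  # ~5.31 cm^-1 for a 1-ps predissociation lifetime
print(fwhm_wavenumber(5e-9))   # ~0.001 cm^-1 for a lifetime of a few ns
```
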
Calculations, based on a variation-perturbation approach first used in HD$^+$ by Wolniewicz and Orlikowski \cite{wolniewicz80a,wolniewicz91a, orlikowski94a}, using hyperspherical coordinates \cite{igarashi99a} and a modified adiabatic Hamiltonian \cite{esry99a}, were also reported (see Ref.~\cite{leach95a} for a review), but included more restrictive approximations than the calculations of Moss \cite{moss93b}. However, these calculations could be extended to the energy region above the first dissociation threshold H$^+$ + D(1s) and predicted the positions and widths of the quasibound states of HD$^+$ \cite{davis78a,wolniewicz91a, orlikowski94a,igarashi99a,esry00a}. Several quasibound states (shape resonances) of the X$^+$ state with rotational quantum numbers $N^+$ larger than 16 were observed by Carrington and coworkers \cite{carrington88a}. Their experimental method restricted the detection to states with lifetimes longer than a few nanoseconds so that only resonances trapped on the low-$R$ side of large centrifugal barriers could be detected. Concerning the metastable levels of the A$^+$ state, Leach and Moss commented in their review article that ``there seems to be no immediate prospect to experimentally observe these short-lived levels of the first excited electronic state'' \cite{leach95a}. We present PFI-ZEKE photoelectron spectra of HD near the H$^+$ + D and H + D$^+$ dissociative-ionisation thresholds recorded via the H$\bar{\rm H}$~$^1\Sigma_g^+\ (v=11-13)$ and B$^{\prime\prime}\bar{\rm B}$~$^1\Sigma_u^+\ (v=11-13)$ intermediate states. These spectra reveal sharp transitions to the highest bound states of HD$^+$ with $v^+$ between 16 and 21 and $N^+=0-10$ as well as the onset of the dissociation continua. The effects of increasing {\it g/u} mixing with increasing $v^+$ on the relative intensities in these spectra are discussed.
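The trapping of shape resonances behind centrifugal barriers can be illustrated by the rotational contribution $N^+(N^++1)/(2\mu R^2)$ to the effective potential. The sketch below (illustrative values only, using the reduced nuclear mass of HD$^+$ in atomic units) shows that for $N^+\gtrsim 16$ this term already amounts to several hundred cm$^{-1}$ at internuclear distances of $\sim 10\,a_0$, where the X$^+$ potential well is shallow:

```python
HARTREE_CM = 219474.6313632  # 1 hartree in cm^-1
MU_HD = 1223.899             # reduced nuclear mass of HD+, in m_e (atomic units)

def centrifugal_cm(n_plus: int, r_a0: float) -> float:
    """Centrifugal term N+(N+ + 1)/(2 mu R^2) in cm^-1 (R in bohr)."""
    return n_plus * (n_plus + 1) / (2.0 * MU_HD * r_a0**2) * HARTREE_CM

# Illustrative values of the centrifugal energy at R = 10 a0:
for n_plus in (10, 16, 20):
    print(n_plus, round(centrifugal_cm(n_plus, 10.0), 1))
```
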
By detecting the H$^+$ and D$^+$ fragments using mass-analysed threshold-ionisation (MATI) spectroscopy, we also observed the metastable rovibrational levels of the A$^+$ state for the first time. These levels are located between the two dissociation thresholds and are Feshbach resonances subject to fast electronic predissociation. We also observed three other resonances that are attributed to the X$^+(21,4)$, $(20,7)$ and A$^+$(0,4) resonances, the first two of which are located below the second dissociation threshold D$^+$ + H(1s). \section{Experimental setup and procedure}\label{exp_setup} The experiments relied on the use of PFI-ZEKE photoelectron spectroscopy \cite{muellerdethlefs91b} in combination with a resonant three-photon-excitation scheme from the ground X~$^1\Sigma_g^+~(v=0)$ state of HD. The region around the first dissociative-ionisation threshold of HD, which is located 146084.55541(37)~cm$^{-1}$ above the X~$^1\Sigma_g^+\ (v=0,N=0)$ ground state of HD \cite{sprecher10a,moss93b}, was accessed using the resonant three-photon excitation sequence \begin{align} \label{CRHD:eq:excitation} \text{X}~^1\Sigma_g^+~(0,0-3) &\xrightarrow{\rm VUV} \text{B}~^1\Sigma_u^+~(20-21,1-4) \notag \\ &\xrightarrow{\rm VIS} \bar{\rm H}~^1\Sigma_g^+~(11-13,0-5)\quad\text{or}\quad \bar{\rm B}~^1\Sigma_u^+~(11-13,1-4)\quad \notag\\ &\xrightarrow{\rm UV} \text{X}^+~^2\Sigma_g^+~(16-21,0-10) \quad\text{and}\quad \text{A}^+~^2\Sigma_u^+~(0,0-4) \end{align} via excited vibrational levels of the B~$^1\Sigma_u^+$ and either the $\bar{\rm H}~^1\Sigma_g^+$ or the $\bar{\rm B}~^1\Sigma_u^+$ intermediate states, see Fig.~\ref{Fig0_pot}. 
The same laser system, consisting of a broadly tunable VUV laser used to drive the B-X transition, a visible laser to induce the H$\bar{\rm H}$-B and B$^{\prime\prime}\bar{\rm B}$-B transitions, and a UV laser to access the dissociative-ionisation threshold, was already used to study the highest bound states and shape resonances of H$_2^+$ and D$_2^+$ and is described in Refs.~\cite{beyer16a,beyer16b,beyer17a,beyer18b}. \begin{figure}\centering \includegraphics[width=0.9\textwidth]{Fig2.pdf} \caption{Potential-energy functions of the B~$^1\Sigma_u^+$, H$\bar{\rm H}$~$^1\Sigma_g^+$ and B$^{\prime\prime}\bar{\rm B}$~$^1\Sigma_u^+$ states of molecular hydrogen and of the X$^+$~$^2\Sigma_g^+$ and A$^+$~$^2\Sigma_u^+$ states of the molecular-hydrogen cation. The dashed lines represent schematically the diabatic potential-energy functions of the repulsive states with configurations $(2{\rm p}\sigma_u)^2$, $(2{\rm p}\sigma_u)(3{\rm p}\sigma_u^+)$ and $(2{\rm p}\sigma_u)(2{\rm s}\sigma_g)$ (magenta) and of the H$^+$D$^-$/H$^-$D$^+$ ion-pair states (green). The horizontal dotted green line gives the position of the ion-pair-dissociation thresholds. The left inset indicates the two ion-pair dissociation thresholds D$^+$ + H$^-$ and D$^-$ + H$^+$ of HD, and the right inset the two dissociation thresholds D$^+$ + H(1s) and D(1s) + H$^+$ of HD$^+$. The blue and magenta colours highlight the mixed characters in the correlation to the X$^+$ and A$^+$ states. The vertical arrows indicate the photoexcitation sequence used to access the region of the dissociative-ionisation threshold from the X~$^1\Sigma_g^+$ ground state of HD.
\label{Fig0_pot}} \end{figure} The {\it g/u}-symmetry breaking in HD allowed the direct excitation, from the ${\rm B}~^1\Sigma_u^+~(20-21,1-4)$ intermediate levels, of rovibrational levels of the B$^{\prime\prime}\bar{\rm B}$ $^1\Sigma_u^+$ state \cite{reinhold98a,reinhold99a}, which has (1s$\sigma_g$)3p$\sigma_u$ character in the inner well and (2p$\sigma_u$)2s$\sigma_g$ and ion-pair character in the outer well. Because of the ion-pair character of the outer-well states of the H$\bar{\rm H}$ and B$^{\prime\prime}\bar{\rm B}$ states, their potential-energy curves are almost identical beyond $8\,a_0$. Consequently, the {\it g/u} mixing is substantial and some levels of the B$^{\prime\prime}\bar{\rm B}$ state can be excited from the B state almost as efficiently as the corresponding levels of the H$\bar{\rm H}$ state despite the fact that their nominal \emph{ungerade} character makes the excitation forbidden in zero order. \begin{figure}\centering \includegraphics[width=0.9\textwidth]{Fig3.pdf} \caption{Upper panel: Electric-field pulse sequence used to record the PFI-ZEKE photoelectron spectrum of HD in the vicinity of the dissociative-ionisation thresholds of HD. Lower panel: Electron time-of-flight spectrum obtained by collecting the electrons generated by the pulsed-field-ionisation sequence. The peak labels (1)-(10) designate the field pulses that led to the field ionisation of the corresponding electrons, as indicated by the grey diagonal lines. \label{ZEKE_sequence}} \end{figure} The PFI-ZEKE photoelectron spectra were recorded by monitoring the yield of electrons generated by delayed pulsed field ionisation as a function of the wave number of the UV laser. The multipulse electric-field sequence used for field ionisation is displayed in the upper panel of Fig.~\ref{ZEKE_sequence} and consists of 10 electric-field steps with field strength increasing from $-0.05$ to $-1.22$~V/cm.
The electric fields also extracted the electrons toward a microchannel-plate (MCP) detector located at the end of a flight tube. The lengths of the individual pulses were chosen so as to be able to unambiguously relate each electron time-of-flight peak to the corresponding electric-field step in the sequence. Their strengths were selected to achieve a high resolution (about 0.15~cm$^{-1}$) in the spectra generated by pulses (2)-(5) and a high signal-to-noise ratio in the spectra recorded with the last pulses of the sequence. The lower part of Fig.~\ref{ZEKE_sequence} presents the time-of-flight spectrum of the electrons generated by the pulse sequence when carrying out the photoexcitation to a spectral position above the dissociative-ionisation thresholds, which makes it possible to generate a field-ionisation signal simultaneously with all field pulses. The labels (1) to (10) designate the groups of electrons generated by the 10 pulses of the sequence. The PFI-ZEKE photoelectron spectra presented in the next section correspond to the electron signals generated by the field pulses (2)-(5). The positions of the ionisation thresholds were corrected for the field-induced shifts, which were determined from calculations of the field-ionisation rates of the Rydberg-Stark states following the procedure described in Ref.~\cite{hollenstein01a}. The field-corrected spectra were then added to improve the signal-to-noise ratio. MATI spectra were also recorded in the vicinity of the dissociative-ionisation thresholds using a sequence of two electric-field pulses. The first, weak ($-70$~mV/cm) discrimination pulse served the purpose of sweeping prompt ions out of the photoexcitation volume whereas the second, stronger ($800$~mV/cm) pulse was used to field ionise high Rydberg states located below the bound and continuum states of HD$^+$ and extract the H$^+$, D$^+$ and HD$^+$ ions toward the MCP detector.
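For orientation, the magnitude of the field-induced shifts can be estimated with the classical saddle-point formula $\Delta E/(hc)\approx -6.12\,{\rm cm}^{-1}\sqrt{F/({\rm V\,cm}^{-1})}$. The actual corrections applied here follow from the field-ionisation-rate calculations of Ref.~\cite{hollenstein01a}, so the sketch below is only a rough classical estimate for the pulse amplitudes of the sequence:

```python
import math

def classical_shift_cm(field_v_per_cm: float) -> float:
    """Classical saddle-point lowering of the ionisation threshold (cm^-1)."""
    return -6.12 * math.sqrt(abs(field_v_per_cm))

# Representative pulse amplitudes (V/cm) of the field-ionisation sequence:
for f in (0.05, 0.30, 1.22):
    print(f"{f:5.2f} V/cm -> {classical_shift_cm(f):.2f} cm^-1")
```
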
Below the D(1s) + H$^+$ dissociation threshold, only HD$^+$ ions corresponding to weakly bound vibrational levels of the X$^+$ and A$^+$ states can be observed in the MATI spectra for energetic reasons. H$^+$ and D$^+$ ions can only be observed above the D(1s) + H$^+$ and the D$^+$ + H(1s) dissociation thresholds, respectively. The UV wave number was calibrated using a wavemeter (absolute accuracy of 0.02~cm$^{-1}$, relative accuracy of 0.015~cm$^{-1}$) and the line centres were determined using a Poisson-weighted nonlinear fit of a Gaussian line-shape model assuming a constant line width determined by the field-ionisation sequence. The relative positions of the rovibronic levels of HD$^+$ were extracted in a weighted linear least-squares fit from a redundant network of more than 350 transitions connecting different rovibrational levels of the $\bar{\rm H}$ and $\bar{\rm B}$ states to the rovibronic ionisation thresholds. The absolute term values of the HD$^+$ levels with respect to the X~$^1\Sigma_g^+(0,0)$ ground state of HD were obtained by adding the calibrated UV wave numbers to the term values of the $\bar{\rm H}~^1\Sigma_g^+$ or $\bar{\rm B}~^1\Sigma_u^+$ levels reported by Reinhold {\it et al.} \cite{reinhold99a} and compensating for the shifts of the ionisation thresholds induced by the pulsed field ionisation~\cite{hollenstein01a}. This procedure resulted in typical absolute and relative uncertainties of 0.11~cm$^{-1}$ and 0.02~cm$^{-1}$, respectively, for the rovibronic levels of HD$^+$. \section{Experimental results}\label{results} \subsection{Structure and dynamics of the H$\bar{\rm H}$ $(v=11-13)$ and B$^{\prime\prime}\bar{\rm B}$ $(v=11-13)$ intermediate states} \label{sec:HB_dynamics} \begin{figure}\centering \includegraphics[width=0.8\textheight,angle=90]{Fig4.pdf} \caption{Photoionisation spectrum of HD in the vicinity of the X$^+(v^+=1,N^+=0-2)$ levels of HD$^+$ (indicated by dashed vertical lines) recorded via the B (20,2) level.
The HD$^+$, D$^+$ and H$^+$ ion signals are shown in the top, middle and bottom panels, respectively. The pairs of lines correspond to the P(2) and R(2) transitions to the $\bar{v}=11-13$ vibrational states of the $\bar{\rm H}~^1\Sigma_g^+$ and $\bar{\rm B}~^1\Sigma_u^+$ states. See text for details. \label{CRHD:fig:spec_HH_BB}} \end{figure} The (1+1') resonant two-photon ionisation spectrum of HD in the vicinity of the X$^+(v^+=1,N^+=0-2)$ levels of HD$^+$ recorded via the B $^1\Sigma_u^+$ (20,2) intermediate level is displayed in Fig.~\ref{CRHD:fig:spec_HH_BB}. An electric field of 260~V/cm was applied 100~ns after the VUV and VIS excitation to (i) field ionise molecular Rydberg states of HD and (ii) extract all ions toward the MCP detector. HD$^+$, D$^+$ and H$^+$ ion signals could be observed separately using the time-of-flight (TOF) spectrometer and the corresponding spectra are displayed in the top, middle and bottom panels, respectively. The dominant contributions to the HD$^+$ signal in the top panel of Fig.~\ref{CRHD:fig:spec_HH_BB} are from molecular Rydberg states, even though these states have low principal quantum numbers and cannot be field ionised by the 260~V/cm electric-field pulse. The HD$^+$ signal in this case is caused by (field-induced) autoionisation. In this spectral region, the Rydberg states excited from the B state, which has (1s$\sigma_g$)(2p$\sigma_u$) character at short range, are s and d Rydberg states converging on the X$^+(1,1)$ level of HD$^+$. Although HD does not have an inversion centre, the {\it g/u} symmetry of the electronic wavefunctions is a good symmetry within the Born-Oppenheimer approximation and is only broken by nonadiabatic interactions, as explained in the introduction. Consequently, the {\it g/u} symmetry remains useful as an approximate symmetry in HD and can be used to explain intensity patterns. Moreover, HD, unlike H$_2$ and D$_2$, is not subject to restrictions imposed by the Pauli principle.
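The positions of the autoionising series converging on X$^+(1,1)$ follow the Rydberg formula $\tilde{\nu}_n=\tilde{\nu}_{\rm ion}-R_{\rm HD}/(n-\delta_l)^2$ with the mass-corrected Rydberg constant of HD. The sketch below is purely illustrative: the threshold wavenumber and quantum defect are placeholder values, not the experimental ones; only the mass correction of the Rydberg constant is computed from known constants.

```python
# Illustrative Rydberg-series positions below a rovibronic ionisation
# threshold of HD; threshold and quantum defect are placeholder values.
R_INF = 109737.31568                         # Rydberg constant, cm^-1
M_HDP = 1836.15267343 + 3670.48296788 + 1.0  # mass of the HD+ ion core, in m_e
R_HD = R_INF / (1 + 1 / M_HDP)               # mass-corrected Rydberg constant

T_ION = 126470.0   # placeholder threshold wavenumber (cm^-1)
DELTA_D = 0.05     # placeholder quantum defect of the d series

for n in range(20, 26):
    print(n, round(T_ION - R_HD / (n - DELTA_D) ** 2, 2))
```
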
Starting from a rotational state with odd rotational angular momentum in the ground state (e.g., X$(0,1)$), we expect the strongest autoionising Rydberg series to be s and d Rydberg series converging on the X$^+(1,1)$ state of HD$^+$ and such series can indeed be seen above 126460~cm$^{-1}$ in the top panel of Fig.~\ref{CRHD:fig:spec_HH_BB}. Some intensity is also observed for the $n$d$3_2\,(v^+=1)$ Rydberg series, which gives rise to the broad rotational-autoionisation resonances with Fano-type lineshapes above the X$^+(1,0)$ ionisation threshold. Rydberg series converging on higher vibrational levels of HD$^+$ also contribute to the spectrum depicted in the top panel of Fig.~\ref{CRHD:fig:spec_HH_BB}. The rovibrational levels of the $\bar{\rm H}~^1\Sigma_g^+$ and $\bar{\rm B}~^1\Sigma_u^+$ outer-well states that we wanted to use as intermediate levels to access the region of the dissociative-ionisation thresholds of HD are difficult to recognize in the top panel of Fig.~\ref{CRHD:fig:spec_HH_BB} but they are clearly visible in the middle and bottom panels, which display the D$^+$ and H$^+$ ion signals, respectively, observed in the region of the $v=11$-13 vibrational levels. The rovibrational states in the $\bar{\rm H}$ and $\bar{\rm B}$ outer wells of the H$\bar{\rm H}$ and B$^{\prime\prime}\bar{\rm B}$ states are known to predissociate to the H(2$l$) + D(1s) and H(1s) + D(2$l$) dissociation continua of lower electronic states of $\Sigma_{g/u}$ and $\Pi_{g/u}$ symmetry \cite{reinhold00a}. The H$^+$ and D$^+$ ions observed in the middle and bottom panels of Fig.~\ref{CRHD:fig:spec_HH_BB}, respectively, result from predissociation of the rovibrational levels of the $\bar{\rm H}$ and $\bar{\rm B}$ states forming H and D atoms in $n=2$ Rydberg states. These Rydberg states are then ionised by the VIS laser pulse in a multiphoton process to form H$^+$ and D$^+$ ions. 
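The term values quoted in Section~2 were extracted in a weighted linear least-squares fit from a redundant network of transitions. That procedure reduces to a standard linear problem: each measured wave number relates one unknown ion term value to a known intermediate term value. A minimal sketch with invented level energies and uncertainties (not the measured data):

```python
import numpy as np

# Toy network: three unknown ion term values t[0..2], reached from two
# intermediate levels of known term value via six measured transitions.
# All numbers are invented for illustration.
rng = np.random.default_rng(0)
T_inter = np.array([118000.0, 119000.0])      # known intermediate term values / cm^-1
t_true = np.array([146000.0, 146020.0, 146045.0])
upper = np.array([0, 1, 2, 1, 2, 0])          # ion level reached by each line
inter = np.array([0, 0, 0, 1, 1, 1])          # intermediate level of each line
sigma = np.full(6, 0.02)                      # per-line uncertainty / cm^-1
nu = t_true[upper] - T_inter[inter] + rng.normal(0.0, sigma)

# Each line gives one linear equation: t[upper] = nu + T_inter[inter].
A = np.zeros((6, 3))
A[np.arange(6), upper] = 1.0
b = nu + T_inter[inter]

# Weighted least squares: scale each row by 1/sigma before solving.
w = 1.0 / sigma
t_fit, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
print(t_fit - t_true)   # residuals at the level of the line uncertainties
```

Because each ion level is reached through more than one intermediate level, the redundancy both reduces the statistical uncertainty and provides a consistency check on the network.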
Reinhold \emph{et al.} found lifetimes shorter than 5~ns for the low vibrational levels of the H$\bar{\rm H}$ state of HD compared to lifetimes of the order of 20~ns in H$_2$ and 50~ns in D$_2$ \cite{reinhold00a}. They concluded that the radiative decay is stronger in HD because of the larger number of dipole-allowed transitions to lower states originating from {\it g/u}-symmetry mixing. Our observation of strong HD$^+$, H$^+$ and D$^+$ signals following excitation to the $\bar{\rm H}$ and $\bar{\rm B}$ states indicates that autoionisation and predissociation are also important decay channels for vibrational levels in the range $v=11$-13, the {\it g/u}-symmetry mixing allowing nonadiabatic interactions with additional continua compared to H$_2$ and D$_2$. The spectra depicted in the middle and bottom panels of Fig.~\ref{CRHD:fig:spec_HH_BB} are dominated by the P(2) and R(2) lines of the $\bar{\rm H}$(11-13) -- B(20) and $\bar{\rm B}$(11-13) -- B(20) bands. The rovibrational levels of the H$\bar{\rm H}$ and B$^{\prime\prime}\bar{\rm B}$ states are closely spaced, as expected from the large internuclear distances of the outer-well states. The degeneracy of the potential energies of the $\bar{\rm H}$ and $\bar{\rm B}$ states at long range (see Fig.~\ref{Fig0_pot}) implies a close proximity of their vibrational levels and a significant mixing of the {\it g/u} character in HD caused by nonadiabatic interactions. The spectral lines associated with each value of $v$ thus form two pairs, the lower and higher pairs corresponding to excitation of the $\bar{\rm B}$ and $\bar{\rm H}$ states, respectively. The $\bar{\rm H}$ and $\bar{\rm B}$ states have ion-pair character at large internuclear distance and can be represented as positive (\emph{gerade}) and negative (\emph{ungerade}) superpositions of the H$^+$D$^-$(1s)$^2$ and H$^-$(1s)$^2$D$^+$ ion-pair configurations.
The {\it g/u} mixing results in a superposition of these \emph{gerade} and \emph{ungerade} ion-pair configurations, which leads to the (at least partial) localisation of the electrons on either the proton or the deuteron. Based on the difference in the ionisation energies and electron affinities of atomic hydrogen and deuterium, we expect the lower-lying pair of states, i.e., corresponding to the $\bar{\rm B}$ state, to be correlated with the H$^+$D$^-$ ion-pair configuration and the higher-lying pair of states (corresponding to the $\bar{\rm H}$ state) to be correlated with the H$^-$D$^+$ ion-pair configuration. The relative intensities of the transitions to the $v=11$-13 levels of the $\bar{\rm H}$ and $\bar{\rm B}$ states observed in the three spectra depicted in Fig.~\ref{CRHD:fig:spec_HH_BB} are determined by the complex interplay between the transition moments from the B(20) intermediate level and the competition between radiative decay and decay by predissociation and autoionisation. The B state of HD around $v=20$ has dominant {\it u} character so that the absorption intensities are proportional to the {\it g} character of the $\bar{\rm H}$(11-13) and $\bar{\rm B}$(11-13) levels. These characters can be derived from the analysis of these states presented in Refs.~\cite{wolniewicz98a,reinhold99a} to be 0.86, 0.81, and 0.72 for the $v=11$, 12 and 13 levels of the $\bar{\rm H}$ state and 0.14, 0.19 and 0.28 for the $v=11$, 12 and 13 levels of the $\bar{\rm B}$ state, with almost no dependence on the rotational quantum number. These characters explain why the lines associated with the excitation of the $\bar{\rm H}$ vibrational levels are overall stronger than the lines associated with the corresponding $\bar{\rm B}$ vibrational levels.
However, the observed intensity ratios are less pronounced than would be expected on the basis of the {\it g} characters, presumably because of differences in the radiative decay of the $\bar{\rm H}$ and $\bar{\rm B}$ states \cite{reinhold00a}. The intensities of the transitions to the $v=11$-13 states of the $\bar{\rm H}$ and $\bar{\rm B}$ states observed in the HD$^+$ channel (top panel of Fig.~\ref{CRHD:fig:spec_HH_BB}) increase rapidly with increasing $v$ value. This increase can be readily understood by considering the potential-energy curves depicted in Fig.~\ref{Fig0_pot}. In the spectral region under investigation, autoionisation must take place to the ionisation continua associated with the X$^+(v^+=0,1)$ vibronic states of HD$^+$ for energetic reasons. The autoionisation rate is given by the overlap of the vibrational wavefunctions of the $\bar{\rm H}$ and $\bar{\rm B}$ states with the X$^+(v^+=0,1)$ vibrational wavefunction and, consequently, by the probability of tunnelling through the barriers separating the two potential wells of the H$\bar{\rm H}$ and B$^{\prime\prime}\bar{\rm B}$ states. This probability increases with increasing $v$ value. The relative intensities of the transitions to the $\bar{\rm H}$ and $\bar{\rm B}$ states in the spectra depicted in the middle and lower panels of Fig.~\ref{CRHD:fig:spec_HH_BB} indicate a slight preference for the $\bar{\rm B}$ levels to predissociate to the H(1s) + D($n=2$) continua and for the $\bar{\rm H}$ levels to predissociate to the D(1s) + H($n=2$) continua. This behaviour reflects the facts that the $\bar{\rm B}$ state correlates with the H$^+$D$^-$ ion-pair configuration and that the higher-lying $\bar{\rm H}$ state is correlated with the H$^-$D$^+$ ion-pair configuration, as mentioned above. Predissociation is a vibronic effect and it is unlikely that both electrons are affected. 
The dominant predissociation processes are therefore expected to be the one-electron charge-transfer (CT) processes \begin{equation} {\rm H}^+{\rm D}^-(1{\rm s}^2) \xrightarrow{\rm CT\ predissociation} {\rm H}(2l) + {\rm D}(1{\rm s}) \label{CRHD:eq:pred_B} \end{equation} \begin{equation} {\rm H}^-(1 {\rm s}^2){\rm D}^+ \xrightarrow{\rm CT\ predissociation} {\rm H}(1{\rm s}) + {\rm D}(2l) \label{CRHD:eq:pred_H}, \end{equation} in accord with the trends in relative intensities observed in the lower two panels of Fig.~\ref{CRHD:fig:spec_HH_BB}. \subsection{Bound levels of HD$^+$ just below the dissociative-ionisation thresholds of HD} \label{sec:bound} \begin{figure}\centering \includegraphics[height=0.8\textheight,angle=0]{Fig5.pdf} \caption{PFI-ZEKE photoelectron spectrum of HD recorded near the dissociative-ionisation threshold from the $\bar{\rm H}~^1\Sigma_g^+~(11,4)$ intermediate level. The H$^+$ + D(1s) and D$^+$ + H(1s) dissociation thresholds are indicated by dashed blue and red lines, respectively. \label{CRHD:fig:spec_HH_zeke}} \end{figure} The PFI-ZEKE PE spectrum of HD recorded via the $\bar{\rm H}~^1\Sigma_g^+~(11,4)$ intermediate level is depicted in Fig.~\ref{CRHD:fig:spec_HH_zeke}. The spectrum consists of transitions to bound levels of HD$^+$ that can be grouped in rotational progressions associated with vibrational levels with $v^+$ between 16 and 21 and $N^+$ between 0 and 10. In H$_2$ or D$_2$, the conservation of total parity and nuclear-spin symmetry implies that photoionisation to the X$^+$ $^2\Sigma_g^+$ state of H$_2^+$ or D$_2^+$ from a \emph{gerade} intermediate state with even (odd) rotational-angular-momentum quantum numbers $N$ exclusively yields ions in rotational levels of even (odd) $N^+$ values \cite{xie90a,signorell97c}. This rotational photoionisation selection rule can be expressed as $\Delta N= N^+-N=0, \pm2, \pm4, \ldots$.
If the photoionisation is from an \emph{ungerade} intermediate state, the selection rule is $\Delta N= \pm1, \pm3, \ldots$. In HD, there are no restrictions imposed by the conservation of nuclear-spin symmetry and one has to consider the effects of {\it g/u} mixing both in the intermediate and the ionic states. Figure~\ref{CRHD:fig:spec_HH_zeke} shows that both even- and odd-$N^+$ levels of HD$^+$ are accessed from the $\bar{\rm H}$(11,4) intermediate level but that the transitions to the even-$N^+$ levels are much more intense than the transitions to the odd-$N^+$ levels. In the rotational progressions associated with $v^+=16$ and 17, the intensity ratio of the transitions to even and odd $N^+$ values roughly reflects the 0.86:0.14 ratio of {\it g} and {\it u} character of the $\bar{\rm H}(11)$ intermediate state. This observation suggests that the $v^+=16$ and 17 vibrational levels of the X$^+$ $^2\Sigma_g^+$ state of HD$^+$ have almost pure {\it g} character. With increasing value of $v^+$, however, the intensities of transitions to states of even and odd values of $N^+$ become more and more similar, indicating that {\it g/u} mixing in the HD$^+$ ion becomes significant beyond $v^+=18$. The PFI-ZEKE PE spectra of HD recorded from the $\bar{\rm B}$(11,2) and $\bar{\rm H}$(11,4) intermediate states are compared in Fig.~\ref{CRHD:fig:spec_HH_BB_zeke}. Both intermediate states have even rotational quantum numbers, but opposite nominal {\it g/u} symmetry and thus display the opposite $N^+$-propensity rule. To rule out that the Franck-Condon factors are the origin of the strong $N^+$-dependent intensity alternations in the spectra presented in Figs.~\ref{CRHD:fig:spec_HH_zeke} and~\ref{CRHD:fig:spec_HH_BB_zeke}, we calculated them in the adiabatic approximation and present them in Fig.~\ref{CRHD:fig:FC_HH_X+_zeke} (see discussion below concerning the adiabatic approximation). 
Within a rotational progression of a given vibrational state of HD$^+$, the Franck-Condon factors are found to either smoothly increase (e.g., for $v^+=17$) or smoothly decrease (e.g., for $v^+=19$), but no dependence on the even/odd nature of $N^+$ is observed. \begin{figure}\centering \includegraphics[width=0.8\textheight,angle=90]{Fig6.pdf} \caption{Comparison of the PFI-ZEKE photoelectron spectrum of HD near the dissociative-ionisation threshold recorded from the $\bar{\rm B}~^1\Sigma_u^+~(11,2)$ (upper panel) and $\bar{\rm H}~^1\Sigma_g^+~(11,4)$ (lower panel) levels. The H$^+$ + D(1s) and D$^+$ + H(1s) dissociation thresholds are indicated by dashed blue and red lines, respectively. Unassigned lines are marked with green asterisks. See text for details. \label{CRHD:fig:spec_HH_BB_zeke}} \end{figure} The lines marked with asterisks in Fig.~\ref{CRHD:fig:spec_HH_BB_zeke} could not be assigned. They show the same field-ionisation shifts as the lines that could unambiguously be assigned to transitions to rovibrational levels of the X$^+$ state, but do not correspond to the positions expected for transitions to these levels from the intermediate $\bar{\rm H}$ or $\bar{\rm B}$ states nor from the B state. Unlike the assigned lines, these lines can be eliminated by applying a prepulse in the pulsed-field ionisation sequence, and thus be distinguished from the transitions to the high-$v^+$ levels discussed here. Moreover, the additional lines appear at different absolute energies when other rovibrational levels of the intermediate state are used in the multiphoton excitation sequence [Eq.~\eqref{CRHD:eq:excitation}]. They may correspond to transitions from unknown levels which are populated through the radiative decay of the intermediate states.
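The Franck-Condon factors discussed above follow from squared overlaps of vibrational wavefunctions obtained from the potential-energy curves. A minimal numerical sketch, using illustrative Morse wells rather than the actual HD and HD$^+$ potentials and reduced units with $\hbar^2/2\mu = 1/2$:

```python
import numpy as np

# Vibrational wavefunctions from a finite-difference Hamiltonian on a grid;
# units chosen so that hbar^2 / (2 mu) = 0.5. Potentials are illustrative
# Morse wells, NOT the real HD / HD+ potential-energy curves.
def vib_states(V, x, n_states):
    dx = x[1] - x[0]
    diag = 1.0 / dx**2 + V
    off = np.full(len(x) - 1, -0.5 / dx**2)
    E, psi = np.linalg.eigh(np.diag(diag) + np.diag(off, 1) + np.diag(off, -1))
    return E[:n_states], psi[:, :n_states] / np.sqrt(dx)   # grid-normalised

x = np.linspace(0.5, 14.0, 1400)
def morse(D, a, r0):
    return D * (1.0 - np.exp(-a * (x - r0)))**2 - D

E_n, psi_n = vib_states(morse(5.0, 0.7, 2.0), x, 1)    # "neutral" well, v = 0
E_i, psi_i = vib_states(morse(4.0, 0.5, 3.5), x, 6)    # "ionic" well, v+ = 0..5

dx = x[1] - x[0]
# Franck-Condon factor |<v+|v>|^2 (constant electronic transition moment assumed)
fc = (psi_n[:, 0] @ psi_i)**2 * dx**2
print(fc)          # distribution over the six lowest "ionic" levels
```

Because the displaced upper well samples the lower-state wavefunction at different internuclear distances, the resulting factors vary smoothly along a progression, consistent with the smooth trends seen in the calculated values.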
\begin{figure}\centering \includegraphics[width=0.8\textwidth]{Fig7.pdf} \caption{Franck-Condon factors for the excitation from the H$\bar{\rm H}$ $(v=11)$ state to selected rotational levels ($N^+$) of the X$^+$ state of HD$^+$. The lines connecting the data points are used to guide the eye. \label{CRHD:fig:FC_HH_X+_zeke}} \end{figure} The level positions, relative to the position of the X$^+$(20,4) level, of the highest bound states of HD$^+$ determined experimentally are listed in the fourth column of Table~\ref{CRHD:tab:level_eng}. Comparison with the theoretical dissociation energies including relativistic and radiative corrections reported by Moss \cite{moss93b} (fifth column) shows excellent agreement with our results, i.e., within the experimental uncertainties. The term values of all levels with respect to the X(0,0) ground state of HD are listed in the last column. Their absolute uncertainties are larger (about 0.11~cm$^{-1}$) because of the uncertainties in the field-induced shift of the ionisation threshold and in the term values of the intermediate levels (see Section~\ref{exp_setup}). \begin{table}[t] \caption{Measured level positions (obs) of HD$^+$ with respect to the X$^+$(20,4) level. The columns labelled "calc" and "obs$-$calc" list the dissociation energies given by Moss \cite{moss93b} and their difference to our observed values. All values are in cm$^{-1}$. 
See text for details.}\label{CRHD:tab:level_eng} \begin{tabular}{l l l c r r} \hline & $v^+$ & $N^+$ & \text{obs} & calc~\cite{moss93b} & obs-calc \\ \hline $X^+$ & 16 & 4 & -1323.54(4) & -1323.4819 & -0.058 \\ $X^+$ & 16 & 5 & -1253.70(9) & -1253.6991 & -0.001 \\ $X^+$ & 16 & 6 & -1171.819(13) & -1171.8127 & -0.006 \\ $X^+$ & 16 & 8 & -975.802(14) & -975.7955 & -0.007 \\ $X^+$ & 16 & 10 & -744.990(22) & -744.9787 & -0.011 \\ $X^+$ & 17 & 0 & -960.05(3) & -960.0882 & 0.038 \\ $X^+$ & 17 & 2 & -922.644(14) & -922.6533 & 0.009 \\ $X^+$ & 17 & 4 & -837.360(13) & -837.3674 & 0.007 \\ $X^+$ & 17 & 5 & -778.18(6) & -778.2028 & 0.023 \\ $X^+$ & 17 & 6 & -709.125(11) & -709.1187 & -0.006 \\ $X^+$ & 17 & 7 & -631.147(25) & -631.1418 & -0.005 \\ $X^+$ & 17 & 8 & -545.455(12) & -545.4622 & 0.007 \\ $X^+$ & 17 & 9 & -453.39(8) & -453.4392 & 0.049 \\ $X^+$ & 18 & 0 & -551.11(5) & -551.1163 & 0.006 \\ $X^+$ & 18 & 1 & -540.87(4) & -540.8346 & -0.035 \\ $X^+$ & 18 & 2 & -520.464(12) & -520.4569 & -0.007 \\ $X^+$ & 18 & 3 & -490.380(17) & -490.3545 & -0.025 \\ $X^+$ & 18 & 4 & -451.082(12) & -451.0830 & 0.001 \\ $X^+$ & 18 & 5 & -403.392(12) & -403.3827 & -0.009 \\ $X^+$ & 18 & 6 & -348.178(11) & -348.1802 & 0.002 \\ $X^+$ & 18 & 7 & -286.612(15) & -286.5951 & -0.017 \\ $X^+$ & 18 & 8 & -219.978(12) & -219.9544 & -0.024 \\ $X^+$ & 18 & 10 & -77.97(8) & -78.0598 & 0.090 \\ $X^+$ & 19 & 0 & -245.100(17) & -245.0871 & -0.013 \\ $X^+$ & 19 & 1 & -237.256(23) & -237.2628 & 0.007 \\ $X^+$ & 19 & 2 & -221.819(12) & -221.8163 & -0.003 \\ $X^+$ & 19 & 3 & -199.176(14) & -199.1536 & -0.022 \\ $X^+$ & 19 & 4 & -169.903(12) & -169.8886 & -0.014 \\ $X^+$ & 19 & 5 & -134.859(14) & -134.8521 & -0.007 \\ $X^+$ & 19 & 6 & -95.113(12) & -95.1101 & -0.003 \\ $X^+$ & 19 & 7 & -52.024(15) & -52.0012 & -0.023 \\ $X^+$ & 19 & 8 & -7.21(3) & -7.2188 & 0.009 \\ $X^+$ & 20 & 0 & -47.067(16) & -47.0614 & -0.006 \\ $X^+$ & 20 & 1 & -42.004(21) & -41.9851 & -0.019 \\ $X^+$ & 20 & 2 & -32.065(12) & -32.0733 & 
0.008 \\ $X^+$ & 20 & 3 & -17.846(20) & -17.8193 & -0.027 \\ $X^+$ & 20 & 4 & 0$^a$ & 0$^a$ & 0$^a$ \\ $X^+$ & 20 & 5 & 20.233(17) & 20.2435 & -0.011 \\ $X^+$ & 20 & 6 & 41.186(11) & 41.1419 & 0.044 \\ $X^+$ & 20 & 7 & 58.17(9) & $\dots$ & $\dots$ \\ $X^+$ & 21 & 0 & 36.798(16) & 36.7796 & 0.018 \\ $X^+$ & 21 & 1 & 38.468(16) & 38.4424 & 0.026 \\ $X^+$ & 21 & 2 & 41.374(22) & 41.4426 & -0.069 \\ $X^+$ & 21 & 3 & 45.17(3) & 45.1564 & 0.014 \\ $X^+$ & 21 & 4 & 48.647(22) & $\dots$ & $\dots$ \\ \hline \multicolumn{6}{l}{\parbox[t]{9.5cm}{\footnotesize {$^a$}Reference level determined to be located 146037.73(18)~cm$^{-1}$ above the ground rovibronic state of HD. }} \end{tabular} \end{table} To assess the quality of the calculated Franck-Condon factors and to explore the effects of the {\it g/u} mixing on the level positions of the bound states of HD$^+$, we calculated the level energies based on Eq.~(\ref{H2+_hamiltonian}) in the adiabatic approximation using the methods described in Refs.~\cite{beyer16a,beyer17a}. The comparison with the nonadiabatic dissociation energies reported by Moss \cite{moss93b} is presented in Fig.~\ref{CRHD:fig:calc_eng}, which displays the differences between our values and those calculated by Moss. These differences correspond to the nonadiabatic corrections. Up to $v^+=18$ the nonadiabatic corrections are comparable to the corrections obtained in H$_2^+$ and D$_2^+$ (see Table 2 of Ref.~\cite{beyer16b} and Tables 1 and 2 of Ref.~\cite{beyer17a}). In this case, the adiabatic potential-energy function of the X$^+$ state can be used to obtain reliable Franck-Condon factors. Whereas the nonadiabatic corrections almost vanish for the highest vibrational states of H$_2^+$ and D$_2^+$, they increase very rapidly beyond $v^+=18$ in the case of HD$^+$.
This behaviour is attributed to the fact that the single adiabatic potential function of the X$^+$ state fails to describe the distinct H$^+$ + D(1s) and H(1s) + D$^+$ dissociation thresholds and has the same origin ({\it g/u} mixing) as the gradual evolution toward equal intensities of transitions to even- and odd-$N^+$ levels with increasing $v^+$ values discussed in the context of Figs.~\ref{CRHD:fig:spec_HH_zeke} and \ref{CRHD:fig:spec_HH_BB_zeke}. \begin{figure} \centering \includegraphics[width=1\columnwidth]{Fig8.pdf}\\ \caption{Nonadiabatic corrections to the dissociation energies of the bound states of HD$^+$ obtained as differences between the adiabatic energies calculated in the present work and the nonadiabatic energies calculated by Moss \cite{moss93b}. The lines connecting the data points are used to guide the eye. \label{CRHD:fig:calc_eng}} \end{figure} \subsection{Resonances} The PFI-ZEKE PE spectra recorded from the $\bar{\rm H}$(11,4) and $\bar{\rm B}$(11,2) intermediate states and depicted in Fig.~\ref{CRHD:fig:spec_HH_BB_zeke} also have structure in the dissociation continuum. In both cases, an increase of the signal is observed just above the first dissociation threshold (H$^+$ + D(1s)). Both spectra also show a sharp feature just above each of the two (H$^+$ + D(1s) and D$^+$ + H(1s)) thresholds, which we assign to the X$^+(21,4)$ and A$^+(0,4)$ quasibound states, respectively. The sharp feature marked with an asterisk in the upper panel of Fig.~\ref{CRHD:fig:spec_HH_BB_zeke} was found \emph{not} to be a feature of the continuum but can be unambiguously identified as one of the spurious transitions discussed in Subsection~\ref{sec:bound}. The spectrum displayed in the upper panel of Fig.~\ref{CRHD:fig:spec_HH_BB_zeke} reveals a weak, broad feature that we attribute to the X$^+(20,7)$ quasibound state. 
This feature is not visible in the lower panel because of the \emph{gerade} symmetry of the H$\bar{\rm H}$ state and the lower intensity of transitions to rotational states with odd $N^+$ values when starting from even-$N$ rotational levels of the $\bar{\rm H}$ state. The spectrum in the lower panel of Fig.~\ref{CRHD:fig:spec_HH_BB_zeke} reveals a second broad resonance just below the second dissociation threshold (D$^+$ + H(1s)) with a sharp decrease in signal intensity at the dissociation threshold. To elucidate the origin of this resonance, we recorded MATI spectra by monitoring the HD$^+$, D$^+$ and H$^+$ signals resulting from the delayed pulsed-field ionisation in separate time-of-flight windows. These spectra are compared with the corresponding PFI-ZEKE PE spectrum in Fig.~\ref{CRHD:fig:spec_res}. Below the first dissociation threshold (H$^+$ + D(1s)), the lines observed in the PFI-ZEKE spectrum all appear in the HD$^+$ MATI spectrum and stem from the pulsed-field ionisation of high-$n$ molecular Rydberg states of HD. Above the first threshold, the signal in the H$^+$ channel increases and corresponds to the field ionisation of H($nl$) Rydberg states produced by dissociation. Above the H$^+$ + D(1s) dissociation threshold, the HD$^+$ ion core thus dissociates and the Rydberg electron follows the charged fragment (here H$^+$). The Rydberg electron thus acts as a spectator to the HD$^+$-core dissociation. The broad feature that is observed in the PFI-ZEKE PE spectrum just below the second threshold also appears solely in the H$^+$ channel. Based on the calculations of Wolniewicz and Orlikowski \cite{wolniewicz91a}, we attribute this broad line to the overlapping A$^+(0,0$-$3)$ Feshbach resonances marked by red bars below the H$^+$ MATI spectrum. A nonzero signal is only observed in the D$^+$ channel above the second dissociation threshold (H(1s) + D$^+$) corresponding to excitation of molecular Rydberg states that dissociate into H(1s) and D($nl$).
\begin{figure}\centering \includegraphics[width=0.8\textheight,angle=90]{Fig9.pdf} \caption{PFI-ZEKE photoelectron (top panel) and MATI (2nd panel: D$^+$; 3rd panel: H$^+$; bottom panel: HD$^+$) spectra of HD recorded via the $\bar{\rm H}$(11,2) intermediate state. The first and second dissociation thresholds are indicated by blue and red dashed lines, respectively. The calculated level positions of the A$^+$ metastable states from Ref.~\cite{wolniewicz91a} are indicated by red bars and their calculated overlapping widths by a red rectangle.} \label{CRHD:fig:spec_res} \end{figure} \begin{table} \centering \caption{\small Observed positions and widths of the quasibound states in HD$^+$ above the H$^+$ + D(1s) dissociation threshold. The relative experimental values were converted to absolute energies using the dissociation energy of the X$^+(20,4)$ level reported by Moss \cite{moss93b} (46.9909~cm$^{-1}$) (all values in cm$^{-1}$). \label{CRHD:tab:res_engs}} \begin{tabular}{lrrrr} \toprule Level & \multicolumn{2}{c}{Observed} & \multicolumn{2}{c}{Calculated} \\[2pt]\cline{2-3}\cline{4-5}\\[-2pt] & Position & Width & Position & Width \\ \midrule X$^+$(20,7) & 11.18(9) & 2.1(3) & 12 \cite{davis78a} & -- \\ X$^+$(21,4) & 1.656(19) & 0.28(7) & 1.55 \cite{davis78a} & -- \\ A$^+$(0,0-3) & 23(1)-29(1) & -- & 23.975-28.365 \cite{wolniewicz91a} & -- \\ A$^+$(0,4) & 31.19(3) & 0.8(2) & 31.076 \cite{orlikowski94a} & 0.567 \cite{orlikowski94a} \\ \bottomrule \end{tabular} \end{table} The resonance widths and positions observed experimentally are given in Table~\ref{CRHD:tab:res_engs}. They agree less well with the calculations than was the case in H$_2^+$ \cite{beyer16a,beyer16b} and D$_2^+$ \cite{beyer17a}. The agreement is nevertheless sufficient for these assignments to be made with some confidence.
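Positions and widths such as those in Table~\ref{CRHD:tab:res_engs} are obtained by fitting line-shape models to the continuum features. A minimal sketch of such a fit on synthetic data, using a Lorentzian on a flat background (the real features may require Fano profiles; all numbers below are invented to mimic a resonance near 31.2~cm$^{-1}$):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(e, e0, gamma, amp, bg):
    """Lorentzian of centre e0 and FWHM gamma on a constant background."""
    return bg + amp * (gamma / 2.0)**2 / ((e - e0)**2 + (gamma / 2.0)**2)

# Synthetic "spectrum": resonance at 31.19 with width 0.8 plus noise
rng = np.random.default_rng(1)
e = np.linspace(25.0, 37.0, 240)
y = lorentzian(e, 31.19, 0.8, 1.0, 0.1) + rng.normal(0.0, 0.02, e.size)

popt, pcov = curve_fit(lorentzian, e, y, p0=(31.0, 1.0, 1.0, 0.0))
perr = np.sqrt(np.diag(pcov))
print(f"position = {popt[0]:.2f}({perr[0]:.2f}), width = {abs(popt[1]):.2f}({perr[1]:.2f})")
```

The diagonal of the covariance matrix returned by the fit provides the statistical uncertainties quoted in parentheses.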
The assignments of (i) the A$^+(0,4)$ resonance as a shape resonance with H(1s) + D$^+$ character, (ii) the A$^+(0,0$-$3)$ resonances as Feshbach resonances of H(1s) + D$^+$ character, and (iii) the X$^+(21,4)$ and X$^+(20,7)$ resonances as shape resonances of H$^+$ + D(1s) character lend support to the classification of these resonances proposed by Davis and Thorson \cite{davis78a}. \section{Conclusions} In this article, we have presented a study of the structure and nonadiabatic dynamics in HD and HD$^+$ resulting from the breakdown of the {\it g/u} symmetry at long range. The observations concerned the level structure and dissociation dynamics of HD in the $\bar{\rm H}$ and $\bar{\rm B}$ outer-well states into H$(n=2)$ + D(1s) and H(1s) + D$(n=2)$ fragments and of HD$^+$ near the dissociative-ionisation thresholds H$^+$ + D(1s) and H(1s) + D$^+$. The preferential formation of H$(n=2)$ and D$(n=2)$ fragments in the dissociation of the $\bar{\rm B}$ and $\bar{\rm H}$ states, respectively, was interpreted as originating from the dominant role of one-electron charge-transfer processes over two-electron processes [see Eqs.~(\ref{CRHD:eq:pred_B}) and~(\ref{CRHD:eq:pred_H})]. The determination of precise X$^+$ rovibrational level energies for the weakly bound states of HD$^+$ just below the H$^+$ + D(1s) threshold and the comparison with adiabatic and nonadiabatic level energies enabled the observation of a sharp onset of nonadiabatic corrections beyond $v^+=18$ caused by {\it g/u}-symmetry breaking. The separate identification of the H$^+$ + D(1s) and H(1s) + D$^+$ dissociation thresholds in the middle panels of Fig.~\ref{CRHD:fig:spec_res} enabled the determination of the relative positions of these thresholds (see also Ref.~\cite{beyer18c}). The mass selectivity of the MATI spectra also permitted the unambiguous attribution of the spectral features observed in the continuum to either of the two dissociative-ionisation channels.
The comparison of the dissociative photoionisation cross sections observed in the spectra recorded from the H$\bar{\rm H}$ and B$^{\prime\prime}\bar{\rm B}$ states in Fig.~\ref{CRHD:fig:spec_HH_BB_zeke} leads to the following conclusions: \begin{itemize} \item The spectrum recorded from the $\bar{\rm B}$(11,2) state reveals a large step at the H$^+$ + D(1s) threshold but hardly any further increase of signal at the D$^+$ + H(1s) dissociation threshold. The D$^+$ + H(1s) channel is thus dark in this excitation sequence. \item In contrast, the spectra recorded from the $\bar{\rm H}$(11,4) state show a very weak step at the H$^+$ + D(1s) threshold and a large increase at the D$^+$ + H(1s) threshold. It is now the H$^+$ + D(1s) channel that is dark in this excitation sequence. \item The A$^+$ Feshbach resonances are not observed in spectra recorded from the $\bar{\rm B}$(11,2) level, because they are associated with the D$^+$ + H(1s) channel, which is dark. The Feshbach resonances are detected in the H$^+$ signal because of electronic predissociation into the H$^+$ + D(1s) continuum. \end{itemize} This behaviour is particularly interesting: Both the intermediate $\bar{\rm H}$ and $\bar{\rm B}$ states and the final ionic X$^+$ and A$^+$ states are of mixed {\it g/u} character, resulting in a partial (in the case of the intermediate states) or complete (in the case of the final states in the dissociation continua) localisation of the electron(s) on either the proton or the deuteron. The dark dissociative-ionisation channels are the channels for which the two electrons would need to be excited, and the bright dissociative-ionisation channels those for which charge-transfer excitation takes place, which is a one-electron process [see Eqs.~(\ref{CRHD:eq:pred_B}) and~(\ref{CRHD:eq:pred_H})]. These considerations are thus analogous to those used to explain the dissociation dynamics of the $\bar{\rm H}$ and $\bar{\rm B}$ states.
The conclusions drawn in the present study concerning the dissociation dynamics of the $\bar{\rm H}$ and $\bar{\rm B}$ states in HD, of the X$^+$ and A$^+$ levels in HD$^+$, and the intensities in the photoionisation and photoelectron spectra are all related to nonadiabatic {\it g/u}-symmetry breaking and are summarised schematically in Fig.~\ref{CRHD:fig:HD_dynamics_scheme}. \begin{figure}\centering \includegraphics[width=0.7\textwidth]{Fig10.pdf} \caption{Schematic illustration of the dissociation dynamics in the $\bar{\rm H}$ and $\bar{\rm B}$ states of HD and in the X$^+$ and A$^+$ states of HD$^+$. The upper blue and magenta level schemes correspond to the weakly bound rovibrational levels of HD$^+$ and the dissociative-ionisation continua associated with the H$^+$ + D(1s) and H(1s) + D$^+$ dissociation thresholds, respectively, which correlate adiabatically to the X$^+$ and A$^+$ electronic states of HD$^+$. Resonances that can be classified as shape and Feshbach resonances are drawn as dashed and full horizontal lines, respectively. The lower pale blue and magenta dissociation continua are associated with the H(2$l$) + D(1s) and H(1s) + D(2$l$) dissociation continua of HD. The green and orange arrows indicate the bright one-electron charge-transfer excitation and the one-electron charge-transfer predissociation, respectively. See text for details. \label{CRHD:fig:HD_dynamics_scheme}} \end{figure} The results presented in this article illustrate the fact that {\it g/u}-symmetry breaking makes the dissociative-ionisation dynamics in HD richer than in H$_2$ and D$_2$. Accurate calculations of the resonances in the dissociation continua of HD$^+$ observed for the first time in the present work would be desirable to confirm our assignments. \section*{Acknowledgements} This work is supported financially by the Swiss National Science Foundation (Grant No.
200020B-200478) and the European Research Council through an advanced grant under the European Union's Horizon 2020 research and innovation programme (Grant No. 743121). \bibliographystyle{tfo}
\section{Introduction} The FCC-hh project, defined by the target of 100 TeV proton-proton collisions with a total integrated luminosity of 30 ab$^{-1}$, will make it possible to extend the searches for flavour-changing neutral currents (FCNC, figure \ref{fcnc_decays}), which are forbidden in the Standard Model (SM) at tree level and strongly suppressed in loop corrections by the Glashow-Iliopoulos-Maiani mechanism \cite{PhysRevD.2.1285}. The predicted SM branching fractions for top-quark FCNC decays are $\mathcal{O}(10^{-12} - 10^{-17})$ \cite{Agashe:2013hma} and are not expected to be detectable at the FCC-hh experimental sensitivity. However, certain scenarios beyond the SM (BSM), such as two-Higgs-doublet models, warped extra dimensions and minimal supersymmetric models, incorporate significantly enhanced FCNC couplings that can be directly probed at future collider experiments \cite{Agashe:2013hma}. Observation of such processes would be a clear signal of new physics. FCNC searches in the top-quark sector are typically based on the selection of events with isolated, well-separated objects. On the other hand, owing to the increased energy of future collider experiments, a significant number of events will contain high-energy, boosted objects that require a different analysis strategy. We study the sensitivity of the FCC-hh to $t \rightarrow q\gamma$ and $t \rightarrow qH$ FCNC transitions using the $pp \rightarrow t\bar{t} \rightarrow tq\gamma$ and $pp \rightarrow t\bar{t} \rightarrow tqH$ processes, respectively, where $q$ is a $u$ or $c$ quark. The analyses exploit the boosted regime, in which the top-quark $p_T$ is much larger than its mass. The signature of the signal processes includes a high-transverse-momentum top-quark jet and a fat jet clustered from the collinear photon or Higgs decay products and the light-flavour jet. A resolved analysis of the FCNC $tq\gamma$ coupling via single-top production in association with a photon is described in \cite{Oyulmaz:2018irs}.
In \cite{Papaefstathiou:2017xuv} a study of the FCNC in $tqH$ covered the $H \rightarrow \gamma \gamma$ decay. In the present analysis the dominant Higgs decay channel $H \rightarrow b \bar{b}$ is explored. The study is based on ``fast'' simulation of the ``reference'' FCC-hh detector \cite{Zaborowska:2018origin, Zaborowska:2018qxe, Faltova:2018ayl}. \section{Monte Carlo samples} While the flavour-violating couplings of the top quark may arise from different sources, for the signal simulation the effects of BSM physics in top-quark interactions are described by an effective field theory approach. The most general effective Lagrangian can be written as \cite{AguilarSaavedra:2004wm} (terms up to dimension five): \begin{align} \begin{aligned} -\mathcal{L} & = g_s \kappa_{tqg} \bar{q} (g_L P_L + g_R P_R) \frac{i \sigma_{\mu\nu} q^\nu}{\Lambda} T^a t G^{a\mu} + e \kappa_{tq\gamma} \bar{q} (\gamma_L P_L + \gamma_R P_R) \frac{i \sigma_{\mu\nu} q^\nu}{\Lambda} t A^{\mu} + \\ & + \frac{g}{2 c_W} X_{tqZ} \bar{q} (x_L P_L + x_R P_R) t Z^{\mu} + \frac{g}{2 c_W} \kappa_{tqZ} \bar{q} (z_L P_L + z_R P_R) \frac{i \sigma_{\mu\nu} q^\nu}{\Lambda} t Z^{\mu} + \\ & + \frac{g}{2\sqrt{2}} \kappa_{tqH} \bar{q} (h_L P_L + h_R P_R) t H + h.c., \end{aligned} \end{align} where $P_L$ and $P_R$ are chirality projectors in spin space, $\kappa_{tqX}$ and $X_{tqZ}$ are effective couplings for the corresponding vertices, and $\Lambda$ is the scale of new physics. The following background processes are considered for the $tq\gamma$ signal: QCD $\gamma+$jets, $t\bar{t}$, $t\bar{t}+\gamma$, $W+$jets, $Z+$jets, single-top production and single top in association with a photon. The following background processes are considered for the $tqH$ signal: QCD multijets, $t\bar{t}$ ($+W$, $+Z$, $+H$), $W+$jets, $Z+$jets and single-top production.
All signal and background samples are generated at leading order using the {\scshape MG5\_}a{\scshape MC@NLO}~2.5.2~\cite{Alwall:2011uj} package, with subsequent showering and hadronization in {\scshape Pythia}~8.230~\cite{Sjostrand:2014zea}. The detector simulation has been performed with the fast simulation tool {\scshape Delphes}~3.4.2~\cite{deFavereau:2013fsa} using the reference FCC-hh detector parametrisation. Additional proton-proton collisions within the same bunch crossing (pile-up) are not simulated. In order to take into account higher-order QCD corrections, K-factors are applied to the signal and background samples. \begin{figure} \centering \includegraphics[width=0.9\linewidth,clip]{fey.pdf} \iffalse \begin{subfigure}[t]{0.24\textwidth} \centering \includegraphics[width=\linewidth,clip]{fey_1.pdf} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \centering \includegraphics[width=\linewidth,clip]{fey_2.pdf} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \centering \includegraphics[width=\linewidth,clip]{fey_3.pdf} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \centering \includegraphics[width=\linewidth,clip]{fey_4.pdf} \end{subfigure} \fi \caption{Diagrams for top quark decays mediated by FCNC couplings.} \label{fcnc_decays} \end{figure} \section{Event selection and signal extraction} Candidate $tq\gamma$ signal events are selected by requiring exactly one photon with $p_T > 200$ GeV, at least two jets with cone size $R=0.4$ and $p_T > 30$ GeV (one of which must be $b$-tagged), at least two jets with cone size $R=0.8$ (``fat'' jets) and $p_T > 30$ GeV, and at most one lepton ($e$ or $\mu$) with $p_T > 25$ GeV. The $\Delta R$ between the selected photon and the $b$-tagged jet must be greater than $0.8$. The fat jets matched to the photon and to the $b$-tagged jet, respectively, are required to have $p_T > 400$ GeV. All objects must have $|\eta| < 3$.
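The kinematic requirements above can be illustrated with a short filter over reconstructed objects. This is a hedged, partial sketch rather than the analysis code: the object containers are hypothetical Python dictionaries, and the $\Delta R$ and fat-jet matching requirements are omitted for brevity.

```python
# Hedged sketch of the boosted t -> q gamma selection (not the analysis code).
# Objects are hypothetical dicts with 'pt' (GeV), 'eta', and an optional
# 'btag' flag; the Delta-R and fat-jet matching cuts are omitted for brevity.

def passes_tqgamma_selection(photons, jets04, fatjets, leptons):
    """Apply the photon/jet/lepton multiplicity and threshold cuts."""
    def central(objs, pt_min):
        # all objects must be central (|eta| < 3) and above threshold
        return [o for o in objs if abs(o['eta']) < 3.0 and o['pt'] > pt_min]

    sel_photons = central(photons, 200.0)   # exactly one hard photon
    sel_jets = central(jets04, 30.0)        # R = 0.4 jets
    sel_fats = central(fatjets, 30.0)       # R = 0.8 "fat" jets
    sel_leps = central(leptons, 25.0)       # electrons or muons
    bjets = [j for j in sel_jets if j.get('btag')]

    return (len(sel_photons) == 1 and len(sel_jets) >= 2 and len(bjets) >= 1
            and len(sel_fats) >= 2 and len(sel_leps) <= 1)
```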
Candidate $tqH$ signal events are selected by requiring at least one jet with cone size $R=0.8$ containing at least two $b$-tagged subjets (with cone size $R=0.4$), corresponding to the FCNC decay of the top quark (FCNC fat jet), and at least one additional fat jet with a $b$-tagged subjet, corresponding to the SM decay of the top quark (SM fat jet). The leading (subleading) selected fat jet must have $p_T > 500$ ($p_T > 300$) GeV. The $\Delta \phi$ between the selected leading and subleading fat jets must be greater than $1.0$. All objects must have $|\eta| < 3$. The subjets with cone size $R=0.2$ from the selected fat jets are used to form the Higgs and W boson candidates. A Boosted Decision Tree (BDT) constructed within the TMVA framework \cite{TMVA2007} is used to separate the signal signature from the background contributions. $10\%$ of the events are selected for training, and the remainder are used in the statistical analysis of the BDT discriminants with the CombinedLimit package. For each background a 30\% normalisation uncertainty is assumed and incorporated in the statistical model as a nuisance parameter. The asymptotic frequentist formulae \cite{Cowan:2010js} are used to obtain the expected upper limit on the signal cross section based on an Asimov data set of the background-only model. The following input variables are used for the $tq\gamma$ signal: the $\tau_{21}$ variable \cite{Thaler:2010tr} of the fat jet matched to the photon ($\gamma$-jet); the $\tau_{21}$ and $\tau_{32}$ variables of the $b$-tagged fat jet ($b$-jet); the masses of the soft-dropped \cite{Larkoski:2014wba} $\gamma$-jet and $b$-jet; the $p_T$ of the photon, $\gamma$-jet and $b$-jet; the scalar product of the photon and $\gamma$-jet four-vectors; the scalar product of the $b$-jet and $\gamma$-jet four-vectors; and the masses of the two soft-dropped fat jets closest to the top quark mass.
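The BDT-based separation can be illustrated with a toy example. As a stand-in for TMVA, the sketch below uses scikit-learn's gradient-boosted trees on synthetic ``kinematic'' features and mirrors the 10\%/90\% training split described above; all variable names and data are illustrative, not the analysis inputs.

```python
# Toy illustration of the BDT separation; scikit-learn's gradient-boosted
# trees stand in for the TMVA BDT, and the "kinematic" features are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
signal = rng.normal(loc=1.0, scale=1.0, size=(n, 4))       # toy signal events
background = rng.normal(loc=-1.0, scale=1.0, size=(n, 4))  # toy background
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

# mirror the 10% training / 90% statistical-analysis split described above
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, train_size=0.10, random_state=0)

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=0)
bdt.fit(X_train, y_train)
accuracy = bdt.score(X_hold, y_hold)  # separation power on held-out events
```

In the real analysis the held-out discriminant distributions, not a raw accuracy, are what enter the limit-setting fit.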
The following input variables are used for the $tqH$ signal: the soft-dropped masses, $p_T$, $\tau_{21}$, $\tau_{31}$ and $\tau_{32}$ variables \cite{Thaler:2010tr} and the scalar product of the selected fat jets; the $p_T$ and masses of the Higgs candidate from the leading FCNC fat jet and of the W boson candidate from the leading SM fat jet; the scalar product of the Higgs (W boson) candidate and the corresponding fat jet; the masses of the Higgs candidate from the leading SM fat jet and of the W boson candidate from the leading FCNC fat jet; and the mass imbalance, defined as $|m^{\mathrm{SM}}_{\mathrm{fat\,jet}} - m^{\mathrm{FCNC}}_{\mathrm{fat\,jet}}| / \max{(m^{\mathrm{SM}}_{\mathrm{fat\,jet}}, m^{\mathrm{FCNC}}_{\mathrm{fat\,jet}})}$. \begin{figure} \centering \includegraphics[width=0.49\columnwidth]{lumi_new_u.pdf} \includegraphics[width=0.49\columnwidth]{lumi_tqH_u.pdf} \caption{ Expected exclusion limits at 95\% C.L. on the FCNC $t \rightarrow q\gamma$ (left) and $t \rightarrow qH$ (right) branching fractions as a function of integrated luminosity. } \label{tqgamma_limits} \end{figure} \begin{table}[h] \centering \caption{ The 95\% C.L. expected exclusion limits on the branching fractions for integrated luminosities of 30 ab$^{-1}$ and 3 ab$^{-1}$, in comparison with present experimental limits and the expected HL-LHC sensitivity.} \label{tqgamma_limits_table} \vspace{5pt} \renewcommand{\arraystretch}{1.5} \begin{tabular}{c|c c r} \hline \hline Detector & $\mathcal{B}(t \rightarrow u\gamma)$ & $\mathcal{B}(t \rightarrow c\gamma)$ & Ref. \\ \hline CMS (19.8 fb$^{-1}$, 8 TeV) & $13 \times 10^{-5}$ & $170 \times 10^{-5} $ & \cite{Khachatryan:2015att} \\ CMS Phase-2 (300 fb$^{-1}$, 14 TeV) & $2.1 \times 10^{-5}$ & $15 \times 10^{-5}$ & \cite{Mandrik:2018gud} \\ CMS Phase-2 (3 ab$^{-1}$, 14 TeV) & $0.9 \times 10^{-5}$ & $7.4 \times 10^{-5} $ & \cite{Mandrik:2018gud} \\ FCC-hh (3 ab$^{-1}$, 100 TeV) & $9.8 \times 10^{-7}$ & $12.9 \times 10^{-7}$ & \\ FCC-hh (30 ab$^{-1}$, 100 TeV) & $1.8 \times 10^{-7}$ & $2.4 \times 10^{-7} $ & \\ \hline Detector & $\mathcal{B}(t \rightarrow uH)$ & $\mathcal{B}(t \rightarrow cH)$ & Ref.
\\ \hline CMS (36.1 fb$^{-1}$, 13 TeV) & $4.7 \times 10^{-3}$ & $4.7 \times 10^{-3}$ & \cite{Sirunyan:2017uae} \\ ATLAS (36.1 fb$^{-1}$, 13 TeV) & $1.9 \times 10^{-3}$ & $1.6 \times 10^{-3} $ & \cite{Aaboud:2018pob} \\ FCC-hh (3 ab$^{-1}$, 100 TeV) & $8.4 \times 10^{-5}$ & $7.7 \times 10^{-5}$ & \\ FCC-hh (30 ab$^{-1}$, 100 TeV) & $4.8 \times 10^{-5}$ & $4.3 \times 10^{-5}$ & \\ \hline \end{tabular} \end{table} \section{Results and conclusions} To avoid ambiguities due to different normalizations of the couplings in the Lagrangian, the branching fractions of the corresponding FCNC processes are used to present the results. The 95\% C.L. expected exclusion limits on the branching fractions are given in Table~\ref{tqgamma_limits_table}. Figure~\ref{tqgamma_limits} shows the expected exclusion limits on the FCNC branching fractions as a function of integrated luminosity. These results would improve the existing experimental limits \cite{Khachatryan:2015att} on the $t \rightarrow q \gamma$ branching fractions by about three to four orders of magnitude. The limits on $\mathcal{B}(t \rightarrow cH)$ and $\mathcal{B}(t \rightarrow uH)$ are comparable with the estimated limits on $\mathcal{B}(t \rightarrow qH)$ from \cite{Papaefstathiou:2017xuv}. Further improvements can be obtained from combinations with complementary analysis strategies, such as a resolved analysis and searches for FCNC couplings in single top quark production. \section*{Acknowledgments} I would like to thank H.~Gray, C.~Helsens and S.~Slabospitskii for useful discussions. \section*{References}
\section{Introduction} \begin{figure*} \includegraphics[width=0.49\textwidth]{LC_Ground_SM.png} \hfill \includegraphics[width=0.49\textwidth]{LC_Space.png} \caption{Photometric observations of ASASSN-18tb. Ground-based $BVgri$ photometry obtained with LCOGT, ASAS-SN, and SMARTS is shown on the left, and space-based \textit{TESS} and \textit{Swift} UVOT photometry is shown on the right. The \textit{TESS} photometry is shown for 24 hour bins. Marker colors indicate the filter, and marker shapes indicate the source of the data. Error bars are shown for all points, but can be smaller than the symbol used to represent the data. The photometry is not corrected for Galactic extinction. The shaded bands in the left panel show the MLCS2k2 fit \citep{2007Jha} to the LCOGT light curve.} \label{fig:GroundandSpaceLightCurve} \end{figure*} It has been known for some time that Type Ia supernovae (SNe~Ia) are the result of the thermonuclear explosion of a carbon-oxygen white dwarf (CO WD) triggered by a companion \citep{1960Hoyle,1969Colgate,2011Nugent}. However, the physical nature of this companion and the details of the explosion mechanism remain an active area of debate. Broadly speaking, SN~Ia progenitor models can be grouped into two categories -- the single-degenerate (SD) and double-degenerate (DD) scenarios. In the standard DD scenario, a tight WD-WD binary loses energy and angular momentum via gravitational wave emission before undergoing tidal interactions and subsequently exploding \citep{1979Tutukov,1984Iben,1984Webbink,2012Shen}. \cite{2011Thompson} proposed that SNe~Ia originate from triple systems, and showed that Lidov-Kozai oscillations driven by a tertiary companion can accelerate WD-WD mergers via gravitational wave radiation and implied that they may lead to WD-WD collisions. \cite{2012Katz} and \cite{2013Kushnir} proposed and found supporting evidence suggesting that WD-WD direct collisions in triple systems may be a major channel for SNe Ia. 
Further evidence for SNe~Ia produced through this scenario has been found by \cite{2015Dong} and \cite{2019Vallely} in the form of bimodal distributions of $^{56}$Ni decay products in nebular phase spectra. In the canonical SD scenario, the WD accretes matter from a non-degenerate stellar companion, eventually approaching the Chandrasekhar limit and undergoing a thermonuclear runaway \citep{1973Whelan,1982Nomoto,2004Han}. The stellar companion will be struck by the supernova ejecta shortly after explosion, leading directly to a number of observable signatures. First, the companion interaction should lead to excess emission in the early-phase light curve. Although this emission is strongly dependent on the characteristics of the stellar companion and the viewing angle of the system, \cite{2010Kasen} showed that it should be observable for an appreciable number of SNe~Ia. Additionally, material stripped from the companion star should produce hydrogen emission lines visible in late-time nebular spectra \citep{1975Wheeler,2000Marietta,2012Pan,2012Liu,2017Boehner}. Finally, the ejecta interactions impact the post-explosion properties of the stellar companion (see, e.g., \citealt{2003Podsiadlowski}, \citealt{2012bPan}, and \citealt{2013bShappee}). Early time observations are being obtained for steadily increasing numbers of SNe~Ia. Most of these efforts have focused on finding or placing upper limits on excess emission due to ejecta colliding with a nearby SD companion, although \citet{2018Stritzinger} have also found evidence that the early time optical colors are correlated with the post-peak decline rates. The searches for distortions in the early time light curves have produced mixed results. Many SN~Ia light curves do not show evidence of companion interaction. 
The nearby Type Ia SN~2011fe had an early-phase light curve consistent with a single-component power-law \citep{2011Nugent, 2012Bloom}, and early-time observations of SN~2009ig are inconsistent with the \cite{2010Kasen} interaction models \citep{2012Foley}. Additionally, \cite{2015Olling} found no evidence for ejecta-companion interaction when examining three SNe~Ia observed by \textit{Kepler} \citep{2010Borucki}. Based on early excess non-detections, \cite{2016Shappee} were able to rule out most non-degenerate companions for ASASSN-14lp, and \cite{2018Holmbo} were able to place even tighter constraints on SN~2013gy. However, this is not the case for all events. An early linear phase in the light curve of SN~2013dy was observed by \cite{2013Zheng}, and observations of SN~2014J show evidence for additional early-time structure \citep{2014Zheng,2015Goobar,2015Siverd}. \cite{2018Contreras} found that the light curve of SN~2012fr had an initial roughly linear phase that lasted for $\sim2.5$ days, and \textit{K2} observations of ASASSN-18bt showed a similar $\sim4$ day linear phase \citep{2018Brown,2019Shappee,2019Li,2019Dimitriadis}. Additionally, \cite{2016Marion} found potential indications of interaction with a non-degenerate binary companion in SN~2012cg, although this interpretation is challenged by \cite{2018Shappee}. Searches for hydrogen emission lines at late times as evidence for stripped material have largely failed. No such signatures were detected for SNe 1998bu and 2000cx \citep{2013Lundqvist}, SN~2001el \citep{2005Mattila}, SNe 2005am and 2005cf \citep{2007Leonard}, SN~2012cg \citep{2018Shappee}, SN~2013gy \citep{2018Holmbo}, or SN~2017cbv \citep{2018Sand}, nor were they detected by \cite{2017Graham} in 8 other SNe~Ia. 
The nearby SNe~Ia 2011fe and 2014J were particularly well-studied events \citep{2012Brown,2013Munari,2014Mazzali,2014Foley,2014Goobar,2016Galbany,2016Vallely,2018Dhawan}, but they too showed no hydrogen emission in their late-time spectra \citep{2013Shappee,2015Lundqvist,2016Sand}. Even SN~2012cg, SN~2012fr, and ASASSN-18bt, events with early excess emission potentially indicative of a single-degenerate progenitor system, did not have hydrogen in their nebular phase spectra \citep{2018Shappee,2017Graham,2018Tucker,2019Dimitriadis_b}. \cite{2016Maguire} looked at a sample of 11 nearby SNe~Ia and found tentative evidence for H$\alpha$ emission in only one event. \cite{2019Sand} examined 8 fast-declining SNe~Ia at nebular phase and could only place upper limits on H$\alpha$ emission. Furthermore, using new and archival spectra of over 100 SNe~Ia, \cite{2019Tucker} found no evidence for the hydrogen or helium emission expected from a non-degenerate companion. There exists a rare class of thermonuclear SNe that show evidence for interaction with a H-rich circumstellar medium (CSM), the archetype of which is SN~2002ic \citep{2003Hamuy,2004Deng,2004Wang,2004WoodVasey}. Other well-studied ``SNe~Ia-CSM'' include SN~2005gj \citep{2006Aldering,2007Prieto,2008Trundle}, SN~2008J \citep{2012Taddia,2013Fox}, and PTF~11kx \citep{2012Dilday}. \cite{2013Silverman} identified a number of new events and produced the most detailed analysis to date of this class of transients. While H emission signatures are present in these SNe, they are due to interaction with a H-rich CSM and do not constitute detections of stripped companion material \citep{2016Maguire}. These events do not obey the empirical light curve relations (e.g., \citealt{1993Phillips,2006Prieto,2014Burns}) that define ``normal'' SNe~Ia, and they are also considerably brighter (by $\sim1$ mag) than typical SNe~Ia \citep{2013Silverman}. 
To date, only one normal Type~Ia SN, ASASSN-18tb \citep{2018BrimacombeATel}, shows compelling evidence for strong H$\alpha$ emission \citep{2019Kollmeier}. Even here, the phenomenon is clearly rare, as it is the only example in the sample of 75 spectra obtained to date for the well-defined 100 Type Ia Supernova sample (100IAS, \citealt{2018Dong100Ias}) and there were none in the larger, heterogeneous sample of \cite{2019Tucker}. ASASSN-18tb was also observed by \textit{TESS} \citep{2015RickerTESS}, providing a high cadence, early-time light curve. Here we report on these \textit{TESS} observations as well as on additional ground based photometry and spectroscopy. We describe the observations in \S2, the \textit{TESS} systematics in \S3, the \textit{TESS} early-time light curve in \S4, the spectroscopic characteristics in \S5, and discuss the results in \S6. \section{Observations} \label{sec:obs} \subsection{Discovery and Host Galaxy} \label{subsec:disc} ASASSN-18tb (SN 2018fhw) was discovered by the All-Sky Automated Survey for Supernovae \citep[ASAS-SN;][]{2014ShappeeASASSN,2017Kochanek,2017HoloienCat2,2017HoloienCat3,2017HoloienCat1,2019Holoien} in images obtained on UT 2018-08-21.31 (JD 2\,458\,351.81) at J2000 R.A. 04$^\textnormal{h}$18$^\textnormal{m} 06\fs149$ and Decl. $-$63{\degree}36'$56\farcs68$ \citep{2018BrimacombeATel,2018BrimacombeTNS}. From the \cite{2011Schlafly} recalibration of the \cite{1998Schlegel} infrared-based dust map, we find that the supernova suffers relatively little Galactic extinction, $E(B-V)=0.03$ mag. ASASSN-18tb is located $4\farcs8$ south and $1\farcs9$ east of the center of 2MASX J04180598--6336523, an extended source in the Two Micron All Sky Survey \citep[2MASS;][]{2006Skrutskie} with magnitudes $m_J = 15.06 \pm 0.11$, $m_H = 14.23 \pm 0.13$, and $m_K = 13.88 \pm 0.17$. Prior to the discovery of ASASSN-18tb, there were no public spectroscopic observations of 2MASX J04180598-6336523. 
However, when obtaining a classification spectrum of the supernova, \cite{2018Eweis} also obtained a spectrum of the host galaxy. Using cross-correlations with galaxy templates, they found that it has a heliocentric redshift of $5090 \pm 30$ km s$^{-1}$ $(z=0.0170 \pm 0.0001)$. This redshift yields a luminosity distance of 74.2 Mpc assuming $H_0 = 69.6$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_M = 0.286$, and $\Omega_\Lambda = 0.714$ \citep{2006Wright,2014Bennett}. Using the Supernova Identification code \citep[SNID;][]{TonrySNIDAlgorithm,BlondinSNID}, \cite{2018Eweis} classified ASASSN-18tb as a spectroscopically normal SN~Ia based on a SALT spectrum obtained on UT 2018-08-23.3, finding a good match to the Type Ia SN~2003iv at a phase of $+1$ day beyond maximum light. \cite{2019Kollmeier} find that, like SN~2003iv \citep{2012Blondin}, ASASSN-18tb has a ``cool'' sub-classification in the scheme of \citet{2006Branch}, and that photometrically, ASASSN-18tb is a fast-declining, sub-luminous SN~Ia. \subsection{Photometry} \label{subsec:phot} We present photometric observations obtained over the course of 70 days, beginning at MJD~58343 (see Figure~\ref{fig:GroundandSpaceLightCurve}). Most of the ground-based observations were obtained using the 1m telescopes and Sinistro CCDs of the Las Cumbres Observatory Global Telescope Network \citep[LCOGT;][]{Brown2013}. Additional high-cadence observations near maximum light were obtained using the quadruple 14-cm ASAS-SN telescopes ``Cassius'' and ``Bohdan Paczy\'{n}ski'' deployed in Cerro Tololo, Chile. We also present late-time $B$ and $V$ band observations obtained using the ANDICAM instrument \citep{2003ANDICAM} mounted on the 1.3-m telescope at the Cerro Tololo Inter-American Observatory (CTIO) operated by the Small \& Moderate Aperture Research Telescope System (SMARTS) Consortium. ASAS-SN images are processed in an automated pipeline using the \textsc{ISIS} image subtraction package \citep{1998Alard,2000Alard}. 
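The luminosity distance quoted in \S\ref{subsec:disc} can be cross-checked with a short numerical integration of the Friedmann equation. The sketch below assumes the flat $\Lambda$CDM parameters given there; it is illustrative and is not the calculator cited in the text.

```python
# Sketch: flat-LambdaCDM luminosity distance for the quoted redshift and
# cosmological parameters (illustrative; not the calculator used in the text).
from scipy.integrate import quad

H0, Omega_M, Omega_L = 69.6, 0.286, 0.714  # km/s/Mpc and density parameters
c = 299792.458                             # speed of light, km/s
z = 0.0170

def E(zp):
    # dimensionless Hubble rate H(z)/H0 for a flat universe
    return (Omega_M * (1.0 + zp) ** 3 + Omega_L) ** 0.5

d_C, _ = quad(lambda zp: c / (H0 * E(zp)), 0.0, z)  # comoving distance, Mpc
d_L = (1.0 + z) * d_C                               # luminosity distance, Mpc
# d_L comes out near the 74.2 Mpc quoted above
```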
Using the IRAF \textsc{apphot} package, we performed aperture photometry on the subtracted images and then calibrated the results using the AAVSO Photometric All-Sky Survey \citep[APASS;][]{Henden2015}. Reduced images (after bias/dark-frame and flat-field corrections) from LCOGT and the SMARTS 1.3m telescope were downloaded from the respective data archives. We performed point-spread-function (PSF) photometry using the DoPHOT \citep{1993Schechter} package. Optical photometry in the $B$, $V$, $r$, and $i$ bands was calibrated using the APASS standards. We also obtained images in the $V$, $B$, $U$, $UVW1$, $UVM2$ and $UVW2$ bands with the \textit{Neil Gehrels Swift Observatory}'s Ultraviolet Optical Telescope \citep[UVOT;][]{2005RomingUVOT}. The \textit{Swift} UVOT photometry is extracted using a $5\farcs0$ aperture and a sky annulus with an inner radius of $15\farcs0$ and an outer radius of $30\farcs0$ with the \textsc{uvotsource} task in the \textsc{heasoft} package. The \textit{Swift} UVOT photometry is calibrated in the Vega magnitude system based on the revised zero-points and sensitivity from \cite{2011Breeveld}. We characterize the ASASSN-18tb light curve using the \citet{2007Jha} update of the \citet{1996Riess} and \citet{1998Riess} multicolor light-curve shape method, \textsc{MLCS2k2}. We find that the peak of the $B$ light curve occurred at $t_0=58357.33\pm0.12$ MJD. We reference our observations to this inferred date of maximum light throughout this work. After accounting for Galactic extinction, we find that extinction from the host galaxy is negligible. The MLCS2k2 fit yields a light-curve shape parameter $\Delta = 1.41\pm0.03$, squarely in the fast-declining region of parameter space. It has a color stretch of $s_{BV} \approx 0.44$ and $\Delta m_{15} (B) \approx 2.0$ mag using the relations given by \citet{2018ApJ...869...56B}.
These results are in agreement with those of \citet{2019Kollmeier}, who find $s_{BV} = 0.50 \pm 0.04$ and $\Delta m_{15}(B) = 2.0 \pm 0.1$ mag using the SNooPy light curve fitter \citep{2011AJ....141...19B}. Our MLCS2k2 fits give peak absolute magnitudes for ASASSN-18tb of $M_B = -17.66 \pm 0.09$ and $M_V = -18.05 \pm 0.09$ mag and a slightly closer distance ($65 \pm 4$ Mpc) than the distance of 74.2 Mpc inferred from the redshift. The difference is roughly consistent with the peculiar velocity uncertainty at this redshift \citep[$\sim$0.13 mag, e.g.,][]{2005Reindl}. Located near the Southern \textit{TESS} continuous viewing zone, ASASSN-18tb was well-observed by \textit{TESS}. This allowed us to extract the Sector 1 and 2 \textit{TESS} light curves that we present in this paper. We used image subtraction \citep{1998Alard,2000Alard} on the full frame images (FFIs) from the first \textit{TESS} data release to produce high fidelity light curves. In principle it is possible to generate a single reference image and then rotate it accordingly for use during multiple sector pointings, but the large pixel scale of the \textit{TESS} observations makes this particularly difficult and introduces a relatively large source of uncertainty. We instead chose to construct independent reference images for each sector. The Sector 1 reference image was constructed using 100 FFIs obtained between MJD 58324.8 and 58326.88, and the Sector 2 reference image was constructed using 100 FFIs obtained between MJD 58353.63 and 58355.69. In each case these are the first 100 FFIs obtained during the sector. The light curves change little when different images are used to build the reference, and our light curves are consistent with those obtained using the public \textit{TESS} aperture photometry tool \textsc{eleanor}\footnote{\url{https://adina.feinste.in/eleanor/}} \citep{2019Feinstein}. 
Because the Sector 2 reference was constructed from images containing a considerable amount of flux from the supernova, fluxes in the raw difference light curve for Sector 2 are systematically lower than the intrinsic values. We correct for this by using a power-law fit (described in more detail in Section~\ref{sec:earlyLC}) to align the Sector 1 and 2 light curves. The offset is calculated by fitting the first day (48 epochs) of Sector 2 photometry to the best-fit single-component power-law for the Sector 1 photometry. The Sector 1 and 2 fluxes were converted into \textit{TESS}-band magnitudes by adopting a zero point of 20.44 electrons per second in the FFIs, based on the values quoted in the TESS Instrument Handbook\footnote{\url{https://archive.stsci.edu/missions/tess/doc/TESS_Instrument_Handbook_v0.1.pdf}}. \textit{TESS} observes in a single broad-band filter, ranging from about 6000--10000\,\AA{} with an effective wavelength of $\sim$8000 \AA{}, and the \textit{TESS} magnitude system is calibrated to the Vega system \citep{2015SullivanTESS}. The complete photometry is shown in Figure~\ref{fig:GroundandSpaceLightCurve}, and all of the data is available in machine-readable format in the online version of the paper. \begin{figure} \centering \includegraphics[width=\columnwidth]{ASASSN-18tb-SALT+comps.pdf} \caption{SALT and du Pont spectra of ASASSN-18tb obtained at phases ranging from pre-maximum light to the early nebular phase. Note the clear presence of H$\alpha$ emission in the nebular phase spectra. Also shown for comparison are spectra of the fast-declining SNe~Ia 2003iv, 1998bp, and 1986G \citep{2001Richardson,2002Hamuy,2012Blondin}. 
The ASASSN-18tb spectra have been smoothed using a Savitzky-Golay filter for presentation.} \label{fig:SALTSpectra} \end{figure} \begin{figure*} \includegraphics[width=0.49\textwidth]{Detrending.png} \hfill \includegraphics[width=0.49\textwidth]{EarlyRiseFits.png} \caption{The \textit{TESS} Sector 1 light curve of ASASSN-18tb obtained using image subtraction. The raw and detrended light curves of ASASSN-18tb are shown on the left, as well as the artifact-tracing light curve of Stars 1 and 2 (see text for details). Flux values for every epoch are shown in lighter colors, and a 6 hour rolling median of these flux values is shown in darker colors. Vertical gray regions indicate times when considerable scattered Earthshine artifacts are present. The detrended \textit{TESS} light curve of ASASSN-18tb as well as three simple power-law fits and their residuals are shown on the right. Normalized flux is given relative to the maximum Sector 1 flux of 0.701 mJy. Although the rise is shallower (with index $1.69\pm0.04$) than that of a simple expanding fireball model, there is no compelling evidence for additional structure beyond a single-component power law.} \label{fig:EarlyLC} \end{figure*} \begin{table*} \begin{centering} \caption{Power-law Fits.} \label{tab:fitparams} \begin{tabular}{ccccccccc} \hline\hline Model & $z$ ($\mu$Jy) & $t_1$ (MJD) & $h_1$ ($\mu$Jy) & $a_1$ & $t_2$ (MJD) & $h_2$ ($\mu$Jy) & $a_2$ & $\chi^2/\nu$ \\ \hline\hline Fireball & $-0.6\pm0.6$ & $58340.48 \pm 0.06$ & $4.82\pm0.06$ & $\equiv2$ & $\cdots$ & $\cdots$ & $\cdots$ & 0.995 \\ Single & $+0.9\pm0.9$ & $58341.68 \pm 0.16$ & $12.3 \pm 1.50$ & $1.69 \pm 0.04$ & $\cdots$ & $\cdots$ & $\cdots$ & 1.007 \\ Double & $+0.6\pm0.6$ & $58340.61 \pm 0.37$ & $4.32 \pm 1.41$ & $1.99 \pm 0.10$ & $58345.73 \pm 0.12$ & $36.23 \pm 5.56$ & $0.45 \pm 0.11$ & 1.003 \\ \hline\hline \end{tabular} \\ \end{centering} \end{table*} \subsection{Spectroscopy} \label{subsec:spec} The bulk of the spectra we present in this 
paper were obtained using the Southern African Large Telescope (SALT) with the Robert Stobie Spectrograph \citep{2006Buckley}. We used the PG0900 grating with a $1\farcs5$ slit at multiple tilt positions to cover the optical wavelength range with a typical resolution of $R\sim1000$. The total exposure time varied from 1932 to 3000 seconds as the supernova faded. Our first spectrum provided the classification reported by \cite{2018Eweis} and was obtained on UT 2018-08-23, four days before ASASSN-18tb attained maximum light. Our last spectrum was taken on UT 2019-01-25, nearly 150 days after maximum light. The SALT spectra were reduced using a custom pipeline based on the PySALT package \citep{2010Crawford}, which accounts for basic CCD characteristics (e.g., cross-talk, bias and gain correction), removal of cosmic rays, wavelength calibration, and relative flux calibration. Standard IRAF/Pyraf routines were used to carefully remove the sky and galaxy backgrounds. The 1D spectra are delivered with a nominal dispersion of $\sim 1$\AA{}/pixel. For our analysis in \S\ref{sec:spectra}, each spectrum is rebinned to 7\AA{}/pixel, which is the approximate spectral resolution at H\ensuremath{\alpha}\xspace. The RMS of the original pixels within each bin is used to estimate the uncertainty at each wavelength in the rebinned spectrum. To model the continuum we use a 2nd-order Savitzky-Golay polynomial of variable width. The continuum fit width is varied from $2\,000$--$5\,000$~km~s$^{-1}$ at each pixel, and we take the median of these values as the continuum level and the RMS as the uncertainty. Because of the instrument design, which has a moving, field-dependent and under-filled entrance pupil, observations of spectrophotometric flux standards do not suffice to provide accurate absolute flux calibration for SALT observations (see, e.g., \citealt{2018Buckley}).
Therefore, in order to characterize the interesting H$\alpha$ signature in our spectra, we recalibrate our observed spectra to match the measured photometry using a low-order polynomial in wavelength. Fortunately, we have contemporaneous LCOGT $BVri$ coverage for most of our spectroscopic epochs and can perform the absolute flux calibration reasonably well ($\pm$ 5\% estimated uncertainty). For the three late-phase spectra obtained beyond MJD 58420, we do not have sufficient multi-filter coverage from our photometric observations, so we estimate $BVri$ from extrapolations of the \textsc{MLCS2k2} fits. In this regime, we assume our flux calibration error is of order $\pm 10\%$. We also obtained one lower-resolution spectrum with SALT on UT 2018-10-22 using the PG0300 grating and the same 1$\farcs$5 slit, yielding $R \approx 350$ in a 1600 second exposure at one grating tilt position. We further observed ASASSN-18tb with the B\&C spectrograph on the Ir\'{e}n\'{e}e du Pont telescope at Las Campanas Observatory on UT 2018-09-14 using the 300 line grating with the 150 \micron~slit in three 1000 second exposures. These spectra were reduced using standard routines, and recalibrated to match the photometry as described above. Because of the lower spectral resolution, we have not used these spectra in the further analysis described below. Figure \ref{fig:SALTSpectra} shows the spectral evolution of ASASSN-18tb with other fast-declining SNe~Ia for comparison. \section{TESS Systematics} \label{sec:systematics} \begin{figure} \includegraphics[width=\columnwidth]{TESS_18tb_Finder.png} \caption{The $5\times5$ grid of points for which we obtain image subtraction light curves. The image is the median combination of the last 100 \textit{TESS} Sector 1 FFIs, when the SN is brightest in Sector 1. The grid points are spaced 3 \textit{TESS} pixels away from one another ($\sim 63''$). 
The red circle at the center of the grid (point C3) indicates the location of ASASSN-18tb, and the two magenta circles located near points C4 and D3 indicate the locations of Star 1 and Star 2, respectively, the stars we consider as tracers of artifacts in the raw supernova light curve.} \label{fig:CheckFinder} \end{figure} \begin{figure*} \includegraphics[width=0.90\textwidth]{CheckGrid.png} \caption{Image subtraction light curves obtained for a $5\times5$ grid of points centered on the location of ASASSN-18tb. These light curves correspond to the test coordinates indicated in Figure~\ref{fig:CheckFinder}. Flux values for every epoch are shown in gray, and a 6 hour rolling median of these flux values is shown in color. We have replaced the C4 and D3 light curves with those obtained for the exact locations of Star 1 and Star 2, respectively. Note how well Star 2 traces the pre-explosion bump artifact in the raw ASASSN-18tb light curve.} \label{fig:CheckGrid} \end{figure*} The raw \textit{TESS} Sector 1 image subtraction light curve of ASASSN-18tb is shown in blue in the left panel of Figure~\ref{fig:EarlyLC}. It is clear by inspection that there are a number of systematic artifacts present in the data, some of which are fairly well understood and discussed in the TESS Instrument Handbook and TESS Data Release Notes\footnote{\url{https://archive.stsci.edu/tess/tess_drn.html}}. For instance, we observe high frequency $\sim24$ hour oscillations in the light curve that are likely introduced in the image backgrounds by the rotation of the Earth, as discussed in Section~1.3 of the Sector 1 TESS Data Release Notes\footnote{\url{https://archive.stsci.edu/missions/tess/doc/tess_drn/tess_sector_01_drn01_v01.pdf}}. While these low-level oscillations are not significant for the scientific goals of our analysis, some of the other systematics present in the data are. 
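The 6 hour rolling medians shown in these figures can be computed along the following lines. This is a minimal sketch assuming the 30 minute FFI cadence of the primary \textit{TESS} mission, so that 12 epochs span 6 hours; the truncated handling of the array edges is illustrative.

```python
import numpy as np

def rolling_median(flux, window=12):
    """Rolling median over `window` epochs; with TESS's 30-minute FFI
    cadence, window=12 spans 6 hours (edge windows are simply truncated)."""
    flux = np.asarray(flux, dtype=float)
    half = window // 2
    return np.array([np.median(flux[max(0, i - half):i + half + 1])
                     for i in range(flux.size)])
```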
Section~7.3.2 of the TESS Instrument Handbook discusses the presence of a patch of scattered Earthlight in TESS FFIs whose structure and intensity depend on the Earth elevation, azimuth, and distance. We visually inspected the TESS Sector 1 FFIs and found that the brightest component of this patch is spatially coincident with ASASSN-18tb for $\sim3$ days at the start of each orbit. No straightforward method exists to account for this artifact in our reductions, so we exclude these artifact regions from our analysis. These exclusion windows are shown as the gray shaded regions in the left panel of Figure~\ref{fig:EarlyLC}. The raw light curve suggests the presence of significant pre-explosion emission for $\sim10$ days prior to the supernova. Such a signature is entirely unprecedented for SNe~Ia, both theoretically and observationally, so we put some effort into investigating its origin. To do so, we obtained image subtraction light curves for a $5\times5$ grid of test coordinates surrounding ASASSN-18tb using the same reference image (shown in Figure~\ref{fig:CheckFinder} overlaid on a sample FFI). The sources were spaced 3 \textit{TESS} pixels away from one another to ensure sampling on a large spatial scale. The resulting test light curves are shown in Figure~\ref{fig:CheckGrid}. We note that the light curves obtained for the test coordinates located at C4 and D3 show a similar bump artifact. C4 and D3 lie very near the positions of two relatively bright stars in the Gaia Data Release 2 catalog \citep{2016Gaia,2018Gaia,2018ArenouGaia}: Gaia DR2 4676041915767041280 $(G_{RP}=12.3438\pm0.0005 \textnormal{ mag})$ and Gaia DR2 4676043427595528448 $(G_{RP}=13.653\pm0.001 \textnormal{ mag})$, respectively. Here we will simply refer to the two stars as Star 1 and Star 2, where Star 1 is the star nearest to point C4 (Gaia DR2 4676041915767041280) and Star 2 is the star nearest to point D3 (Gaia DR2 4676043427595528448).
The mean combination of the Star 1 and Star 2 light curves traces the early-time artifact structure in the ASASSN-18tb light curve better than either of the individual light curves. When either of the star light curves is used by itself, the pre-explosion light curve shows a small linear residual. As can be seen in Figure~\ref{fig:EarlyLC}, this is not the case when the mean combination is used. We thus use the mean combination of the Star 1 and Star 2 light curves in order to correct for systematics. This is shown as the red artifact light curve in Figure~\ref{fig:EarlyLC}. We subtract this artifact light curve from the raw ASASSN-18tb light curve, emphasizing that we apply no multiplicative factor to match the scale of the bump artifact in the two light curves. That the two light curves exhibit nearly identical structure prior to explosion further confirms that the bump is an artifact and not intrinsic to the supernova. After removing the bump artifact, we force the flux zeropoint of the detrended light curve to the average value of the observations obtained from MJD 58327.5 to 58338, corresponding to an average of all observations obtained over the first Sector 1 orbit after removing the window where the bright patch of scattered Earthlight is spatially coincident with ASASSN-18tb. \section{Early Light Curve} \label{sec:earlyLC} \begin{figure} \centering \includegraphics[width=\columnwidth]{companionASASSN-18tb.pdf} \caption{The early light curve of ASASSN-18tb compared to the companion interaction models from \protect\cite{2010Kasen}. We adopt $t_1$ from our best-fit single-component power-law model as the time of first light, and \textit{TESS} data are shown in 4 hour bins. Our best-fit single-component power-law model is shown in red, and interaction models for non-degenerate $1~R_\odot$, $10~R_\odot$, and $40~R_\odot$ companions are shown in cyan, purple, and gold, respectively.
These models are for a viewing angle $(\theta=45\degree)$ where the predicted effect is strong. The lower panel shows residuals relative to our best-fit single-component power-law model.} \label{fig:InteractionModels} \end{figure} The \textit{TESS} light curve of ASASSN-18tb, after accounting for the systematics described in Section~\ref{sec:systematics}, is shown in the right panel of Figure~\ref{fig:EarlyLC}. It is also provided in machine-readable format in the online version of the paper. The light curve does not show evidence of the double-component rise observed for ASASSN-18bt by \textit{K2} \citep{2019Shappee,2019Dimitriadis}, but motivated by the identification of strong H$\alpha$ emission in the spectra of this supernova, we fit a number of power-law models to the light curve in order to better characterize its properties. The light curve uncertainties were estimated by measuring the root-mean-square scatter $\sigma$ of the pre-explosion observations obtained between MJD 58327.5 and 58338. The simplest of these models is the expanding fireball model, $f=z+h_1(t-t_1)^2$, with three parameters. The fireball model assumes homologously expanding ejecta, which determines the temporal exponent \citep{Riess1999,Nugent2011}. We also fit arbitrary index power-law models of the form \begin{flalign} &f= z \textnormal{ when } t<t_1,&\\ &f= z + h_1(t-t_1)^{\alpha_1} \textnormal{ when } t_1<t<t_2,&\\ &f= z + h_1(t-t_1)^{\alpha_1} + h_2(t-t_2)^{\alpha_2} \textnormal{ when } t_2<t,& \end{flalign} where we obtain a single-component fit by simply fixing $h_2\equiv0$. The single-component power-law model thus has four parameters, while the double-component power-law model has seven. We fit these models using the \textsc{scipy.optimize.curve\_fit} routine's Trust Region Reflective method, and our best-fit models are shown in the right panel of Figure~\ref{fig:EarlyLC}. Our best-fit fireball model is shown using the solid red curve.
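A minimal sketch of the single-component power-law fit described above is given below; the array names, initial guesses, and noise level are illustrative, not the values used in our reduction.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_power_law(t, z, t1, h1, alpha1):
    """f = z for t < t1; f = z + h1*(t - t1)**alpha1 for t >= t1."""
    f = np.full_like(t, z, dtype=float)
    rising = t >= t1
    # clip guards against a tiny negative base with a fractional exponent
    # while the optimizer is perturbing t1
    f[rising] += h1 * np.clip(t[rising] - t1, 0.0, None) ** alpha1
    return f

# Hypothetical usage: t and flux are the binned epochs and fluxes, sigma is
# the RMS scatter of the pre-explosion observations.
# popt, pcov = curve_fit(single_power_law, t, flux,
#                        sigma=sigma * np.ones_like(flux),
#                        p0=[0.0, t.min() + 1.0, 1.0, 2.0], method='trf')
```

Fixing $\alpha_1 = 2$ recovers the fireball model, and adding a second $(t_2, h_2, \alpha_2)$ component in the same fashion gives the double-component model.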
The fit is quite reasonable, although the model has moderate discrepancies with the observed flux. Our best-fit single- and double-component models are shown using the dashed blue and solid green lines, respectively. By eye, the three models are virtually indistinguishable, indicating that there is no need to invoke a model more complex than the single-component power-law. The $\chi^2$ per degree of freedom $(\nu)$ for each of our three fits are given in Table~\ref{tab:fitparams}. We find no evidence to justify using the double-component power law, as it produces no significant change in $\chi^2/\nu$. Although these simple fits indicate that there is no significant secondary source of early-time emission, it is worth examining the potential contribution to the light curve from ejecta interaction with a non-degenerate companion given the prominent H$\alpha$ emission seen in the spectra. \cite{2010Kasen} showed that such interaction would produce significant additional flux for certain viewing angles and provided concise analytic solutions, which we utilize here. Interaction models for $1~R_\odot$, $10~R_\odot$, and $40~R_\odot$ companions are shown in Figure~\ref{fig:InteractionModels}, along with our best-fit single-component power-law and the \textit{TESS} observations immediately surrounding the beginning of the explosion. These interaction models depend strongly on the viewing angle. One generally only expects to see significant signal for viewing angles looking down on the collision region $(\theta \sim 0\degree)$, and the models in Figure~\ref{fig:InteractionModels} assume a value fairly close to this optimal viewing angle $(\theta=45\degree)$. Under this assumption, it is clear that we can rule out any companion significantly larger than $1~R_\odot$ for ASASSN-18tb. However, constraining the viewing angle for any individual event is extremely difficult. 
In practice, one could almost completely mask the interaction signature from even a massive star if it were viewed at an angle of $\theta\sim180\degree$. We note, however, that the H$\alpha$ emission we observe is slightly blueshifted (see Section~\ref{sec:spectra}). If the H$\alpha$ signature is indeed produced by swept-up material from a companion star, it would appear blueshifted only for viewing angles relatively close to $\theta\sim0\degree$ \citep{2018Boty}. We thus regard the optimal viewing angle models shown in Figure~\ref{fig:InteractionModels} as instructive, if not definitive, and consider non-degenerate companions of $R \gtrsim R_\odot$ to be inconsistent with our observations. \cite{2019Kollmeier} showed that ASASSN-18tb is a broadly normal under-luminous SN~Ia based on its empirical characteristics. Here we use our excellent set of photometric observations to estimate the near-maximum bolometric luminosity and examine the physical parameters needed to produce it. As in \cite{2018Vallely}, we estimate the bolometric luminosity using Markov Chain Monte Carlo (MCMC) methods to fit a blackbody to the observed spectral energy distribution. We limit our analysis to near-maximum dates with good filter coverage $(n_{\rm filters}>4)$. All photometry was corrected for Galactic foreground extinction prior to being fit. As discussed in Section~\ref{subsec:phot}, there is no evidence for additional extinction from the host galaxy. Semi-analytic models for light curves powered by the radioactive decay of $^{56}$Ni have been available for some time \citep{1979Arnett,1982Arnett}. We can estimate the ejecta mass $(M_{ej})$ by assuming that the light curve peaks approximately at the diffusion time $(t_d)$.
The ejecta mass is then approximated by \begin{equation} M_{ej}=t_d^2\frac{4\pi c v_{ej}}{3\kappa} \approx \Big( \frac{t_{peak}-t_1}{1+z} \Big) ^2\frac{4\pi c v_{ej}}{3\kappa}, \end{equation} where $c$ is the speed of light, $M_{ej}$ is the ejecta mass, $\kappa$ is the opacity of the ejecta, $v_{ej}$ is the ejecta velocity, $z$ is the redshift, $t_{peak}$ is the time of maximum light, and $t_1$ is the time of explosion. We will assume an approximate 10\% systematic uncertainty for our ejecta mass estimate (see, e.g., \citealt{2013Blondin}, \citealt{2018Wilk}, and \citealt{2006Stritzinger}). We again adopt MJD 58357.33 for $t_{peak}$, and we use the $t_1$ value from our single-component power-law model (MJD 58341.68). For the ejecta velocity, we adopt $v_{ej}=10\,000$ km s$^{-1}$, consistent with estimates of the expansion velocity from \cite{2018Eweis}. Like \cite{2018Khatami} and \cite{2018Sukhbold}, we adopt an opacity value $\kappa=0.1$ cm$^2$ g$^{-1}$ for our model that is typical of ionized ejecta \citep{2000Pinto,2017Arnett,2017BranchWheeler}. Using these values, we obtain a slightly sub-Chandrasekhar ejecta mass of $M_{ej}=1.11\pm0.12~\textnormal{M}_\odot$. This is consistent with the results of \cite{2019Scalzo}, who found a preference for sub-Chandrasekhar mass explosions among 1991bg-like SNe~Ia. We can estimate the amount of $^{56}$Ni $(M_{Ni})$ synthesized in the explosion using Arnett's rule, noting that at time $t_d$ after explosion, when the supernova attains maximum brightness, the luminosity will approximately equal that of the instantaneous radioactive decay power from the $^{56}$Ni$\rightarrow^{56}$Co$\rightarrow^{56}$Fe decay chain. 
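As a quick numerical check of the ejecta-mass estimate above (a sketch in cgs units; the redshift here is an illustrative value consistent with the $74.2$ Mpc luminosity distance adopted below, not a separately quoted quantity):

```python
import numpy as np

# Physical constants (cgs) and the values adopted in the text
c = 2.99792458e10          # speed of light [cm/s]
Msun = 1.989e33            # solar mass [g]
day = 86400.0              # [s]

kappa = 0.1                # ejecta opacity [cm^2/g]
v_ej = 1.0e9               # ejecta velocity, 10,000 km/s, in [cm/s]
z = 0.018                  # assumed redshift (illustrative)
t_peak, t_1 = 58357.33, 58341.68   # MJD of peak and of first light

# Diffusion time from the rest-frame rise time
t_d = (t_peak - t_1) / (1.0 + z) * day
M_ej = t_d**2 * 4.0 * np.pi * c * v_ej / (3.0 * kappa)
print(M_ej / Msun)   # ~1.11, reproducing the quoted ejecta mass in M_sun
```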
We can then solve for $M_{Ni}$ as \begin{equation} M_{Ni} = \frac{L_{peak}}{\textnormal{ergs s}^{-1}} \big( C_{Ni}e^{-t_d/\tau_{Ni}} + C_{Co}e^{-t_d/\tau_{Co}} \big)^{-1} \textnormal{ M}_\odot, \end{equation} where the decay times of $^{56}$Ni and $^{56}$Co are known to be $\tau_{Ni}$=8.77 days and $\tau_{Co}$=111.3 days, respectively \citep{2005Stritzinger,2006Taubenberger,1987Martin}, and $C_{Ni} \approx 6.45\times10^{43}$ and $C_{Co} \approx 1.45\times10^{43}$ \citep{1994Nadyozhin,2018Sukhbold}. After including a 4 Mpc uncertainty in our redshift-estimated luminosity distance ($74.2\pm4$ Mpc), our MCMC fit yields $L_{peak}=(7.4\pm1.1)\times10^{42}$ ergs s$^{-1}$. From this we find that nominally $M_{Ni}=0.31\pm0.05~\textnormal{M}_\odot$. However, this simple model is typically only accurate to within 20\% \citep{2013Blondin,2017Hoeflich}, and it tends to overestimate $M_{Ni}$ for sub-luminous SNe~Ia like ASASSN-18tb \citep{2018Khatami}. To account for this, we report a lower limit that is 20\% smaller than the nominal $M_{Ni}$ estimate. We thus find that $M_{Ni}=0.21\textnormal{ -- }0.36~\textnormal{M}_\odot$. This is comparable to the $M_{Ni}$ estimates found by \cite{2019Scalzo} for the 1991bg-like SNe 2006gt and 2007ba, but is somewhat larger than the $M_{Ni}\sim0.1~\textnormal{M}_\odot$ estimated for SN~1991bg itself by \cite{2006Stritzinger}. Furthermore, it is reasonably consistent with the $M_{Ni}\approx0.2~\textnormal{M}_\odot$ estimate obtained using the \cite{2018Goldstein} fitting functions calibrated using a library of radiative transfer models. \section{Early- and Late-phase Spectroscopy} \label{sec:spectra} \begin{figure*} \centering \includegraphics[width=\linewidth]{spectra_full.png} \caption{ \textit{Left}: Flux-calibrated spectral evolution of ASASSN-18tb\xspace. \textit{Insets}: Zoom-in view around H\ensuremath{\alpha}\xspace for early-phase (top) and late-phase (bottom) spectra.
The dashed red lines indicate the continuum fit and the vertical dotted line indicates the rest wavelength of H\ensuremath{\alpha}\xspace. \textit{Right}: Evolution of the integrated H\ensuremath{\alpha}\xspace luminosity as a function of time from peak brightness compared to the evolution of the approximate optical luminosity (integrated from $4000-7000$~\AA{}, red) and the integrated Fe~III$~\lambda4660$ luminosity (blue) over the same timespan. Colored points correspond to the same color spectrum in the left panel. The right axis denotes the measured flux values since the distance is uncertain. K19 refers to values taken from \citet{2019Kollmeier}. The H\ensuremath{\alpha}\xspace luminosity does not track the falling luminosity of the SN and is consistent with a constant value. } \label{fig:spec-evol} \end{figure*} \begin{figure} \centering \includegraphics[width=\linewidth]{linediagnostics.pdf} \caption{Evolution of the H\ensuremath{\alpha}\xspace profile as a function of time. K19 refers to values taken from \citet{2019Kollmeier} which are not included in the fitting process due to the lack of reported uncertainties. \textit{Top}: Velocity shift as a function of time for the H\ensuremath{\alpha}\xspace (black) and the Fe~III$~\lambda4660$ (blue) emission lines. Note that our best-fit solution for H\ensuremath{\alpha}\xspace aligns with the K19 measurement even though it was omitted from the fit. \textit{Bottom}: Width of the H\ensuremath{\alpha}\xspace emission feature as a function of time. The width is roughly constant over the span of our spectral observations. } \label{fig:linefits} \end{figure} Our SALT spectra span $-4$ to $+148$ days relative to maximum light. As shown in Figure \ref{fig:SALTSpectra}, excluding the broad H\ensuremath{\alpha}\xspace emission, the spectra of ASASSN-18tb\xspace share many qualities with the under-luminous 91bg-like class of thermonuclear supernovae \citep{1992Fillippenko,1993Leibundgut,1994Hamuy}. 
The near-maximum spectra exhibit the Si~II absorption feature typical of SNe Ia\xspace plus hints of the Ti~II absorption of 91bg-like objects. Additionally, broad [Ca~II] emission at $\lambda\sim 7300$\AA{} is present in the late-phase spectra. The most intriguing aspect of ASASSN-18tb\xspace is the presence of broad, FWHM $\sim 1000~\rm{km}~\rm s^{-1}$ H\ensuremath{\alpha}\xspace emission \citep{2019Kollmeier}. While the H\ensuremath{\alpha}\xspace emission is clearly visible in the $> +100~\rm d$ late-phase spectra (Fig. \ref{fig:SALTSpectra}), we also see evidence of H\ensuremath{\alpha}\xspace emission in spectra starting roughly $+50~\rm d$ after peak light (Fig. \ref{fig:spec-evol}). There is a tentative detection in the $+37~\rm d$ spectrum, and a non-detection in the $+30~\rm d$ spectrum. The upper limit on H\ensuremath{\alpha}\xspace for the +30~d spectrum assumes an H\ensuremath{\alpha}\xspace profile similar to the one detected in the +37~day spectrum, using a FWHM velocity of $\sim 1500~\rm{km}~\rm s^{-1}$ blue-shifted by $\sim 1000~\rm{km}~\rm s^{-1}$. To characterize the nature of the H\ensuremath{\alpha}\xspace emission, we subtract off the continuum and fit the emission line with a Gaussian profile. Fig. \ref{fig:linefits} shows the line center and FWHM evolution of the H\ensuremath{\alpha}\xspace emission. For comparison, we also include the evolution of the Fe~III$~\lambda4660$ line in the top panel. The line center of the Fe~III feature is measured by fitting a Gaussian profile plus linear continuum at each epoch. We use a linear model to calculate the temporal evolution for each line, \begin{equation} v_\lambda = \dot{v}_\lambda (t-t_{\rm{max}}) + b_\lambda. \end{equation} \noindent Here, $v_\lambda$ is the velocity shift from rest for the H\ensuremath{\alpha}\xspace ($v_{\rm H\alpha}$) and Fe~III ($v_{\rm{FeIII}}$) lines at phase ($t-t_{\rm{max}}$) days. 
The values $\dot{v}_\lambda$ and $b_\lambda$ are computed using linear least-squares fitting and a bootstrap-resampling technique to estimate the uncertainties. The value from \citet{2019Kollmeier} does not have a reported uncertainty, so we do not include it in our fit. We see clear evidence for varying line velocities, with $\dot{v}_{\rm H\alpha} = 6.9^{+2.0}_{-1.2} ~\rm{km~s^{-1}/day}$ and $\dot{v}_{\rm{FeIII}} = 20.3^{+2.0}_{-1.9} ~\rm{km~s^{-1}/day}$. The line velocity drift rates for these two lines are discrepant at $\sim 5 \sigma$. We also fit the FWHM velocity ($\Delta v_{\rm{FWHM}}$) of the H\ensuremath{\alpha}\xspace emission, shown in the bottom panel of Fig. \ref{fig:linefits}, and find a weighted mean of $\Delta v_{\rm{FWHM}} = 1390\pm220~\rm{km}~\rm s^{-1}$. To determine the temporal evolution of $\Delta v_{\rm{FWHM}}$, we use the same linear model and bootstrap-resampling technique as before, finding that the width of the H\ensuremath{\alpha}\xspace emission is consistent with a temporally constant value, although the uncertainties are large (Fig. \ref{fig:linefits}). Because a roughly constant H\ensuremath{\alpha}\xspace flux is consistent with SNe~Ia-CSM, we searched for other emission lines associated with circumstellar interaction, such as He~I and H$\beta$. No other broad emission lines atypical of SNe Ia\xspace are found in our spectra and we place upper limits on these non-detections. For the early-phase spectra ($<100~\rm d$ after max), we place a limit on the Balmer decrement of $F_{H\ensuremath{\alpha}\xspace}/F_{\rm H\beta}\gtrsim 2$. For the late-phase spectra we place a limit of $F_{H\ensuremath{\alpha}\xspace}/F_{\rm H\beta}\gtrsim 5$, consistent with the value found by \citet{2019Kollmeier}. For the non-detection of He~I$~\lambda5875$, we find $F_{H\ensuremath{\alpha}\xspace}/F_{\rm{He\texttt{I}}}\gtrsim 3$ for all spectra.
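The linear fit with bootstrap resampling can be sketched as follows; the arrays of phases, velocities, and uncertainties are placeholders, not the published measurements.

```python
import numpy as np

def bootstrap_linear_fit(phase, v, v_err, n_boot=2000, seed=0):
    """Weighted linear least squares v = vdot*phase + b, with bootstrap
    resampling of the epochs to estimate the slope uncertainty.
    Returns (vdot, lower 1-sigma error, upper 1-sigma error)."""
    rng = np.random.default_rng(seed)
    phase, v, v_err = map(np.asarray, (phase, v, v_err))
    slopes = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(phase), len(phase))  # resample with replacement
        w = np.sqrt(1.0 / v_err[idx] ** 2)             # weights from errors
        A = np.vstack([phase[idx], np.ones_like(phase[idx])]).T
        coeff = np.linalg.lstsq(A * w[:, None], v[idx] * w, rcond=None)[0]
        slopes[i] = coeff[0]
    lo, med, hi = np.percentile(slopes, [16, 50, 84])
    return med, med - lo, hi - med
```

The asymmetric errors come directly from the 16th/84th percentiles of the bootstrap slope distribution, which is why the quoted drift rates carry unequal upper and lower uncertainties.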
\section{Discussion and Conclusions} \label{sec:conclusion} ASASSN-18tb is clearly an unusual event, and its place in the ever-changing menagerie of supernova taxonomy will likely be the source of ongoing discussion. The detection of strong H$\alpha$ emission in an empirically normal SN~Ia is unprecedented. \cite{2019Kollmeier} discussed possible sources for this signature, including swept up material from a non-degenerate companion and CSM interaction, but their analysis was necessarily limited by having only one post-maximum spectrum to examine. With our additional photometric and spectroscopic observations, we can provide a more extensive discussion of the origin of ASASSN-18tb and its unusual characteristics. While the $\textnormal{FWHM}\sim1000$ km s$^{-1}$ H$\alpha$ emission we observe in the late-time spectra is consistent with the predicted signatures of swept up material from a non-degenerate companion, other aspects of the emission are not. It is difficult to reconcile the approximately constant H$\alpha$ luminosity with this interpretation, as one would expect the H$\alpha$ to follow the SN bolometric luminosity. This is because the H$\alpha$ emission is powered by gamma-ray deposition from the decay of $^{56}$Ni, the same source which powers the SN light curve \citep{2018Boty}. Additionally, if the H were swept up in the SN ejecta the velocity evolution of H$\alpha$ emission should approximately trace that of Fe~III \citep{2018Boty}, but we do not observe this in ASASSN-18tb. It is possible the companion interaction models do not accurately represent the early evolution of the H\ensuremath{\alpha}\xspace emission. Models in the literature do not provide a clear calculation of when stripped material should start becoming visible. For all spectra with detected H\ensuremath{\alpha}\xspace emission, the Fe emission feature at $\lambda\approx 4660$\AA{} is also present, indicating that the inner ejecta are partially visible. 
However, for the H\ensuremath{\alpha}\xspace emission to stem from a stripped companion, the H\ensuremath{\alpha}\xspace material would need a previously unincorporated external power source or trapping mechanism to sustain the near-constant luminosity. Furthermore, the early-time \textit{TESS} light curve shows no indication of the excess predicted from ejecta-companion interaction, as discussed in Section~\ref{sec:earlyLC}. The more likely interpretation appears to be that the H$\alpha$ signature is a product of CSM interaction. An approximately constant H$\alpha$ luminosity prior to $\sim150$ days beyond maximum light is an established feature of SNe~Ia-CSM, and while we do not detect H$\beta$ emission in the spectra we present, the upper limit we place on the Balmer decrement in late-phase spectra $(F_{H\ensuremath{\alpha}\xspace}/F_{\rm H\beta}\gtrsim 5)$ is consistent with measurements by \cite{2013Silverman} for the SNe~Ia-CSM population. However, even among this rare class of events, ASASSN-18tb stands out as a significant outlier in many respects. A major difference between ASASSN-18tb and other SNe~Ia-CSM is that the light curve of ASASSN-18tb is fairly normal for a low-luminosity SN~Ia, while other SNe~Ia-CSM generally do not obey the standard empirical SNe~Ia light curve relations \citep{2013Silverman}. \cite{2013Silverman} also found that all SNe~Ia-CSM have absolute magnitudes in the range $-21.3~\rm{mag} \leq M_R \leq -19~\rm{mag}$. ASASSN-18tb is nearly a full magnitude less luminous, with $M_R \approx -18.1~\rm{mag}$. Additionally, while all of the SNe~Ia-CSM identified by \cite{2013Silverman} were found in late-type spirals or dwarf irregulars (star-forming galaxies indicative of young stellar populations), as noted by \cite{2019Kollmeier}, the host of ASASSN-18tb is an early-type galaxy dominated by old stellar populations. ASASSN-18tb is also spectroscopically distinct from the SNe~Ia-CSM population at early times.
While previously identified SNe~Ia-CSM resemble slow-declining, overly luminous 1991T-like SNe~Ia, ASASSN-18tb is more comparable to the fast-declining, underluminous 1991bg-like SNe~Ia. Like SN~1991bg, ASASSN-18tb falls in the ``Cool'' (CL) region of the \cite{2006Branch} Diagram, while SN~1991T and the SNe~Ia-CSM belong to the ``Shallow-Silicon'' (SS) subtype \citep{2019Kollmeier}. Whether ASASSN-18tb represents a distinct sub-class of SNe~Ia-CSM or the extreme end of a continuum remains to be seen, but it is clearly inconsistent with the properties of previously studied SNe~Ia-CSM. Future observations and theoretical studies of this event will hopefully shed light on its unusual characteristics. X-ray emission has previously been observed for one SN~Ia-CSM, SN~2012ca, by \cite{2018Bochenek}, and such signatures may be visible for ASASSN-18tb, although the presumably low-density CSM of this event would likely make such an observation very challenging. Radio observations are powerful probes of the CSM surrounding SNe \citep{2012Chomiuk,2012Krauss,2013Milisavljevic}, and may be able to better characterize the environment of ASASSN-18tb. Recent work indicates that underluminous SNe~Ia tend to be produced through the collisional model \citep{2015Dong,2019Vallely}. As shown by \cite{2014Piro}, the combination of the WD mass function and the collisional model simulations of \cite{2013Kushnir} predicts a $^{56}$Ni yield distribution peaked near $M_{Ni}\sim0.3\,\text{M}_\odot$, strikingly similar to the $M_{Ni}=0.29\pm0.07~\textnormal{M}_\odot$ we estimate for ASASSN-18tb. As such, it is interesting to ponder whether one might be able to observe CSM interaction from a collisional model DD progenitor. It may be possible to achieve this by invoking a red giant tertiary. The collisional model requires a tertiary to drive the eccentricity oscillations that produce the collision.
Occasionally the tertiary would be an evolved red giant whose mass loss could produce a low-density CSM into which the SN then explodes. While \cite{2013Silverman} found that nearly all SNe~Ia-CSM exhibit H$\alpha$ luminosities in the range $10^{40}-10^{41}$ ergs s$^{-1}$, the H\ensuremath{\alpha}\xspace luminosity of ASASSN-18tb\xspace is 2 orders of magnitude lower at $\sim10^{38}$ ergs s$^{-1}$. This implies an overall lower amount of CSM material for the ejecta to interact with, which can be explained by a tertiary with relatively low mass that has outlived the inner binary. Further observations and theoretical modeling will hopefully constrain the mass of the H\ensuremath{\alpha}\xspace emitting material, which can provide additional clues to its origin. The \textit{TESS} observations we present also emphasize how powerful the mission will be for probing the early-time behavior of SNe. Due to its smaller aperture and wide field of view, \textit{TESS} cannot match \textit{Kepler}'s exquisite precision for events of comparable brightness. However, \textit{TESS} covers a much larger area of the sky, and will be able to observe significantly more SNe over the duration of its two-year mission. So far, six SNe have been published from the \textit{Kepler} and \textit{K2} missions \citep{2015Olling,2016Garnavich,2019Shappee}, whereas \textit{TESS} will obtain relatively high-precision light curves for $\sim130$ SNe ($\sim$100 SNe~Ia, and $\sim30$ SNe~II; \citealt{2019Fausnaugh}). These observations will provide an unparalleled sample of early-time SN light curves.
While it is difficult to produce stringent upper limits on companion interaction light curve signatures in a single event (due to the strong viewing angle dependence of the predicted effect), this can easily be accounted for once a large sample of light curves has been collected, and the predicted emission from the \cite{2010Kasen} companion interaction models is readily detectable in the \textit{TESS} band \citep{2019Fausnaugh}. \textit{TESS} will either finally detect the long-sought signature of companion interaction, or put stringent non-detection limits on the phenomenon and add to the growing list of observational constraints in tension with the single-degenerate scenario. Another advantage of this sample is that because these \textit{TESS} SNe are necessarily bright, it will be possible to obtain late-phase spectra for them. Observations of ASASSN-18bt \citep{2019Shappee,2019Dimitriadis} showed that the early-time light curve alone leads to degeneracies between the observational signatures of the interactions with a nearby companion, radioactive material near the outside of the ejecta, and circumstellar interactions. The combination of a large number of well-observed early-time \textit{TESS} SNe light curves and late-phase spectra of these transients will provide a unique probe that can break these degeneracies. \section*{Acknowledgments} We thank the referee, Mark Phillips, for very helpful comments. We thank the Las Cumbres Observatory (LCOGT) and its staff for its continuing support of the ASAS-SN project. This research utilizes LCOGT observations obtained with time allocated through the National Optical Astronomy Observatory TAC (NOAO Prop. ID 2018B-0110, PI: P. Vallely). We thank Nidia Morrell for obtaining the du Pont spectrum we present. We also thank the Swift PI, the Observation Duty Scientists, and the science planners for promptly approving and executing our observations. 
Most of the spectroscopic observations were obtained using the Southern African Large Telescope (SALT) in the Rutgers University program 2018-1-MLT-006 (PI: S.~W.~Jha), with an additional SALT spectrum obtained as part of the Large Science Programme on transients (2016-2-LSP-001; PI: Buckley). Polish participation in SALT is funded by grant no. MNiSW DIR/WK/2016/07. ASAS-SN is supported by the Gordon and Betty Moore Foundation through grant GBMF5490 to the Ohio State University and NSF grant AST-1515927. Development of ASAS-SN has been supported by NSF grant AST-0908816, the Mt. Cuba Astronomical Foundation, the Center for Cosmology and AstroParticle Physics at the Ohio State University, the Chinese Academy of Sciences South America Center for Astronomy (CASSACA), the Villum Foundation, and George Skestos. PJV is supported by the National Science Foundation Graduate Research Fellowship Program Under Grant No. DGE-1343012. This work at Rutgers University (SWJ, YE) is supported by NSF award AST-1615455. MAT acknowledges support from the DOE CSGF through grant DE-SC0019323. KZS, CSK, and TAT are supported by NSF grants AST-1515876, AST-1515927, and AST-1814440. CSK is also supported by a fellowship from the Radcliffe Institute for Advanced Studies at Harvard University. PC, SD and SB acknowledge Project 11573003 supported by NSFC. Support for JLP is provided in part by FONDECYT through the grant 1191038 and by the Ministry of Economy, Development, and Tourism's Millennium Science Initiative through grant IC120009, awarded to The Millennium Institute of Astrophysics, MAS. MS is supported by a research grant (13261) from the VILLUM FONDEN and a project grant by IRFD (Independent Research Fund Denmark). TAT is supported in part by a Simons Foundation Fellowship and an IBM Einstein Fellowship from the Institute for Advanced Study, Princeton. DAHB's research is supported by the National Research Foundation (NRF) of South Africa. 
MG is supported by the Polish NCN MAESTRO grant 2014/14/A/ST9/00121. SB is partially supported by China postdoctoral science foundation grant No.2018T110006. This paper includes data collected by the \textit{TESS} mission, which are publicly available from the Mikulski Archive for Space Telescopes (MAST). Funding for the \textit{TESS} mission is provided by NASA's Science Mission directorate. We thank Ethan Kruse for uploading the TESS FFIs to YouTube, as these videos were invaluable when investigating the systematics in our data. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research uses data obtained through the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences, and the Special Fund for Astronomy from the Ministry of Finance. This research has made use of the SVO Filter Profile Service (http://svo2.cab.inta-csic.es/theory/fps/) supported from the Spanish MINECO through grant AyA2014-55216. See \cite{2012Rodrigo} for more details on the SVO Filter Profile Service. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. \input{ASASSN18tb.bbl} \label{lastpage} \end{document}
\section{Abstract} An extensive program for the calculation of galactic cosmic-ray propagation has been developed. Primary and secondary nucleons, primary and secondary electrons, secondary positrons and antiprotons are included. Fragmentation and energy losses are computed using realistic distributions for the interstellar gas and radiation fields. Models with diffusion and convection only do not account naturally for the observed energy dependence of $B/C$, while models with reacceleration reproduce this easily. The height of the halo propagation region is determined, using recent $^{10}Be/\,^9Be$\ measurements, as greater than 4 kpc. The radial distribution of cosmic-ray sources required is broader than current estimates of the SNR distribution for all halo sizes. Our results include an estimate of cosmic-ray antiproton and positron spectra, and the Galactic diffuse $\gamma$-ray emission (see accompanying paper: Moskalenko 1998b). \\ \section{Introduction.} We are constructing a model which aims to reproduce self-consistently observational data of many kinds related to cosmic-ray (CR) origin and propagation: direct measurements of nuclei, electrons, positrons, antiprotons, gamma rays, and synchrotron radiation. These data provide many independent constraints on any model and our approach is able to take advantage of this since it must be consistent with all types of observation. Here we present our results on the evaluation of diffusion/convection and reacceleration models based on the $B/C$\ and $^{10}Be/\,^9Be$\ ratios, and set limits on the halo size. A re-evaluation of the halo size is desirable since new $^{10}Be/\,^9Be$\ data are now available from Ulysses (Connell 1998) with better statistics than previously. Our preliminary results were presented in Strong (1997a,b) and full results for protons, Helium, positrons, and electrons in Moskalenko (1998a). 
Some illustrative results for gamma-rays and synchrotron radiation are given in Strong (1997a) and Moskalenko (1998b) and all details are given in Strong (1998)\footnote{For more details see {\it http://www.gamma.mpe--garching.mpg.de/$\sim$aws/aws.html}}. \\ \section{The model description.} The models are three dimensional with cylindrical symmetry in the Galaxy, and the basic coordinates are $(R,z,p)$, where $R$ is Galactocentric radius, $z$ is the distance from the Galactic plane, and $p$ is the particle momentum. The propagation equations are solved numerically on a grid by the method described in Strong (1998). $R_\odot$ is taken as 8.5 kpc. The propagation region is bounded by $R=R_h$, $z=z_h$ beyond which free escape is assumed. We take $R_h=30$ kpc. The range $z_h=1-20$ kpc is considered. For a given $z_h$ the diffusion coefficient as a function of momentum is determined by $B/C$\ for the case of no reacceleration; if reacceleration is assumed then the reacceleration strength (related to the Alfv\'en speed) is constrained by the energy-dependence of $B/C$. The spatial diffusion coefficient for the case of no reacceleration is taken as $D_{xx} = \beta D_0(\rho/\rho_0)^{\delta_1}$ below rigidity $\rho_0$, $\beta D_0(\rho/\rho_0)^{\delta_2}$ above rigidity $\rho_0$. Since the introduction of a sharp break in $D_{xx}$ is an extremely contrived procedure which is adopted just to fit $B/C$\ at all energies, we also consider the case $\delta_1=\delta_2$ (no break). The convection velocity $V=V(z)$ is assumed to increase linearly with distance from the plane. For the case with reacceleration, the spatial diffusion coefficient is $D_{xx} = \beta D_0(\rho/\rho_0)^\delta$ with $\delta=\frac{1}{3}$ for all rigidities, and the momentum-space diffusion coefficient $D_{pp}$ is related to $D_{xx}$ (using Berezinskii 1990 and Seo 1994). The injection spectrum of nucleons is assumed to be a power law in momentum. 
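For concreteness, the diffusion-coefficient parameterization above can be written out directly; the sketch below covers both the broken and no-break ($\delta_1=\delta_2$) cases, with arbitrary illustrative parameter values.

```python
import numpy as np

def D_xx(rho, beta, D0, rho0, delta1, delta2=None):
    """Spatial diffusion coefficient as parameterized in the text:
    D = beta*D0*(rho/rho0)**delta1 below rigidity rho0 and **delta2 above.
    Passing delta2=None gives the no-break case delta1 == delta2."""
    if delta2 is None:
        delta2 = delta1
    delta = np.where(rho < rho0, delta1, delta2)
    return beta * D0 * (rho / rho0) ** delta
```

In the reacceleration case one simply calls this with a single index $\delta=1/3$ for all rigidities.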
The interstellar hydrogen distribution uses HI and CO surveys and information on the ionized component; the Helium fraction of the gas is taken as 0.11 by number. Energy losses for nucleons by ionization and Coulomb interactions are included. The distribution of CR sources is chosen to reproduce the CR distribution determined by analysis of EGRET $\gamma$-ray data (Strong 1996b). The secondary nucleon and secondary $e^\pm$ source functions are computed from the propagated primary distribution and the gas distribution. \\ \begin{figure}[t!] \begin{picture}(148,73)(0,1) \put(0,0){\makebox(70,0)[lb] {\psfig{file=fig1a.ps,width=70mm,height=65mm,clip=}}} \put(75,0){\makebox(70,0)[lb] {\psfig{file=fig1b.ps,width=70mm,height=65mm,clip=}}} \end{picture} \small Fig. 1. {\it Left panel:} $B/C$\ ratio for diffusion/convection models without break in diffusion coefficient (Strong 1998 and references therein), for $z_h$ = 3 kpc, $dV/dz$ = 0 (solid line), 5 (dotted line), and 10 km s$^{-1}$ kpc$^{-1}$ (dashed line); solid line: interstellar ratio, shaded area: modulated to 300 -- 500 MV; data: HEAO-3, Voyager, Ulysses. {\it Right panel:} $B/C$\ ratio for diffusive reacceleration models (Strong 1998) with $z_h$ = 5 kpc, $v_A$ = 0 (dotted), 15 (dashed), 20 (thin solid), 30 km s$^{-1}$ (thick solid). In each case the interstellar ratio and the ratio modulated to 500 MV are shown. \end{figure} \section{Illustrative results.} We consider the cases of diffusion+convection and diffusion+reacceleration, since these are the minimum combinations which can reproduce the key observations. Our basic conclusion is that the reacceleration models are more satisfactory in meeting the constraints provided by the data, reproducing the $B/C$\ energy dependence without {\it ad hoc} variations in the diffusion coefficient; further, it is not possible to find any {\it simple} version of the diffusion/convection model which reproduces $B/C$\ satisfactorily.
Figure~1a shows the diffusion+convection model without break, $\delta_1 = \delta_2$; for each $dV/dz$, the remaining parameters $D_0$, $\delta_1$ and $\rho_0$ are adjusted to fit the data as well as possible. It is clear that a {\it good} fit is {\it not} possible; the basic effect of convection is to reduce the variation of $B/C$\ with energy, and although this improves the fit at low energies the characteristic peaked shape of the measured $B/C$\ cannot be reproduced. If we allow $\delta_1\not=\delta_2$ it can clearly be fitted, but the break has to be large and the procedure is {\it ad hoc}. Figure~1b illustrates a diffusive reacceleration model and shows the effect on $B/C$\ of varying $v_A$ ($=0$--$30$ km s$^{-1}$) for $z_h= 5$ kpc. This shows how the initial form becomes modified to produce the characteristic peaked shape. Reacceleration models thus lead naturally to the observed peaked form of $B/C$, as pointed out by previous authors (e.g., Letaw 1993, Seo 1994, Heinbach 1995); a value $v_A\sim20$ km s$^{-1}$ seems satisfactory. \begin{figure}[t!] \begin{picture}(148,73)(0,1) \put(0,0){\makebox(70,0)[lb] {\psfig{file=fig2a.ps,width=70mm,height=65mm,clip=}}} \put(75,0){\makebox(70,0)[lb] {\psfig{file=fig2b.ps,width=70mm,height=65mm,clip=}}} \end{picture} \small Fig. 2. (a) Predicted $^{10}Be/\,^9Be$\ ratio as a function of $z_h$ for $dV/dz$ = 0, 5, 10 km s$^{-1}$ kpc$^{-1}$; the Ulysses experimental limits are shown as horizontal dashed lines. The shaded regions show the parameter ranges allowed by the data. (b) $^{10}Be/\,^9Be$\ ratio for diffusive reacceleration models as a function of $z_h$ at 525 MeV/nucleon corresponding to the mean interstellar value for the Ulysses data (Connell 1998). \end{figure} Figure~2a summarizes the limits on $z_h$ and $dV/dz$ for diffusion/convection, using the $^{10}Be/\,^9Be$\ ratio at the interstellar energy of 525 MeV/nucleon appropriate to the Ulysses data (Connell 1998).
We conclude that in the absence of convection $4{\rm\ kpc}<z_h < 12 {\rm\ kpc}$; if convection is allowed, the lower limit remains but no upper limit can be set. The figure does, however, place upper limits on the convection parameter for each halo size, $dV/dz < 7$ km s$^{-1}$ kpc$^{-1}$ in all cases. These limits are rather strict, and in any case a finite wind velocity is only allowed for $z_h > 4$ kpc. Figure~2b shows $^{10}Be/\,^9Be$\ for the reacceleration models as a function of $z_h$ at 525 MeV/nucleon corresponding to the Ulysses measurement, and we again find that $4{\rm\ kpc} <z_h < 12$ kpc. Figure~3 (left panel) shows the effect of halo size on the radial distribution of 3 GeV CR protons, for the reacceleration model. For comparison we show the CR distribution deduced by model-fitting to EGRET gamma-ray data ($>100$ MeV) from Strong (1996b), which is dominated by the $\pi^0$-decay component; the analysis by Hunter (1997), based on a different approach, gives a similar result. The predicted CR distribution using the SNR source function is too steep even for large halo sizes; in fact the halo size has a relatively small effect on the distribution. Other related distributions, such as that of pulsars, have an even steeper falloff. Based on these results we have to conclude, in the context of the present models, that the distribution of sources is not that expected from the (highly uncertain) distribution of SNR. In view of the difficulty of deriving the SNR distribution this is perhaps not a serious shortcoming; if SNR are indeed CR sources then it is possible that the gamma-ray analysis gives the best estimate of their Galactic distribution. Therefore, we have chosen a CR source distribution to fit the $\gamma$-ray data after propagation (Figure~3, right panel). The possibility of anisotropic diffusion (preferentially in the radial direction) has not yet been addressed in our models. \begin{figure}[t!]
\begin{picture}(148,73)(0,1) \put(0,0){\makebox(70,0)[lb] {\psfig{file=fig3a.ps,width=70mm,height=65mm,clip=}}} \put(75,0){\makebox(70,0)[lb] {\psfig{file=fig3b.ps,width=70mm,height=65mm,clip=}}} \end{picture} \small Fig. 3. {\it Left panel}: radial distribution of 3 GeV protons at $z = 0$, for diffusive reacceleration model with halo sizes $z_h = 1$, 3, 5, 10, 15, and 20 kpc (solid curves). Dashed line: the source distribution is that for SNR given by Case (1996); histogram: the CR distribution deduced from EGRET $>$100 MeV gamma rays (Strong 1996b). {\it Right panel}: radial distribution of 3 GeV protons at $z = 0$ for the source distribution actually adopted (dashed line), for diffusive reacceleration model with various halo sizes $z_h = 1$, 3, 5, 10, 15, and 20 kpc (solid curves). \label{fig4} \end{figure} The computed positron fraction is in good agreement with the measured one between 1 and 10 GeV, where the data are rather precise. Our positron predictions from Moskalenko (1998a) have been compared with more recent absolute measurements in Barwick (1998) and the agreement is good; for the positrons this new comparison has the advantage of being independent of the electron spectrum (see also Moskalenko 1998b). \\ \section{References.} \setlength{\parindent}{-5mm} \begin{list}{}{\topsep 0pt \partopsep 0pt \itemsep 0pt \leftmargin 5mm \parsep 0pt \itemindent -5mm} \item S.W.~Barwick et al.\ {\it Ap.J.} 498 (1998) 779--789. \item V.S.~Berezinskii et al.\ {\it Astrophysics of Cosmic Rays.} North Holland.\ Amsterdam.\ (1990). \item G.~Case and D.~Bhattacharya.\ {\it A\&AS.} 120C (1996) 437--440. \item J.J.~Connell.\ {\it Ap.J.} 501 (1998) L59--L62. \item U.~Heinbach and M.~Simon.\ {\it Ap.J.} 441 (1995) 209--221. \item S.D.~Hunter et al.\ {\it Ap.J.} 481 (1997) 205--240. \item J.R.~Letaw, R.~Silberberg and C.H.~Tsao.\ {\it Ap.J.} 414 (1993) 601--611. \item I.V.~Moskalenko and A.W.~Strong.\ {\it Ap.J.} 493 (1998a) 694--707.
\item I.V.~Moskalenko and A.W.~Strong.\ {\it 16th ECRS.} (1998b) GR-1.3.\ (astro-ph/9807288) \item E.S.~Seo and V.S.~Ptuskin.\ {\it Ap.J.} 431 (1994) 705--714. \item A.W.~Strong and J.R.~Mattox.\ {\it A\&A.} 308 (1996b) L21--L24. \item A.W.~Strong and I.V.~Moskalenko.\ {\it 4th Compton Symp. AIP 410.} Ed.\ C.D.~Dermer et al.\ 1162--1166.\ AIP.\ NY.\ (1997a). \item A.W.~Strong, I.V.~Moskalenko and V.~Sch\"onfelder.\ {\it 25th ICRC.} 4 (1997b) 257--260. \item A.W.~Strong and I.V.~Moskalenko.\ {\it Ap.J.} 509 (1998) in press. (astro-ph/9807150) \end{list} \end{document}
\section{Introduction} The search for the QCD critical point has attracted considerable theoretical and experimental attention recently. The existence of such a point -- an ending point of the first order chiral transition in QCD -- was suggested a long time ago \cite{Asakawa,Barducci}, and its properties were studied more recently using universality arguments and model calculations \cite{Berges,Halasz-pdqcd} (see Ref.~\cite{cp-review} for a review). The experimental search for the critical point using heavy ion collisions has been proposed in \cite{signatures}. It is apparent that theoretical knowledge of the location of the critical point on the phase diagram is important for the success of the experimental search. First-principles lattice QCD calculations aimed at determining the location of the critical point on the $T,\,\muB$ (temperature, baryon chemical potential) diagram have been attempted recently using several different techniques \cite{Fodor,Allton,Gavai,deForcrand} (see Ref.~\cite{Philipsen-review} for a review). The major obstacle for direct Monte Carlo simulation is the well-known lack of positivity of the measure of the path integral defining the QCD partition function at nonzero baryon chemical potential $\muB$ --- the sign problem. One of the methods to deal with this problem is to Taylor expand the QCD pressure in powers of $\muB$ around $\muB=0$, i.e., around the point at which direct Monte Carlo simulations are not hindered by the sign problem \cite{Allton,Gavai}. The success of such an approach in determining the location $(T_E,\,\mu_E)$ of the critical ending point crucially depends on the convergence radius of the Taylor expansion around $\muB=0$ \cite{Allton,Gavai,Lombardo}. The convergence radius, in turn, is a function of the position of the singularities in the complex $\muB$ plane. Little is known about the location of these singularities to date.
The purpose of this paper is to expand our knowledge of the location of these complex plane singularities. We shall be able to determine the position of the singularities in the regime where the quark masses $\mq$ are sufficiently small. To summarize, the strongest rigorous consequence of our analysis is that the convergence radius, $\muR$, achieves its minimum value at a temperature slightly above $T_c$ (by ${\cal O}(m^{1/(\beta\delta)})$). This value scales as \begin{equation}\label{mur} \min_T\, \mu_R(T) \sim m^{1/(2\beta\delta)}, \end{equation} and vanishes in the chiral limit ($m\to0$). The value $1/(\beta\delta)\approx 0.54$ is determined by the critical exponents of the O(4) universality class in 3 dimensions. The singularity which determines the radius in \eqref{mur} lies in the complex plane and pinches the real axis (together with its conjugate) at the critical point. The convergence radius $\mu_R(T)$ has a certain nonanalyticity at $T=T_E$, which we shall describe. The study of thermodynamic singularities, or partition function zeros, in the complex plane was pioneered by Yang and Lee \cite{YangLee}. Their analysis was extended by Fisher from the complex magnetic field singularities to complex temperature singularities \cite{Fisher}. The properties of these singularities following from scaling and universality have been further studied by Fisher \cite{Fisher-edge}, Itzykson, Pearson and Zuber \cite{Itzykson} and others. From the point of view of the lattice studies of the QCD phase diagram, we would like to distinguish two separate issues. One is the convergence of the series as the truncation order is increased. This is the issue which this study will impact. The other is the convergence of each term in the series to its thermodynamic limit as the volume and/or the number of Monte Carlo configurations are increased. 
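As an aside, the exponent combinations quoted here follow from simple arithmetic with the O(4) critical exponents. A quick numerical check, using representative 3d O(4) values $\beta\approx0.38$, $\delta\approx4.86$ from the literature (the precise choice shifts the numbers only slightly):

```python
import math

# Representative 3d O(4) critical exponents (approximate literature values).
beta, delta = 0.38, 4.86
bd = beta * delta

print(round(1 / bd, 2))        # -> 0.54, the exponent 1/(beta*delta)
print(round(1 / (2 * bd), 2))  # -> 0.27, the exponent in min_T mu_R ~ m^{1/(2 beta delta)}
print(round(math.degrees(math.pi / (2 * bd)), 1))  # -> 48.7 degrees, the angle psi of Section 2
```

The first number reproduces the value $1/(\beta\delta)\approx0.54$ quoted above; the last is the angle $\psi=\pi/(2\beta\delta)$ that governs the direction in which the branching point moves at small quark mass.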
The sign problem affects this latter convergence and will not be addressed here (see, e.g., \cite{Halasz-convergence,Splittorff,Ejiri}). \section{Universal properties of the complex singularities} \label{sec:universal} Here we describe generic universal properties of the complex thermodynamic singularities. The basic results following from scaling and universality are not new, and can be found in \cite{Itzykson}. For clarity and completeness, we rederive the needed facts here using a slightly different approach. Our purpose is to apply these results to QCD at finite temperature and density. \subsection{Complex $\mu$ singularities as Fisher zeros} \begin{figure} \centerline{ \epsfig{file=pd-sketch.eps,width=.5\textwidth} } \caption[]{A sketch of the QCD phase diagram in the vicinity of the critical line $\mu_c(T)$ for zero and nonzero quark mass.} \label{fig:pd-sketch} \end{figure} We shall first consider QCD in the chiral limit -- the limit in which the two lightest quark masses are taken to zero. The phase diagram in the $(T,\,\muB)$ plane is sketched in Figure~\ref{fig:pd-sketch}. The low-temperature phase is separated from the high-temperature phase by a phase transition. This is unavoidable, because the symmetry of the ground state must change from SU(2)$_V\times$U(1)$_B$ to the full symmetry of the action SU(2)$_V\times$SU(2)$_A\times$U(1)$_B$ as the temperature is raised \cite{Kogut-book}. This transition is of second order for $\muB<\muB_3$, and of first order for $\muB>\muB_3$. Let us now fix $T$ at a value in the interval $(T_3,T_c)$ and study the behavior of the singularities in the complex $\muB$ plane. On the real axis, as $\muB$ increases from zero, the transition occurs at a value which we denote as $\muc(T)$ (see Fig. \ref{fig:pd-sketch}). Therefore, at a fixed temperature, the change of the chemical potential $\muB-\muc(T)$ is a relevant perturbation.
In the universality class of the O(3)$\to$O(4) transitions, to which the QCD chiral restoration transition belongs \cite{Pisarski}, there is only one relevant variable -- the thermal variable $t$ (we are discussing the symmetry limit -- the magnetic field variable $h$ is absent). Therefore, we are led to consider the universal behavior of the singularities in the complex {\em temperature} plane, which correspond to Fisher zeros. The scaling parameter $t$ is, to linear order, \begin{equation}\label{tmu} t\sim \muB^2-\muc^2(T). \end{equation} We use $\mu^2$ instead of $\mu$ to ensure that at $\mu=0$ the thermal variable is proportional to $T-T_c$. \subsection{Partition function zeros and electrostatic analogy} As first shown by Lee and Yang \cite{YangLee}, the thermodynamic singularities in the complex plane are related to the zeros of the partition function. For a finite system the partition function $Z$, by definition, is strictly positive for {\em real} values of the parameters. However, $Z$ has zeros in the complex plane, whose number grows linearly with the size of the system. In the thermodynamic limit the zeros typically coalesce into cuts. A phase transition occurs where such a cut pinches (second order) or crosses (first order) the real axis. The considerations of Lee and Yang apply to the variable $\lambda\equiv\exp(\muB/T)$ since the partition function of QCD is a polynomial in this variable due to the quantization of the baryon charge. It is convenient to use the electrostatic analogy. The partition function, being a polynomial, can be written in terms of its roots \begin{equation} Z(\lambda)=\prod_k(\lambda-\lambda_k)\,. \end{equation} Therefore the free energy (or grand potential, if we are dealing with the grand canonical ensemble, as in QCD) is \begin{equation} \Omega(\lambda) = -T\log Z = -T\sum_k\log(\lambda-\lambda_k).
\end{equation} For complex $\lambda$, the real part $\Re\Omega$ can be interpreted as an electrostatic potential created by charges located on the plane $(\Re\lambda,\Im\lambda)$: \begin{equation} \Re \Omega(\lambda) = -T\sum_k \log|\lambda-\lambda_k|. \end{equation} All charges have the same magnitude and sign (degenerate roots can be treated as coincident charges). Now let us assume, as is known to be true in most cases, and in the universality region of interest in particular, that the zeros coalesce into 1-dimensional curves in the thermodynamic limit. Then, the electrostatic potential $\Re\Omega$ is continuous across such a curve, while the analog of the electric field \begin{equation}\label{E} \bm E = -\bm\nabla (\Re\Omega)=-\left( \frac{\pd\Re\Omega}{\pd \Re\lambda},\, \frac{\pd\Re\Omega}{\pd \Im\lambda} \right) = (-\Re,\,\Im)\,\frac{d \Omega}{d \lambda} \end{equation} is discontinuous -- the normal component jumps by an amount proportional to the linear charge density $\rho$ on the curve. This curve can be viewed as the location of a cut on a Riemann sheet of the analytic function $\Omega(\lambda)$. \subsection{Stokes boundaries in the scaling region at $h=0$} Consider now the critical region in the vicinity of a critical point $\lambda_c$. In the scaling regime the singular (i.e., non-analytic) part of the potential $\Omega(t)$ is proportional to a power of $t$ \begin{equation}\label{Osing} \Omega_{\rm sing}(t) = \left\{ \begin{array}{ll} A_+\, t^{2-\alpha},&\quad t>0;\\ A_-\, (-t)^{2-\alpha},&\quad t<0; \end{array} \right. \,\qquad t\equiv\frac{\lambda-\lambda_c}{\lambda_c} \end{equation} which defines the universal specific heat exponent $\alpha$ and the amplitudes $A_{\pm}$, whose ratio is also universal. Off the real axis $\Omega(t)$ must be an analytic function everywhere except for discontinuities across the cuts.
Such cuts (at least two, by symmetry $\Omega(t^*)=\Omega(t)$) must be present, because the function \eqref{Osing} taken at $t>0$ does not match its analytic continuation from the $t<0$ axis along a path around $t=0$. The location of the cuts can be determined using electrostatic analogy, which requires $\Re\Omega$ to be continuous across the cut. Parameterizing $t=-s\,e^{i\varphi}$ using real parameter $s>0$ we find: \begin{equation} A_+\cos[(2-\alpha)(\varphi-\pi)] = A_-\cos[(2-\alpha)\varphi]. \end{equation} Therefore, the cuts are straight lines at an angle with respect to the negative $t$ axis given by (cf. \cite{Itzykson}) \begin{equation}\label{phi} \tan[(2-\alpha)\varphi] = \frac{\cos(\pi\alpha)-A_-/A_+}{\sin(\pi\alpha)}\,, \end{equation} as shown in Figure~\ref{fig:scalingzeros}. All quantities entering this formula are universal. The cuts are termed Stokes boundaries in \cite{Itzykson} --- they carry conceptual resemblance to the anti-Stokes lines in the WKB theory. Across these lines, the function $\Omega$ switches from one of its Riemann sheets to another. The density of the ``charges'' on the cut is proportional to the discontinuity of the normal component of $\bm E$, and thus to $\Im ( e^{i\varphi}\, d\Omega/dt)$, which vanishes at the branching point as \begin{equation}\label{rho} \rho\sim |t|^{1-\alpha}. \end{equation} \subsection{Stokes boundaries at $h\ne0$} \label{sec:hne0} \begin{figure} \epsfig{file=scalingzeros.eps,width=.4\textwidth} \caption[]{Universal behavior of Stokes boundaries in the scaling region for zero and non-zero symmetry breaking parameter $h$. Only upper complex half-plane is shown. The trajectory of the branching point $t_*(h)$ is indicated by a dashed line.} \label{fig:scalingzeros} \end{figure} The magnetic field $h$ is another relevant variable near the O(4) critical point. 
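The angle formula \eqref{phi} is easy to check numerically: pick the branch of $\varphi$ with $0<(2-\alpha)\varphi<\pi$ and verify that the real-part matching condition across the cut is satisfied. A Python sketch using the O(4) estimates $\alpha\approx-0.25$ and $A_+/A_-\approx1.6$ (the values used later in Section 3):

```python
import math

alpha = -0.25   # O(4) specific-heat exponent (estimate used in the text)
r = 1.0 / 1.6   # A_-/A_+, from A_+/A_- ~ 1.6

# Eq. (phi): tan[(2-alpha) phi] = (cos(pi alpha) - A_-/A_+) / sin(pi alpha).
rhs = (math.cos(math.pi * alpha) - r) / math.sin(math.pi * alpha)
phi = (math.atan(rhs) + math.pi) / (2.0 - alpha)  # branch with 0 < (2-alpha)*phi < pi

# Continuity of Re(Omega): A_+ cos[(2-a)(phi-pi)] must equal A_- cos[(2-a) phi].
lhs = math.cos((2.0 - alpha) * (phi - math.pi))
rhs2 = r * math.cos((2.0 - alpha) * phi)

print(round(math.degrees(phi)))  # -> 77
```

The result is $\varphi\approx77^\circ$. Repeating the exercise with the tricritical values $\alpha=1/2$, $A_+/A_-\to0$ drives $\tan[(2-\alpha)\varphi]\to-\infty$ and hence $\varphi\to60^\circ$, matching the values quoted below.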
The magnetic field breaks the O(4) down to O(3), and in QCD this role is played by the quark mass $\mq$ (more precisely, the average of the $u$ and $d$ masses). At $h\neq0$ the free energy is an analytic function of $t$ at $t=0$. In the scaling region the singular part of the free energy scales as a power of $h$ if $t$ is also changed to keep the scaling variable $x\equiv t\, h^{-1/(\beta\delta)}$ fixed. Therefore the two branching points must be located away from the origin, at \begin{equation} t_* = x_* h^{1/(\beta\delta)} \end{equation} and at $(t_*)^*$, where $x_*$ is a complex constant. The phase of $t_*$ (the polar angle coordinate of the branching point) is determined by the following argument. By the scaling postulate, the singular contribution to the free energy must be given by \begin{equation} \Omega_{\rm sing} = h^{1+1/\delta} A\left(t\, h^{-1/(\beta\delta)}\right) \end{equation} where the function $A(x)$ is analytic at $x=0$. $A(x)$ has two cuts which originate at points $x_*$ and $(x_*)^*$ and go off to infinity. Consider $\Omega_{\rm sing}$ as a function of {\em complex} $h$ at fixed real~$t$. By symmetry ($h\leftrightarrow -h$, $h\leftrightarrow h^*$), the Stokes boundaries of this function lie on the imaginary axis (more rigorously, this follows from the Lee-Yang theorem and universality). Thus, for $t>0$, the branching point $h_*=(t/x_*)^{\beta\delta}$ is purely imaginary, and therefore $\beta\delta\arg x_*=\pi/2$. Thus, \begin{equation}\label{psi} t_* = |x_*| e^{i\psi} h^{1/(\beta\delta)}, \qquad \psi=\frac\pi{2\beta\delta}\,. \end{equation} To summarize, at $h=0$, the complex singularities in the scaling region of the thermal parameter $t$ form cuts (Stokes boundaries) which go along the rays at the angle $\varphi$ to the negative real axis given by \eqref{phi}.
With increasing symmetry-breaking parameter $h$, the branching point shifts away from $t=0$ by an amount proportional to $h^{1/(\beta\delta)}$ along the direction at the angle $\psi$ to the positive $t$ axis given by \eqref{psi}. This is illustrated in Fig.~\ref{fig:scalingzeros}. \section{Singularities of QCD in the complex $\muB$ plane} \label{sec:qcd-universal} \subsection{Chiral limit: $m=0$} Let us begin with the chiral limit $\mq=0$. Consider the interval $T\in (T_3,T_c)$. In this interval increasing $\muB$ leads to the second order transition at $\muB=\muc(T)$. As discussed in the previous section, in the vicinity of the transition we must identify $t$ with $\muB^2-\muc^2(T)$ (up to an irrelevant constant factor) -- Eq.~\eqref{tmu}. Thus, at a given temperature $T$, near $\muc(T)$ the location of the singularities in the complex $\muB$ plane is determined by the universal arguments of the previous section. The conformal transformation $\mu\to\mu^2$ does not affect the angles $\varphi$ and $\psi$ away from $\mu=0$. In particular, at $m=0$, the (two) cuts should originate at the branching point located at $\muc(T)$ on the real axis, and follow the rays at angle $\varphi$ given by \eqref{phi} with respect to the negative real axis as shown in Figure \ref{fig:qcdzeros} (left). Taking the values $\alpha\approx -0.25$ \cite{Pelissetto} and $A_+/A_-\approx1.6$ \cite{Toussaint}, we estimate the angle as $\varphi\approx 77^\circ$. At the tricritical point $\alpha=1/2$ and $A_+/A_-=0$, and thus $\varphi= 60^\circ$. \subsection{Small quark mass: $\mq\ne0$} At finite quark mass $\mq$ the second order line $\muB=\muc(T)$ is replaced by an analytic crossover for all temperatures $T>T_E$. The critical ending point of the first order transition is located at a temperature we denote $T_E$. For $T<T_E$ the transition is of the first order.
At fixed $T>T_3$, and small mass $\mq$, in the vicinity of the crossover point the singularities are described by the universal arguments with \begin{equation} h\sim \mq\,. \end{equation} For the purpose of the discussion we can define the crossover point as the value of the real part of the branching point: $\mu_{\rm crossover}\equiv\Re(\mu_*(m))$. The branching point $\mu_*(m)$, i.e., the singularity nearest to the real axis, is shifted by an amount proportional to $m^{1/(\beta\delta)}$ ($m^{0.54}$, using the O(4) critical exponents \cite{Pelissetto}) in the direction of the ray at the angle $\psi=\pi/(2\beta\delta)\approx 48^\circ$ to the positive real axis given by \eqref{psi}, as shown in Figure~\ref{fig:qcdzeros} (left). \begin{figure} \hspace{\stretch{.2}} \epsfig{file=cuts-m0ne0.eps,height=15em} \hspace{\stretch{1}} \epsfig{file=cuts-mne0T.eps,height=15em} \hspace{\stretch{.2}} \caption[]{Stokes boundaries in QCD at fixed $T$ and two different values of $m$ (left) and at fixed small $m$ and three different values of $T$ (right) as dictated by universality.} \label{fig:qcdzeros} \end{figure} When $T$ is decreased towards $T_E$, this branching point $\mu_*(m)$ (and its conjugate) approaches the real axis and pinches it when $T=T_E$. At this temperature, the cuts originate from the branching point on the real axis $\mu_E$ (see Figure \ref{fig:qcdzeros} (right)). The point $(T_E,\mu_E)$ is the QCD critical ending point. This ordinary critical point is in the universality class of the Ising model~\cite{Berges,Halasz-pdqcd}. The initial direction of the cuts near this point is perpendicular to the real axis, i.e., $\varphi=90^\circ$. This follows from the fact that the perturbation $\muB-\mu_E$ is magnetic-field-like, \begin{equation} h\sim\muB-\mu_E\,, \end{equation} and from the fact that singularities in the $h$ plane lie on the imaginary axis. The reason that $\muB-\mu_E$ is not $t$-like, but $h$-like, is the following.
In the vicinity of the critical point, since the O(4) is explicitly broken, the perturbation $\muB-\mu_E$ affects both the thermal variable $t$ as well as magnetic-field-like variable $h$ to linear order. Since $\beta\delta>1$, the scaling variable $x=th^{-1/(\beta\delta)}$ is small, which means the perturbation $\muB-\mu_E$ takes the system into the region where the variable $h$ dominates the scaling.% \footnote{An equivalent way of saying this is by comparing the scaling dimensions of the variables coupled to thermal and magnetic relevant operators: $y_t=1/\nu$ and $y_h=\beta\delta/\nu$. Since both operators couple linearly to the variable $\muB-\mu_E$, and $y_h>y_t$, the magnetic field operator dominates the response to $\muB-\mu_E$ perturbation near the critical point. In contrast, at $m=0$, the coupling to magnetic operator is forbidden by symmetry.} \section{Random matrix model} In this section we illustrate the universal properties discussed above by determining the complex plane singularities of a random matrix model of the QCD partition function at finite $T$ and $\muB$ which was introduced in Ref.~\cite{Halasz-pdqcd} and applied to the study of the QCD phase diagram and the (tri)critical point. At $\mu=0$ this model is equivalent to the finite-$T$ model of Ref.~\cite{Jackson-Verbaarschot} and at $T=0$ -- to the finite-$\mu$ model of Ref.~\cite{Stephanov-rm}. The parameters $T$, $\muB$ and $m$ used in this section are dimensionless, and correspond to measuring $T$ in units of $T_c=160$ MeV, $\mu$ in units of 2.27 GeV, and $m$ in units of 100 MeV. For more details, see Ref.~\cite{Halasz-pdqcd}. The model can be solved in the thermodynamic limit, which corresponds to the infinite size of the random matrix, $N\to\infty$, by using the replica trick. 
This gives, for real $\mu$, $T$ and~$m$, \begin{equation}\label{zrm} \log Z_{RM} = -\min_{\phi}\Omega(\phi) \end{equation} where \begin{equation} \Omega(\phi)= \phi^2 - \frac12\ln\left\{ [(\phi+m)^2- (\mu+iT)^2]\cdot [(\phi+m)^2- (\mu-iT)^2] \right\}. \end{equation} Analytically continuing into the complex $\mu$-plane, one finds the branching points of the partition function by solving a system of two algebraic equations \begin{equation}\label{branch} \frac{d\Omega}{d\phi}=0;\qquad \frac{d^2\Omega}{d\phi^2}=0 \qquad \mbox{(branching points)} \end{equation} for two unknowns: $\phi$ and $\mu$. The second equation states that two of the solutions determined by the first equation coalesce into one at this value of $\mu$. The Stokes boundaries can be determined by solving the condition $\Re\Omega(\phi_1)=\Re\Omega(\phi_2)$ where $\phi_1$ and $\phi_2$ are the two solutions of the first equation in \eqref{branch} which coalesce at the branching point: \begin{equation}\label{rm-stokes} \frac{d\Omega}{d\phi}\Big|_{\phi=\phi_1,\phi_2}=0;\qquad \Re\Omega(\phi_1)=\Re\Omega(\phi_2)\,. \qquad \mbox{(Stokes boundaries)} \end{equation} At {\em finite\/} $N$ the partition function can be written explicitly as a polynomial (apart from an irrelevant constant factor) \begin{equation}\label{zrmn} \begin{split} Z^{(N)}_{RM} = \sum_{k_1,k_2=0}^{N/2} \binom{N}{k_1}\binom{N}{k_2} (N-k_1-k_2)!\, {}_1F_1(k_1+k_2-N;1;-m^2N) \\\times \left[-(\mu+iT)^2N\right]^{k_1}\left[-(\mu-iT)^2N\right]^{k_2} \end{split} \end{equation} using a procedure similar to that of \cite{Halasz-zeros}, where ${}_1F_1(a;b;c)$ is the Kummer confluent hypergeometric function (in \eqref{zrmn} it is a polynomial in $m^2$). The zeros are found numerically for $N=120$ and plotted in Figure \ref{fig:rm} together with the Stokes boundaries given by \eqref{rm-stokes}.
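For orientation, at $m=0$ the second-order line of this model can be located by hand from a Landau-type expansion of $\Omega(\phi)$ around $\phi=0$: the transition occurs where the coefficient of $\phi^2$ vanishes. The following sketch is our own consistency check in the dimensionless units above (not the numerical procedure of the text), valid only on the second-order branch above the tricritical point; it recovers $T_c=1$ at $\mu=0$:

```python
# Landau-type check for the m = 0 random-matrix potential:
#   Omega(phi) = phi^2 - (1/2) ln[(phi^2 - (mu+iT)^2)(phi^2 - (mu-iT)^2)].
# Expanding around phi = 0 gives
#   Omega''(0) = 2 + 2(mu^2 - T^2)/(mu^2 + T^2)^2,
# so the second-order line satisfies (mu^2 + T^2)^2 + mu^2 - T^2 = 0.

def landau_condition(mu, T):
    return (mu**2 + T**2) ** 2 + mu**2 - T**2

# At mu = 0 this reduces to T^4 - T^2 = 0, i.e. T_c = 1 (T in units of 160 MeV).
assert abs(landau_condition(0.0, 1.0)) < 1e-12

# Bisection for the critical mu at T = 0.9 on the second-order branch.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if landau_condition(lo, 0.9) * landau_condition(mid, 0.9) <= 0.0:
        hi = mid
    else:
        lo = mid
mu_c = 0.5 * (lo + hi)
print(round(mu_c, 3))  # -> 0.24
```

Below the tricritical temperature the transition is first order and this quadratic condition no longer locates it; there the full conditions \eqref{branch} must be solved.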
Near the point $T=T_c$, $\muB=0$, the thermal scaling variable $t$ is proportional to $(T-T_c)+C\muB^2$, where $C$ is a constant giving the slope of the second-order transition curve (see Eq.~\eqref{tmu}). Therefore it is convenient to plot the zeros in the complex $\muB^2$ plane. Only the vicinity of the origin $\mu^2=0$ is shown in Figure \ref{fig:rm}. The universal properties described in the previous section are manifest. It is clear from the form of the solution \eqref{zrm} that the critical exponents near the second order line have their mean field values, and correspondingly, the angles are $\varphi=45^\circ$ and $\psi=60^\circ$. At the tricritical point, the exponents are given by their mean field values also in QCD (albeit with logarithmic corrections): $\varphi=60^\circ$, $\psi=72^\circ$. One can also see that the density of the zeros decreases near the branching point, as dictated by \eqref{rho}. \begin{figure} \framebox{\epsfig{file=plotall-m000.eps,width=0.35\textwidth}} \hspace{\stretch{1}} \framebox{\epsfig{file=plotall-m007.eps,width=0.35\textwidth}} \caption[]{Stokes boundaries and zeros of the $N=120$ random matrix partition function at representative values of $T$ at zero and nonzero quark mass. The trajectory of each branching point as a function of $T$ is indicated by a dashed line.} \label{fig:rm} \end{figure} \begin{figure} \epsfig{file=murt-007.eps,width=0.32\textwidth} \caption[]{The convergence radius $\mu_R^2$ as a function of $T$ in the random matrix model at $m=0.07$ (7 MeV). The value of $\mu_R^2$ is the distance of the singularity on the dashed line on Fig.~\ref{fig:rm} from the origin. The critical point at $T=T_E$ where the singularities pinch the real axis is shown.} \label{fig:murt} \end{figure} \section{Convergence radius} Relevant for the search of the QCD critical point is the question of the convergence radius $\mu^2_R(T)$ of the Taylor expansion around $\mu=0$ of the QCD thermodynamic potential as a function of $T$.
For sufficiently small quark masses, and sufficiently near $T_c$, the position of the nearest singularity, limiting this radius, is determined by the universal arguments given above. For illustration, this radius in the random matrix model is plotted in Figure~\ref{fig:murt}, where all the generic and universal features are manifest. As $T$ decreases and the branching-point singularity slides along the dashed line in Fig.~\ref{fig:rm} from left to right, the radius $\mu_R$ contracts, and then, below the crossover temperature, begins to expand again. From the universality arguments of Section \ref{sec:qcd-universal} (see Fig. \ref{fig:qcdzeros} (right)) we conclude that near the chiral ($m\to0$) limit the minimum value of the radius scales with $m$ as \begin{equation} \min_T\, \mu_R^2(T) \sim m^{1/(\beta\delta)} \sim m^{0.54} \end{equation} and is achieved at a temperature $T$ which scales as $T-T_c\sim m^{1/(\beta\delta)}\sim m^{0.54}$. Further away from the minimum, at the critical point, $T=T_E$, the singularity and its conjugate pinch the real axis. At this point one observes a non-analyticity in $\mu_R^2(T)$: a contribution of order $(T_E-T)^{\beta\delta}$ turns on below $T=T_E$. This is due to the kink in the dashed line on Fig.~\ref{fig:rm} at $T_E$. In QCD $\beta\delta\approx 1.56$, given by the exponents of the 3d Ising model, while in the random matrix model $\beta\delta$ has the mean field value $3/2$. More explicitly, the trajectory of the branching point $\mu_*$ near $\mu_E$ is given by: \begin{equation}\label{mu*} \mu_*(T)=\mu_E+c_1(T_E-T)+i\,c_2(T-T_E)^{\beta\delta} +{\cal O}\left((T-T_E)^2\right), \end{equation} with some nonuniversal positive coefficients $c_{1,2}$. To derive this equation one observes that both the $t$ and $h$ scaling variables are linear combinations of $(T-T_E)$ and $(\mu-\mu_E)$ and uses Eq.~\eqref{psi} for the branching point. The fact that the third term on the r.h.s.
in \eqref{mu*} is purely imaginary for $T>T_E$ is related to the fact that the branching point $h_*$ in the $h$ plane is on the imaginary axis for $t>0$ as discussed in Section~\ref{sec:hne0}. Therefore \begin{equation}\label{murte} \mu_R^2(T)=|\mu_*(T)|^2=\mu_E^2+\tilde{c}_1(T_E-T) + \tilde c_2\,\theta(T_E-T)(T_E-T)^{\beta\delta} + {\cal O}\left((T-T_E)^2\right). \end{equation} Below $T_E$, the singularity continues to move away from the origin, and the radius of convergence continues to increase. The radius is now determined by the spinodal point of the first order phase transition (but see discussion in Appendix). This singularity resides on the continuation of the physical Riemann sheet under the cut.% \footnote{On a more subtle level, one has to note that the fact that the singularity \eqref{mu*} remains on the real axis is an artifact of the mean field critical behavior in the random matrix model ($\beta\delta=3/2$). In QCD, the singularity moves off the real axis by an amount which scales as $(T_E-T)^{\beta\delta}$. } One must point out that the random matrix model of Ref.~\cite{Halasz-pdqcd} does not capture a known feature of the QCD partition function -- the periodicity, or invariance under the shift $\muB\to\muB+2\pi iT$, which is due to the quantization of the baryon charge.\footnote{A random matrix model which does capture this feature is studied in \cite{Halasz-matsubara}.} As shown by Roberge and Weiss \cite{Roberge} this periodicity is related to the appearance of a Stokes boundary given by $\Im \muB=\pi T$ for sufficiently high temperatures. For $T$ of order 160 MeV, this Stokes boundary could interfere with convergence of the series only if the singularity we discuss moves further than $|\muB| \approx 500$~MeV. \section{Summary and discussion} We have described the location as well as temperature and quark mass dependence of the singularities of the QCD partition function in the complex $\muB$ plane. 
In the vicinity of the chiral phase transition at $\mq=0$ the universality and scaling arguments predict that in the infinite volume the singularities are two complex conjugate branch cuts originating at a branching point on the real $\mu$ axis. The cuts are oriented at an angle to the negative $\mu$ axis given by \eqref{phi}. At nonzero $m$ the branching points (and the cuts) are shifted in the direction given by angle $\psi\approx 48^\circ$, by a distance of order $m^{0.54}$ (see Fig.~\ref{fig:qcdzeros}). A related consequence of the universal behavior of the complex $T$ singularities, and the fact that $\psi<90^\circ$, is the prediction that the crossover point at $m\ne0$, defined as the projection of the closest singularity onto the real axis, is {\em above} the second order O(4) line, as sketched in Fig. \ref{fig:pd-sketch}. The singularities we describe determine the convergence of the Taylor expansion around the point $\mu=0$. As a result, the radius of convergence $\muR^2$ at $T=T_c$ is limited by a singularity whose distance from the origin scales as $\muR^2\sim m^{0.54}$, vanishing in the chiral limit. At $T=T_E$ the convergence radius $\mu_R(T)$ shows nonanalyticity described by Eq.~\eqref{murte}. The random matrix model of Ref.~\cite{Halasz-pdqcd} illustrates these universal predictions: Figures \ref{fig:rm}, \ref{fig:murt}. The knowledge of the complex plane singularities might be used to improve the Taylor expansion methods, for example, by constructing Pad\'e or similar extrapolations \cite{Lombardo}, accommodating the correct universal singular behavior. It can also be used to crosscheck the results of lattice Monte-Carlo simulations, by comparing the expected universal behavior of the partition function zeros to the output of a lattice calculation. \begin{acknowledgments} This work was supported by the DOE grant No.\ DE-FG0201ER41195, and by the Alfred~P.~Sloan Foundation. \end{acknowledgments}
\section{Introduction} For its own sake, it is interesting to understand how quantum field theory, our so far most fundamental theory of matter, is related to kinetic theory - a description of physics in terms of momentum distributions that is closer to the physics of classical particles. The relation of quantum field theory and kinetic theory has mainly been studied in flat space-times \cite{trove.nla.gov.au/work/9783845} via the generalization of the non-relativistic Wigner transformation \cite{Hillery:1983ms} to special relativity. However, there are few publications on a generalization of this idea to general curved space-times. The early works \cite{Winter:1986da} and \cite{Calzetta:1987bw} start from an off-shell formulation of two-point functions of the real scalar field and make use of Riemann normal coordinates to obtain a Wigner transformation. Later, reference \cite{Fonarev:1993ht} proposed an off-shell transformation via exponentiated covariant derivatives lifted on the tangent bundle, while \cite{Antonsen:1997dc} also proposed to keep the Wigner transform as an operator without taking expectation values. In this paper, we consider again a covariant Wigner transformation by combining the ideas of the last two references, but this time using a formulation in terms of canonical quantum field operators that exhibits on-shell closure and that was already proposed within the longitudinal scalar gauge for the metric in \cite{Prokopec:2017ldn}. Thus, we extend our previous work to general curved space-times and derive the dynamics of spatially covariant phase-space operators of the real scalar field, i.e. quadratic operators with a space-time and a spatial momentum dependence. We derive conditions for classical states under which these phase-space operators describe stochastic distributions of classical particles. The metric is assumed to be derived from the semi-classical Einstein equations, thus fixed through expectation values of the field operators.
However, it would not change the formalism within this paper if the metric is assumed to be fixed by an unknown source without any back reaction. In the hybrid approach in \cite{Prokopec:2017ldn} we assumed a stochastic initial density matrix for these two-point functions to account for stochasticity in cosmological perturbations, and we have a similar setting in mind for the equations in this paper, although we are focussing primarily on the evolution. \par When studying how classical equations emerge from a quantum field theoretic description, there are two major limits which are a priori of different nature. The first limit concerns the classical stochastic field theory limit of quantum field theory, which approximates non-commuting field operators with commuting field operators. At the level of two-point functions, we can also rephrase this limit as having a particle number (the state-dependent part) which is much bigger than the quantum contribution originating from the non-commutativity of canonical field operators.\footnote{Note that we are dealing with bosons; for fermions a large particle number per momentum can only be achieved by coarse graining in phase-space.} It is thus the particle number and not the state-independent (vacuum) part of the propagator that dominates loop calculations in this limit. The classical stochastic field theory limit of quantum field theory should a priori be considered separately from the classical particle limit of quantum field theory, which involves an expansion in temporal and spatial gradients with respect to energies and momenta, $\Delta E \Delta t \gg \hbar$ and $\Delta p \Delta x \gg \hbar$. Such a limit is possible after subtracting quantum UV-modes or virtual particles of the Wigner-transformed two-point function, which can also be viewed as a special case of coarse-graining.
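The first of these two limits can be made concrete with a toy single-mode example (an illustration only, not part of the formalism developed below; units $\hbar=k_B=1$ are an assumption of the sketch): in a thermal state the symmetrized two-point function of a mode of frequency $\omega$ is proportional to $n(\omega)+\tfrac12$, where the state-independent $\tfrac12$ originates from the non-commutativity, so the classical stochastic regime corresponds to $n(\omega)\gg\tfrac12$:

```python
import math

def n_thermal(omega, T):
    """Bose-Einstein occupation per mode (units hbar = k_B = 1)."""
    return 1.0 / math.expm1(omega / T)

# Symmetrized two-point function of a single mode ~ (n + 1/2); the
# state-independent 1/2 encodes the non-commutativity of a, a^dagger.
for x in (0.01, 1.0, 10.0):                      # x = omega / T
    n = n_thermal(x, 1.0)
    print(f"omega/T = {x:5.2f}:  n = {n:12.5f},  quantum share = {0.5 / (n + 0.5):.4f}")
```

For $\omega\ll T$ the quantum share is negligible (classical stochastic field regime), while for $\omega\gg T$ the state-independent half dominates.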
Here, we follow a procedure that is closely related to normal-ordering and involves subtraction of the state-independent part of the two-point function in a normal neighbourhood. The spatial gradient expansion results from assuming that the remaining, state-dependent two-point functions around a collective point on a spatial hypersurface are correlated only in a small neighbourhood relative to spatial gradients taken with respect to that collective point. Thus, the spatial gradient expansion constitutes a separation of spatial scales, and it gives rise to an infinite series in the dynamical equations which needs to be truncated. The situation is different for temporal gradients, at least in a one-loop or Gaussian state approximation, since such a state allows for an on-shell closure of the involved two-point functions, which takes the form of four first-order differential equations in time. However, additionally enforcing the limit $\Delta E \Delta t \gg \hbar$ on this closed set of equations reduces it, to leading order in spatial gradients, to the dynamics of non-interacting classical particles, i.e. dynamics described by a collisionless Boltzmann equation or, in the context of curved space-time, the Vlasov equation. The effect of self-interactions is analogous to Minkowski space-time, where one-loop corrections provide a space-time dependent mass shift \cite{Berges:2015kfa}. Classical particle scattering, as it appears on the right-hand side of the Boltzmann equation, is beyond the scope of this paper. It can be obtained from the quantum field theory by including two-loop processes which are subsequently approximated with a quasi-particle picture to close on-shell. \par A direct application for the transition from quantum field theory in curved space-time to classical particle physics lies in cosmology and astrophysics.
This transition should be studied carefully: in cosmology and astrophysics, physics has many different faces, and so it happens that, for example, dark matter \cite{Ade:2015xua} is - among many other possible models - a priori believed to be equally well described by a stochastic distribution function of classical non-relativistic, non-interacting, massive particles or by a condensate of a stochastic scalar field. The condensate description is easily related to the microscopic scalar field theory, whereas the relation of classical particle dark matter to a microscopic theory is less clear. The result of this paper is not only to show that it is indeed well described by a real quantum scalar field, but also to systematically keep track of all approximations that lead to the classical particle picture. Maintaining the classical particle picture is a question of scales, as we pointed out in the paragraph above, and the natural question to ask is whether these scales can be related to other significant scales in the study of large-scale structures (i.e. the scale of non-linearity $k<k_{nl} \sim 0.3~{\rm Mpc}^{-1}$ \cite{Bernardeau:2001qr} or galactic scales $\sim 10 \text{kpc}^{-1}$). Apart from the predictability, fundamental dark matter descriptions may also lead to a transfer of calculational techniques from quantum field theory to make progress on analytical or numerical results in the studies of large-scale structures.\footnote{This point has already been pursued by deriving a Vlasov equation from Wigner transforming the non-relativistic Schr\"odinger-Poisson system for condensates \cite{Widrow:1993qq,Davies:1996kp,Uhlemann:2014npa,Garny:2017xkc,Mocz:2018ium}. However, the degrees of freedom for one-point functions suffice only to provide an independent mass density and momentum density on the microscopic level.
By taking into account coarse-graining, certain momentum distributions may be modelled by exchanging microscopic degrees of freedom below the cut-off for higher moments in phase-space. The connected part of the phase-operators considered in this paper does account for these degrees of freedom without any coarse-graining.} \par Let us give an overview of the paper. We will start from an interacting real scalar field theory that is non-minimally coupled to gravity via the Ricci scalar with an arbitrary classical metric $g_{\mu \nu}$ in a 3+1 decomposition and discuss equations of motion and renormalization in the operator formalism. We introduce four composite spatially covariant quantum field operators $\hat{F}_{\phi \phi},\hat{F}_{\Pi \phi},\hat{F}_{\phi \Pi},\hat{F}_{\Pi \Pi} $ by integrating combinations of covariantly translated canonical operators over the tangent space of spatial hypersurfaces. We rescale them to yield four dimensionally equivalent phase-space density operators $\hat{f}_1^{\pm}(x^{\mu}, p_j)\, , \hat{f}_{2,3}(x^{\mu}, p_j)$ with a dependence on the on-shell momenta. These operators are in fact scalars on the tangent bundle of spatial hypersurfaces for any time. Moreover, we discuss that the state-independent part of these operators should be subtracted in a normal neighbourhood to yield a finite energy-momentum tensor. As a first step towards the classical particle limit, we rewrite hydrodynamic variables like energy density, pressure and fluid velocity in terms of the phase-space operators $\langle \hat{f}_{1,2,3} \rangle$. Afterwards, we derive the dynamics for expectation values $\langle \hat{f}_{1,2,3} \rangle$ in a spatial gradient expansion $\Delta x \Delta p \gg \hbar$ and a one-loop approximation. We then combine two out of these four into the most important phase-space density operator $\hat{f_{1}} = \hat{f}_1^{+} + \hat{f}_1^{-}$ whose expectation value can be related to a classical Boltzmann distribution under certain conditions. 
Namely, to leading order in the classical particle limit $\Delta x \Delta p \gg \hbar$, $\Delta t \Delta E \gg \hbar$, the dynamics of the expectation value $\langle \hat{f}_{1}\rangle$ resembles the dynamics of the classical on-shell one-particle phase-space density $f_{\text{cl}}$ for gravitating particles, which is given by the truncated BBGKY hierarchy and, to leading order, by the Vlasov equation in curved space-time, \begin{eqnarray} \Big[ p^{\mu} \partial_{\mu} + p_{\mu} p^{\nu} \Gamma^{\mu}_{\; \nu i} \frac{\partial}{\partial p_i} \Big] f_{\text{cl}} (x^{\mu}, p_j) &=& 0\, , \\ p^0 (x^{\mu}, p_j) &:=& \sqrt{\big( g^{0j} p_j\big)^2 - g^{00} \big( m^2 + g^{ij} p_i p_j \big)}\, . \end{eqnarray} One could in principle capture higher n-particle distributions by integrating out the gravitational constraint fields. However, here we are mostly interested in giving a field theoretic description of cold dark matter with massive particles where two- and higher n-particle densities $f_{n} (x^0, (x^{i}, p_j)^1,(x^{i}, p_j)^2, ...)$ can be neglected \cite{Bertschinger:1993xt}. \par We work in units where $c=1$ with a mostly plus signature $(-,+,+,+)$. \pagebreak \section{Canonical operator formalism in curved space-time} As opposed to previous approaches for off-shell Wigner transformations in curved space-times \cite{Winter:1986da,Calzetta:1987bw,Fonarev:1993ht,Antonsen:1997dc}, we want to obtain on-shell phase-space densities from the very beginning by working with a Hamiltonian formulation, as we have done for the scalar, linearized longitudinal gauge in \cite{Prokopec:2017ldn}. For convenience we recap the Hamiltonian formulation at the classical level for a massive, real, self-interacting scalar field $\phi$ in curved space-time with metric $g_{\mu \nu}$ in the ADM formalism, which is for example discussed in \cite{kiefer2012quantum} or \cite{wald2010general}.
We then quantize the matter field and write down the Heisenberg equations for the canonical field operators. \subsection{ADM decomposition and equations of motion} We begin by writing down the classical action for the theory \begin{equation} S_{\text{tot}}\left[ \phi, g_{\mu \nu} \right] = S_{g} \left[ g_{\mu \nu} \right] + S_{m} \left[ \phi, g_{\mu \nu} \right]\, , \end{equation} where the matter action $S_m$ is given by \begin{equation} S_{m} \left[ \phi, g_{\mu \nu} \right] = - \int d^4 x \sqrt{-g} \left[ \frac{1}{2} g^{\mu \nu} \partial_{\mu} \phi \partial_{\nu} \phi + \frac{1}{2} \frac{m^2}{\hbar^2} \phi^2 + \frac{1}{2} \xi R \phi^2 + \frac{1}{4!} \frac{\lambda}{\hbar} \phi^4 \right]\, , \label{sMatter} \end{equation} and the gravitational action $S_g$ reads \begin{equation} S_{g} \left[ g_{\mu \nu} \right] = \frac{M_P^2}{2 \hbar} \int d^4 x \sqrt{-g} R \, \label{SemiClassGravityAction} \, . \end{equation} Here, $R$ denotes the four-dimensional Ricci scalar, we have a tree level mass parameter $m^2$ and we allow for a non-minimal coupling as well as a self-interaction given by the tree-level parameters $\xi$ and $\lambda$, respectively. We continue by slicing the space-time into spatial hypersurfaces $\Sigma_t$ that are determined by constant values of a four-scalar field $t (x^{\mu})$ whose corresponding vector field $t^{\mu}$, obeying $t^{\mu} \nabla_{\mu} t = 1$, is given by \begin{equation} t^{\mu} = N n^{\mu} + N^{\mu} \, , \quad \partial_t = t^{\mu} \partial_{\mu}\, , \end{equation} where $N$ is the lapse function and $N^{\mu}$ is the shift vector such that $n^{\mu}$ is the vector normal to the spatial hypersurface. We note that $N$ is a four-scalar given by \begin{equation} g (\partial_t , \partial_t ) = - N^2 + N_{\mu} N^{\mu} \,, \end{equation} and that \begin{equation} \partial_0 \neq \partial_t \; \text{in general} \, , \end{equation} i.e. 
we can in principle choose to work with a zero coordinate that is different from our choice of time $t$ used for the slicing into hypersurfaces. The next step is to define a projection tensor \begin{equation} \gamma_{\mu \nu} = g_{\mu \nu} + n_{\mu } n_{\nu}\, . \end{equation} This allows us to write down the extrinsic curvature associated with our choice of the normal vector field as \begin{equation} K_{\mu \nu} = - \nabla_{\nu} n_{\mu} - a_{\mu} n_{\nu} = - \gamma_{\mu}^{\; \alpha} \nabla_{\alpha} n_{\nu} = - \frac{1}{2} \mathcal{L}_n \gamma_{\mu \nu} \, , \end{equation} where $\mathcal{L}_n$ denotes the Lie derivative along $n^{\mu}$ and the acceleration is given by \begin{equation} a_{\mu} = n^{\alpha} \nabla_{\alpha} n_{\mu} = \gamma_{\mu}^{\; \nu} \nabla_{\nu} \log N \, . \end{equation} The Ricci scalar can be rewritten as \cite{straumann2012general} \begin{equation} R = {^{(3)} R} + K^2 + K_{\mu \nu} K^{\mu \nu} - \frac{2}{N} \big( \partial_t - N^{\mu} \partial_{\mu} \big) K - \frac{2}{N} {^{(3)}\nabla}_{\mu} {^{(3)}\nabla}^{\mu} N\, , \end{equation} where ${^{(3)}R}$ is the three-dimensional Ricci scalar on spatial hypersurfaces given by \begin{equation} {^{(3)}R_{\mu \nu \rho}^{\; \quad \sigma}} \big[ \gamma_{\sigma}^{\; \alpha} v_{\alpha} \big] = \big[ {^{(3)}\nabla}_{\mu} {^{(3)}\nabla}_{\nu} - {^{(3)}\nabla}_{\nu} {^{(3)}\nabla}_{\mu} \big] \big[ \gamma_{\rho}^{\; \alpha} v_{\alpha} \big]\, ,\end{equation} for some dual vector $v_{\alpha}$ and the covariant derivative on spatial hypersurfaces reads \begin{equation} {^{(3)}\nabla}_{\mu} \big[ \gamma_{\nu}^{\; \rho} v_{\rho} \big] = \gamma_{\mu}^{\; \rho} \gamma_{\nu}^{\; \sigma} \nabla_{\rho} \big[ \gamma_{\sigma}^{\; \alpha} v_{\alpha} \big]\, . \end{equation} We have already remarked that the zero coordinate $x^0$ and the scalar field $t$ are a priori not related. However, as is often done in the ADM decomposition, we choose the zero coordinate $x^0$ to coincide with $t$, \begin{equation} x^0 = t \, .
\end{equation} We then have the following component decomposition of the metric \begin{equation} g_{\mu \nu} = \begin{pmatrix} - N^2 + N^i N_i & N_i \\ N_i & \gamma_{ij} \\ \end{pmatrix}\, , \; g^{\mu \nu} = \begin{pmatrix} - N^{-2} & N^{-2} N^i \\ N^{-2} N^i & \gamma^{ij} - N^{-2} N^i N^j \\ \end{pmatrix}\, , \; \sqrt{-g} = N \gamma^{1/2} \, , \end{equation} with \begin{equation} n^{\mu} = N^{-1}(1, \, - N^i)\,, \quad n_{\mu} = (-N, \,0)\, , \end{equation} and $\gamma_{ij}$ being the induced metric on the spatial hypersurface. The action for gravity evaluates up to boundary terms\footnote{Since we are only interested in a classical approximation to gravity, these boundary terms can be safely neglected, as they do not influence the dynamics in the semi-classical approximation that we will be using. } to \begin{equation} S_{g} \left[ N, N^k, \gamma_{ij} \right] = \frac{M_P^2}{2 \hbar} \int N dt \gamma^{1/2} d^3 x \Big[{^{(3)}R} - K^2 + K_{ij } K^{ij} \Big] \, , \end{equation} where we used \begin{equation} \partial_t \log \gamma^{1/2} = - NK + {^{(3)} \nabla}_i N^i \, . \end{equation} We define the canonical momentum as a classical field configuration by means of the scalar field $t(x^{\mu})$, \begin{equation} \Pi = \frac{\delta S_m}{ \delta \big[ \partial_t \phi \big]} =\sqrt{-g} \frac{n^{\mu} }{N} \partial_{\mu} \phi = \frac{\gamma^{1/2}}{N} \big[ \partial_t - N^j \partial_j \big] \phi \, , \end{equation} and find for the classical matter action \begin{equation} S_{m} = \int N dt \gamma^{1/2} d^3 x \left[\frac{1}{2} \frac{\Pi^2}{\gamma} - \frac{1}{2}\gamma^{i j } \partial_{i} \phi \partial_{j} \phi - \frac{1}{2} \frac{m^2}{\hbar^2} \phi^2 - \frac{1}{2} \xi R \phi^2 -\frac{1}{4!} \frac{\lambda}{\hbar} \phi^4\right]\, , \end{equation} which is manifestly invariant under spatial coordinate transformations. 
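The displayed component decomposition can be checked numerically. The following sketch (plain Python, arbitrary concrete values, a diagonal $\gamma_{ij}$ for brevity; the general case works the same way) verifies that the quoted $g^{\mu\nu}$ is indeed the inverse of $g_{\mu\nu}$ once $N_i=\gamma_{ij}N^j$:

```python
# Concrete (arbitrary) values for lapse N, shift N^i, and diagonal gamma_ij.
N = 1.3
Ns = [0.2, -0.5, 0.7]                      # shift vector N^i
gam = [2.0, 0.5, 1.0]                      # gamma_ij = diag(gam)

Nd = [gam[i] * Ns[i] for i in range(3)]    # N_i = gamma_ij N^j
NN = sum(Nd[i] * Ns[i] for i in range(3))  # N_i N^i

# g_{mu nu} and g^{mu nu} exactly as displayed in the text
g = [[-N**2 + NN] + Nd] + \
    [[Nd[i]] + [gam[i] if i == j else 0.0 for j in range(3)] for i in range(3)]
ginv = [[-1.0 / N**2] + [Ns[j] / N**2 for j in range(3)]] + \
       [[Ns[i] / N**2] + [(1.0 / gam[i] if i == j else 0.0) - Ns[i] * Ns[j] / N**2
                          for j in range(3)] for i in range(3)]

# Matrix product g_{mu rho} g^{rho nu} should be the identity
prod = [[sum(g[m][k] * ginv[k][n] for k in range(4)) for n in range(4)] for m in range(4)]
assert all(abs(prod[m][n] - (1.0 if m == n else 0.0)) < 1e-12
           for m in range(4) for n in range(4))
print("g^{mu nu} is the inverse of g_{mu nu}")
```

The determinant identity $\sqrt{-g}=N\gamma^{1/2}$ follows from the Schur complement of the spatial block, $\det g = -N^2\det\gamma$, since $g_{00}-N_i\gamma^{ij}N_j=-N^2$.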
Since we will be dealing mostly with $3+1$ variables in the main parts of the paper, we would like to mention that it is not the spatial Ricci scalar $^{(3)}R$ but the four-dimensional Ricci scalar $R$ that enters the non-minimal coupling to the matter field $\phi$, and we will sometimes refrain from expanding it in a $3+1$ split in order to save space. \par We intend to quantize the matter field $\phi$ in a curved space-time with a classical metric $g_{\mu \nu}$, which is an excellent approximation whenever momenta are much smaller than the Planck mass. The quantum theory in the operator formalism is formally specified by the time-evolution or the Hamilton operator $\hat{H}$ in \eqref{hamiltonian}, the Heisenberg equations of motion \eqref{eqn:phiDot} and \eqref{eqn:piDot} as well as the equal-time commutation relation \eqref{commRel}. The Hamilton operator $\hat{H}$ is a functional of the canonical (bare) field operators $\hat{\phi}_B$ and $\hat{\Pi}_B$. Moreover, it depends on the bare couplings $m^2_B$, $\xi_B$, ${\lambda_B}$ as well as the classical, possibly stochastic metric $g_{\mu \nu}$ in the $3+1$ split, \begin{multline} \hat{H} =\int_{\Sigma_t} N \gamma^{1/2} d^3 x \Bigg[ \frac{1}{2} \gamma^{-1} \hat{\Pi}^2_B + \gamma^{-1/2}{N}^{-1} \hat{\Pi}_B {N^j} \partial_{j} \hat{\phi}_B + \frac{1}{2} \gamma^{ij} \partial_{i} \hat{\phi}_B \partial_{j} \hat{\phi}_B + \frac{1}{2} \frac{m^2_B}{\hbar^2} \hat{\phi}^2_B + \frac{1}{2}\xi_B R \hat{\phi}^2_B +\frac{1}{4!} \frac{\lambda_B}{\hbar} \hat{\phi}^4_B\Bigg] \\ - \hat{\id}\int_{\Sigma_t}N \gamma^{1/2} d^3 x \Bigg[ \Lambda_B + \kappa_B R + \alpha_{1B} R_{\mu \nu \rho \sigma} R^{\mu \nu \rho \sigma} + \alpha_{2B} R_{\mu \nu} R^{\mu \nu} + \alpha_{3B} R^2 \Bigg]\, , \label{hamiltonian} \end{multline} where we refrained from a $3+1$-split of the gravitational counterterms.
The counterterms are unavoidable in order to obtain a finite Hamiltonian, once we solve the Heisenberg equations of motion and impose the equal-time commutation relation \begin{equation} \Big[ \hat{\phi}_B (x^0, x^i)\, , \hat{\Pi}_B(x^0, \widetilde{x}^i) \Big] = i \hbar \delta^{(3)} (x^i - \widetilde{x}^i)\, , \label{commRel} \end{equation} while all other combinations of canonical fields at equal times commute. The Heisenberg equations of motion read \begin{eqnarray} \mathcal{L}_t \hat{\phi}_B &=& \partial_t \hat{\phi}_B = \frac{N}{\gamma^{1/2} } \hat{\Pi}_B + N^{j} \partial_{j} \hat{\phi}_B \, , \label{eqn:phiDot}\\ \mathcal{L}_t \hat{\Pi}_B &=& \partial_t \hat{\Pi}_B + \hat{\Pi}_B \partial_{\mu} t^{\mu} = \partial_{j} \Big[ N^{j} \hat{\Pi }_B \Big] +\partial_{i} \Big[ N \gamma^{1/2} \gamma^{ij} \partial_{j} \hat{\phi}_B \Big] \nonumber \\&& \qquad\qquad\qquad\qquad\qquad\qquad\qquad - N \gamma^{1/2} \Big[ \frac{m^2_B}{\hbar^2} \hat{\phi}_B+ \xi_B R \hat{\phi}_B+ \frac{1}{6} \frac{{\lambda_B}}{\hbar} \hat{\phi}^3_B \Big]\, .
\label{eqn:piDot} \end{eqnarray} In covariant notation we find\footnote{Note that \begin{equation} \nabla_{\mu} \Big[ \gamma^{\mu \nu} \nabla_{\nu} \hat{\phi}_B \Big] = {^{(3)}\nabla_{\mu}} {^{(3)}\nabla^{\mu}} \hat{\phi}_B + a_{\mu}{^{(3)}\nabla^{\mu}} \hat{\phi}_B = {^{(3)}\nabla_{i}} {^{(3)}\nabla^{i} \hat{\phi}_B } + {^{(3)}\nabla_{i}} \log N {^{(3)}\nabla^{i}} \hat{\phi}_B\, . \end{equation}} \begin{eqnarray} \frac{ n^{\mu}}{N} \nabla_{\mu} \hat{\phi}_B &=& \frac{\hat{\Pi}_B }{\sqrt{-g}} \, , \label{opEqPhi}\\ \nabla_{\mu} \Big[N n^{\mu} \frac{\hat{\Pi}_B}{\sqrt{-g}} \Big] &=& \nabla_{\mu} \Big[ \gamma^{\mu \nu} \nabla_{\nu} \hat{\phi}_B \Big] - \frac{m^2_B}{\hbar^2} \hat{\phi}_B- \xi_B R \hat{\phi}_B -\frac{1}{6} \frac{{\lambda_B}}{\hbar} \hat{\phi}^3_B \, , \label{opEqPi} \end{eqnarray} which is equivalent to \begin{equation} \Box \hat{\phi}_B = \nabla_{\mu} \nabla^{\mu} \hat{\phi}_B = \frac{m^2_B}{\hbar^2} \hat{\phi}_B + \xi_B R \hat{\phi}_B +\frac{1}{6} \frac{{\lambda_B}}{\hbar} \hat{\phi}^3_B \, . \label{opEq} \end{equation} In contrast to their classical counterparts, the latter equations exhibit a couple of subtleties of which we have to be aware if we want to formulate phase-space densities that are based on quantum field operators. A very important remark we would like to spell out right away is that even the renormalized version of \eqref{opEq} holds, strictly speaking, only for n-point functions at non-coincident points $x_1, x_2, ... , x_n$ in space-time. The equations of motion do not need to hold for monomials of operators in the coincident limits $x_{i} \rightarrow x_{j} $ due to anomalies that emerge from renormalization (see the summary of section 3.3 in \cite{Hollands:2007zg}) and we will comment more on this anomaly when we discuss the normal ordered energy-momentum tensor entering the semi-classical Einstein equation in the next section.
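As a sanity check of the Heisenberg equations, note that in the Minkowski limit ($N=1$, $N^i=0$, $\gamma_{ij}=\delta_{ij}$, $R=0$, $\lambda_B=0$, units $\hbar=1$) the pair \eqref{eqn:phiDot}, \eqref{eqn:piDot} reduces, for a single Fourier mode, to $\dot\phi_k=\Pi_k$, $\dot\Pi_k=-(k^2+m^2)\phi_k$. The sketch below integrates this pair numerically and confirms the dispersion relation $\omega=\sqrt{k^2+m^2}$ implied by \eqref{opEq}:

```python
import math

m, k = 0.5, 2.0                      # mass and wavenumber (units hbar = c = 1)
omega = math.sqrt(k**2 + m**2)       # expected Klein-Gordon dispersion

def rhs(phi, mom):
    # Minkowski limit of the Heisenberg pair for one Fourier mode:
    # phi_dot = Pi,  Pi_dot = -(k^2 + m^2) phi  (free field, R = 0, lambda = 0)
    return mom, -(k**2 + m**2) * phi

phi, mom, t, dt = 1.0, 0.0, 0.0, 1.0e-4
while t < 1.0:
    # one classical RK4 step for the coupled first-order system
    k1 = rhs(phi, mom)
    k2 = rhs(phi + 0.5 * dt * k1[0], mom + 0.5 * dt * k1[1])
    k3 = rhs(phi + 0.5 * dt * k2[0], mom + 0.5 * dt * k2[1])
    k4 = rhs(phi + dt * k3[0], mom + dt * k3[1])
    phi += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
    mom += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    t += dt

# With phi(0) = 1, Pi(0) = 0 the mode is cos(omega t): the pair of first-order
# equations reproduces the second-order Klein-Gordon dynamics.
assert abs(phi - math.cos(omega * t)) < 1e-6
print("mode oscillates with omega =", round(omega, 4))
```

The same closure into first-order equations is what the on-shell phase-space operators exploit in curved space-time.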
\par Renormalizing n-point functions in the coincident limit or similarly normal ordering the composite operators appearing within them is unavoidable due to the constraints that have to be imposed on the canonical operators via the commutation relation \eqref{commRel}. These constraints distinguish the quantum field theory from a stochastic field theory whose (commuting) field operators may be given any initial values (formulated in terms of all n-point functions) which are evolved via the analogue of \eqref{opEqPhi} and \eqref{opEqPi} or equivalently of \eqref{opEq} in the stochastic field theory. Apart from the dynamics, which is altered by the non-commutativity, the quantum field theory is thus also different from a stochastic field theory in the sense that its operators are constrained to yield a certain contribution to any n-point function, since the commutation relation \eqref{commRel} is independent of the state with respect to which we evaluate these operators. We can see this for example by looking at the two Wightman functions that solve \begin{eqnarray} {\Box}_{x} \langle \hat{\phi}_B (x) \hat{\phi}_B (y) \rangle &=& \frac{m^2_B}{\hbar^2} \langle \hat{\phi}_B(x) \hat{\phi}_B (y) \rangle + \xi_B R (x) \langle \hat{\phi}_B (x) \hat{\phi}_B (y) \rangle +\frac{1}{6} \frac{{\lambda_B}}{\hbar} \langle \hat{\phi}^3_B (x) \hat{\phi}_B (y) \rangle \, ,\label{eqWightman} \end{eqnarray} where the expectation value refers to an arbitrary state and the same equation holds if the differential operator including the Ricci scalar acts on the other coordinate. No matter which state we choose, the equal-time commutation relation \eqref{commRel} forces us to pick up a bi-solution (solution in both arguments) of equation \eqref{eqWightman} which is singular in the limit $x \rightarrow y$.
It also forces us to bestow the Wightman functions with an imaginary part for non-equal times, and we note that it is the same singular behavior that yields Green's functions for the Klein-Gordon operator (see e.g. \cite{DeWitt:1960fc}). Still, the state of the system can very well possess additional non-singular behavior, which is exactly the part that is suitable to be described by the kinetic equations we derive below in some approximation. A clear example of this combination of singular and non-singular contributions to the two-point function is a thermal state in Minkowski space-time, which contains the vacuum contribution as well as a finite, temperature-dependent piece (see e.g. \cite{Quiros:1999jp}). \par Moreover, the distributional nature of quantum fields forces us to renormalize parameters of the theory as soon as composite operators such as $\hat{\phi}^2(x)$ enter physical observables. This becomes apparent if we consider the energy-momentum tensor, for which we have to renormalize gravitational couplings as we will review below, and it continues to be the case at the level of self-interactions. Looking at the operator equations \eqref{opEq}, we can already see that the mass $m_B$, the non-minimal coupling parameter $\xi_B$ and the coupling ${\lambda_B}$ will get renormalized, since they have to balance the divergent pieces of the composite operator $\hat{\phi}^3$ in the coincident limit, which itself may be expressible as a formal series in ${\lambda_B}$ of composite free-field operators. A better way of writing \eqref{opEq} makes use of a normal ordering procedure that has been developed in the context of algebraic quantum field theory in curved space-time (see \cite{HOLLANDS20151,Fredenhagen:2014lda,Brunetti:2015vmh} and references therein, in particular \cite{Hollands:2004yh}).
Defining the renormalized field operator $\hat{\phi}$ and the renormalized couplings $m, \xi , \lambda$, equation \eqref{opEq} now reads \begin{equation} \Box \hat{\phi} = \frac{m^2}{\hbar^2} \hat{\phi} + \xi R \hat{\phi} +\frac{1}{6} \frac{{\lambda}}{\hbar} {: \hat{\phi}^3:} \, \, ,\label{opEqRen} \end{equation} where $": (\, . \,) :"$ denotes a normal ordering procedure whose essential ideas are explained in the review \cite{HOLLANDS20151}. The main observation is that a class of well-defined states - called Hadamard states, covering for example Gaussian states and thermal states - have the same singular behaviour of their two-point function in the coincidence limit of the free field theory. The singular behaviour is given in terms of the Hadamard parametrix ${H}(x,y)$, which is a local (normal neighborhood) bi-solution to the free Klein-Gordon equation \eqref{eqWightman} ($\lambda =0$) up to state-dependent terms that remain smooth in the coincident limit; it reads \begin{equation} H(x,y) = \frac{\hbar}{4 \pi^2} \Bigg[ \frac{u(x,y)}{\sigma(x,y) + i 0^+ \tau(x,y)} + v(x,y) \log \Big[ \mu^2 \Big( {\sigma(x,y) + i 0^+ \tau(x,y)} \Big) \Big] \Bigg]\, , \label{hadamardParametrix} \end{equation} where the bi-scalar $\sigma(x,y)$ is the signed squared geodesic distance between two points $x,y$ in space-time ($+$ for space-like and $-$ for time-like separations), $\tau(x,y)$ is the difference of some global time function between $y$ and $x$ and $\mu$ is an arbitrary energy scale. Moreover, the bi-scalars $u$ and $v$ are smooth, real-valued and depend on the squared mass as well as local geometric quantities. The bi-scalar $v$ may be written as a formal series in the signed squared geodesic distance $\sigma$ whose coefficients can be determined iteratively \cite{DeWitt:1960fc}.
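For orientation, the flat-space massless limit makes the parametrix concrete: there $u=1$, $v=0$ and, at equal times with spatial separation $r$, $\sigma=r^2$, so $H=\hbar/(4\pi^2 r^2)$. The numerical sketch below (units $\hbar=1$; the exponential regulator $e^{-\epsilon k}$ is an assumption of this illustration) checks that the free mode sum reproduces this leading Hadamard term:

```python
import math

def mode_sum_equal_time(r, eps=0.05, kmax=400.0, n=200_000):
    """Massless flat-space mode sum at equal times and spatial distance r,
       W(r) = (1/(4 pi^2 r)) * Int_0^inf dk sin(k r) e^{-eps k},
       with an exponential UV regulator (units hbar = 1). Analytically the
       regulated integral equals 1/(4 pi^2 (r^2 + eps^2))."""
    dk = kmax / n
    total = 0.0
    for i in range(n):
        kk = (i + 0.5) * dk                  # midpoint rule
        total += math.sin(kk * r) * math.exp(-eps * kk)
    return total * dk / (4.0 * math.pi**2 * r)

r = 1.0
w = mode_sum_equal_time(r)
hadamard = 1.0 / (4.0 * math.pi**2 * r**2)   # leading term u/sigma with u = 1, sigma = r^2
assert abs(w / hadamard - 1.0) < 0.01        # agreement up to the small regulator correction
print(w, hadamard)
```

In a massive or curved background the logarithmic $v$-term and the smooth state-dependent remainder discussed next are added on top of this leading $u/\sigma$ singularity.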
The two-point function $w_2(x,y)$ of any Hadamard state then locally has the form \begin{eqnarray} w_2(x,y) &=& \frac{\hbar}{4 \pi^2} \Bigg[ \frac{u(x,y)}{\sigma(x,y) + i 0^+ \tau(x,y)}\nonumber \\ && \qquad \qquad + \Big( \sum_{n=0}^N v_n(x,y) \sigma^n(x,y) \Big) \log \Big[ \mu^2 \Big( {\sigma(x,y) + i 0^+ \tau(x,y)} \Big) \Big] \Bigg] + R_{N,w} \nonumber \\ &=& H_N (x,y) +R_{N,w} \, , \label{hadamardSplit} \end{eqnarray} where $R_{N,w}$ is a smooth, $N+1$-times differentiable remainder that depends on the state. Normal ordering of a quadratic monomial of off-shell field operators with $N$ derivatives at the same space-time point is achieved by covariant point-splitting, subtracting the Hadamard parametrix ${H}_N (x,y)$, taking the coincidence limit and fixing a finite number of ambiguities which can be related to the arbitrary energy scale $\mu$. This fixing may be achieved by demanding a certain state or certain values of the renormalized couplings. The deviation from minimal normal ordering (i.e. subtracting exclusively terms that diverge in the coincidence limit) is for some monomials necessary to fulfill reasonable requirements such as stress-energy conservation \cite{Wald:1978pj}. The latter observation may be understood as a consequence of consistently defining algebraic quantum field theory in curved space-time \cite{Hollands:2004yh}. The procedure of normal ordering quadratic monomials can also be generalized to higher-order monomials and a rigorous definition is given in equation (59) in \cite{Hollands:2004yh}, where it is also discussed that normal ordering obeys the Leibniz rule for off-shell field operators. An anomaly in the free scalar field theory (i.e.
failure of the equations of motion to be satisfied for composite operators) is eventually related to the normal ordered operator $: \hat{\phi} (x) \Big[ \square_x - \frac{m^2}{\hbar^2} - \xi R(x)\Big] \hat{\phi} (x) : \, \propto \hat{\id} Q(x) $, where $Q(x)$ is a classical field constructed from purely geometrical quantities which cannot be set to zero via counterterm ambiguities (a detailed calculation via point-splitting is for example available in \cite{Hack:2010iw}, where, however, the counterterm ambiguities still need to be applied). It is the latter observation that forbids us from enforcing the Heisenberg equations for monomials in the coincidence limit. It translates to the fact that the energy-momentum tensor acquires a trace even if it was classically zero. Moreover, since the above anomaly can be written as an operator identity, it is independent of the state in which one would like to evaluate this operator and thus cannot be argued away, for example by choosing a state that has some notion of classicality. It might be negligible but is, strictly speaking, always present. The whole program of algebraic quantum field theory in curved space-time is then carried forward to include interactions as well by defining time-ordered products in order to relate free and interacting field operators via a formal power series in the coupling constant. \subsection{Semi-classical Einstein equation and stress-energy renormalization} The difficulties of quantum field theory in curved space-time are revealed in particular if we ask how to determine the classical metric $g_{\mu \nu}$. The first option is to postulate the metric to have a certain form by means of additional degrees of freedom that couple to the field $\phi$ only indirectly via gravity, neglecting any back-reaction. 
The second option is to include back-reactions via the renormalized semi-classical Einstein equation, \begin{equation} G_{\mu \nu} \big[ g_{\mu \nu} \big] = \frac{\hbar}{M_P^2}\langle : \hat{T}_{\mu \nu } \big[\hat{\phi}, g_{\mu \nu} \big] : \rangle\, \label{semiClassicalEinstein} , \end{equation} where the normal ordering regularizes the infinite contribution of composite operators such that we are dealing with finite quantities but also with renormalized couplings. The quantum expectation values are taken with respect to some yet unspecified state with possibly stochastic initial conditions (for example in order to account for cosmological setups). Ambiguities in the normal ordering prescription can be interpreted as a change of couplings of a renormalized effective action on the gravitational side. These ambiguities may be fixed by demanding that the left-hand side of the Einstein equation remains in its classical form without a cosmological constant. A standard way to carry out this renormalization is to make use of the effective action, which is defined in terms of a path integral that implicitly makes use of a preferred state. This state is unambiguous in Minkowski space but fails to be so for general curved space-times. Nonetheless, in the context of slowly varying space-times one can pick an adiabatic vacuum and calculate the renormalized effective action by methods such as dimensional regularization (this is discussed for example in the standard reference \cite{Birrell:1982ix} as well as in the more recent textbook \cite{Parker:2009uva}). 
Thus, we choose the renormalization parameters for gravity such that they contain neither a cosmological constant nor higher-order geometrical terms other than the four-dimensional Ricci scalar $R$, so that the gravitational action agrees with the classical action $S_g$ and the renormalized Planck mass is given by $M_P \approx 2.45 \times 10^{18}\, \text{GeV}$. This is another way of phrasing that corrections to the classical Einstein equations without a cosmological constant can safely be neglected at the energy scales at which we prepare experiments. \par Keeping in mind that composite operators diverge in the coincidence limit and that we have to be careful evaluating the equations of motion, the energy-momentum operator in \eqref{semiClassicalEinstein} reads formally \begin{equation} \hat{T}_{\mu \nu} = \partial_{\mu} \hat{\phi}_B \partial_{\nu} \hat{\phi}_B + \xi_B \big(g_{\mu \nu} \square - \nabla_{\mu} \nabla_{\nu} + R_{\mu \nu} \big) \hat{\phi}^2_B - \frac{g_{\mu \nu}}{2} \Big[ \partial^{\alpha} \hat{\phi}_B \partial_{\alpha} \hat{\phi}_B + \frac{m^2_B}{\hbar^2} \hat{\phi}^2_B +\xi_B R \hat{\phi}^2_B + \frac{1}{12} \frac{\lambda_B}{\hbar} \hat{\phi}^4_B \Big] \, . \label{enMomOp} \end{equation} \par Let us rewrite the energy-momentum tensor given by \eqref{enMomOp} in terms of the canonical field operators and the 3+1 decomposition \begin{equation} \hat{T}_{\mu \nu} = \hat{E} n_{\mu} n_{\nu} + \hat{P}_{\mu} n_{\nu}+ \hat{P}_{\nu} n_{\mu} + \hat{S}_{\mu \nu} \label{energyMomDecom}\, . \end{equation} One can verify that the non-trivial equation of motion of the canonical field operators is encoded in the bare spatial operator $\hat{S}_{\mu \nu}$. Without going into any details, we now take for granted that we have a normal ordering procedure available as sketched above and that this procedure includes perturbative interactions as well. 
The finite energy, momentum and stress densities with respect to a normal observer, \begin{equation} E = \langle : \hat{E} : \rangle\, , \quad P_j = \langle : \hat{P}_j : \rangle\, , \quad S_{jk} = \langle : \hat{S}_{jk} : \rangle\, , \end{equation} are then, according to \eqref{EADM2} to \eqref{SADM2}, expressible in terms of the normal ordered equal-time correlators of renormalized fields $\langle :\hat{\Pi}^2 :\rangle$, $\langle :\big({^{(3)} \nabla_k}\big)^m \hat{\phi} \big({^{(3)} \nabla_j}\big)^{2-m} \hat{\phi} :\rangle$, $\langle {:\hat{\phi}^4 :}\rangle$, ... as well as in terms of the renormalized couplings $m^2$, $\xi$ and $\lambda$. The spatial tensor $S_{jk}$ will always receive an anomalous contribution $\gamma_{kj} Q$ upon evaluating the normal ordered equation of motion, even though this anomalous contribution may be safely neglected for certain choices of state. We have\footnote{As a cross-check, we verify that also in this $3+1$-split the anomalous trace is indeed given by $Q$ for the configuration $m^2=\lambda=0$ and $\xi=1/6$, \begin{equation} \langle : \hat{T}^{\mu}_{\; \, \mu} : \rangle = S - E := S^{k}_{\;\, k} - E\,, \qquad \Big(S - E\Big) \Big|_{m^2 = \lambda =0, \, \xi = 1/6} = {Q} \;\Big|_{m^2 = \lambda =0, \, \xi = 1/6}\, . 
\end{equation}} \begin{alignat}{2} {E} & = \frac{1}{2} \Big[\gamma^{-1} \langle : \hat{\Pi}^2 : \rangle+ \langle : {^{(3)}\nabla^{k}} \hat{\phi} {^{(3)}\nabla}_{k} \hat{\phi} : \rangle - 2 \xi {^{(3)}\nabla^{k}}{^{(3)}\nabla}_{k} \langle : \hat{\phi}^2 : \rangle + \frac{m^2}{\hbar^2} \langle : \hat{\phi}^2 : \rangle \nonumber \\& \qquad - 2 \xi K \gamma^{-1/2} \Big( \langle : \hat{\phi}\hat{\Pi} : \rangle + \langle : \hat{\Pi}\hat{\phi} : \rangle \Big) +\xi \big( {^{(3)} R} + K^2 - K_{ij} K^{ij} \big) \langle :\hat{\phi}^2 : \rangle+ \frac{1}{12} \frac{\lambda}{\hbar} \langle : \hat{\phi}^4 : \rangle \Big] \, ,\label{EADM2} \\ {P}_{j} & = - \frac{1}{2} \gamma^{-1/2} \Big[ \langle :\hat{\Pi} {^{(3)}\nabla}_{j} \hat{\phi} : \rangle+ \langle : {^{(3)}\nabla}_{j} \hat{\phi} \, \hat{\Pi} : \rangle \Big] + \xi {^{(3)}\nabla}_{j} \Big[ \gamma^{-1/2} \langle : \hat{\Pi} \, \hat{\phi} : \rangle + \gamma^{-1/2} \langle : \hat{\phi} \, \hat{\Pi}: \rangle \Big] \nonumber \\ & \qquad \qquad \qquad\qquad \qquad\qquad\qquad \qquad +\xi \Big[ {^{(3)} \nabla^{m}} K_{j m} - {^{(3)} \nabla_{j}} K + K_{j}^{\; m} {^{(3)} \nabla_{m}} \Big]\langle : \hat{\phi}^2 : \rangle \, ,\label{PADM2}\\ {S}_{jk} & = \langle :{^{(3)}\nabla}_{j} \hat{\phi} {^{(3)}\nabla}_{k} \hat{\phi} : \rangle - \xi {^{(3)}\nabla}_{j}{^{(3)}\nabla}_{k} \langle : \hat{\phi}^2 : \rangle - \xi K_{j k} \gamma^{-1/2} \Big[ \langle : \hat{\Pi} \, \hat{\phi} : \rangle + \langle : \hat{\phi} \, \hat{\Pi} : \rangle \Big] + 2\xi Q \gamma_{j k} \nonumber \\ & \qquad + \xi \Big[ {^{(3)} R}_{jk} + K K_{jk} - 2 K_{j m}K^{m}_{\; k} - \mathcal{L}_{n} K_{jk} + N^{-1}{^{(3)}\nabla_{j}}{^{(3)}\nabla}_{k} N \Big]\langle :\hat{\phi}^2 :\rangle \nonumber \\ & \qquad\qquad - \frac{1}{2}\big(1 - 4\xi \big) \gamma_{j k} \Big[ - \gamma^{-1} \langle : \hat{\Pi}^2 : \rangle + \langle: {^{(3)}\nabla^{m} }\hat{\phi} {^{(3)}\nabla}_{m} \hat{\phi}: \rangle + \frac{m^2}{\hbar^2} \langle : \hat{\phi}^2 :\rangle \nonumber \\ & 
\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad +\xi R \langle : \hat{\phi}^2 :\rangle + \frac{1}{12}\frac{1 -8 \xi}{1-4\xi} \frac{\lambda}{\hbar} \langle : \hat{\phi}^4:\rangle \Big]\label{SADM2} \, . \end{alignat} It should be clear that the trace ${S}$ appearing in these equations is not to be confused with the total classical action $S_{\text{tot}}$. \par As explained below equation \eqref{semiClassicalEinstein}, we now choose the renormalized couplings on the left-hand side of the semi-classical Einstein equation (and thus the normal ordering ambiguities) such that we are dealing with classical gravity without a cosmological constant. We then have the expressions found in \cite{rezzolla2013relativistic}, \begin{multline} \begin{aligned} \frac{1}{2} \Big[{^{(3)} R} + K^2 - K_{ij} K^{ij} \Big] & = \frac{\hbar}{M_P^2} E \, ,\\ {^{(3)}\nabla_j} K^j_{\; i} - {^{(3)}\nabla_i} K & = \frac{\hbar}{M_P^2} {P}_i \, ,\\ \mathcal{L}_{Nn} K_{ij} + {^{(3)}\nabla_i} {^{(3)}\nabla_j} N -N\Big[ {^{(3)}R_{ij}} +K K_{ij} -2 K_{im}K^{m}_{\; j} \Big] &= \frac{\hbar}{M_P^2} {N} \Big[ \frac{1}{2} \big({S} - {E} \big)\gamma_{ij} - {S}_{ij} \Big] \, , \end{aligned} \end{multline} where we restricted the expressions to spatial indices for tensors in the spatial hypersurface. As we remarked in the beginning, the split of the two-point function into a state-independent part which is singular in the coincidence limit and a state-dependent non-singular part can be read as a split into a manifestly microscopic part inherited from the quantum commutation relation and a part that in principle allows for a macroscopic distribution of particles (among many other possibilities). Our goal is now to rewrite the quantities $E$, $P_i$ and $S_{ij}$ in terms of integrated phase-space densities which allow for a particle distribution interpretation in certain limits. 
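As a minimal cross-check of the Hamiltonian constraint above, one can specialize to a spatially flat FLRW slicing, $\gamma_{ij} = a^2 \delta_{ij}$ with $K_{ij} = -H \gamma_{ij}$ (the overall sign convention for $K_{ij}$ is an assumption here and drops out of the quadratic combination); with ${^{(3)}R}=0$ the left-hand side reduces to $3H^2$ and the constraint becomes the Friedmann equation $3H^2 = \hbar E/M_P^2$. A short symbolic sketch:

```python
import sympy as sp

H, a = sp.symbols('H a', positive=True)

# Spatially flat FLRW slice: gamma_ij = a^2 delta_ij; extrinsic curvature
# K_ij = -H gamma_ij (sign convention assumed; it drops out below).
gamma = sp.diag(a**2, a**2, a**2)
K = -H * gamma
K_mixed = gamma.inv() * K            # K^i_j = -H delta^i_j

trK = K_mixed.trace()                # K = -3H
KK = (K_mixed * K_mixed).trace()     # K_ij K^ij = 3H^2

# Hamiltonian constraint with ^(3)R = 0 for flat spatial slices
lhs = sp.simplify(sp.Rational(1, 2) * (trK**2 - KK))
print(lhs)  # 3*H**2, i.e. the Friedmann equation 3H^2 = hbar*E/M_P^2
```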
\pagebreak \section{Wigner operators from canonical fields \label{defWignerSect}} We have reviewed the Hamiltonian formulation for the real scalar field operator and its conjugate momentum in curved space-time with a classical metric that is given through the semi-classical Einstein equation. Up to this point we have a description of matter in terms of canonical field operators. We would like to get to a different description by retaining the operator nature and forming a set of transformed objects $\hat{f}_i(x^{\mu}\,, p_i)$ out of the canonical field operators that depend on phase-space variables, where $x^{\mu}$ is a collective space-time point and $p_i$ labels momenta distributed around it. We will construct four such operators $\hat{f}_1^{\pm}$, $\hat{f}_2$ and $\hat{f}_3$. The first two, $\hat{f}_{1}^{\pm}$, naturally combine into a single operator $\hat{f}_1$ which may be straightforwardly interpreted as a fluctuating particle distribution in phase-space under certain conditions. This means that whenever the state that the operator $\hat{f}_1$ eventually acts on can be characterized as classical, we want to interpret the operator $\hat{f}_1$ as a classical, fluctuating phase-space density in the sense of statistical mechanics. The remaining two phase-space densities $\hat{f}_2$ and $\hat{f}_3$ stem from the relativistic description of the Klein-Gordon equation and represent degrees of freedom which are absent in classical particle descriptions, so they may be interpreted as giving a small back-reaction on the operator $\hat{f}_1$ whenever we are in a regime where the contributions of $\hat{f}_1$ dominate. We remark that we see no advantage in reformulating the system in terms of these phase-space operators if the state cannot be characterized as classical. 
This requirement can be understood by looking at the dynamics of the operators $\hat{f}_{1,2,3}$, which will involve an infinite series of spatial derivatives that needs to be truncated; such a truncation is not possible if the state does not allow for a separation of scales, see also the explanations in \cite{Prokopec:2017ldn}.\par Although we have been talking about phase-space \textit{operators} so far, we should note that it is not overly important to retain the operator nature and we will soon drop it by taking expectation values. The reason we mentioned the operator nature in the first place was to make contact more easily with n-particle distributions for the operator $\hat{f}_1$, such as for example the irreducible two-particle distribution $f_1^{(2)} = \langle \hat{f}_1 \hat{f}_1 \rangle - \langle \hat{f}_1 \rangle \langle \hat{f}_1 \rangle$, which will appear naturally once we switch on interactions or take into account the classical stochastic limit of quantized gravitational perturbations (the role of these higher-order correlators, which goes under the name of the BBGKY hierarchy, is discussed for example in \cite{Bertschinger:1993xt} in the context of dark matter). However, even if we switch on interactions, we are interested in regimes where the higher connected n-point functions are considered to have a small influence on the dynamics (Gaussian state truncation or resummed 1-loop approximation \cite{Destri:2005qm}), \begin{equation} \langle : \hat{\phi}(x_1)...\hat{\phi}(x_{n+2}) : \rangle_{\text{connected}} \approx 0 \, , \quad n >2\, . \end{equation} This is the case when the self-coupling $\lambda$ multiplied by the number of particles running in the loops is small. Moreover, we want to consider a state with vanishing one-point functions \begin{equation} \langle \hat{\phi} \rangle = \langle \hat{\Pi} \rangle =0 \, . 
\end{equation} In principle there is no obstacle to including one-point functions in the formalism as well, and it is certainly worth studying the influence of condensates. Nonetheless, in order to keep the scope of the paper focused we will postpone this discussion. The reason is that densities which are obtained by Wigner transforming products of one-point functions admit a gradient expansion only after a smoothing procedure \cite{Uhlemann:2014npa} \cite{Garny:2017xkc}, which makes it necessary to deal separately with their dynamical equations and the way they react back on the connected part of the two-point function (directly via self-interactions or indirectly via gravity). Working with the assumptions just mentioned, we see that the full four-point function entering the energy-momentum tensor becomes \begin{equation} \langle : \hat{\phi}^4 : \rangle \approx 3 \langle : \hat{\phi}^2 : \rangle^2 = 3 \times \includegraphics[width=0.2\linewidth, valign=c]{8} \, . \end{equation} \par After having discussed our assumptions on the quantum state and the self-interaction, let us now gain some intuition for the phase-space operators $\hat{f}_i$ that we are after. We know that the energy-momentum tensor for a classical particle distribution in a general relativistic setting is given via second moments of the corresponding classical Boltzmann distribution $f_{\text{class}}$, \begin{equation} T_{\mu \nu}^{\text{class}} (x^{\mu}, p_i) = \int \frac{d^3 p }{\gamma^{1/2}} p_{\mu} p_{\nu} f_{\text{class}}(x^{\mu}, p_i) \, , \quad p_0 = \omega (p_i)\, , \label{classT} \end{equation} where the zero component of the four-momentum is constrained by an on-shell condition. On the other hand we see that, at least in the absence of self-interactions, the energy-momentum tensor of the scalar field theory \eqref{energyMomDecom} is given in terms of quadratic monomials of the canonical operators. 
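The Gaussian factorization $\langle : \hat{\phi}^4 : \rangle \approx 3 \langle : \hat{\phi}^2 : \rangle^2$ used above is the Isserlis/Wick theorem for a centered Gaussian. A one-variable toy check (a stand-in for the Gaussian-truncated field state, not the field theory itself):

```python
import sympy as sp

x = sp.symbols('x', real=True)
s = sp.symbols('sigma', positive=True)

# Centered Gaussian with variance sigma^2
pdf = sp.exp(-x**2 / (2 * s**2)) / sp.sqrt(2 * sp.pi * s**2)

m2 = sp.integrate(x**2 * pdf, (x, -sp.oo, sp.oo))  # analogue of <:phi^2:> -> sigma^2
m4 = sp.integrate(x**4 * pdf, (x, -sp.oo, sp.oo))  # analogue of <:phi^4:> -> 3 sigma^4

# Isserlis/Wick: fourth moment equals three times the squared second moment
print(sp.simplify(m4 - 3 * m2**2))  # 0
```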
It is suggestive to look for some kind of Fourier transform of these quadratic monomials of the canonical operators with respect to a shift variable $r^k$ whose conjugate variable $p_k$ may be interpreted as a spatial momentum, such that gradient terms in \eqref{energyMomDecom} will contain integrals over these momenta. This spatial Fourier transform is called the Wigner transform and it is well known in the special relativistic case with flat metric $\eta_{\mu \nu}$ in terms of a formulation where the zero component of the momentum variable is off-shell (see e.g. \cite{trove.nla.gov.au/work/9783845} for an introduction), \begin{equation} f_{\text{sr}} (x^{\mu},p_{\mu}) \propto \int {d^4r} e^{-\frac{i}{\hbar} r^{\mu} p_{\mu}} \Big\langle \hat{\phi} \Big(x+\frac{r}{2} \Big)\hat{\phi} \Big(x-\frac{r}{2} \Big) \Big\rangle \, . \label{WignerMink} \end{equation} It is important to note that taking moments of these densities in $p_0$ will yield more than one density, as was worked out in \cite{Garbrecht:2002pd} for FLRW space-times, where the relation to the energy density in \eqref{classT} has also been provided. \par Furthermore, fully general relativistic Wigner transforms have been proposed at the level of two-point functions for scalar fields using local expansions \cite{Winter:1986da} \cite{Calzetta:1987bw} as well as non-perturbative expressions based on an operator formulation \cite{Fonarev:1993ht} \cite{Antonsen:1997dc}. 
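To build some intuition for \eqref{WignerMink}, here is a one-dimensional quantum-mechanical toy version (a Gaussian ground-state wavefunction standing in for the two-point function; the normalization $1/(2\pi\hbar)$ and all numerical parameters are our own choices): its Wigner function is the well-known Gaussian $W(x,p) = e^{-x^2-p^2}/\pi$.

```python
import numpy as np

hbar = 1.0
r = np.linspace(-20.0, 20.0, 4001)   # shift variable of the transform
dr = r[1] - r[0]

# Ground-state Gaussian standing in for the two-point function w(x+r/2, x-r/2)
psi = lambda x: np.pi**-0.25 * np.exp(-x**2 / 2.0)

def wigner(x, p):
    """(1/(2 pi hbar)) * int dr e^{-i p r/hbar} psi(x + r/2) psi(x - r/2)."""
    integrand = np.exp(-1j * p * r / hbar) * psi(x + r / 2) * psi(x - r / 2)
    return (integrand.sum() * dr).real / (2 * np.pi * hbar)

# Closed form for this state: W(x, p) = exp(-x^2 - p^2) / pi
print(abs(wigner(0.0, 0.0) - 1.0 / np.pi) < 1e-6)            # True
print(abs(wigner(1.0, 0.5) - np.exp(-1.25) / np.pi) < 1e-6)  # True
```

Note that this toy Wigner function is everywhere positive only because the state is Gaussian; for generic states it can turn negative, which is one reason the phase-space densities below admit a particle interpretation only for states with a suitable notion of classicality.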
However, all four of these fully covariant proposals are based on descriptions that leave the zero component of the momentum off-shell, which does not make a closed set of differential equations for all involved degrees of freedom manifest if we assume a Gaussian state truncation.\footnote{The Gaussian state truncation implies that the effect of interactions via connected higher n-point functions is neglected; if the latter had a substantial effect and were taken into account, the system would not close anyway unless a quasi-particle approximation is applied.} In \cite{Prokopec:2017ldn} we were working with linearized gravitational fields in longitudinal gauge and defined equal-time densities via two-point functions of canonical field operators which did not depend on off-shell momenta. We concluded that in a large mass, non-relativistic limit only a combination of two out of the four two-point functions may be regarded as a classical Boltzmann distribution, whereas the other two two-point functions are to leading order highly oscillatory and otherwise suppressed. Although these oscillatory densities need not be observable themselves, they still have the potential to influence the classical particle density. \par In addition to the work of \cite{Winter:1986da} \cite{Calzetta:1987bw} \cite{Fonarev:1993ht} \cite{Antonsen:1997dc}, it is our goal to make the application of the kinetic representation of quantum field theory for generic curved space-times more feasible by generalizing the description in terms of canonical fields in linearized longitudinal gauge in \cite{Prokopec:2017ldn} to arbitrary metrics. This framework would firstly allow us once more to identify the quantity that comes closest to a classical Boltzmann phase-space density (which is non-trivial for a real scalar field in inhomogeneous setups, i.e. 
it is not simply the integrated version of the various off-shell densities discussed in \cite{Winter:1986da} \cite{Calzetta:1987bw} \cite{Fonarev:1993ht} \cite{Antonsen:1997dc}) and secondly, to systematically study the effect of highly oscillatory state contributions on the dynamics of the slowly varying part of the state that comes closest to a classical particle description.\footnote{The last viewpoint is similar to the analysis in \cite{Namjoo:2017nia}, in which relativistic corrections to the non-relativistic amplitude of an interacting real scalar field in flat space-time have been worked out.} Looking at \eqref{WignerMink}, we see that the main ingredient will be a covariant shift of the canonical fields, which has been worked out in \cite{Fonarev:1993ht} by treating space and time on equal footing. We apply the idea here to our canonical formulation on spatial hypersurfaces. \par Before we can present our definition, we introduce the following differential operator on the spatial hypersurfaces $\Sigma_t$, \begin{equation} {^{(3)}{\nabla}_k^{\text{H}}} := {^{(3)}\nabla}_k - r^l {^{(3)}\Gamma^{n}_{\; k l }} \frac{\partial}{\partial r^n} \, , \label{horizontalLiftCovDer} \end{equation} where ${^{(3)}\nabla}$ is the time-dependent covariant derivative on the spacelike hypersurfaces $\Sigma_t$ and ${^{(3)}\Gamma^{n}_{\; k l }}$ are the associated connection coefficients. The differential operator \eqref{horizontalLiftCovDer} is in fact the horizontal lift of the covariant derivative ${^{(3)}\nabla}$ (induced on $\Sigma_t$ via the 3+1 decomposition) to the tangent bundle $\text{T}\Sigma_t$ (see for example \cite{de2011methods} or \cite{Sarbach:2013uba} for an introduction to induced covariant derivatives on tangent bundles). This covariant derivative satisfies ${^{(3)}{\nabla}_k^{\text{H}}} r^j = 0$. \par Let $\hat{X}$ and $\hat{Y}$ each denote one of the canonical operators $\hat{\phi}$ and $\gamma^{-1/2} \hat{\Pi}$. 
If we combine a pair of canonical operators $\lbrace \hat{X}_x,\,\hat{Y}_y \rbrace$ into a single operator $\hat{X}_x \hat{Y}_y$, it will contain a state-independent and moreover UV-divergent part that can be defined in a normal neighbourhood around a collective point. In order to capture this region for operators at equal times, let $\Theta\big[r^k,l_N(x^{\mu}) \big]$ be a cut-off function for the spatial tangent space at $x^{\mu}$. It vanishes for values of $r^k$ for which the spatial geodesic $s (x^{\mu} , r^k)$ with initial tangent vector $r^k$ emanating from $x^{\mu}$ has an associated distance $||s||(x^{\mu} , r^k)$ bigger than the radius of a spatial normal neighbourhood specified by the scalar $l_N (x^{\mu})$ around the point $x^{\mu}$; this radius is much smaller than the scale provided by the curvature but much bigger than the length scale set by a typical momentum spread, $ {(\Delta p)}^{-1} \ll l_N \ll R^{-1/2}$. Now we define for each pair of spatially separated canonical operators a function that removes the state-independent part of the operator $\hat{X} \big[s (x^{\mu} , r^k/2) \big] \hat{Y} \big[s (x^{\mu} , -r^k/2) \big]$ within a normal region around it, \begin{eqnarray} H_{\phi \phi}\big[ x^{\mu}, r^k\big] &:=& H _{\lambda} \big[ y, z \big] \Big|_{y =s(x^{\mu}, r^k/2), \, z = s(x^{\mu}, -r^k/2)} \, ,\\ H_{\phi \Pi}\big[ x^{\mu}, r^k\big] &:=& \big( n_{\nu} {\nabla^{\nu}} \big)_z H_{\lambda} \big[ y, z \big] \Big|_{y =s(x^{\mu}, r^k/2),\, z = s(x^{\mu}, -r^k/2)} \, ,\\ H_{\Pi \phi}\big[ x^{\mu}, r^k\big] &:=& \big( n_{\nu} {\nabla^{\nu}} \big)_y H_{\lambda} \big[ y, z \big] \Big|_{y =s(x^{\mu}, r^k/2),\, z = s(x^{\mu}, -r^k/2)} \, ,\\ H_{\Pi \Pi}\big[ x^{\mu}, r^k\big] &:=& \big( n_{\nu} {\nabla^{\nu}} \big)_y \big( n_{\rho} {\nabla^{\rho}} \big)_z H_{\lambda} \big[ y, z \big]\Big|_{y =s(x^{\mu}, r^k/2), \, z = s(x^{\mu}, -r^k/2)}\, , \end{eqnarray} where $H_{\lambda}$ has to be computed perturbatively in the self-coupling $\lambda$ in a normal neighbourhood. 
The free-field limit $H_{\lambda=0}$ is given by \eqref{hadamardParametrix}. We define the associated, spatially covariant, Wigner operator as \begin{alignat}{2} \hat{F}_{X Y}(x^{\mu}, p_k) &:= \gamma^{1/2}(x^{\mu}) \int_{T\Sigma_t} dr^{3} e^{-\frac{i }{\hbar}r^k p_k} \Bigg[ \exp \Bigg( {\frac{r^k}{2}{^{(3)}{\nabla}_k^{\text{H}}}(x^{\mu})} \Bigg) \hat{X} (x^{\mu}) \Bigg] \Bigg[ \exp \Bigg({-\frac{r^k}{2}{^{(3)}{\nabla}_k^{\text{H}}}(x^{\mu})} \Bigg) \hat{Y} (x^{\mu}) \Bigg] \nonumber\\ & \qquad\qquad\qquad\qquad - \hat{1} \gamma^{1/2}(x^{\mu}) \int_{T\Sigma_t} dr^{3} e^{-\frac{i }{\hbar}r^k p_k} \Theta\big[ r^j , l_{\text{N}}(x^{\mu}) \big] H_{XY}\big[ x^{\mu}, r^k\big] \\ \nonumber &=: \gamma^{1/2}(x^{\mu}) \int_{T\Sigma_t} dr^{3} e^{-\frac{i }{\hbar}r^k p_k} \\ & \qquad\qquad\qquad \times {:\Bigg[ \exp \Bigg( {\frac{r^k}{2}{^{(3)}{\nabla}_k^{\text{H}}}(x^{\mu})} \Bigg) \hat{X} (x^{\mu}) \Bigg] \Bigg[ \exp \Bigg({-\frac{r^k}{2}{^{(3)}{\nabla}_k^{\text{H}}}(x^{\mu})} \Bigg) \hat{Y} (x^{\mu}) \Bigg]:}\, , \label{genDefWignerOp} \end{alignat} with $X,Y \in \lbrace\phi, \gamma^{-1/2} \Pi \rbrace$ and the corresponding expectation values \begin{equation} {F}_{\phi \phi} = \langle\hat{F}_{\phi \phi} \rangle\,, \quad {F}_{\phi \Pi} = \langle\hat{F}_{\phi \Pi} \rangle\,,\quad {F}_{\Pi \phi} = \langle\hat{F}_{\Pi \phi} \rangle\,,\quad {F}_{\Pi \Pi} = \langle\hat{F}_{\Pi \Pi} \rangle\, . 
\end{equation} In the definition \eqref{genDefWignerOp}, $H_{XY}$ subtracts the state-independent part in a normal neighbourhood around $x^{\mu}$ and may be viewed as an off-coincident normal ordering.\footnote{See also the inclusion of normal ordering for Minkowski space Wigner transformations in \cite{trove.nla.gov.au/work/9783845}, which is not restricted to a normal neighbourhood due to vanishing curvature.} We can similarly view the subtraction as a coarse-graining with respect to quantum UV-modes, and it is interesting to note that boundary terms arising from the normal neighbourhood can give rise to noise terms as they appear in stochastic inflation \cite{Starobinsky:1986fx}. Including these types of terms in kinetic equations is however beyond the scope of this paper. Although a state-independent, off-coincident normal ordering operation is discussed in the context of algebraic quantum field theory for free fields by means of the Hadamard parametrix \cite{HOLLANDS20151}, we are not aware that this definition has been extended to interacting fields at the rigorous level that algebraic quantum field theory operates on. However, we think it is important to signal that such a procedure is necessary to obtain operators that describe real particle fluctuations and exclude virtual particles. Moreover, normal ordering is clearly demanded if we take moments in the momenta $p_k$ of \eqref{genDefWignerOp}, which would otherwise result in divergent coincidence-limit two-point functions; instead we have for example \begin{equation} \frac{1}{(2 \pi \hbar)^3}\int \frac{d^3 p}{\gamma^{1/2} } \hat{F}_{\phi \phi} (x^{\mu}, p_k) = {: \hat{\phi}^2:} \,(x^{\mu})\, , \end{equation} which is crucial if we want to rewrite the energy-momentum tensor in terms of $\hat{F}_{\phi \phi}$, $\hat{F}_{\Pi \phi}$, $\hat{F}_{\phi \Pi}$, $\hat{F}_{\Pi \Pi}$. 
On the other hand, the details of the normal ordering procedure should not affect the effective description that we are after whenever infrared and ultraviolet physics decouple, which in this paper concretely amounts to neglecting, firstly, anomalous contributions of the matter two-point functions; secondly, boundary terms of the state-independent part due to the normal region; and thirdly, 2-loop corrections that would again pick up quantum contributions running in the loop. \par Let us discuss the other ingredients appearing in \eqref{genDefWignerOp}. We verify in appendix \ref{proofExpCovDer} that powers of the spatially covariant shift operator ${\frac{r^k}{2}{^{(3)}{\nabla}_k^{\text{H}}}}$ yield the following when acting on scalar densities $f(x^{\mu})$ with weight zero, \begin{equation} \Big[ { {r^k}{^{(3)}{\nabla}_k^{\text{H}}}} \Big]^n f = {r^{i_1}...r^{i_n} } {^{(3)}{\nabla}_{i_1}}... {^{(3)}{\nabla}_{i_n}} f = \Bigg[{ {r^k} \Bigg( \partial_k - {^{(3)}\Gamma^m_{\; kl}} r^l \frac{\partial}{\partial r^m} } \Bigg)\Bigg]^n f \, , \label{horLiftOnfunc} \end{equation} which allows us to consider the definition \eqref{genDefWignerOp} even without the introduction of geometrical objects on the tangent bundles $\text{T}\Sigma_t$. We also realize that any change of spatial coordinates in \eqref{genDefWignerOp} can be absorbed into the integration variable $r^k$, which then transforms as a 3-vector and leaves the measure invariant thanks to the spatial determinant factor. The conjugate momentum $p_k$ then transforms as a covariant 3-vector. \par Equation \eqref{horLiftOnfunc} also reveals that the covariant shift operators reduce to spatial translations when working with Riemann normal coordinates, since only symmetrized covariant derivatives enter. 
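Relation \eqref{horLiftOnfunc} can be checked symbolically in a simple example. Below we take the flat two-dimensional plane in polar coordinates as a toy spatial slice (the coordinates, Christoffel symbols and the generic test function are our own choices) and verify the $n=2$ case, $\big[r^k {^{(3)}\nabla_k^{\text{H}}}\big]^2 f = r^i r^j \, {^{(3)}\nabla_i} {^{(3)}\nabla_j} f$:

```python
import sympy as sp

rho, th = sp.symbols('rho theta', positive=True)
r1, r2 = sp.symbols('r1 r2', real=True)
x, r = [rho, th], [r1, r2]

# Flat 2D plane in polar coordinates: nonzero Christoffel symbols Gamma^m_{kl}
Gamma = {(0, 1, 1): -rho, (1, 0, 1): 1/rho, (1, 1, 0): 1/rho}
G = lambda m, k, l: Gamma.get((m, k, l), 0)

def D(F):
    """Horizontal-lift derivative r^k (d_k - Gamma^m_{kl} r^l d/dr^m) acting on F."""
    out = sum(r[k] * sp.diff(F, x[k]) for k in range(2))
    out -= sum(r[k] * G(m, k, l) * r[l] * sp.diff(F, r[m])
               for k in range(2) for l in range(2) for m in range(2))
    return out

f = sp.Function('f')(rho, th)
lhs = sp.expand(D(D(f)))                                  # [r^k nabla_k^H]^2 f
# r^i r^j nabla_i nabla_j f with nabla_i nabla_j f = d_i d_j f - Gamma^m_{ij} d_m f
rhs = sp.expand(sum(r[i] * r[j] * (sp.diff(f, x[i], x[j])
          - sum(G(m, i, j) * sp.diff(f, x[m]) for m in range(2)))
          for i in range(2) for j in range(2)))
print(sp.simplify(lhs - rhs))  # 0
```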
Equation \eqref{horLiftOnfunc} shows that the exponentials acting in \eqref{genDefWignerOp} translate the operators $\hat{X},\hat{Y}$ from the point $x^{\mu}$ to the points specified by the spatial geodesic emanating at $x^{\mu}$ with tangent vector $r^k$ in opposite directions. Taking expectation values of the operator $\hat{F}_{XY}$ will then contain, in the momentum-space representation, the information on how $\hat{X}$ and $\hat{Y}$ are correlated around the collective point $x^{\mu}$, where the normal ordering ensures that purely quantum UV correlations are removed up to boundary terms. However, the cut-off related to the normal neighbourhood around $x^{\mu}$ should have a small effect as long as the state-dependent field correlations are restricted to regions much smaller than the inverse curvature. The gradients with respect to the center coordinate $x^{\mu}$ then quantify how this correlation changes in space-time. \par Given the canonical operators of the real scalar field theory, we have four different Wigner operators $\hat{F}_{\phi \phi}$, $\hat{F}_{\Pi \phi}$, $\hat{F}_{\phi \Pi}$, $\hat{F}_{\Pi \Pi}$ whose dynamics are determined by the dynamics of the operators $\hat{\phi}$ and $\hat{\Pi}$. Unfortunately, the calculation is tedious and some techniques to perform it have to be introduced. Let us therefore first make some easier observations and save the difficulties for later. \par We observe that the operators $\hat{F}_{\phi \phi}$, $\hat{F}_{\Pi \phi}$, $\hat{F}_{\phi \Pi}$, $\hat{F}_{\Pi \Pi}$ are dimensionally inequivalent and not all of them are real. 
In order to rescale the Wigner operators in units of energy, we consider the free particle energy via the 3+1 decomposition \begin{equation} \omega_p = N p^0 = \Big( m^2 + \gamma^{kl} p_k p_l \Big)^{1/2}\, , \end{equation} and define the following dimensionally equivalent phase-space density operators and the corresponding expectation values \begin{alignat}{2} \label{deff+} {f}_{1}^{+} &= \langle \hat{f}_{1}^{+} \rangle &:=& \; \frac{1}{(2 \pi \hbar)^3}\frac{1}{2 \hbar} \Big[ \frac{\omega_p}{\hbar} \langle \hat{F}_{\phi \phi} \rangle + \frac{\hbar}{\omega_p} \langle \hat{F}_{\Pi \Pi} \rangle\Big] \, ,\\ \label{deff-} {f}_{1}^{-} &=\langle \hat{f}_{1}^{-} \rangle &:= &\; \frac{1}{(2 \pi \hbar)^3}\frac{i}{2\hbar} \Big[ \langle\hat{F}_{ \Pi \phi }\rangle - \langle\hat{F}_{ \phi \Pi } \rangle \Big]\,, \\ \label{deff1}{f}_{2} &= \langle\hat{f}_{2} \rangle &:=&\; \frac{1}{(2 \pi \hbar)^3}\frac{1}{2 \hbar} \Big[ \frac{\omega_p}{\hbar} \langle \hat{F}_{\phi \phi}\rangle - \frac{\hbar}{\omega_p} \langle \hat{F}_{\Pi \Pi} \rangle\Big] \, , \\ \label{deff2}{f}_{3} &= \langle\hat{f}_{3}\rangle &:=&\; \frac{1}{(2 \pi \hbar)^3} \frac{1}{2\hbar} \Big[\langle \hat{F}_{ \Pi \phi }\rangle + \langle \hat{F}_{ \phi \Pi }\rangle \Big]\, . \end{alignat} We note that the phase-space density operators $\hat{f}_1^{+}$, $\hat{f}_2$ and $\hat{f}_3$ are even functions of the momentum $p_k$ whereas $\hat{f}_1^-$ is an odd function of the momentum. From here on we will work mostly with expectation values of operators, which we indicate by omitting the hats. 
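The combination defining $\hat{f}_1^{+}$ is the field-theoretic analogue of a familiar single-oscillator identity: for one harmonic oscillator (our toy stand-in for a single field mode, with the textbook matrix elements assumed as input), $\frac{1}{2\hbar}\big[ m\Omega \langle \hat{x}^2 \rangle + \langle \hat{p}^2 \rangle/(m\Omega) \big] = n + \tfrac{1}{2}$, i.e. occupation number plus the vacuum piece that normal ordering removes:

```python
import sympy as sp

n, hbar, m, Omega = sp.symbols('n hbar m Omega', positive=True)

# Textbook matrix elements in the n-th oscillator eigenstate (assumed input)
x2 = hbar * (n + sp.Rational(1, 2)) / (m * Omega)   # <x^2>
p2 = m * Omega * hbar * (n + sp.Rational(1, 2))     # <p^2>

# Single-mode analogue of the f_1^+ combination
f1_plus = sp.simplify((m * Omega * x2 + p2 / (m * Omega)) / (2 * hbar))
print(f1_plus)  # n + 1/2
```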
\par Making use of delta functions, setting the connected four-point functions to zero, dropping the anomalous contribution and boundary terms, we can express the energy-momentum tensor in terms of the phase-space densities \eqref{deff+} to \eqref{deff2} as follows, \begin{alignat}{2} {E} \; &= \int \frac{d^3 p}{\gamma^{1/2}} \omega_p \, {f}_1^+ - 2 \xi \hbar K \int \frac{d^3 p}{\gamma^{1/2}} f_3 +\frac{\hbar^2}{8} \Big[1 -8 \xi \Big] {^{(3)}\nabla_k}{^{(3)}\nabla^k}\int \frac{d^3 p}{\gamma^{1/2}} \frac{ {f}_1^+ + {f}_2 }{\omega_p} \nonumber \\ &\qquad +\xi \frac{\hbar^2 }{2} \Big( {^{(3)}R} +K^2 - K_{ij} K^{ij} \Big)\int \frac{d^3 p}{\gamma^{1/2}} \frac{ {f}_1^+ + {f}_2 }{\omega_p} + \lambda \frac{\hbar^3}{8} \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} \frac{ {f}_1^+ + {f}_2 }{\omega_p} \Bigg]^2 \, ,\label{eMomDecom1} \\ \nonumber {P}_{k} \; &= \int \frac{d^3 p}{ \gamma^{ 1/2}} {p_k} {f}_1^{-} - \frac{\hbar}{2} \big[1-4 \xi \big] {^{(3)}\nabla_k} \int \frac{d^3 p}{ \gamma^{ 1/2}} {f}_3 \\&\qquad + \xi \hbar^2 \Big[ {^{(3)} \nabla^{m}} K_{j m} - {^{(3)} \nabla_{j}} K + K_{j}^{\; m} {^{(3)} \nabla_{m}} \Big] \int \frac{d^3 p}{\gamma^{1/2}} \frac{ {f}_1^+ + {f}_2 }{\omega_p} \, ,\label{eMomDecom2}\\ \nonumber \qquad{S}_{km} \; &= \int \frac{d^3 p}{\gamma^{ 1/2}} \frac{p_k p_m}{\omega_p}\big({f}_1^+ + {f}_2 \big) - 2 \xi \hbar K_{km}\int \frac{d^3 p}{ \gamma^{ 1/2}} {f}_3 + \frac{\hbar^2}{4} \Big[1-4 \xi \Big] {^{(3)}\nabla_k}{^{(3)}\nabla_m} \int \frac{d^3 p}{\gamma^{1/2}} \frac{ {f}_1^+ + {f}_2 }{\omega_p} \\ \nonumber & \quad -\gamma_{km} \big[1- 4 \xi \big]\Bigg[\int \frac{d^3 p}{\gamma^{1/2}} \omega_p \, {f}_2 + \frac{\hbar^2}{8} {^{(3)}\nabla^j}{^{(3)}\nabla_j}\int \frac{d^3 p}{\gamma^{1/2}} \frac{ {f}_1^+ + {f}_2 }{\omega_p} \Bigg] \\ & \quad \nonumber + 2 \xi \hbar \Big[ R \gamma_{km} + {^{(3)} R}_{km} + K K_{km} - 2 K_{k j}K^{j}_{\; m} - \mathcal{L}_{n} K_{km} + N^{-1}{^{(3)}\nabla_{j}}{^{(3)}\nabla}_{k} N \Big]\int \frac{d^3 p}{\gamma^{1/2}} \frac{ {f}_1^+ + 
{f}_2 }{\omega_p} \\ & \quad - \gamma_{km}\lambda \frac{\hbar^3}{2} \big[1-8 \xi \big] \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} \frac{ {f}_1^+ + {f}_2 }{\omega_p} \Bigg]^2 \, , \label{eMomDecom3} \end{alignat} where we used relations of the type \begin{equation} {:\partial_i \hat{\phi} \partial_j \hat{\phi}:} = \frac{1}{4} {{^{(3)}\nabla_i}{^{(3)}\nabla_j} {:\hat{\phi}^2}:} + \int \frac{d^3 p}{(2 \pi \hbar)^3} \gamma^{- 1/2} \frac{p_i p_j}{\hbar^2} \hat{F}_{\phi \phi}\, . \end{equation} We can also write down energy-momentum conservation $\nabla_{\mu} \langle : \hat{T}^{\mu \nu}: \rangle = 0$ in terms of this 3+1 decomposition (see for example \cite{rezzolla2013relativistic}), \begin{equation} \partial_t \big(\gamma^{1/2} {E} \big) + \partial_i \big[\gamma^{1/2}\big(N {P}^i - N^i {E} \big) \big] = N \gamma^{1/2} \big( S_{ij} {K}^{ij} - {P}^i \partial_i \ln N \big) \, , \end{equation} \begin{equation} \partial_t \big(\gamma^{1/2} {P}_j \big) + \partial_i \big[\gamma^{1/2}\big(N {P}^i_{\, j} - N^i {P}_j \big) \big] = N \gamma^{1/2} \big( \frac{1}{2} {S}^{ik} \partial_j \gamma_{ik} + N^{-1}{P}_i \partial_j N^i - {E} \partial_j \ln N \big) \, , \end{equation} which, however, does not get us very far on its own, since the involved quantities are still constrained via the phase-space densities $f_{i}$; we thus need to know their dynamics, which we postpone to a later section as promised. \par Let us see what we can read off from the decomposition \eqref{eMomDecom1} to \eqref{eMomDecom3}.
By writing $\hat{f}_{1} =\hat{f}_{1}^{+} + \hat{f}_{1}^{-}$ and using the parity properties of these operators, we arrive at the form \begin{alignat}{2} E &= \int \frac{d^3 p}{\gamma^{1/2}} \omega_p f_1 + \hbar^2 \mathcal{O} \big( f_1 \label{EClass} \, \big)+ \hbar^2 \mathcal{O} \big( f_2 \big)+ \mathcal{O} \big( \xi, {\lambda} \big)\, , \\ P_k &= \int \frac{d^3 p}{ \gamma^{ 1/2}} {p_k} f_1 + \hbar \, \mathcal{O} \big( f_3 \big) + \mathcal{O} \big( \xi \big) \, ,\label{PClass} \\ {S}_{k m} &= \int \frac{d^3 p}{\gamma^{ 1/2}} \frac{p_k p_m}{\omega_p} f_1 + \mathcal{O} \big(f_2 \big) + \hbar^2 \mathcal{O} \big(f_1 \big)+ \mathcal{O} \big( \xi, {\lambda} \big)\, .\label{SClass} \end{alignat} We can do the same 3+1 projection with the classical energy-momentum tensor \eqref{classT}, whose building blocks can be fluctuating phase-space densities that still need to be averaged over in a statistical context. Since we could have written down the equations \eqref{EClass} to \eqref{SClass} also at the level of renormalized operators, we can tentatively identify the operator $\hat{f}_{1}$ as a fluctuating phase-space density at the level of the normal-projected energy-momentum tensor, up to certain correction terms. The average $f_1$ is viewed as the one-particle distribution in phase-space. Let us discuss under which conditions this identification is justified. \par The first condition concerns a spatial gradient expansion in powers of the Planck constant $\hbar$, where spatial gradients with respect to the variable $x^i$ are compared to either the energy $\omega_p$ or the momentum $p_k$ within spatial momentum integrals of a phase-space density $\langle \hat{f}_i(x^{\mu}, p_j) \rangle$.
If we picture a non-relativistic setting where $m \gg |p_k|$ for any $f_i(x^{\mu}, p_j)$, we see that the gradient expansion is applicable if the energy scales satisfy $m \gg \Delta p \approx \frac{\hbar}{\Delta r} \gg \frac{\hbar}{\Delta x}$.\footnote{We commented more on this expansion in the context of dark matter in \cite{Prokopec:2017ldn}.} The relation between the short-distance difference scale $\Delta r$ and the long-distance, center-coordinate scale $\Delta x$ lies at the heart of the Wigner transformation. In the context of general relativity, it corresponds to locally homogeneous two-point functions depending only on the (covariantly generalized) difference coordinate of the involved operators, subsequently yielding only a momentum dependence around $\Delta p$, which is then corrected on larger scales $\Delta x$ via gravitational inhomogeneities (plus additional effects due to self-interactions). We emphasize once more that it depends on the state in Hilbert space whether or not these corrections are small, since the higher-order spatial gradient corrections are, strictly speaking, always present. Typical correction terms in the dynamics of phase-space operators will include \begin{equation} \mathcal{O} \big( \hbar \big) {f}_{i} (t,x, p) \sim \Big\lbrace \hbar \partial_k \frac{\partial}{\partial p_k}, \,\frac{\hbar \partial_k}{\sqrt{m^2 + \gamma^{ij} p_i p_j }} , \, \frac{\hbar}{{p^k \partial_k}}{{^{(3)}\square}}, \, ... \Big\rbrace {f}_{i} (t,x, p) \, . \label{spatialGradientApprox} \end{equation} Another obvious condition that should be satisfied in order to treat the operator $\hat{f}_{1}=\hat{f}_{1}^{+} + \hat{f}_{1}^{-}$ similarly to a classical, particle-associated fluctuating phase-space density concerns the expectation values of the two operators $\hat{f}_{2,3}$ appearing in \eqref{EClass} to \eqref{SClass}.
If we wanted an averaged energy-momentum tensor from the field theoretic description that looks almost identical to the one obtained from an averaged classical particle description, the densities $f_{2,3}$ would have to be chosen small initially. Note that the assumption that $f_1^{+}$ should be regarded as the dominant density in comparison with $f_2$ is supported by the observation that \begin{equation} \int \frac{d^3 p}{\gamma^{1/2}} \omega_p f_1 \geq \left| \int \frac{d^3 p}{\gamma^{1/2}} \omega_p f_2\right| \, , \end{equation} which follows from their very definition. The bound is saturated, for example, for homogeneous condensates $\langle \hat{\phi} \rangle (t) \propto \sin (mt/\hbar)$. On the other hand, only the density $f_1^{-}$ can play the role of a classical particle phase-space density in \eqref{PClass}, since it is the only odd density and is thus clearly favoured in comparison to $f_3$. However, even if we make the identification \eqref{EClass} to \eqref{SClass} initially, we have to be sure that the dynamics keeps the influence of the fluctuations ${f}_{2,3}$ small over time, which translates into requirements on the parameters of the theory ($m,\xi, \lambda$). In a strongly interacting regime ${\lambda} \gg 1$, for example, we clearly expect many more effects than a mass renormalization and a pressure correction, since the Gaussian state approximation breaks down and higher $n$-point functions enter the dynamics. \par Let us summarize what we have found so far. We have provided a spatially covariant set of three even and one odd quadratic equal-time operators and their expectation values \eqref{deff+} to \eqref{deff2} that have units of phase-space densities and that depend on a space-time point $x^{\mu}$ and spatial three-momentum $p_i$. There is no dependence on an off-shell zero-momentum component.
By looking at the 3+1 decomposition of the energy-momentum tensor \eqref{EClass} to \eqref{SClass}, we identified a distinguished combination of one even and the odd operator, $\hat{f}_{1} = \hat{f}_{1}^{+}+\hat{f}_{1}^{-}$, which appears to mimic a fluctuating, classical phase-space density in the sense of statistical mechanics whenever it acts on a state that is classical enough to admit a spatial gradient expansion. The remaining two even operators $\hat{f}_{2,3} $ represent degrees of freedom that stem from the fundamental relativistic, field theoretic description, and we have argued that the expectation values ${f}_{2,3} $ should be taken to be small for a purely particle-like interpretation. However, they can in principle play a significant role in the evolution of the system, and it is worth studying how such additional components from the field theoretic description correct the classical particle picture. \section{Hydrodynamic cold dark matter limit: from normal observers to fluid rest-frame} Our original motivation to identify the phase-space densities $f_{1,2,3}$ was to study cold dark matter from a field theoretic description that allows for a systematic inclusion of relativistic effects \cite{Prokopec:2017ldn}. This section is devoted to making closer contact with a cold dark matter description that is formulated in terms of hydrodynamic variables. We want to show, as a proof of concept, how the hydrodynamic variables that are used for the classical particle description can be derived from scalar quantum field theory in a certain classical limit. It turns out that this map is non-trivial already at the level of vanishing self-interaction and minimal coupling to gravity, which is why we stick to the simplified case $\lambda = \xi =0$ in this section. So far we have discussed the energy-momentum tensor in a 3+1 decomposition.
The projected quantities \begin{equation} \langle :\hat{E}: \rangle = E \,, \quad \langle: \hat{P}_k: \rangle = P_k \,, \quad \langle: \hat{S}_{kl}: \rangle = S_{kl} \, , \end{equation} that appear in the energy-momentum tensor \eqref{eMomDecom1} to \eqref{eMomDecom3} refer to the observer that is specified, from any other frame, by the normal vector $n^{\mu}$. This normal (also referred to as Eulerian) observer measures an energy density $E$, a momentum $P_i$ and a stress tensor $S_{ij}$. However, especially in the context of cosmology, it is standard to work with a different decomposition that assumes a hydrodynamic representation of energy-momentum, which relates to an observer co-moving with the fluid. The fluid is specified from any other frame by the four-velocity $u^{\mu}$ that corresponds to an observer moving with a fluid element, and the energy-momentum tensor for a hydrodynamic representation is usually written as \begin{equation} \langle : \hat{T}_{\mu \nu}: \rangle \equiv T_{\mu \nu} = \big( e + P \big) u_{\mu} u_{\nu} + P g_{\mu \nu} + \pi_{\mu \nu}\, , \quad u^{\mu} \pi_{\mu \nu} = 0 \, , \quad \pi^{\mu}_{\; \mu} = 0\, . \end{equation} In this formulation one assumes that energy and momentum are expressible in terms of a fluid with rest-frame energy density $e$, pressure $P$ and non-isotropic stresses $\pi_{ij}$. The energy density $e$ that is measured by the observer moving with the fluid is in general different from the energy density $E$ measured by the normal observer. We remark that \begin{equation} u^{\mu} = -n^{\nu}u_{\nu} \big( n^{\mu} + v^{\mu} \big) = W \big( n^{\mu} + v^{\mu} \big) \, , \quad W = \frac{1}{\sqrt{1-v^i v^j \gamma_{ij}}}\, , \end{equation} where $v^{\mu}$ is the spatial part of the four-velocity with respect to the normal vector and $W$ is the Lorentz factor \cite{rezzolla2013relativistic}.
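As a quick consistency check of the relation $u^{\mu} = W(n^{\mu}+v^{\mu})$ and of the normal-observer projections stated next, one can work in the simplest setting. The sketch below assumes flat space-time ($N=1$, $N^i=0$, $\gamma_{ij}=\delta_{ij}$), vanishing anisotropic stress, and hypothetical values for $e$, $P$ and $v^i$:

```python
import numpy as np

# Perfect-fluid consistency check in flat space-time: N = 1, N^i = 0,
# gamma_ij = delta_ij, pi_munu = 0; metric signature (-, +, +, +).
# e, P and v are hypothetical numbers.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

e, P = 2.0, 0.3                          # rest-frame energy density and pressure
v = np.array([0.2, -0.1, 0.4])           # fluid three-velocity, |v| < 1
W = 1.0 / np.sqrt(1.0 - v @ v)           # Lorentz factor

u_up = W * np.array([1.0, *v])           # u^mu = W (n^mu + v^mu)
u_dn = eta @ u_up                        # u_mu
assert np.isclose(u_up @ u_dn, -1.0)     # normalization u^mu u_mu = -1

# T_{mu nu} = (e + P) u_mu u_nu + P g_{mu nu}
T = (e + P) * np.outer(u_dn, u_dn) + P * eta

n_up = np.array([1.0, 0.0, 0.0, 0.0])    # normal observer n^mu
E  = n_up @ T @ n_up                     # E    = T_{mu nu} n^mu n^nu
Pk = -T[1:, 0]                           # P_i  = -T_{i nu} n^nu
S  = T[1:, 1:]                           # S_ij = spatial projection

assert np.isclose(E, W**2 * (e + (v @ v) * P))          # Eulerian energy
assert np.allclose(Pk, (e + P) * W**2 * v)              # Eulerian momentum
assert np.allclose(S, (e + P) * W**2 * np.outer(v, v) + P * np.eye(3))
```

The same projections hold with a general spatial metric $\gamma_{ij}$; the flat-space choice merely simplifies the index raising.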
We then have the following relations \begin{eqnarray} \label{S} E &=& W^2 \big( e + \gamma^{ij} v_i v_j P \big) + \pi_{ij}{v^i v^j} \, ,\\ \label{S_i} {P}^i &=& \big( e + P \big) W^2 v^i + \pi_{kl} v^k \gamma^{li}\, , \\ \label{S_ij} {S}^{ij} &=& (e+P)W^2 v^i v^j + P \gamma^{ij} + \gamma^{ik} \gamma^{jl} \pi_{kl} \, . \end{eqnarray} We would now like to invert the relations \eqref{S} to \eqref{S_ij}, which is in general a complicated task. It can, however, be done exactly, and we will do it for the real scalar field stress-energy tensor, where we set the self-coupling ${\lambda}$ and the non-minimal coupling to gravity $\xi$ to zero for simplicity, \begin{equation} \langle : \hat{T}_{\mu \nu} : \rangle_{{\lambda}= \xi =0} = \langle : \partial_{\mu} \hat{\phi} \partial_{\nu} \hat{\phi} : \rangle - \frac{g_{\mu \nu}}{2} \Big[ \langle : \partial^{\alpha} \hat{\phi} \partial_{\alpha} \hat{\phi}: \rangle + \frac{m^2}{\hbar^2} \langle : \hat{\phi}^2 : \rangle \Big] \, . \label{TMuNuScalar} \end{equation} It turns out that it is more convenient at this point to first work without any time-slicing and define the object \begin{equation} \chi^{\mu}_{\; \nu} := \langle : \partial^{\mu} \hat{\phi} \partial_{\nu} \hat{\phi} : \rangle \, , \end{equation} which is the key ingredient if we want to consider non-perfect fluids. The energy density is the negative of the eigenvalue whose eigenvector is the fluid four-velocity, whereas the pressure is one third of the sum of the principal stresses, which are the eigenvalues belonging to the spatial part of the energy-momentum tensor.
The task is thus to find the eigenvalues of the energy-momentum tensor \eqref{TMuNuScalar}, which amounts to finding the eigenvalues of $\chi^{\mu}_{\; \nu}$, which is, a priori, an arbitrary matrix at every point in space-time that obeys the Cayley-Hamilton equation \begin{multline} \big[\chi^4 \big]^{\mu}_{\; \nu} - \tr \big[ \chi \big] \big[\chi^3 \big]^{\mu}_{\; \nu} + \frac{1}{2} \Big[\big( \tr \big[ \chi \big] \big)^2 - \tr \big[\chi^2 \big] \Big]\big[\chi^2 \big]^{\mu}_{\; \nu} \\ - \frac{1}{6} \Big[ \big(\tr \big[ \chi\big] \big)^3 - 3 \tr \big[ \chi^2 \big] \tr \big[ \chi \big] + 2 \tr \big[ \chi^3 \big] \Big] \chi^{\mu}_{\; \nu} + \delta^{\mu}_{\; \nu} \det \chi = 0\, . \end{multline} The eigenvalues ${\sigma}$ of this matrix are then subject to the quartic equation \begin{equation} {\sigma}^4 + \widetilde{b} {\sigma}^3 + \widetilde{c} {\sigma}^2 + \widetilde{d} {\sigma} + \widetilde{e} = 0\, , \end{equation} where \begin{eqnarray} \widetilde{b} &=& - \tr \big[ \chi \big] \, , \\ \widetilde{c} &=& \frac{1}{2} \Big[\big( \tr \big[ \chi \big] \big)^2 - \tr \big[\chi^2 \big] \Big] \, , \\ \widetilde{d} &=& - \frac{1}{6} \Big[ \big(\tr \big[ \chi\big] \big)^3 - 3 \tr \big[ \chi^2 \big] \tr \big[ \chi \big] + 2 \tr \big[ \chi^3 \big] \Big] \, , \\ \widetilde{e} &=& \det \chi \, . \end{eqnarray} We express these traces in terms of two-point functions of canonical field operators in appendix \ref{traces}.
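As a sanity check of these coefficient formulas, and of the Cayley-Hamilton relation itself, one can compare them numerically with a standard characteristic-polynomial routine. The sketch below uses a random real $4\times 4$ matrix standing in for $\chi^{\mu}_{\;\nu}$ at a single space-time point:

```python
import numpy as np

# Check of the trace formulas for b, c, d, e and of the Cayley-Hamilton
# relation, on a random real 4x4 matrix standing in for chi^mu_nu.
rng = np.random.default_rng(1)
chi = rng.normal(size=(4, 4))

t1 = np.trace(chi)
t2 = np.trace(chi @ chi)
t3 = np.trace(chi @ chi @ chi)

b = -t1                                          # b = -tr[chi]
c = 0.5 * (t1**2 - t2)                           # c = (tr[chi]^2 - tr[chi^2])/2
d = -(t1**3 - 3.0 * t2 * t1 + 2.0 * t3) / 6.0    # d = -(tr^3 - 3 tr tr2 + 2 tr3)/6
e = np.linalg.det(chi)                           # e = det chi

# numpy's characteristic polynomial is sigma^4 + b sigma^3 + c sigma^2 + d sigma + e
assert np.allclose(np.poly(chi), [1.0, b, c, d, e])

# Cayley-Hamilton: chi satisfies its own characteristic equation
chi2 = chi @ chi
chi3 = chi2 @ chi
residual = chi @ chi3 + b * chi3 + c * chi2 + d * chi + e * np.eye(4)
assert np.allclose(residual, 0.0, atol=1e-9)
```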
The solutions of the quartic eigenvalue equation may be written as \begin{eqnarray} {{\sigma}}_0 &=& -\frac{\widetilde{b}}{4} - \big| \widetilde{S} \big| - \frac{1}{2} \Bigg[-4 \widetilde{S}^2 - 2 \widetilde{p} + \frac{\widetilde{q}}{\big|\widetilde{S}\big|} \Bigg]^{1/2} \, , \\ {{\sigma}}_1 &=& -\frac{\widetilde{b}}{4} - \big|\widetilde{S}\big| + \frac{1}{2} \Bigg[-4 \widetilde{S}^2 - 2 \widetilde{p} + \frac{\widetilde{q}}{\big|\widetilde{S}\big|} \Bigg]^{1/2} \, , \\ {{\sigma}}_{2,3} &=& -\frac{\widetilde{b}}{4} + \big|\widetilde{S}\big| \pm \frac{1}{2} \Bigg[-4 \widetilde{S}^2 - 2 \widetilde{p} - \frac{\widetilde{q}}{\big|\widetilde{S}\big|} \Bigg]^{1/2} \, , \end{eqnarray} in terms of the following quantities \begin{eqnarray} \widetilde{p} &:=& \frac{8 \widetilde{c}-3 \widetilde{b}^2}{8} = \frac{1}{8} \big(\tr \big[ \chi \big] \big)^2 - \frac{1}{2} \tr \big[ \chi^2 \big] \, , \\ \widetilde{q} &:=& \frac{ \widetilde{b}^3 - 4 \widetilde{b} \widetilde{c} + 8 \widetilde{d}}{8} = - \frac{1}{24} \big(\tr \big[ \chi \big] \big)^3 + \frac{1}{4} \tr \big[ \chi^2 \big] \tr \big[ \chi \big] - \frac{1}{3} \tr \big[ \chi^3 \big] \, , \\ \widetilde{S} &:=& \frac{1}{2} \Bigg[ -\frac{2}{3} \widetilde{p} + \frac{1}{3} \Bigg( \widetilde{Q} + \frac{\Delta_0}{\widetilde{Q}} \Bigg) \Bigg]^{1/2} \, , \\ \widetilde{Q} &:=& \Bigg[\frac{\Delta_1}{2} + \frac{1}{2} \Big(\Delta_1^2 - 4\Delta_0^3 \Big)^{1/2} \Bigg]^{1/3}\, , \\ \Delta_0 &:=& \widetilde{c}^2 - 3 \widetilde{b} \widetilde{d} + 12 \widetilde{e}\, ,\\ \Delta_1 &:=& 2 \widetilde{c}^3 - 9 \widetilde{b}\widetilde{c}\widetilde{d} +27 \widetilde{b}^2 \widetilde{e} + 27 \widetilde{d}^2 - 72 \widetilde{c} \widetilde{e}\, . 
\end{eqnarray} We can identify the eigenvalue that will be related to the energy density $e$ by looking at the limiting case where the full scalar field two-point function reduces to products of classical fields and thus yields a perfect fluid energy-momentum tensor \begin{equation} \big( \chi_{\text{cl}}\big)^{\mu}_{\; \nu}= \partial^{\mu} \langle \hat{\phi} \rangle \partial_{\nu} \langle \hat{\phi} \rangle = \partial^{\mu} \phi_{\text{cl}} \partial_{\nu} \phi_{\text{cl}}\, . \end{equation} In this case, all coefficients of the quartic eigenvalue equation vanish except for $\widetilde{b}$, and we find \begin{equation} {{\sigma}}_0^{\text{cl}} = \partial^{\mu} \phi_{\text{cl}} \partial_{\mu} \phi_{\text{cl}}\, , \quad {{\sigma}}_{1,2,3}^{\text{cl}} = 0\, . \end{equation} Setting \begin{equation} \langle \partial_{\mu} \hat{\phi} \partial_{\nu} \hat{\phi} \rangle - \frac{g_{\mu \nu}}{2} \Big[ \langle \partial^{\alpha} \hat{\phi} \partial_{\alpha} \hat{\phi}\rangle + \frac{m^2}{\hbar^2} \langle\hat{\phi}^2\rangle \Big] = \big( e + P \big) u_{\mu} u_{\nu} + P g_{\mu \nu} + \pi_{\mu \nu}\, , \end{equation} and taking $-{{\sigma}}_0$ as the eigenvalue corresponding to the eigenvector of the four-velocity $u^{\mu}$, we have \begin{eqnarray} e &=& - {{\sigma}}_0 + \frac{1}{2} \Big[ \langle \partial^{\alpha} \hat{\phi} \partial_{\alpha} \hat{\phi}\rangle + \frac{m^2}{\hbar^2} \langle\hat{\phi}^2\rangle \Big]\, ,\\ P &=& \frac{1}{3} \Big[- {{\sigma}}_0 - \frac{1}{2}\langle \partial^{\alpha} \hat{\phi} \partial_{\alpha} \hat{\phi}\rangle - \frac{3}{2} \frac{m^2}{\hbar^2} \langle\hat{\phi}^2\rangle \Big]\,.
\end{eqnarray} We still have to identify the four-velocity itself, which can be done by rewriting the Cayley-Hamilton equation as \begin{equation} \prod_{\mu=0}^3 \big( \chi - \id {{\sigma}}_{\mu} \big) = 0\, , \end{equation} which tells us that we have four potential eigenvectors for the eigenvalue ${{\sigma}}_0$ and we label them by the letter $\kappa$, \begin{multline} \big(u_{\kappa} \big)^{\mu} := \langle \partial^{\mu} \hat{\phi} \partial_{\nu} \hat{\phi} \rangle \langle \partial^{\nu} \hat{\phi}\partial_{\rho} \hat{\phi} \rangle \langle \partial^{\rho} \hat{\phi}\partial_{\kappa} \hat{\phi} \rangle - \big( {{\sigma}}_1 + {{\sigma}}_2 + {{\sigma}}_3 \big) \langle \partial^{\mu} \hat{\phi} \partial_{\nu} \hat{\phi} \rangle \langle \partial^{\nu} \hat{\phi}\partial_{\kappa} \hat{\phi} \rangle \\ + \big( {{\sigma}}_1 {{\sigma}}_2 + {{\sigma}}_2 {{\sigma}}_3 + {{\sigma}}_1 {{\sigma}}_3 \big) \langle \partial^{\mu} \hat{\phi} \partial_{\kappa} \hat{\phi} \rangle - {{\sigma}}_1 {{\sigma}}_2 {{\sigma}}_3 \delta^{\mu}_{\; \kappa}\, . \end{multline} However, by considering the homogeneous case for classical fields we see that the only reasonable choice is $\kappa=0$. Taking into account a normalisation factor $\alpha$, we are left with \begin{multline} u^{\mu} = \alpha \Big[ \langle \partial^{\mu} \hat{\phi} \partial_{\nu} \hat{\phi} \rangle \langle \partial^{\nu} \hat{\phi}\partial_{\rho} \hat{\phi} \rangle \langle \partial^{\rho} \hat{\phi}\partial_{0} \hat{\phi} \rangle - \big( {\sigma}_1 + {\sigma}_2 + {\sigma}_3 \big) \langle \partial^{\mu} \hat{\phi} \partial_{\nu} \hat{\phi} \rangle \langle \partial^{\nu} \hat{\phi}\partial_{0} \hat{\phi} \rangle \\ + \big( {\sigma}_1 {\sigma}_2 + {\sigma}_2 {\sigma}_3 + {\sigma}_1 {\sigma}_3 \big) \langle \partial^{\mu} \hat{\phi} \partial_{0} \hat{\phi} \rangle - {\sigma}_1 {\sigma}_2 {\sigma}_3 \delta^{\mu}_{\; 0} \Big]\, , \quad u^{\mu} u_{\mu } = -1\, . 
\end{multline} Note that the above reproduces the classical field identification (${\sigma}_i =0$), \begin{eqnarray} e_{\text{cl}} &=& - \frac{1}{2} \partial^{\mu} \phi_{\text{cl}} \partial_{\mu} \phi_{\text{cl}} + \frac{1}{2} \frac{m^2}{\hbar^2} \phi_{\text{cl}}^2 \, ,\\ P_{\text{cl}} &=& - \frac{1}{2} \partial^{\mu} \phi_{\text{cl}} \partial_{\mu} \phi_{\text{cl}} - \frac{1}{2} \frac{m^2}{\hbar^2} \phi_{\text{cl}}^2\, ,\\ u_{\text{cl}}^{\mu} &=& \alpha \big(\chi_{\text{cl}}^3 \big)^{\mu}_{\; 0} = \frac{\partial^{\mu} \phi_{\text{cl}} }{\big(-\partial^{\nu} \phi_{\text{cl}}\partial_{\nu} \phi_{\text{cl}} \big)^{1/2}}\,, \quad \alpha = \Big[- \big(\chi_{\text{cl}}^6 \big)^0_{\; 0} \Big]^{1/2} \,. \end{eqnarray} We would like to check whether our identification of energy density and pressure yields meaningful expressions beyond the special case where the two-point function reduces to classical fields. We consider the limiting case where the mass $m$ constitutes the largest energy scale, so that we can expand perturbatively with respect to it. This non-relativistic expansion with parameter $\varepsilon_p = p^2/m^2$ is another approximation on top of the gradient approximation that we explained in the previous section and which is denoted by $\varepsilon_{\hbar} \propto \hbar^2 m^{-2} {^{(3)}\nabla_x^2},...$. We find \begin{multline} \widetilde{b} = - \langle \partial^{\mu} \hat{\phi} \partial_{\mu} \hat{\phi} \rangle = \int \frac{d^3 p}{\gamma^{1/2}} \omega_p \big( {f}_1^{+} - f_2 \big) - \gamma^{ij} \int \frac{d^3 p}{\gamma^{1/2}} p_i p_j\frac{{f}_1^{+} + {f}_2 }{\omega_p} \\- \frac{\hbar^2}{4} {^{(3)} \nabla_i} {^{(3)} \nabla^i} \int \frac{d^3 p}{\gamma^{1/2}} \frac{{f}_1^{+} + {f}_2 }{\omega_p} \\ = \int \frac{d^3 p}{\gamma^{1/2}} \omega_p \big( {f}_1^{+} - {f}_2 \big) \Big[ 1 + \mathcal{O}\big( \varepsilon_{p} \big) + \mathcal{O}\big( \varepsilon_{\hbar} \big) \Big] \, .
\end{multline} However, this expansion is only meaningful if ${f}_1^{+} \neq {f}_2 $, which need not be satisfied for arbitrary times and initial conditions. Just for illustration, one can consider the classical field case in a perturbed FLRW universe. The solution will be oscillatory, \begin{equation} \phi_{\text{cl}} (x) \propto \sqrt{\rho_{\text{cl}}(x)} \cos \big[ m \int^{x^0} a d\tilde{x}^0 - v(x) - \theta \big] \,, \label{1PIPHi} \end{equation} and thus the correlator $\langle : \hat{\Pi}^2 : \rangle$ is, periodically and for short times, determined not by the scale $m$ but by a smaller energy scale, \begin{equation} {\Pi}_{\text{cl}} \propto m \sqrt{\rho_{\text{cl}}(x)} \sin \big[ ... \big] + \dot{v}_{\text{cl}}(x)\sqrt{\rho_{\text{cl}}(x)} \sin \big[... \big] - \partial_t \sqrt{\rho_{\text{cl}}(x)} \, \cos \big[ ...\big] \, . \end{equation} However, the case of classical fields is itself not problematic since we already have the exact answer for $e,P$ and $u^{\mu}$. We only wanted to make the reader aware that an expansion with respect to the scale $m$ might be more subtle than one would naively expect. Still, in order to make progress with the non-relativistic limit we will assume that \begin{equation} {f}_2 \propto \mathcal{O} \big( \varepsilon_{\lbrace p,\hbar \rbrace} \big) {f}_1^{+} \, , \end{equation} which matches one of the conditions for a pure particle limit that we formulated at the end of the last section. The symbol $\varepsilon_{\lbrace p,\hbar \rbrace}$ denotes a correction in either $\varepsilon_{p}$ or $\varepsilon_{\hbar}$. It is clear from the one-point function analysis in \eqref{1PIPHi} that this condition requires a description that goes beyond coherent states (unless an averaging procedure is employed).
Once this condition is satisfied, it makes sense to continue the expansion with respect to the scale $m$ and find \begin{equation} \frac{\widetilde{c}}{\widetilde{b}^2} = \mathcal{O} \big( \varepsilon_{\lbrace p,\hbar \rbrace} \big) \, , \quad \frac{\widetilde{d}}{\widetilde{b}^3} = \mathcal{O} \big( \varepsilon_{\lbrace p,\hbar \rbrace }^2 \big) \, , \quad \frac{\widetilde{e}}{\widetilde{b}^4} = \mathcal{O} \big( \varepsilon_{ \lbrace p,\hbar \rbrace }^{5/2} \big) \, , \end{equation} where we used \begin{multline} \langle : \partial^{\mu} \hat{\phi} \partial_{\nu} \hat{\phi} :\rangle \langle : \partial^{\nu} \hat{\phi}\partial_{\mu} \hat{\phi} : \rangle = \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} \omega_p \Big({f}_1^{+} - {f}_2 \Big) \Bigg]^2 \\ - 2 \gamma^{ij} \Bigg[\frac{\hbar}{2}{^{(3)} \nabla_i} \int \frac{d^3 p}{\gamma^{1/2}}{f}_3 + \int \frac{d^3 p}{\gamma^{1/2}} p_i {f}_1^{-} \Bigg] \Bigg[\frac{\hbar}{2}{^{(3)} \nabla_j} \int \frac{d^3 p}{\gamma^{1/2}} {f}_3 - \int \frac{d^3 p}{\gamma^{1/2}} p_j {f}_1^{-} \Bigg] \\ +\gamma^{jk} \gamma^{il}\Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} p_i p_j\frac{{f}_1^{+} +{f}_2 }{\omega_p} + \frac{\hbar^2}{4} {^{(3)} \nabla_i} {^{(3)} \nabla_j} \int \frac{d^3 p}{\gamma^{1/2}} \frac{{f}_1^{+} + {f}_2 }{\omega_p} \Bigg] \\ \times \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} p_k p_l \frac{{f}_1^{+} + {f}_2 }{\omega_p} + \frac{\hbar^2}{4} {^{(3)} \nabla_k} {^{(3)} \nabla_l} \int \frac{d^3 p}{\gamma^{1/2}} \frac{{f}_1^{+} + {f}_2 }{\omega_p} \Bigg] \, , \end{multline} and similar expressions for the cubic trace and the determinant.
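This hierarchy of coefficients means the quartic possesses one root that dominates in magnitude, close to $-\widetilde{b}$, with first correction $\widetilde{c}/\widetilde{b}$. A rough numerical illustration with hypothetical eigenvalues, one large and three small:

```python
import numpy as np

# One dominant eigenvalue (hypothetical numbers): the exact dominant root of
# the quartic agrees with -b + c/b up to second-order corrections.
big, small = 10.0, [0.02, -0.015, 0.03]       # one large and three small roots
coeffs = np.poly(np.diag([big, *small]))      # [1, b, c, d, e]
b, c = coeffs[1], coeffs[2]

roots = np.roots(coeffs)
sigma0_exact = roots[np.argmax(np.abs(roots))].real
sigma0_approx = -b + c / b                    # zeroth order -b, plus c/b

assert abs(sigma0_exact - sigma0_approx) < 1e-3   # second-order accurate
assert abs(sigma0_exact + b) > 1e-2               # -b alone is not enough
```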
We can now perturb the quartic equation for the eigenvalue ${\sigma}_0$ around its zero order solution $\overline{{\sigma}_0} = - \widetilde{b}$ and find \begin{alignat}{2} {\sigma}_0 &= - \widetilde{b} + \frac{\widetilde{c}}{\widetilde{b}}+ \mathcal{O} \big( \varepsilon_{\lbrace p, \hbar \rbrace}^2 \big)= \langle : \partial^{\mu} \hat{\phi} \partial_{\mu} \hat{\phi} : \rangle - \frac{1}{2} \frac{\langle : \partial^{\mu} \hat{\phi} \partial_{\mu} \hat{\phi} : \rangle^2 - \langle : \partial^{\mu} \hat{\phi} \partial_{\nu} \hat{\phi} : \rangle \langle : \partial^{\nu} \hat{\phi}\partial_{\mu} \hat{\phi} : \rangle}{\langle : \partial^{\mu} \hat{\phi} \partial_{\mu} \hat{\phi} : \rangle} + \mathcal{O} \big( \varepsilon_{\lbrace p, \hbar \rbrace}^2 \big) \\ &=-\gamma^{-1} \langle : \hat{\Pi} \hat{\Pi} : \rangle + \langle : \hat{\Pi} \hat{\Pi} : \rangle^{-1} \langle :\hat{\Pi} \partial_i \hat{\phi} : \rangle \gamma^{ij} \langle : \partial_j \hat{\phi} \hat{\Pi} : \rangle + \mathcal{O} \big( \varepsilon_{\lbrace p, \hbar \rbrace}^2 \big) \\ &= -\int \frac{d^3 p}{\gamma^{1/2}} \omega_p \Big({f}_1^{+} - {f}_2 \Big) + \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} \omega_p \Big({f}_1^{+} - {f}_2 \Big)\Bigg]^{-1 } \\ & \times\gamma^{ij} \Bigg[\frac{\hbar^2}{4}{^{(3)} \nabla_i} \int \frac{d^3 p}{\gamma^{1/2}} {f}_3 {^{(3)} \nabla_j} \int \frac{d^3 p}{\gamma^{1/2}} {f}_3 - \int \frac{d^3 p}{\gamma^{1/2}} p_i {f}_1^{-} \int \frac{d^3 p}{\gamma^{1/2}} p_j {f}_1^{-} \Bigg] + \mathcal{O} \big( \varepsilon_{\lbrace p, \hbar \rbrace}^2 \big)\, . 
\end{alignat} When we now calculate the energy density up to this order, we find that the leading order contribution containing ${f}_2 $ drops out \begin{multline} e = \int \frac{d^3 p}{\gamma^{1/2}} \omega_p {f}_1^{+} - \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} \omega_p \Big({f}_1^{+} - {f}_2 \Big)\Bigg]^{-1}\gamma^{ij} \Bigg[\frac{\hbar^2}{4}{^{(3)} \nabla_i} \int \frac{d^3 p}{\gamma^{1/2}} {f}_3 {^{(3)} \nabla_j} \int \frac{d^3 p}{\gamma^{1/2}} {f}_3 \\ - \int \frac{d^3 p}{\gamma^{1/2}} p_i {f}_1^{-} \int \frac{d^3 p}{\gamma^{1/2}} p_j {f}_1^{-} \Bigg] + \frac{\hbar^2}{8} {^{(3)} \nabla_j}{^{(3)} \nabla^j} \int \frac{d^3 p}{\gamma^{1/2}} \frac{{f}_1^{+} + {f}_2 }{\omega_p} + \mathcal{O} \big( \varepsilon_{\lbrace p, \hbar \rbrace}^2 \big)\, . \end{multline} Considering the pressure, we find that the dependence on the density ${f}_2 $ is still present to leading order, \begin{multline} P = \frac{1}{3}\int \frac{d^3 p}{\gamma^{1/2}} \gamma^{ij} \frac{p_i p_j}{\omega_p} \Big({f}_1^{+} + {f}_2 \Big) -\int \frac{d^3 p}{\gamma^{1/2}} \omega_p {f}_2 \\+\frac{1}{3} \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} \omega_p \Big({f}_1^{+} - {f}_2 \Big)\Bigg]^{-1}\gamma^{ij} \Bigg[\frac{\hbar^2}{4}{^{(3)} \nabla_i} \int \frac{d^3 p}{\gamma^{1/2}} {f}_3 {^{(3)} \nabla_j} \int \frac{d^3 p}{\gamma^{1/2}} {f}_3 \\ - \int \frac{d^3 p}{\gamma^{1/2}} p_i {f}_1^{-} \int \frac{d^3 p}{\gamma^{1/2}} p_j {f}_1^{-} \Bigg] - \frac{\hbar^2}{24} {^{(3)} \nabla_j}{^{(3)} \nabla^j} \int \frac{d^3 p}{\gamma^{1/2}} \frac{{f}_1^{+} + {f}_2 }{\omega_p} + \mathcal{O} \big( \varepsilon_{\lbrace p, \hbar \rbrace}^2 \big) \,. \end{multline} Similarly, let us compute the four-velocity to next-to-leading order. 
This can be done by considering \begin{multline} u^{\mu} = \alpha \Big[ \langle : \partial^{\mu} \hat{\phi} \partial_{\nu} \hat{\phi} :\rangle \langle : \partial^{\nu} \hat{\phi}\partial_{\rho} \hat{\phi} :\rangle \langle : \partial^{\rho} \hat{\phi}\partial_{0} \hat{\phi} :\rangle \\ - \big( \langle : \partial^{\nu} \hat{\phi} \partial_{\nu} \hat{\phi} :\rangle - {\sigma}_0 \big) \langle : \partial^{\mu} \hat{\phi} \partial_{\nu} \hat{\phi} :\rangle \langle : \partial^{\nu} \hat{\phi}\partial_{0} \hat{\phi} :\rangle + \mathcal{O}\big( \varepsilon_{\lbrace p, \hbar \rbrace}^{2} \big) \Big]\, , \quad u^{\mu} u_{\mu } = -1\, . \end{multline} We first compute the non-normalized Lorentz factor, \begin{multline} \frac{W}{\alpha} = - \frac{n_{\nu}u^{\nu}}{\alpha} = -N \gamma^{-3} \langle : \hat{\Pi} \hat{\Pi} :\rangle^3 -N^k \gamma^{-5/2}\langle : \hat{\Pi} \partial_k \hat{\phi} :\rangle \langle : \hat{\Pi} \hat{\Pi} :\rangle^2 \\ + N \gamma^{-2} \langle : \hat{\Pi} \hat{\Pi} :\rangle \langle : \hat{\Pi} \partial_i \hat{\phi} :\rangle \gamma^{ij} \langle : \partial_j \hat{\phi} \hat{\Pi} :\rangle + N\gamma^{-2} \gamma^{ij} \langle : \partial_i \hat{\phi} \partial_j \hat{\phi} :\rangle \langle : \hat{\Pi} \hat{\Pi} :\rangle^2+ \mathcal{O}\big( \varepsilon_{\lbrace p, \hbar \rbrace}^{3/2} \big)\, .
\end{multline} Next, we compute \begin{multline} \frac{u^k}{\alpha} = N^k \gamma^{-3} \langle : \hat{\Pi} \hat{\Pi} :\rangle^3+ \frac{N^k N^s}{N} \gamma^{-5/2} \langle : \hat{\Pi} \hat{\Pi} :\rangle^2 \langle : \hat{\Pi} \partial_s \hat{\phi} :\rangle + \gamma^{kl}N\langle : \partial_l \hat{\phi} \hat{\Pi} :\rangle \gamma^{-2} \langle : \hat{\Pi} \hat{\Pi} :\rangle^2\\ + \gamma^{kl} N^s \gamma^{-2} \langle :\partial_l \hat{\phi} \hat{\Pi} :\rangle\langle : \hat{\Pi} \hat{\Pi} :\rangle \langle : \hat{\Pi} \partial_s \hat{\phi} :\rangle - N^k\gamma^{-2} \langle : \hat{\Pi} \hat{\Pi} :\rangle\langle : \hat{\Pi} \partial_i \hat{\phi} :\rangle \gamma^{ij}\langle : \partial_j \hat{\phi} \hat{\Pi} :\rangle \\ - N^k\gamma^{-2} \langle : \hat{\Pi} \hat{\Pi} :\rangle^2 \langle : \partial_i \hat{\phi} \partial_j \hat{\phi} :\rangle \gamma^{ij}+ \mathcal{O}\big( \varepsilon_{\lbrace p, \hbar \rbrace}^{3/2} \big)\, . \end{multline} Finally, we can calculate the spatial part of the four-velocity without the need to explicitly calculate the normalization factor $\alpha$, \begin{multline} v^k = \frac{\alpha^{-1}u^{k} }{\alpha^{-1} W} + \frac{N^k}{N} = - \gamma^{kl} \frac{\langle : \partial_l \hat{\phi} \hat{\Pi} : \rangle}{\langle : \hat{\Pi} \hat{\Pi} : \rangle} + \mathcal{O}\big( \varepsilon_{\lbrace p, \hbar \rbrace}^{3/2} \big) \\ = - \gamma^{kl} \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} \omega_p \Big({f}_1^{+} -{f}_2 \Big)\Bigg]^{-1} \Bigg[\frac{\hbar}{2}{^{(3)} \nabla_l} \int \frac{d^3 p}{\gamma^{1/2}} {f}_3 - \int \frac{d^3 p}{\gamma^{1/2}} p_l {f}_1^{-} \Bigg] + \mathcal{O}\big( \varepsilon_{\lbrace p, \hbar \rbrace}^{3/2} \big) \, , \end{multline} and the normalized Lorentz factor is read off in the standard way \begin{equation} W = 1 + \frac{1}{2} v^i v^j \gamma_{ij} + \mathcal{O}\big( v^4 \big) = 1 + \frac{1}{2} \gamma^{kl} \frac{\langle : \partial_k \hat{\phi} \hat{\Pi} : \rangle\langle : \partial_l \hat{\phi} \hat{\Pi} : \rangle}{\langle : \hat{\Pi} \hat{\Pi} : \rangle^2}
+ \mathcal{O}\big( \varepsilon_{\lbrace p, \hbar \rbrace}^{2} \big)\, . \end{equation} In order to recover expressions in terms of a purely classical particle distribution, we need to assume that the $\varepsilon_{\hbar}$ corrections are negligible with respect to the $\varepsilon_{p}$ corrections and that our initial state allows for a hierarchy $m \gg \Delta p \gg \frac{\hbar}{\Delta x} $. Once we put forward the identification of the even ($ {f}_1^{+} $) and odd (${f}_1^{-} $) phase-space densities that we discussed in the previous section, we end up with the following expressions, \begin{multline} e = \int \frac{d^3 p}{\gamma^{1/2}} \omega_p {f}_1^{+} + \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} \omega_p {f}_1^{+} \Bigg]^{-1}\gamma^{ij} \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} p_i {f}_1^{-} \int \frac{d^3 p}{\gamma^{1/2}} p_j {f}_1^{-} \Bigg] +\mathcal{O} \big( \varepsilon_\hbar \big) + \mathcal{O} \big( \varepsilon_p^2\big) \\ = \int \frac{d^3 p}{\gamma^{1/2}} \omega_p {f}_{1} + v^k \int \frac{d^3 p}{\gamma^{1/2}} p_k {f}_{1} +\mathcal{O} \big( \varepsilon_\hbar \big) + \mathcal{O} \big( \varepsilon_p^2\big) \, , \end{multline} \begin{multline} P = \frac{1}{3}\int \frac{d^3 p}{\gamma^{1/2}} \gamma^{ij} \frac{p_i p_j}{\omega_p} {f}_1^{+} \\ -\frac{1}{3} \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} \omega_p {f}_1^{+} \Bigg]^{-1}\gamma^{ij} \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} p_i {f}_1^{-} \int \frac{d^3 p}{\gamma^{1/2}} p_j {f}_1^{-} \Bigg] +\mathcal{O} \big( \varepsilon_\hbar \big) + \mathcal{O} \big( \varepsilon_p^2\big) \\ = \frac{1}{3}\int \frac{d^3 p}{\gamma^{1/2}} \gamma^{ij} \frac{p_i p_j}{m} {f}_{1} -\frac{1}{3} v^k\int \frac{d^3 p}{\gamma^{1/2}} p_k {f}_{1} +\mathcal{O} \big( \varepsilon_\hbar \big) + \mathcal{O} \big( \varepsilon_p^2\big) \,, \end{multline} \begin{multline} v_k = \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} \omega_p {f}_1^{+} \Bigg]^{-1} \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} p_k {f}_1^{-} \Bigg] +\mathcal{O} \big( \varepsilon_\hbar \big) + 
\mathcal{O} \big( \varepsilon_p^{3/2}\big) \\ = \Bigg[ \int \frac{d^3 p}{\gamma^{1/2}} {f}_{1} \Bigg]^{-1} \int \frac{d^3 p}{\gamma^{1/2}} \frac{p_k}{m} {f}_{1} +\mathcal{O} \big( \varepsilon_\hbar \big) + \mathcal{O} \big( \varepsilon_p^{3/2}\big) \, . \end{multline} These expressions are identical to those one would obtain for a distribution of classical non-relativistic particles in curved space-time. The above identification shows once more that such classical distributions may be represented by two-point functions of real scalar field operators via the Wigner transformation \eqref{genDefWignerOp} and the subsequent recombination \eqref{deff+} to \eqref{deff2}, always provided we are given a state that behaves classically enough. An example of such a state was given in \cite{Pirk:1989bs} for an FLRW universe with a particular vacuum choice. We identify the correlators $f_{2,3}$ in our paper with combinations of the squeezing contributions $\langle \hat{a}_k \hat{a}_{-k} \rangle$ and $\langle \hat{a}_k^{\dagger} \hat{a}_{-k}^{\dagger} \rangle$ in their paper ($\hat{a}_k$ and $\hat{a}^{\dagger}_k$ denote annihilation and creation operators, respectively), which they eventually dropped. The density $f_1$ in our paper, which approximates a classical particle phase-space density, is expressible in terms of $\langle \hat{a}_k^{\dagger} \hat{a}_{k}\rangle$ in their paper and gives an intuitive interpretation of $\hat{f}_1$ as a counting operator. The state-independent (or, in this setting, vacuum) contributions in \cite{Pirk:1989bs} were removed by hand, which corresponds to the normal ordering prescription.
Let us remark that the starting point in \cite{Pirk:1989bs} is a phase-space description that makes use of an off-shell momentum variable, which in our opinion makes it difficult to take the other degrees of freedom encoded in $f_{2,3}$ into account (they were dropped in \cite{Pirk:1989bs}, as they are in the review literature \cite{trove.nla.gov.au/work/9783845} for Minkowski space-time). \section{Dynamics of phase-space densities} In the previous sections, we have interpreted the averaged phase-space densities \eqref{deff+} to \eqref{deff2} solely through the energy-momentum tensor, without self-interactions and without non-minimal couplings to the geometry. The goal of this section is to work out their dynamics in a spatial gradient approximation, including the non-minimal coupling to the curvature and even self-interactions in a one-loop approximation, where we assume, for simplicity, that the one-point functions $\langle \hat{\phi} \rangle$ and $\langle \hat{\Pi} \rangle$ are absent.\footnote{ One-point functions are straightforwardly included by shifting the canonical operators. This shift is necessary since the gradient approximation cannot simply be applied to Wigner transforms of products of classical fields without a smoothing procedure. We discuss this also in \cite{Prokopec:2017ldn} and list the references where such a procedure is pursued.} Another point we must stress is that we will not include anomalous contributions in the following kinetic equations. That is, we assume that the contributions which remain after the equations of motion have been applied to the terms that normal order the phase-space operators in \eqref{genDefWignerOp} are negligible, which is well justified for the energy scales we are interested in, since such anomalous contributions are of order $R M_P^{-2}$ at the level of the energy-momentum tensor.
Moreover, we assume contributions that result from the boundary of the normal neighbourhood to be negligible, which is a requirement on the state that goes hand in hand with the spatial gradient expansion. \par The dynamics of the averaged phase-space densities $f_{1}^{\pm}$, $f_2$, $f_3$ given in \eqref{deff+} to \eqref{deff2} can be derived by first considering the expectation values ${F}_{\phi \phi}$, ${F}_{\phi \Pi}$, ${F}_{\Pi \phi}$, ${F}_{\Pi \Pi}$ given via \eqref{genDefWignerOp}, acting with a time derivative, commuting it with the exponential shift operators, using the equations of motion for the canonical fields, commuting the resulting operators back, and rewriting them in such a way that they act on the expectation values ${F}_{\phi \phi}$, ${F}_{\phi \Pi}$, ${F}_{\Pi \phi}$, ${F}_{\Pi \Pi}$, which is the most difficult part of the calculation. Finally, we rewrite everything in terms of the dimensionally rescaled quantities \eqref{deff+} to \eqref{deff2}. The spatial gradient approximation truncates the infinite series of spatial derivatives that results from commuting various differential operators. On the other hand, we keep all time derivatives and thus all degrees of freedom. Since the calculation involves a number of lengthy expressions, it is unavoidable to introduce some notation. Many technical details of this procedure are deferred to Appendices \ref{defAndId} to \ref{dynWig}.
\par First, we define \begin{eqnarray} \hat{u}^{\pm} &:=& \exp \Bigg[{\pm \frac{r^k}{2}{^{(3)}{\nabla}^H_k}} \Bigg] \hat{\phi} \, , \quad \; \qquad \hat{v}^{\pm} := \exp \Bigg[{\pm \frac{r^k}{2}{^{(3)}{\nabla}^H_k}} \Bigg] \Big[\gamma^{-1/2} \hat{\Pi} \Big] \, , \\ N^{\pm} &:= & \exp \Bigg[{\pm \frac{r^k}{2}{^{(3)}{\nabla}^H_k}} \Bigg] N \, , \quad (NK)^{\pm} := \exp \Bigg[{\pm \frac{r^k}{2}{^{(3)}{\nabla}^H_k}} \Bigg] (NK) \,, \\ \big[ N {:\hat{\phi}^2:} \big]^{\pm} &:= &\exp \Bigg[{\pm \frac{r^k}{2}{^{(3)}{\nabla}^H_k}} \Bigg]\big[ N {:\hat{\phi}^2:} \big] \, , \quad (NR)^{\pm} := \exp \Bigg[{\pm \frac{r^k}{2}{^{(3)}{\nabla}^H_k}} \Bigg] (NR) \,, \end{eqnarray} where $R$ is the four-dimensional Ricci scalar. Moreover, we will need a couple of differential operators denoted by $\mathcal{T}^{\pm}_*$, $\mathcal{M}^{\pm}_*$, $\big({^{(3)} \square}\big)^{\pm}_*$ and $({^{(3)} \nabla} N)^{\pm}_*$, which are calculated in a gradient approximation in Appendix \ref{commis} based on the general identities in Appendix \ref{defAndId}.
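To see schematically how these shifted quantities feed the gradient expansion (a sketch on our part; the precise signs and operator orderings are fixed in Appendix \ref{commis}), expand the exponential shift operators to first order, \begin{equation} N^{\pm} = N \pm \frac{r^k}{2} \, {^{(3)}{\nabla}^H_k} N + \mathcal{O}\big( r^2 \big) \, , \qquad N^{+} + N^{-} = 2 N + \mathcal{O}\big( r^2 \big) \, , \qquad N^{+} - N^{-} = r^k \, {^{(3)}{\nabla}^H_k} N + \mathcal{O}\big( r^3 \big) \, . \end{equation} Under the Fourier weight $e^{-\frac{i}{\hbar} r^k p_k}$ each factor of $r^k$ can be traded for a momentum derivative $i \hbar \, \partial / \partial p_k$ acting on the transform, which is the schematic origin of the $\hbar$-suppressed terms such as $N_{;k} \, \partial/\partial p_k$ in the kinetic equations below.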
We find the following expressions, up to anomalous contributions, boundary terms and higher-order correlators which are all assumed to be small, \begin{multline} \gamma^{1/2} \partial_t \Big[ \gamma^{-1/2} \langle \hat{F}_{\phi \phi} \rangle \Big] = \frac{1}{2}\int_{T\Sigma_t} dr^{3} \gamma^{1/2} e^{-\frac{i }{\hbar}r^k p_k} \big( N^{+} +N^{-}\big) \langle{: \hat{v}^+ \hat{u}^- +\hat{u}^+ \hat{v}^- :}\rangle \\ + \frac{1}{2} \int_{T\Sigma_t} dr^{3} \gamma^{1/2} e^{-\frac{i }{\hbar}r^k p_k} \big( N^{+} -N^{-}\big) \langle {:\hat{v}^+ \hat{u}^- - \hat{u}^+ \hat{v}^- :} \rangle \\ +\int_{T\Sigma_t} dr^{3} \gamma^{1/2} e^{-\frac{i }{\hbar}r^k p_k} \Big(\mathcal{T}^+_* +\mathcal{T}^-_* +\mathcal{M}^+_* +\mathcal{M}^-_* \Big) \langle {: \hat{u}^+ \hat{u}^- :} \rangle \, , \label{F00BeforeInt} \end{multline} \begin{multline} \frac{1}{2}\gamma^{1/2} \partial_t \Big[ \gamma^{-1/2} \langle \hat{F}_{\Pi \phi} + \hat{F}_{\phi \Pi} \rangle \Big] =\frac{1}{2} \int_{T\Sigma_t} dr^{3} \gamma^{1/2} e^{-\frac{i }{\hbar}r^k p_k} \big(N^+ + N^- \big) \langle : \hat{v}^+ \hat{v}^- : \rangle \\ +\frac{1}{2}\int_{T\Sigma_t} dr^{3} \gamma^{1/2} e^{-\frac{i }{\hbar}r^k p_k} \Big(\mathcal{T}^+_* + \mathcal{T}^-_* +\mathcal{M}^+_* + \mathcal{M}^-_* + \frac{1}{2}(NK)^{+} +\frac{1}{2}(NK)^{-}\Big) \langle : \hat{v}^+ \hat{u}^- + \hat{u}^+ \hat{v}^- : \rangle \\ +\frac{1}{2}\int_{T\Sigma_t} dr^{3} \gamma^{1/2} e^{-\frac{i }{\hbar}r^k p_k} \Big(\frac{1}{2} (NK)^{+} -\frac{1}{2}(NK)^{-}\Big) \langle { : \hat{v}^+ \hat{u}^- - \hat{u}^+ \hat{v}^- :} \rangle \\ +\frac{1}{2} \int_{T\Sigma_t} dr^{3} \gamma^{1/2} e^{-\frac{i }{\hbar}r^k p_k} \Big( N^+ \big({^{(3)} \square}\big)^+_* + N^- \big({^{(3)} \square}\big)^-_* - \frac{m^2}{\hbar^2}\big( N^{+} +N^{-}\big)- \xi \big[ (NR)^{+} +(NR)^{-}\big] \\- \frac{1}{2 } \frac{{\lambda}}{\hbar}\big( \big[ N \langle :\hat{\phi}^2: \rangle \big] ^{+} +\big[ N \langle: \hat{\phi}^2 :\rangle\big] ^{-}\big) +({^{(3)} \nabla} N)^{+}_*+({^{(3)} \nabla}
N)^{-}_* \Big)\langle : \hat{u}^+ \hat{u}^- : \rangle \, ,\label{F+BeforeInt} \end{multline} \begin{multline} \frac{1}{2}\gamma^{1/2} \partial_t \Big[ \gamma^{-1/2} \langle \hat{F}_{\Pi \phi} -\hat{F}_{\phi \Pi} \rangle \Big] = -\frac{1}{2} \int_{T\Sigma_t} dr^{3} \gamma^{1/2} e^{-\frac{i }{\hbar}r^k p_k} \big(N^+ - N^- \big) \langle: \hat{v}^+ \hat{v}^- : \rangle \\ +\frac{1}{2}\int_{T\Sigma_t} dr^{3} \gamma^{1/2} e^{-\frac{i }{\hbar}r^k p_k} \Big(\mathcal{T}^+_* + \mathcal{T}^-_* +\mathcal{M}^+_* + \mathcal{M}^-_* + \frac{1}{2}(NK)^{+} +\frac{1}{2}(NK)^{-} \Big) \langle: \hat{v}^+ \hat{u}^- - \hat{u}^+ \hat{v}^- : \rangle \\ +\frac{1}{2}\int_{T\Sigma_t} dr^{3} \gamma^{1/2} e^{-\frac{i }{\hbar}r^k p_k} \Big( \frac{1}{2}(NK)^{+} -\frac{1}{2}(NK)^{-}\Big)\langle: \hat{v}^+ \hat{u}^- + \hat{u}^+ \hat{v}^- : \rangle \\ +\frac{1}{2} \int_{T\Sigma_t} dr^{3} \gamma^{1/2} e^{-\frac{i }{\hbar}r^k p_k} \Big( N^+ \big({^{(3)} \square}\big)^+_* - N^- \big({^{(3)} \square}\big)^-_* - \frac{m^2}{\hbar^2}\big( N^{+} - N^{-}\big) -\xi \big[ (NR)^{+} -(NR)^{-}\big] \\ - \frac{1}{2 } \frac{{\lambda}}{\hbar}\big( \big[ N \langle : \hat{\phi}^2: \rangle \big] ^{+} -\big[ N \langle :\hat{\phi}^2 : \rangle\big] ^{-}\big) + ({^{(3)} \nabla} N)^{+}_* - ({^{(3)} \nabla} N)^{-}_* \Big)\langle: \hat{u}^+ \hat{u}^- :\rangle \, ,\label{F-BeforeInt} \end{multline} \begin{multline} \gamma^{1/2} \partial_t \Big[ \gamma^{-1/2} \langle \hat{F}_{\Pi \Pi} \rangle \Big] = +\int_{T\Sigma_t} dr^{3} \gamma^{1/2} e^{-\frac{i }{\hbar}r^k p_k} \Big(\mathcal{T}^+_* +\mathcal{T}^-_* +\mathcal{M}^+_* +\mathcal{M}^-_* + (NK)^{+} +(NK)^{-} \Big) \langle: \hat{v}^+ \hat{v}^- : \rangle \\ -\frac{1}{2}\int_{T\Sigma_t} dr^{3} \gamma^{1/2} e^{-\frac{i }{\hbar}r^k p_k} \Big(N^+ \big({^{(3)} \square}\big)^+_* - N^- \big({^{(3)} \square}\big)^-_* - \frac{m^2}{\hbar^2}\big( N^{+} - N^{-}\big) - \xi \big[ (NR)^{+} -(NR)^{-}\big]\\ -\frac{1}{2 } \frac{{\lambda}}{\hbar}\big( \big[ N \langle:\hat{\phi}^2: \rangle \big] ^{+} 
-\big[ N \langle : \hat{\phi}^2 : \rangle \big] ^{-}\big) +({^{(3)} \nabla} N)^{+}_* - ({^{(3)} \nabla} N)^{-}_* \Big) \langle: \hat{v}^+ \hat{u}^- - \hat{u}^+ \hat{v}^- :\rangle \\ +\frac{1}{2}\int_{T\Sigma_t} dr^{3} \gamma^{1/2} e^{-\frac{i }{\hbar}r^k p_k} \Big(N^+ \big({^{(3)} \square}\big)^+_* + N^- \big({^{(3)} \square}\big)^-_* - \frac{m^2}{\hbar^2}\big( N^{+} + N^{-}\big) -\xi \big[ (NR)^{+} +(NR)^{-}\big] \\ -\frac{1}{2 } \frac{{\lambda}}{\hbar}\big( \big[ N \langle : \hat{\phi}^2 : \rangle \big] ^{+} +\big[ N \langle : \hat{\phi}^2 : \rangle \big] ^{-}\big) + ({^{(3)} \nabla} N)^{+}_* + ({^{(3)} \nabla} N)^{-}_* \Big)\langle: \hat{v}^+ \hat{u}^- + \hat{u}^+ \hat{v}^- : \rangle \, .\label{F11BeforeInt} \end{multline} The dynamical equations for $\hat{F}_{\phi \phi}$, $\hat{F}_{\Pi \phi}$, $\hat{F}_{\phi \Pi}$, $\hat{F}_{\Pi \Pi}$ take a convenient form in terms of the horizontal lift of the covariant derivative \cite{de2011methods} on the cotangent bundle of spatial hypersurfaces \begin{equation} {D}_k := {^{(3)} \nabla}_{k} + p_l {^{(3)} \Gamma^l_{\; k j}} \frac{\partial}{ \partial p_j}\,, \quad D_k p_j = 0\, . \end{equation} The latter derivative transforms covariantly under a change of spatial coordinates. 
For brevity and to illustrate the structure, we write down the dynamics for the Wigner transformed expectation values ${F}_{\phi \phi}$, ${F}_{\Pi \phi}$, ${F}_{\phi \Pi}$, ${F}_{\Pi \Pi}$ only to leading order in the spatial gradient expansion; the next-to-leading order expressions may be found in Appendix \ref{dynWig}, \begin{multline} \partial_t {F}_{\phi \phi} = \Big[ N + \mathcal{O}\big(\hbar^2\big) \Big] \Big[ {F}_{\Pi \phi} + {F}_{\phi \Pi } \Big] + \frac{i }{2} \hbar \Big[ N_{;k} \frac{\partial}{\partial p_k} + \mathcal{O}\big(\hbar^2\big) \Big] \Big[ {F}_{\Pi \phi} - {F}_{\phi \Pi } \Big] \\ + \Bigg[N^k D_k - p_k N^k_{\; ; m} \frac{\partial}{\partial p_m} - NK + \mathcal{O}\big(\hbar^2\big) \Bigg] {F}_{\phi \phi} \, , \end{multline} \begin{multline} \frac{1}{2}\partial_t \big( {F}_{\Pi \phi} +{F}_{\phi \Pi} \big) = \Big[ N + \mathcal{O}\big(\hbar^2\big) \Big] {F}_{\Pi \Pi } +\frac{1}{2} \Bigg[N^k D_k - p_k N^k_{\; ; m} \frac{\partial}{\partial p_m} + \mathcal{O}\big(\hbar^2\big)\Bigg] \big( {F}_{\Pi \phi} +{F}_{\phi \Pi} \big) \\ +\hbar \frac{i}{4} \Big[\big(NK\big)_{;j} \frac{\partial}{\partial p_j} + \mathcal{O}\big(\hbar^2\big)\Big] \big( {F}_{\Pi \phi}-{F}_{\phi \Pi} \big) \\ - \frac{1}{\hbar^2}\Bigg[ N {m^2}+N \gamma^{kj} {p_k p_j} +\frac{1}{2} \hbar{{\lambda}} \Big[N+ \mathcal{O}\big(\hbar^2\big)\Big] \int \frac{d^3 q }{\gamma^{1/2}} {F}_{\phi \phi} (q) + \mathcal{O}\big(\hbar^2\big)\Bigg]{F}_{\phi \phi} \, , \end{multline} \begin{multline} \frac{i}{2} \partial_t \big( {F}_{\Pi \phi} -{F}_{\phi \Pi} \big) = \frac{ \hbar}{2} \Big[ N_{;k} \frac{\partial}{\partial p_k} + \mathcal{O}\big(\hbar^2\big) \Big] {F}_{\Pi \Pi } \\ +\frac{i}{2} \Bigg[N^k D_k - p_k N^k_{\; ; m} \frac{\partial}{\partial p_m} + \mathcal{O}\big(\hbar^2\big)\Bigg] \big( {F}_{\Pi \phi} -{F}_{\phi \Pi} \big) -\frac{\hbar}{4}\Big[ \big(NK\big)_{;j} \frac{\partial}{\partial p_j} + \mathcal{O}\big(\hbar^2\big) \Big]\big( {F}_{\Pi \phi}+{F}_{\phi \Pi} \big)\\ -\frac{1}{2\hbar}\Bigg[ 2N
p_j D^j - \omega_p^2 N_{;k} \frac{\partial}{\partial p_k} + \mathcal{O}\big(\hbar^2\big) -\frac{1}{2} {\hbar} {{\lambda}} \Big[ \big[N \int \frac{d^3 q }{\gamma^{1/2}} {F}_{\phi \phi} (q)\big]_{;k} \frac{\partial}{\partial p_k} + \mathcal{O}\big(\hbar^2\big) \Big] \Bigg] {F}_{\phi \phi} \, , \end{multline} \begin{multline} \partial_t {F}_{\Pi \Pi} =\Bigg[N^k D_k +NK - p_k N^k_{\; ; m} \frac{\partial}{\partial p_m} + \mathcal{O}\big(\hbar^2\big)\Bigg] {F}_{\Pi \Pi}\\ -\frac{i}{2\hbar}\Bigg[ 2N p_j D^j + \mathcal{O}\big(\hbar^2\big) -\frac{1}{2} {\hbar} {{\lambda}} \Big[ \big[N \int \frac{d^3 q }{\gamma^{1/2}} {F}_{\phi \phi} (q)\big]_{;k} \frac{\partial}{\partial p_k} + \mathcal{O}\big(\hbar^2\big) \Big] \Bigg]\Big[ {F}_{\Pi \phi} - {F}_{\phi \Pi}\Big] \\ - \frac{1}{\hbar^2}\Bigg[ N {m^2}+N\gamma^{kj} {p_k p_j} + \mathcal{O}\big(\hbar^2\big) +\frac{1}{2} \hbar {{\lambda}} \Big[ N \int \frac{d^3 q }{\gamma^{1/2}} {F}_{\phi \phi} (q) + \mathcal{O}\big(\hbar^2\big) \Big] \Bigg] \Big[ {F}_{\Pi \phi} + {F}_{\phi \Pi}\Big] \, . \end{multline} The next step is to convert the dynamical equations for the dimensionally unequal expectation values $F_{\phi \phi}$, $F_{\Pi \phi}$, $F_{\phi \Pi}$ and $F_{\Pi \Pi}$ into dynamical equations for the dimensionally equal phase-space densities $f^{\pm}_1$, $f_2$ and $f_3$ that we defined in \eqref{deff+} to \eqref{deff2}. It turns out that several leading order terms cancel in this dimensional rescaling, such that some next-to-leading order terms of the previous equations turn into leading order terms for the equations of the rescaled quantities. We would have to include even higher order terms in the previous calculation for $F_{\phi \phi}$, $F_{\Pi \phi}$, $F_{\phi \Pi}$ and $F_{\Pi \Pi}$ in order to obtain next-to-leading order terms for the rescaled quantities $f^{\pm}_1$, $f_2$ and $f_3$.
However, even certain leading order corrections are of order $\hbar$ and thus first order in the spatial gradient expansion. We find \begin{multline} \partial_t {f}_1^{+} = \Bigg[ N^k D_k - p_k N^k_{\; ; m} \frac{\partial}{\partial p_m} \Bigg]{f}_1^{+} -\Bigg[ NK + \frac{ p_m p_k }{\omega_p^2} N^{k\, m}_{\;\; ; } \Bigg] {f}_2 - \frac{1}{\omega_p}\Big[ N p_j D^j - \omega_p^2N_{;m} \frac{\partial}{\partial p_m} \Big]{f}_1^{-} \\ +\frac{\hbar}{\omega_p}\Bigg[ \frac{1}{2} p_j N_{;k} \frac{\partial}{\partial p_k } D^j + \frac{1}{4}N D_j D^j -\frac{1}{3} N p_i p_j {^{(3)}R^{i \; \; \; \; j}_{\; q m }} \frac{\partial^2}{\partial p_q \partial p_m} \\ -\frac{1}{12}N p_i {^{(3)}R^i_{\; k}} \frac{\partial}{\partial p_k} + \frac{1}{6} N {^{(3)}R}-\xi N R \Bigg] {f}_3 \\ +\frac{{\lambda}}{2} {\omega_p}\Big[ N\frac{\hbar^3}{\omega_p^2} \int \frac{d^3 q }{\gamma^{1/2}} \frac{ {f}_1^{+}(q) + {f}_2(q)}{\omega_q} \Big]_{;k} \frac{\partial}{\partial p_k}{f}_1^{-} \\ -\frac{\lambda}{2}\frac{\omega_p}{\hbar } \Bigg[ N \frac{\hbar^3}{\omega_p^2} \int \frac{d^3 q}{\gamma^{1/2}} \frac{ {f}_1^{+}(q) + {f}_2(q)}{\omega_q} - \frac{\hbar^2}{8} \frac{\hbar^3}{\omega_p^2} \big[N \int \frac{d^3 q }{\gamma^{1/2}} \frac{ {f}_1^{+}(q) + {f}_2(q)}{\omega_q} \big]_{; k s} \frac{\partial^2}{\partial p_k \partial p_s} \Bigg] {f}_3 \, , \label{finf1+} \end{multline} \begin{multline} \partial_t {f}_1^{-} = \Bigg[N^k D_k - p_k N^k_{\; ; m} \frac{\partial}{\partial p_m} \Bigg] {f}_1^{-} -\frac{\hbar}{2} \big(NK\big)_{;j} \frac{\partial}{\partial p_j} {f}_3 -\frac{1}{ \omega_p}\Big[ N p_j D^j - \omega_p^2N_{;m} \frac{\partial}{\partial p_m} \Big] {f}_1^{+} \\ -\frac{1}{ \omega_p}\Big[ N p_j D^j + N_{;m} {p_l \gamma^{lm}} \Big] {f}_2 +\frac{{\lambda}}{2} \omega_p \Big[ N \frac{ \hbar^3}{\omega_p^2} \int \frac{d^3 q }{\gamma^{1/2}} \frac{ {f}_1^{+}(q) + {f}_2(q)}{\omega_q} \Big]_{;k} \frac{\partial}{\partial p_k} \Big[ {f}_1^{+} + {f}_2 \Big] \, ,\label{finf1-} \end{multline}
\begin{multline} \partial_t {f}_2 = 2 \frac{\omega_p}{\hbar} N {f}_3 + {\omega_p} N_{;k} \frac{\partial}{\partial p_k} {f}_1^{-} + \Bigg[ N^k D_k - p_k N^k_{\; ; m} \frac{\partial}{\partial p_m} -\frac{p_i p_k}{\omega_p^2} \big(N K^{ij} - N^{i \; j}_{\; ;} \big) \Bigg]{f}_2 \\ - \Bigg[ NK + \frac{ p_m p_k }{\omega_p^2} N^{k\, m}_{\;\; ; } \Bigg] {f}_1^{+} + N\frac{p_j}{\omega_p} D^j {f}_1^{-} \\ -\frac{\hbar}{\omega_p} \Bigg[ \frac{1}{2} p_j N_{;k} \frac{\partial}{\partial p_k } D^j + \frac{1}{4}N D_j D^j -\frac{1}{3} N p_i p_j {^{(3)}R^{i \; \; \; \; j}_{\; q m }} \frac{\partial^2}{\partial p_q \partial p_m} \\ -\frac{1}{12}N p_i {^{(3)}R^i_{\; k}} \frac{\partial}{\partial p_k} + \frac{1}{6} N {^{(3)}R} -\xi N R \Bigg] {f}_3 \\ +\frac{{\lambda}}{8} {\omega_p}\Big[ N\frac{\hbar^3}{\omega_p^2} \int \frac{d^3 q }{\gamma^{1/2}} \frac{ {f}_1^{+}(q) + {f}_2(q)}{\omega_q} \Big]_{;k} {f}_1^{-} \\ + \frac{\lambda}{2} \frac{\omega_p}{\hbar } \Bigg[ N \frac{\hbar^3}{\omega_p^2} \int \frac{d^3 q }{\gamma^{1/2}} \frac{ {f}_1^{+}(q) + {f}_2(q)}{\omega_q} - \frac{\hbar^2}{8} \frac{\hbar^3}{\omega_p^2}\big[ N \int \frac{d^3 q }{\gamma^{1/2}} \frac{ {f}_1^{+}(q) + {f}_2(q)}{\omega_q} \big]_{; k s} \frac{\partial^2}{\partial p_k \partial p_s} \Bigg] {f}_3 \, ,\label{finf2} \end{multline} \begin{multline} \partial_t {f}_3 = \Bigg[N^k D_k - p_k N^k_{\; ; m} \frac{\partial}{\partial p_m} \Bigg] {f}_3 +\frac{\hbar}{2} \big(NK\big)_{;j} \frac{\partial}{\partial p_j} {f}_1^{-} \\ - \frac{\hbar}{\omega_p}\Bigg[ 2\frac{\omega_p^2}{\hbar^2} N - \frac{\omega_p^2}{4} N_{; qm} \frac{\partial^2}{\partial p_q \partial p_m} - \frac{1}{4} \frac{p_i p_j}{\omega_p^2} N_{;}^{\; ij} - \frac{1}{2} p_j N_{;m} \frac{\partial}{\partial p_m} D^j +\frac{1}{2} \frac{p_i p_j}{\omega_p^2} N_{;}^{\,i} D^j - \frac{1}{4}N D_j D^j \\ +\frac{1}{3} N p_i p_j {^{(3)}R^{i \; \; \; \; j}_{\; q m }} \frac{\partial^2}{\partial p_q \partial p_m} +\frac{1}{12}N p_i {^{(3)}R^i_{\; m}}\frac{\partial}{\partial p_m} 
+\frac{1}{4}N \frac{p_i p_j}{\omega_p^2} {^{(3)}R^{ ij}} - \frac{1}{6} N {^{(3)}R} + \xi N R \Bigg]{f}_2 \\ - \frac{\hbar}{\omega_p}\Bigg[ \frac{1}{2} p_m N_{; \; \, q}^{\,m} \frac{\partial}{\partial p_q} + \frac{1}{4} N_{; \; j}^{\;j } - \frac{1}{2} \frac{p_i p_j}{\omega_p^2} N_{;}^{\; ij} - \frac{1}{2} p_j N_{;m} \frac{\partial}{\partial p_m} D^j +\frac{1}{2} \frac{p_i p_j}{\omega_p^2} N_{;}^{\,i} D^j - \frac{1}{4}N D_j D^j \\ +\frac{1}{3} N p_i p_j {^{(3)}R^{i \; \; \; \; j}_{\; q m }} \frac{\partial^2}{\partial p_q \partial p_m} +\frac{1}{12}N p_i {^{(3)}R^i_{\; m}}\frac{\partial}{\partial p_m} +\frac{1}{4}N \frac{p_i p_j}{\omega_p^2} {^{(3)}R^{ ij}} - \frac{1}{6} N {^{(3)}R} + \xi N R \Bigg] {f}_1^{+} \\ -\frac{\lambda}{2} \frac{\omega_p}{\hbar } \Bigg[ N \frac{\hbar^3}{\omega_p^2} \int \frac{d^3 q }{\gamma^{1/2}} \frac{ {f}_1^{+}(q) + {f}_2(q)}{\omega_q} - \frac{\hbar^2}{8} \frac{\hbar^3}{\omega_p^2} \big[ N \int \frac{d^3 q }{\gamma^{1/2}} \frac{ {f}_1^{+}(q) + {f}_2(q)}{\omega_q} \big]_{; k s} \frac{\partial^2}{\partial p_k \partial p_s} \Bigg] \Big[ {f}_1^{+} + {f}_2 \Big] \, .\label{finf3} \end{multline} Equations \eqref{finf1+} to \eqref{finf3} are the main result of this paper.\footnote{Although we excluded states containing one-point functions for simplicity, they will give rise to similar equations subject to a constraint equation due to the lack of degrees of freedom - such equations have been derived for example in \cite{Widrow:1993qq} where higher time derivatives and thus degrees of freedom were dropped. However, the conditions to obtain a leading order classical particle Vlasov equation \eqref{Vlasov} can only be satisfied on time-averages over the expectation values. 
This can be understood, for example, by considering the Minkowski space-time limit, where the two solutions of $\langle \hat{f}_2 \rangle $ that are determined by condensates are given by $\langle \hat{f}_2 \rangle_{\text{cond}}^{\text{flat}} = \alpha \cos(2 \omega_p t) + \beta \sin (2 \omega_pt) $. Fixing the proportionality constants of these two solutions to be zero, as we were able to do in the general case, would also set $\langle \hat{f}_1^{+} \rangle_{\text{cond}}^{\text{flat}} = (\alpha^2 + \beta^2)^{1/2}$ to zero and would yield only a trivial solution of the system. The resolution is thus to keep all the degrees of freedom and perform a time-averaging in this case.} These equations are an effective description of the state-dependent (normal ordered) part of the dynamics of a real scalar field quantum state in curved space-time in the language of phase-space variables $(x^{\mu},p_k)$, under the assumption that the state admits a gradient and loop expansion. For macroscopic observables of systems that have some notion of classicality, the quantities $f_1^{\pm}(x^{\mu},p_k)$ and $f_{2,3}(x^{\mu},p_k)$ should dominate over the state-independent part coming from the quantum commutation relation. They can be given any initial value that is compatible with the spatial gradient approximation and their symmetry properties. \par We first note that all equations for the operators $f_1^{\pm}$ and $f_{2,3}$ are spatially covariant and in principle provide candidates for phase-space density operators. However, we also need to realize that $f_{1}^+$ and ${f}_{2,3}$ are even functions of $p_k$, whereas ${f}_1^{-}$ is an odd function of the momentum. Thus, only a combination of these two-point functions can account for the degrees of freedom of a classical particle phase-space density.
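Concretely, since $f_1^{+}$ and $f_{2,3}$ are even while $f_1^{-}$ is odd in $p_k$, no information is lost by passing to the combined density $f_1 = f_1^{+} + f_1^{-}$; its parity components can always be projected back out (a simple remark on our part), \begin{equation} f_1^{\pm} (x^{\mu}, p_k) = \frac{1}{2} \Big[ f_1 (x^{\mu}, p_k) \pm f_1 (x^{\mu}, - p_k) \Big] \, . \end{equation}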
A promising candidate is read off from the first pair of equations \eqref{finf1+} and \eqref{finf1-} in the non-interacting limit, \begin{multline} \Bigg[ \partial_{t} - N^k {^{(3)}D}_k + \big( {^{(3)} \nabla}_j N^k \big) p_k \frac{\partial}{\partial p_j} + N \frac{p^k}{\omega_p} {^{(3)}D}_k - \omega_p \big[ \partial_j N \big] \frac{\partial}{\partial p_j} \\ +\frac{{\lambda}}{2} {\omega_p}\Big[ N\frac{\hbar^3}{\omega_p^2} \int \frac{d^3 q }{\gamma^{1/2}} \frac{ {f}_1^{+}(q)}{\omega_q} \Big]_{;k} \frac{\partial}{\partial p_k} + \mathcal{O}\big( \hbar^2 \big)\Bigg] \Big[ {f}_1^{+} + {f}_1^{-} \Big] = \mathcal{O} \big({f}_{2,3} \big) \, . \label{almostVlasov} \end{multline} By rewriting the above equation for ${f}_{1} = f_1^+ + f_1^-$, we find the Vlasov equation with a one-loop correction that can be interpreted as a mass shift, as well as source terms that are due to the additional correlators in the scalar field description and higher-order spatial gradient corrections. Undoing the ADM decomposition, the equation reads \begin{eqnarray} \Big[ p^{\mu} \partial_{\mu} + p_{\mu} p^{\nu} \Gamma^{\mu}_{\; \nu i} \frac{\partial}{\partial p_i} +\frac{{\lambda}}{2} {\omega_p}\Big[ N\frac{\hbar^3}{\omega_p^2} \int \frac{d^3 q }{\gamma^{1/2}} \frac{ {f}_1(q)}{\omega_q} \Big]_{;k} \frac{\partial}{\partial p_k} + \mathcal{O}\big( \hbar^2 \big) \Big] {f}_{1} (x^{\mu}, p_j) &=& \mathcal{O} \big({f}_{2,3} \big) \, , \label{Vlasov} \\ p^0 (x^{\mu}, p_j) := \sqrt{\big( g^{0j} p_j\big)^2 - g^{00} \big( m^2 + g^{ij} p_i p_j \big)} = \sqrt{\big( g^{0j} p_j\big)^2 - g^{00} \omega_p^2} && \, .
\end{eqnarray} We remark that within the one-loop approximation we do not find $2 \rightarrow 2$ particle scattering processes, which come from self-energy diagrams whose first contribution is proportional to $\lambda^2$.\footnote{It is the one-loop approximation that allows the system to close on-shell, since two-loop contributions integrate over energies which are not supported solely on the mass shell. However, one can employ a quasi-particle approximation for the two-loop contributions, which eventually also leads to a $2 \rightarrow 2$ particle scattering contribution as it appears on the right-hand side of the classical particle Boltzmann equation. These properties of the $\lambda \phi^4$ theory are known in Minkowski space \cite{trove.nla.gov.au/work/9783845,Berges:2015kfa}, but a general curved space-time discussion is still lacking.} However, the self-masses $\propto \lambda$ are included and, depending on the problem, may already give significant corrections to the dynamics of the Vlasov equation. \par Combining the other pair of equations \eqref{finf2} and \eqref{finf3} shows that ${f}_2$ and ${f}_3$ are to leading order oscillators with a frequency set by the particle energy $\omega_p$. Thus, equations \eqref{finf1+} to \eqref{finf3} generalize the Vlasov equation for relativistic particles in curved space-time by including the additional densities $f_{2,3}$. The latter densities can be rewritten as higher-order time derivatives acting on $f_1^{\pm}$. We conclude that if we wanted to recover the limit of a classical particle density, we would have to impose a state such that $ {f}_2 $ is initially of higher order in $\hbar$ and also remains of higher order in $\hbar$, which then translates into a condition for ${f}_3 $ and finally into ${f}_2 \sim \mathcal{O}(\hbar^2) \big( {f}_1^{+} \, , {f}_1^{-} \big) $ (these are rough estimates and it remains to be studied whether such conditions can be maintained by the dynamics).
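As a consistency check on the dispersion relation accompanying \eqref{Vlasov} (our remark), note that $p^0 = g^{00} p_0 + g^{0j} p_j$, so solving the mass-shell condition for $p_0$, \begin{equation} g^{\mu \nu} p_{\mu} p_{\nu} = g^{00} p_0^2 + 2 g^{0j} p_0 p_j + g^{ij} p_i p_j = - m^2 \quad \Longrightarrow \quad p^0 = \pm \sqrt{\big( g^{0j} p_j \big)^2 - g^{00} \big( m^2 + g^{ij} p_i p_j \big)} \, , \end{equation} shows that the definition of $p^0 (x^{\mu}, p_j)$ given with \eqref{Vlasov} simply selects the positive-energy branch.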
First-order corrections to \eqref{Vlasov}, which are contained in \eqref{finf1+} to \eqref{finf3}, may be obtained by expanding the phase-space densities into harmonics and studying how the oscillatory terms back-react on the non-oscillatory part of the density $f_1$ via the self-interaction terms or via non-linear terms that are obtained by making use of the Einstein equations. Also keeping in mind a generalization in terms of higher-loop effects, we think that the advantage of our formalism lies in an end-to-end link between quantum field theory and particle kinetics in curved space-time, which allows one to systematically include field-theoretic corrections while retaining a (in some sense modified) particle interpretation. \section{Generalized cold dark matter kinetics in linearized gravity} In the last section we dealt with a set of fairly general but lengthy equations. The idea of this section is to see how they reduce to more feasible sets of equations once we apply them to the concrete cosmological setup of cold dark matter perturbations between galactic scales and the Hubble horizon. The main result is a generalization of the kinetic description of classical particle cold dark matter as it is discussed, for example, in \cite{Bernardeau:2001qr}. \par Let us fix a linearly perturbed metric on an FLRW background in the generalized Newtonian gauge, which includes vector and tensor perturbations and is also referred to as Poisson gauge \cite{Bertschinger:1993xt,Bruni:1996im}. We label equal-time hypersurfaces by the variable $\eta$ and denote spatial coordinates by $x$, \begin{equation} \partial_t \rightarrow \partial_{\eta} = \big(\,.\,\big)^{\prime} \, . \end{equation} Indices for the linear quantities are raised and lowered by the comoving background spatial metric $\delta_{ij}$ as in \cite{Malik:2008im}.
The (3+1)-dimensional metric takes the form \begin{equation} g_{\mu \nu} = a^2 \begin{pmatrix} - (1 + 2 \Phi_N )& - s_i \\ - s_i & \delta_{ij}(1 - 2 \Psi_N) + h_{ij} \\ \end{pmatrix}\, , \; \delta^{kj} \partial_k s_j = 0\, , \; \delta^{kj} \partial_k h_{ji} = 0\, , \;\delta^{ij} h_{ij} = 0\, , \end{equation} such that the spatial metric, its inverse and its determinant are given to linear order by \begin{equation} \gamma_{ij} = a^2 \big[ \delta_{ij}(1 - 2 \Psi_N) + h_{ij}\big]\, , \; \gamma^{ij} = a^{-2}\big[\delta^{ij}(1 + 2 \Psi_N) - h^{ij} \big]\, , \; {\gamma}^{1/2}= a^3 \big( 1 - 3 \Psi_N \big) \, , \end{equation} and the lapse function and shift vector read \begin{equation} N = a(1 + \Phi_N) \,, \quad N^i = -s^i = - \delta^{ij} s_j = a^{-2} \delta^{ij} N_j\, . \end{equation} We define a gravitational perturbation parameter related to the metric perturbations by \begin{equation} \varepsilon_g \sim \Phi_N \, , s_i \, ,\Psi_N \, , h_{ij} \ll 1\, . \end{equation} We have \begin{equation} {^{(3)}\Gamma^l_{\; km}} = \delta^{ls} \delta_{mk} \partial_s \Psi_N -\delta^l_{\;m} \partial_k \Psi_N - \delta^{l}_{\; k} \partial_m \Psi_N +\frac{1}{2}\big( \partial_k h^l_{\; m} + \partial_m h^l_{\;k} - \delta^{sl} \partial_s h_{km} \big)\,. \end{equation} Let us collect further geometrical quantities that appear in the Einstein equations in the ADM decomposition: \begin{equation} {^{(3)}R}_{ij} = \delta_{ij} \Delta \Psi_N + \partial_i \partial_j \Psi_N - \frac{1}{2} \Delta h_{ij} \, , \quad {^{(3)}R} = \frac{4}{a^2} \Delta \Psi_N\, , \end{equation} \begin{eqnarray} K_{ij} &=& - a \mathcal{H} \big[ \delta_{ij}(1- \Phi_N - \mathcal{H}^{-1}\Psi_N^{\prime} - 2 \Psi_N) + h_{ij}\big] - \frac{a}{2} \big[ s_{i,j} +s_{j,i} + h_{ij}^{\prime} \big] \, ,\\ K&=& - 3 a^{-1} \mathcal{H} (1- \Phi_N - \mathcal{H}^{-1}\Psi_N^{\prime})\, , \quad K^2 = 3 K_{ij} K^{ij} + \mathcal{O}\big( \varepsilon_g^2 \big) \, .
\end{eqnarray} We split the normal observer momentum vector and stress tensor into scalar, vector and tensor components, \begin{eqnarray} {P}_i &=& a \partial_i P_L + a {P}_i^{T}\, , \quad \partial^j P^T_j = 0\, \label{momdecom},\\ {S}_{ij} &=& \frac{a^2\delta_{ij}}{3} S + a^2 \big( \partial_i \partial_j S^A - \frac{\Delta}{3} \delta_{ij} S^A + \partial_i S_j + \partial_j S_i + S_{ij}^{TT} \big) \, , \quad \partial^j S_j = \partial^j S_{ij}^{TT} = \delta^{ij} S_{ij}^{TT} = 0 \, . \label{stressdecom} \end{eqnarray} Indices for the quantities on the right-hand side of \eqref{momdecom} and \eqref{stressdecom} will be raised and lowered with the flat three-dimensional metric. Let us write down the Einstein equations in terms of the perturbed metric, \begin{eqnarray} 3 \mathcal{H}^2 + 2 \Delta \Psi_N & =& \frac{\hbar}{M_P^2} a^2\Big[ E -3 \mathcal{H} P_L \Big] \, ,\label{E1}\\ \frac{1}{2} \Delta s_i & = & - \frac{\hbar}{M_P^2} a^2 P_i^T \, , \label{E2} \\ \Phi_N - \Psi_N &=& - \frac{\hbar}{M_P^2} a^2 S^A \, , \label{E3}\\ h_{ij}^{\prime \prime} +2 \mathcal{H} h_{ij}^{\prime} - \Delta h_{ij} & =& \frac{\hbar}{M_P^2} 2a^2{S}^{\text{TT}}_{ij} \label{E4} \, . \end{eqnarray} Energy-momentum conservation in linearized gravity reads \begin{multline} \partial_{\eta} \big(a^3 \big[ 1 - 3 \Psi_N \big]E \big) + a^3 \partial_i \Big(\big[\delta^{ij}(1 + \Phi_N- \Psi_N) - h^{ij} \big]a^{-1} {P}_j + \delta^{ij} s_j {E} \Big) \\+ a^3 \Big( \mathcal{H} \big[ 1 - \mathcal{H}^{-1}\Psi_N^{\prime} - 3 \Psi_N\big]S + \big[ s_{i ,j} + \frac{1}{2} h^{\prime}_{ij} \big] {S}^{ij} + a^{-1}\delta^{ij}{P}_j \partial_i \Phi_N \Big) = 0 \, , \label{econsLin} \end{multline} \begin{multline} \partial_{\eta} \big(a^3 \big[ 1 - 3 \Psi_N \big] {P}_j \big) + a^3 \partial_i \big(a\big[ 1 + \Phi_N- 3 \Psi_N \big] {S}^i_{\, j} + s^i {P}_j \big) \\ = a^4 \big(a^2 \frac{1}{2} {S}^{ik} \partial_j h_{ik} - S \partial_j \Psi_N - a^{-1} {P}_i \partial_j s^i - {E} \partial_j \Phi_N \big) \, , \end{multline} which does not help much
unless we know how $S_{ij}$ depends on $E$ and $P_i$. Note that it is suggestive to approximate these equations further by means of the linearized Einstein equations and to rewrite $E, P_i, S_{ij}$ in terms of the gravitational perturbations. However, the resulting non-linear terms are not necessarily small, since they involve gradient terms of the type $\mathcal{H}^{-2} \Delta$. Some of them may become important around the scale where the density contrast in Fourier space, defined via $E(\eta,k) = \bar{E}(\eta) + \delta E(\eta, k)$, is of order one, $\bar{E}^{-1} \delta E (k_{\text{NL}}) \propto{\mathcal{H}^{-2}} k^2_{\text{NL}} \Psi_N(k_{\text{NL}}) \approx 1$. This scale is roughly of order $k^{-1}_{\text{NL}} \approx 5 \, \text{Mpc}$. We emphasize that linearization in the gravitational perturbations can still be valid on these scales, although the density contrast has to be treated non-linearly. In the context of cosmological large-scale structure one is typically interested in the evolution on sub-Hubble scales ($ k^{-1}_{H} \lesssim 10^{4} \, \text{Mpc} $). We capture the corrections that result from separations with respect to this scale by introducing a perturbation parameter $\varepsilon_{H}$, \begin{equation} \mathcal{O}\big( \varepsilon_{H}^{-1} \big) \delta g_{\mu \nu} \sim \mathcal{H}^{-2} \Delta\delta g_{\mu \nu} \gg \delta g_{\mu \nu} \,. \end{equation} This expansion allows us, for example, to drop several corrections in $E$, $P_i$ and $S_{ij}$ that are related to the perturbation of the determinant of the spatial metric $\delta \gamma^{1/2}$. On the other hand, the smallest large-scale structures we are interested in are related to galactic scales, $k^{-1}_{\text{g}} \sim 10 \, \text{kpc}$.
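To summarize, the window of spatial scales considered in this section is ordered as \begin{equation} k_{H}^{-1} \lesssim 10^{4} \, \text{Mpc} \; \gg \; k_{\text{NL}}^{-1} \approx 5 \, \text{Mpc} \; \gg \; k_{\text{g}}^{-1} \sim 10 \, \text{kpc} \, . \end{equation}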
In order to be consistent with our perturbative schemes, we have to contrast the scale $k^{-1}_{\text{g}}$ with the de Broglie wavelength $k^{-1}_{\text{dB}}$, defined through $k^{-1}_{\text{dB}} f_i (\vec{k},\vec{p}) \sim \hbar \parallel \frac{\partial}{\partial p_j} f_i (\vec{k},\vec{p}) \parallel$, which is related to the spatial gradient expansion that we have used to derive the kinetic equations \eqref{finf1+} to \eqref{finf3}. By using typical galaxy velocities of $v_{\text{g}} \approx 10^{-3} c$ we can express the de Broglie wavelength in terms of the Compton wavelength $k_{\text{C}}^{-1} \propto \hbar m^{-1}$ as $k^{-1}_{\text{dB}} \sim 10^3 k^{-1}_{\text{C}}$. For dark matter whose mass is at the electroweak scale ($\sim 10^{2} \, \text{GeV}$), we find that the de Broglie wavelength is of order $k^{-1}_{\text{dB,EW}} \sim 10^{-33} \, \text{kpc} $ and thus spatial gradient corrections can be safely neglected, whereas for ultralight dark matter with mass $\sim 10^{-31} \, \text{GeV}$ we find $k^{-1}_{\text{dB,UL}} \sim 10^{0} \, \text{kpc}$, such that gradient corrections can play a role at galactic scales. However, let us focus here on the less exotic case where $k^{-1}_{\text{g}} \gg k^{-1}_{\text{dB}}$. Moreover, the galactic velocities provide us with a non-relativistic expansion, \begin{equation} \varepsilon_p \sim \frac{p_i p_j \gamma^{ij}}{m^2} \sim 10^{-6} \, , \end{equation} such that the particle energy is dominated by the mass. This relation justifies, at least for certain mass ranges, the inclusion of a self-coupling term in the kinetic equations, as we will see shortly. We also want to consider small corrections to the classical particle density picture and demand \begin{equation} f_{1} \gg \left| f_{2,3} \right|\, .
\end{equation} In order to stay close to the cold dark matter scenario, we also impose a first bound on the dark matter interactions such that they do not source the Hubble rate too much, \begin{equation} \lambda \frac{\hbar^3}{m^3} \int \frac{d^3 p}{\gamma^{1/2}} {f}_1^+ \ll 1 \, . \label{interactionConst} \end{equation} Moreover, we want to keep the influence of the non-minimal coupling fairly small such that it cannot spoil the smallness of gradients or gravitational perturbations, \begin{equation} {\hbar^2}| \xi R | \lesssim {m^2} \, . \end{equation} We now express the leading-order terms on the right-hand sides of \eqref{E1} to \eqref{E4} in terms of the phase-space densities, such that we can plug the constraint equations back into the kinetic equations for $f_{1}^{\pm}$, $f_2$ and $f_3$ and solve them together with the gravitational wave equation. In accordance with the slightly more general discussion around \eqref{EClass} to \eqref{SClass}, we find that between galactic scales and the Hubble scale the gravitational perturbations receive their leading-order contributions from the two phase-space densities $f_{1}^{\pm}$ (as is the case for the classical-particle cold dark matter scenario if we split the classical density into even and odd parts). The Poisson equation reads \begin{equation} 3 \mathcal{H}^2 + 2 \Delta \Psi_N \approx \frac{\hbar}{M_P^2} \frac{m}{a} \int {d^3 p} \, {f}_1^+ = \frac{\hbar}{M_P^2} \frac{m}{a} \int {d^3 p} \, {f}_1\, , \end{equation} and we note that the constraint \eqref{interactionConst} relates the mass and the coupling via \begin{equation} \lambda \frac{\hbar^3}{m^3} \int \frac{d^3 p}{\gamma^{1/2}} {f}_1^+ \ll 1 \quad \longrightarrow \, \quad\lambda \Big( \frac{\hbar^2 \mathcal{H}^2}{a^2 m^2} \Big) \frac{M_P^2}{m^2} \sim \lambda \frac{10^{-8} \big(\text{eV}\big)^4}{m^4} \ll 1.
\label{constSelfHub} \end{equation} It is now clear that for masses around the electroweak scale the interaction energy does not influence the Hubble rate, whereas for ultralight particles it can become important already for very small couplings. Moreover, vector perturbations and the gravitational slip are given by \begin{eqnarray} \frac{1}{2} \Delta^2 s_i & \approx & - \frac{\hbar}{M_P^2} a^{-1} \Big[ \Delta \int {d^3 p} \, {p_k} {f}_1^{-} - {\partial_i\partial^k} \int {d^3 p} \, {p_k} {f}_1^{-}\Big] \, , \\ \Delta^2 \big( \Phi_N - \Psi_N \big) &\approx & \frac{\hbar}{M_P^2} \frac{3}{2} a^{-3} \Big[\frac{\Delta}{3}\delta^{kj} - {\partial^k\partial^j}\Big] \int {d^3 p} \, {p_k}p_j {f}_1^{+} \, . \end{eqnarray} Note that we can replace $f_1^{\pm}$ with $f_1 = f_1^+ + f_1^-$ in these equations due to their symmetry properties. The only dynamical gravitational perturbations are the traceless, transverse tensor perturbations, which obey \begin{multline} h_{ij}^{\prime \prime} +2 \mathcal{H} h_{ij}^{\prime} - \Delta h_{ij} \approx \\ \frac{\hbar}{M_P^2} \frac{2}{m a^3} \int {d^3 p} \Bigg[ \, {p_i}p_j - \frac{\delta_{ij}}{3} p_k p_m \delta^{km} + p_k p_m \frac{\Delta^{-2}}{2} \Big(\partial_i \partial_j + \Delta \delta_{ij} \Big)\Big(\partial^k \partial^m - \frac{ \Delta}{3} \delta^{km} \Big) \\ + p_k p_m \delta^{km} \frac{2}{3} \Delta^{-1} \partial_i \partial_j - \Delta^{-1} \Big( p_k p_j \partial_i \partial^k + p_k p_i \partial_j \partial^k \Big) \Bigg]{f}_1^{+} \, . \end{multline} Thus, to leading order in our perturbation parameters the Einstein equations look the same whether we use a classical particle phase-space density or the density derived from the scalar quantum field, $f_1 = f_1^+ + f_1^-$. The densities $f_{2,3}$ enter at higher order. However, the dynamics of this source in the Einstein equations is generalized by the following set of differential equations, which include the densities $f_{2,3}$.
We find for the phase-space density $f_1^+$, \begin{multline} \big({f}_1^{+} \big)^{\prime} + \Big[ s^k \partial_k - \big(\partial_m s^k \big) p_k \frac{\partial}{\partial p_m} \Big]{f}_1^{+} \approx \\ - \Bigg[ \delta^{jk} \frac{p_j}{m a} \partial_k - m a \partial_k \Bigg[ \Phi_N - \frac{\lambda}{2} \frac{\hbar^3}{m^3 a^3} \int d^3 q f_1^{+}(q) \Bigg] \frac{\partial}{\partial p_k} \Bigg]{f}_1^{-} \\ +3 \Big[ \mathcal{H} - \Psi_N^{\prime} \Big] {f}_2 -\frac{\hbar}{ma} \Bigg[ \frac{1}{4} \delta^{ij} \partial_i \partial_j + \frac{\lambda}{2} \frac{\hbar^2}{m^2 a^2} \int d^3 q f_1^{+}(q) \Bigg] {f}_3 \, , \label{f1+LinGrav} \end{multline} where we drop higher-order terms involving relativistic corrections or gradients that are small compared to the mass scale. Note that the last term in \eqref{f1+LinGrav} may be important for certain combinations of masses and self-couplings, which is still consistent with the constraint \eqref{constSelfHub}, \begin{equation} || {\partial_{\eta} } ||^{-1} \lambda \frac{\hbar^2}{m^2 a^2} \int d^3 q f_1^{+}(q) \sim \lambda \frac{\hbar \mathcal{H}}{a m} \frac{M_P^2}{m^2} \sim \lambda \frac{10^{-3} \big(\text{GeV}\big)^3}{m^3}. 
\end{equation} Perhaps more importantly, the self-interaction term multiplying $f_1^{-}$ in \eqref{f1+LinGrav}, \begin{equation} \partial_k \Bigg[ \Phi_N - \frac{\lambda}{2} \frac{\hbar^3}{m^3 a^3} \int d^3 q f_1^{+}(q) \Bigg] \sim \partial_k \Bigg[ \Phi_N - \frac{\lambda}{2} \frac{ \hbar^2 \mathcal{H}^2}{m^2 a^2 } \frac{M_P^2}{m^2} \frac{\Delta \Psi_N }{\mathcal{H}^2} \Bigg] \end{equation} can compete with the Newtonian potential at the non-linear scale where $\Delta \Psi_N \sim \mathcal{H}^2$ and still obey the constraint \eqref{constSelfHub} for certain combinations of mass and self-coupling, \begin{equation} \Phi_N (k_{\text{NL}}) \sim \frac{\lambda}{2} \frac{ \hbar^2 \mathcal{H}^2}{m^2 a^2 }\frac{M_P^2}{m^2} \frac{ k_{\text{NL}}^2 }{\mathcal{H}^2} \Psi_N (k_{\text{NL}}) \, , \quad \text{for} \quad \frac{\lambda}{2} \frac{ \hbar^2 \mathcal{H}^2}{m^2 a^2 }\frac{M_P^2}{m^2} \sim 10^{-5} \ll 1\, . \end{equation} We also note that the gravitational vector perturbations enter at this order as a correction to the time derivative, which is true for all four densities as we will see in a moment. Tensor perturbations enter equations like \eqref{f1+LinGrav} in various places; however, such terms are all of higher order in the spatial gradient expansion. The same is again true for the dynamical equations of the other densities. Also, terms involving the non-minimal coupling $\xi$ are of higher order in all equations. For the odd density $f_1^-$ we find \begin{multline} \big({f}_1^{-} \big)^{\prime} + \Big[ s^k \partial_k - \big(\partial_m s^k \big) p_k \frac{\partial}{\partial p_m} \Big]{f}_1^{-} \approx \\ - \Bigg[ \delta^{jk} \frac{p_j}{m a} \partial_k - m a \partial_k \Bigg[ \Phi_N - \frac{\lambda}{2} \frac{\hbar^3}{m^3 a^3} \int d^3 q f_1^{+}(q) \Bigg] \frac{\partial}{\partial p_k} \Bigg]{f}_1^{+} \\ -\frac{\hbar}{2} \big(\partial_j \Psi_N^{\prime} \big) \frac{\partial}{\partial p_j} {f}_3 - \delta^{jk} \frac{p_j}{m a} \partial_k {f}_2 \, .
\label{f1-Lin} \end{multline} The term involving the density $f_3$ is probably negligible for $f_1 \gg | f_{2,3} |$; however, we keep it to display the form of the leading-order term for $f_3$. The differential equations for $f_2$ and $f_3$ read \begin{multline} \big({f}_2 \big)^{\prime} + \Big[ s^k \partial_k - \big(\partial_m s^k \big) p_k \frac{\partial}{\partial p_m} \Big]{f}_2 \approx 2 \frac{\omega_p}{\hbar} a \big( 1 + \Phi_N \big) {f}_3\\ + \Bigg[ \delta^{jk} \frac{p_j}{m a} \partial_k - m a \partial_k \Bigg[ \Phi_N - \frac{\lambda}{2} \frac{\hbar^3}{m^3 a^3} \int d^3 q f_1^{+}(q) \Bigg] \frac{\partial}{\partial p_k} \Bigg] {f}_1^{-} \\ +3 \big( \mathcal{H} - \Psi_N^{\prime} \big) {f}_1^{+} -\frac{\hbar}{ma} \Bigg[ \frac{1}{4} \delta^{ij} \partial_i \partial_j - \frac{\lambda}{2} \frac{\hbar^2}{m^2 a^2} \int d^3 q f_1^{+}(q) \Bigg] {f}_3 \, , \label{f2LinGrav} \end{multline} \begin{multline} \label{f3LinGrav} \big({f}_3 \big)^{\prime} + \Big[ s^k \partial_k - \big(\partial_m s^k \big) p_k \frac{\partial}{\partial p_m} \Big]{f}_3 \approx - 2 \frac{\omega_p}{\hbar} a \big( 1 + \Phi_N \big) {f}_2 + \hbar \frac{3}{2} \big( \partial_j \Psi_N^{\prime} \big) \frac{\partial}{\partial p_j} {f}_1^{-} \\ + \frac{\hbar}{ma} \Bigg[ \frac{1}{4} \delta^{ij} \partial_i \partial_j - \frac{\lambda}{2} \frac{\hbar^2}{m^2 a^2} \int d^3 q f_1^{+}(q) \Bigg] \big( f_1^+ + {f}_2 \big) \, , \end{multline} where we also included higher-order gradient terms acting on $f_3$ and the self-coupling, as they might play a role in determining the non-oscillatory behavior of $f_2$ and $f_3$. Also, the correction to the rest-mass energy may be included in the first term on the right-hand side of \eqref{f2LinGrav} and \eqref{f3LinGrav}, \begin{equation} \omega_p \approx m \Big(1 + \frac{1}{2} \frac{p_i p_j \delta^{ij}}{ m^2 a^2} \Big) \, .
\end{equation} We remark once more that the equations \eqref{f1+LinGrav} to \eqref{f3LinGrav} reduce to the classical-particle cold dark matter phase-space dynamics if we can approximate $f_{2,3} \approx 0$ and set $\lambda=0$. However, we think that the additional densities $f_{2,3}$ have the potential to significantly alter the evolution of $f_1^{\pm}$ for certain combinations of parameters. As a first step, we are currently investigating the effect of the oscillatory densities $f_{2,3}$ on the density $f_1$ and thus the Hubble rate $\mathcal{H}$ in the homogeneous limit. \section{Conclusion and outlook} Motivated by dark matter models for large-scale structures, we introduced a spatially covariant framework based on canonical field operators $\hat{\phi}$, $\hat{\Pi}$ to study the transition from the quantum theory of a self-interacting real scalar field on curved space-time to the kinetic theory of classical particles by using a spatial gradient expansion. We also included a non-minimal coupling to the Ricci scalar, since it is required at the level of bare parameters and non-renormalized interaction terms. We used a c-number metric that is determined through the semi-classical Einstein equations, although in principle we could have taken any classical metric for our derivation; it was in this sense unrestricted. The metric is a c-number with respect to quantum expectation values but might be taken to be stochastic as a one-point function to account for stochastic features of cosmological settings. Moreover, we considered a Gaussian state or one-loop truncation and neglected the effects of connected higher-order n-point functions related to the self-coupling, as well as anomalous contributions that result from the renormalization procedure. These effects can in principle be included, and it depends on the scales and couplings of the underlying problem whether they become relevant.
\par In \eqref{deff+} to \eqref{deff2}, we identified four phase-space operators $\hat{f}_1^{\pm},\hat{f}_{2,3}$ which depend on a space-time point $x^{\mu}$ and a three-momentum $p_k$. Two of them can be combined and interpreted as a fluctuating phase-space density $\hat{f}_1 = \hat{f}_1^+ + \hat{f}_1^-$, the average of which, $f_1 = \langle \hat{f}_1 \rangle$, describes a classical statistically distributed one-particle density whenever the quantum state of the system is such that the other two phase-space operators are on average small, $f_1 = \langle \hat{f}_1 \rangle \gg | \langle \hat{f}_{2,3} \rangle|$ (expectation values of $n$ factors of $\hat{f}_1$ are, after subtraction of their disconnected piece, similarly interpreted as $n$-particle phase-space densities). This picture is consistent when we rewrite the hydrodynamic energy density, pressure and velocity in the non-interacting limit in terms of momentum integrals over $f_1$. However, the main results of this paper are the dynamical equations \eqref{finf1+} to \eqref{finf3} for the phase-space densities $f_{1}^{\pm}$, $f_{2,3}$, which describe, up to one-point functions, all degrees of freedom of a Gaussian state. We are not aware that equations \eqref{finf1+} to \eqref{finf3} have been derived elsewhere for general curved space-times. Moreover, these equations support the interpretation that the density $f_1$ has a limit as a classical one-particle density, since the equations \eqref{finf1+} to \eqref{finf3} reduce to the Vlasov equation \eqref{Vlasov} to lowest order in the gradient expansion and upon neglecting the densities $f_2$ and $f_3$ and the self-interaction, which amounts to a mass correction in the one-loop approximation.
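As a sketch of this classical limit in the linearized-gravity variables of the previous section: setting $f_{2,3} \to 0$, $\lambda \to 0$ and $s_i \to 0$ in \eqref{f1+LinGrav} and \eqref{f1-Lin} and adding the two equations yields for $f_1 = f_1^+ + f_1^-$ \begin{equation} \big( f_1 \big)^{\prime} + \delta^{jk} \frac{p_j}{m a}\, \partial_k f_1 - m a \big( \partial_k \Phi_N \big) \frac{\partial f_1}{\partial p_k} \approx 0 \, , \end{equation} i.e. the collisionless Boltzmann (Vlasov) equation in conformal time, with comoving momenta and the Newtonian potential $\Phi_N$ providing the force term.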
\par In the derivation of the kinetic description of the real scalar quantum field, we argue that it is necessary to normal order the involved quadratic field operators \eqref{genDefWignerOp} also in the off-coincident limit, since only then is one able to extract quantities that yield a well-defined renormalized energy-momentum tensor and whose dynamics can be approximated with a finite number of spatial derivatives. As far as we know, this problem has not been addressed in detail in the context of quantum kinetic theory in curved space-time, and it should be further investigated whether the boundary terms related to the local subtraction can be given a quantum noise interpretation. \par Finally, we have used the general kinetic equations \eqref{finf1+} to \eqref{finf3} to extend our earlier results on scalar field dark matter with linearized gravity \cite{Prokopec:2017ldn} to include vector and tensor perturbations as well as self-interaction terms. The resulting equations generalize previous cold dark matter descriptions and have, as far as we know, never been studied before. Note that we did not include condensates or one-point functions, a popular description of dark matter that goes under the name of fuzzy, wave or axion dark matter and has been around for a long time \cite{Turner:1983he,Sin:1992bg,Lee:1995af,Hu:2000ke,Goodman:2000tg,Peebles:2000yy,Marsh:2015daa,Hui:2016ltb}. Equipped with a very small mass, the real scalar field condensate leads to different behaviour on small scales. Recently, strong bounds on the mass of fuzzy dark matter have been obtained \cite{Irsic:2017yje}, and more elaborate models combining fuzzy and cold dark matter were suggested \cite{Armengaud:2017nkf}.
Such a condensate component of the state is easily incorporated into our formalism by adding source terms for the Einstein equations \eqref{eMomDecom1} to \eqref{eMomDecom3} via the shift $\hat{f}_{XY} \rightarrow \hat{f}_{(X-\langle X \rangle)(Y-\langle Y \rangle)}$ in \eqref{genDefWignerOp}. The dynamics of the condensates can quickly be derived by taking expectation values of the dynamical equations for the canonical field operators \eqref{opEqPhi} and \eqref{opEqPi}, whose non-linear terms have to be expressed in terms of one-point functions and the one-particle phase-space densities (which are related to the connected part of the two-point function). The coupling between one-point functions and the connected two-point functions then happens directly via one-loop self-interactions or indirectly via the gravitational fields, and it is promising to study whether and on which scales the particle or the condensate nature dominates (such a dark matter model, which differentiates between matter phases depending on the scale, has been proposed in \cite{Berezhiani:2015bqa}). Moreover, our formalism can be useful in studying how a quintessence field \cite{Tsujikawa:2013fta} that goes beyond a condensate can play a role in large-scale structure dynamics. In this case an additional degree of freedom has to be added to play the role of dark matter itself. Another application we have in mind for our formalism is to study the interplay between gravitational waves and the real scalar field on space and time scales where the other gravitational potentials have negligible effects. \par We think our results are important for systematically including special and general relativistic corrections to dark matter models and for studying their range of applicability.
By using non-equilibrium quantum field theory techniques like the Schwinger-Keldysh formalism together with the classical limits we have established in this paper, we also hope to soon provide alternative routes for analytical and numerical studies of dark matter beyond fluid approximations, particularly concerning the small-scale behavior of large-scale structures. \paragraph{Acknowledgments.} This work is part of the research programme of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). This work is in part supported by the D-ITP consortium, a program of the Netherlands Organization for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). \pagebreak
\section{\@startsection {section}{1}{\zeta@}% {-3.5ex \@plus -1ex \@minus -.2ex}% {2.3ex \@plus.2ex}% {\normalfont\large\bfseries}} \renewcommand\subsection{\@startsection{subsection}{2}{\zeta@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\normalsize\bfseries}} \renewcommand\subsubsection{\@startsection{subsubsection}{3}{\zeta@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\normalsize\it}} \renewcommand\paragraph{\@startsection{paragraph}{4}{\zeta@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\normalsize\bf}} \numberwithin{equation}{section} \def\revise#1 {\raisebox{-0em}{\rule{3pt}{1em}}% \marginpar{\raisebox{.5em}{\vrule width3pt\ \vrule width0pt height 0pt depth0.5em \hbox to 0cm{\hspace{0cm}{% \parbox[t]{4em}{\raggedright\footnotesize{#1}}}\hss}}}} \newcommand\fnxt[1] {\raisebox{.12em}{\rule{.35em}{.35em}}\mbox{\hspace{0.6em}}#1} \newcommand\nxt[1] {\\\fnxt#1} \newcommand{{\it i.e.,}\ }{{\it i.e.,}\ } \newcommand{{\it e.g.,}\ }{{\it e.g.,}\ } \newcommand{\mt}[1]{\textrm{\tiny #1}} \def{\cal A} {{\cal A}} \def{\mathfrak A} {{\mathfrak A}} \def{\underline \calA} {{\underline {\mathfrak A}}} \def{\cal B} {{\cal B}} \def{\cal C} {{\cal C}} \def{\cal D} {{\cal D}} \def{\cal E} {{\cal E}} \def{\cal F} {{\cal F}} \def{\cal G} {{\cal G}} \def{\mathfrak G} {{\mathfrak G}} \def{\cal H} {{\cal H}} \def{\cal I} {{\cal I}} \def{\cal J} {{\cal J}} \def{\cal K} {{\cal K}} \def{\cal L} {{\cal L}} \def{\cal M} {{\cal M}} \def{\cal N} {{\cal N}} \def{\cal O} {{\cal O}} \def{\cal P} {{\cal P}} \def{\cal Q} {{\cal Q}} \def{\cal R} {{\cal R}} \def{\cal S} {{\cal S}} \def{\cal T} {{\cal T}} \def{\cal U} {{\cal U}} \def{\cal V} {{\cal V}} \def{\cal W} {{\cal W}} \def{\cal X} {{\cal X}} \def{\mathbb C} {{\mathbb C}} \def{\mathbb N} {{\mathbb N}} \def{\mathbb P} {{\mathbb P}} \def{\mathbb Q} {{\mathbb Q}} \def{\mathbb R} {{\mathbb R}} \def{\mathbb Z} {{\mathbb Z}} \def\partial {\partial} \def\bar\partial 
{\bar\partial} \def{\rm e} {{\rm e}} \def{\rm i} {{\rm i}} \def{\circ} {{\circ}} \def\mathop{\rm Tr} {\mathop{\rm Tr}} \def{\rm Re\hskip0.1em} {{\rm Re\hskip0.1em}} \def{\rm Im\hskip0.1em} {{\rm Im\hskip0.1em}} \def{\it id} {{\it id}} \def\de#1#2{{\rm d}^{#1}\!#2\,} \def\De#1{{{\cal D}}#1\,} \def{\frac12}{{\frac12}} \newcommand\topa[2]{\genfrac{}{}{0pt}{2}{\scriptstyle #1}{\scriptstyle #2}} \def\undertilde#1{{\vphantom#1\smash{\underset{\widetilde{\hphantom{\displaystyle#1}}}{#1}}}} \def\mathop{{\prod}'}{\mathop{{\prod}'}} \def\gsq#1#2{% {\scriptstyle #1}\square\limits_{\scriptstyle #2}{\,}} \def\sqr#1#2{{\vcenter{\vbox{\hrule height.#2pt \hbox{\vrule width.#2pt height#1pt \kern#1pt \vrule width.#2pt}\hrule height.#2pt}}}} \def\square{% \mathop{\mathchoice{\sqr{12}{15}}{\sqr{9}{12}}{\sqr{6.3}{9}}{\sqr{4.5}{9}}}} \newcommand{\fft}[2]{{\frac{#1}{#2}}} \newcommand{\ft}[2]{{\textstyle{\frac{#1}{#2}}}} \def\mathop{\mathchoice{\sqr{8}{32}}{\sqr{8}{32}{\mathop{\mathchoice{\sqr{8}{32}}{\sqr{8}{32}} {\sqr{6.3}{9}}{\sqr{4.5}{9}}}} \newcommand{\mathfrak{w}}{\mathfrak{w}} \newcommand{\mathfrak{q}}{\mathfrak{q}} \newcommand{\mathfrak{w}}{\mathfrak{w}} \newcommand{{\Omega}}{{\Omega}} \def\alpha{\alpha} \def\beta{\beta} \def\omega{\omega} \def\rho{\rho} \def\delta{\delta} \def\epsilon{\epsilon} \def\chi{\chi} \def\gamma{\gamma} \def\gamma{\gamma} \def\hat{x}{\hat{x}} \def\hat{\rho}{\hat{\rho}} \def\hat{\chi}{\hat{\chi}} \def\hat{h}{\hat{h}} \def\hat{f}{\hat{f}} \def\hat{g}{\hat{g}} \def\hat{K}{\hat{K}} \def\phi{\phi} \def\psi{\psi} \def\hat{h}{\hat{h}} \def\nabla_\mu{\nabla_\mu} \def\nabla_\nu{\nabla_\nu} \def\Gamma{\Gamma} \def{\rm arctanh}{{\rm arctanh}} \def\Delta{\Delta} \def\kappa{\kappa} \def\rm dilog{\rm dilog} \def\tilde{a}{\tilde{a}} \def\lambda{\lambda} \def\zeta{\zeta} \def\Omega{\Omega} \def\Omega{\Omega} \def\tilde{f}{\tilde{f}} \def\tilde{h}{\tilde{h}} \def\tilde{K}{\tilde{K}} \def\tilde{k}{\tilde{k}} \def\tilde{g}{\tilde{g}} \def\hat{\rho}{\hat{\rho}} 
\def\hat{f}{\hat{f}} \def\hat{K}{\hat{K}} \def\hat{P}{\hat{P}} \def\rangle{\Longrightarrow} \def\vev#1{\langle #1 \rangle} \def\hat{p}_0{\hat{p}_0} \def\hat{K}_0{\hat{K}_0} \def\hat{a}{\hat{a}} \def\hat{\beta}{\hat{\beta}} \def\hat{G}{\hat{G}} \def{\chi\rm{SB}}{{\chi\rm{SB}}} \def\kappa{\kappa} \def\epsilon{\epsilon} \def\tau{\tau} \def\sigma{\sigma} \def\langle{\langle} \def\rangle{\rangle} \def\tilde{p}{\tilde{p}} \def\tilde{a}{\tilde{a}} \def\hat{\phi}{\hat{\phi}} \def{\rm arcsinh}{{\rm arcsinh}} \def\l_{GB}{\lambda_{GB}} \def\hat{c}{\hat{c}} \def\hat{a}{\hat{a}} \def\hat{d}{\hat{d}} \def\comment#1{{\bf [[#1]]}} \catcode`\@=12 \begin{document} \title{\bf Black hole spectra in holography: consequences for equilibration of dual gauge theories} \date{May 6, 2015} \author{ Alex Buchel\\[0.4cm] \it Department of Applied Mathematics\\ \it University of Western Ontario\\ \it London, Ontario N6A 5B7, Canada\\ \it Perimeter Institute for Theoretical Physics\\ \it Waterloo, Ontario N2J 2W9, Canada } \Abstract{ For a closed system to equilibrate from a given initial condition there must exist an equilibrium state with the energy equal to the initial one. Equilibrium states of a strongly coupled gauge theory with a gravitational holographic dual are represented by black holes. We study the spectrum of black holes in the Pilch-Warner geometry. These black holes are holographically dual to equilibrium states of strongly coupled $SU(N)$ ${\cal N}=2^*$ gauge theory plasma on $S^3$ in the planar limit. We find that there is no energy gap in the black hole spectrum. Thus, there is a priori no obstruction for equilibration of arbitrarily low-energy states in the theory via a small black hole gravitational collapse. The latter is contrasted with phenomenological examples of holography with dual four-dimensional CFTs having non-equal central charges in the stress-energy tensor trace anomaly.
} \makepapertitle \body \let\version\@version\@gobble n2 thermal gap \tableofcontents \section{Introduction and summary}\label{intro} Consider an interacting system in a finite volume. Suppose that the theory is gapless --- there are arbitrarily low-energy excitations. If a generic state in the theory equilibrates, there cannot be a gap in the spectrum of equilibrium states of the theory. This obvious statement has a profound implication for strongly coupled gauge theories with an asymptotically AdS gravitational dual \cite{m1}. In a holographic dual the equilibrium states are realized by black holes \cite{Aharony:1999ti}. Thus, if it is possible to prepare arbitrarily low-energy initial configurations in a holographic dual with a gapped spectrum of black holes, such states of the boundary gauge theory will never equilibrate. Correspondingly, the asymptotically AdS dual is guaranteed to be stable against gravitational collapse for sufficiently small amplitudes of the perturbations. Examples of this type would violate ergodicity from the field theory perspective. In this paper we show that while it is possible to realize the above scenario in a phenomenological (bottom-up) holographic example --- Einstein-Gauss-Bonnet (EGB) gravity with a negative cosmological constant --- it does not occur in the specific model of gauge theory/supergravity correspondence we consider: the holographic duality between ${\cal N}=2^*$ $SU(N)$ gauge theory and the gravitational Pilch-Warner (PW) flow \cite{pw,bpp,cj}. From the gauge theory perspective, $SU(N)$ ${\cal N}=2^*$ gauge theory is obtained from the parent ${\cal N}=4$ SYM by giving a mass to the ${\cal N}=2$ hypermultiplet in the adjoint representation. In $R^{3,1}$ space-time, the low-energy effective action of the theory can be computed exactly \cite{Donagi:1995cf}.
The theory has quantum Coulomb branch vacua ${\cal M}_{\cal C}$, parameterized by the expectation values of the complex scalar $\Phi$ in the ${\cal N}=2$ vector multiplet, taking values in the Cartan subalgebra of the gauge group, \begin{equation} \Phi={\rm diag}(a_1,a_2,\cdots,a_N)\,,\qquad \sum_{i}a_i=0\,, \eqlabel{vevs} \end{equation} resulting in the complex dimension of the moduli space \begin{equation} {\rm dim}_{\mathbb C}\ {\cal M}_{\cal C}\ =\ N-1\,. \eqlabel{dim} \end{equation} In the large-$N$ limit, and for strong 't Hooft coupling, the holographic duality reduces to the correspondence between the gauge theory and type IIb supergravity. Since supergravities have a finite number of light modes, one should not expect to see the full moduli space of vacua in ${\cal N}=2$ examples of gauge/gravity correspondence. This is indeed what happens: the PW flow localizes on a semi-circle distribution of \eqref{vevs} with a linear number density \cite{bpp}, \begin{equation} \begin{split} &{\rm Im\hskip0.1em}(a_i)=0\,,\qquad a_i\in [-a_0,a_0]\,,\qquad a_0^2=\frac{m^2 g_{YM}^2 N}{4\pi^2}\,,\\ &\rho(a)=\frac{8\pi}{m^2 g_{YM}^2}\ \sqrt{a_0^2-a^2}\,,\qquad \int_{-a_0}^{a_0}da\ \rho(a)=N\,, \end{split} \eqlabel{pwdistr} \end{equation} where $m$ is the hypermultiplet mass. This holographic localization can be deduced entirely from the field theory perspective \cite{Buchel:2013id}, using the $S^4$-supersymmetric localization techniques \cite{Pestun:2007rz}. To summarize, ${\cal N}=2^*$ holography is a well-understood nontrivial example of gauge/gravity correspondence that passes a number of highly nontrivial tests \cite{bpp,Buchel:2013id,Bobev:2013cja}. We would like to compactify the background space of the ${\cal N}=2^*$ strongly coupled gauge theory on $S^3$ of radius $\ell$ --- in the dual gravitational picture we prescribe the boundary condition for the non-normalizable component of the metric in the PW effective action to be that of $R\times S^3$.
This is in addition to specifying non-normalizable components (corresponding to $m$ in \eqref{pwdistr}) for the two PW scalars, dual to the mass deformation operators of dimensions $\Delta=2$ and $\Delta=3$ of the gauge theory hypermultiplet mass term. Thus, we have produced a holographic example of a strongly interacting system in a finite volume. The single dimensionless parameter\footnote{${\cal N}=2^*$ theory in Minkowski space-time has a scale associated with the Coulomb branch moduli distribution \eqref{pwdistr}. Once the theory is compactified on the $S^3$ the moduli space is lifted.}, so far, is $m \ell$. We proceed to construct regular solutions of the PW effective gravitational action with the prescribed boundary conditions, interpreting them as vacua of the $S^3$-compactified strongly coupled ${\cal N}=2^*$ gauge theory. Using the standard holographic renormalization technique\footnote{For the model at hand this was developed in \cite{Buchel:2004hw}.} we compute the vacuum energy of the theory as a function of $m\ell$, $E_{vacuum}=E_{vacuum}(m\ell)$. We do not verify in this work whether the described $S^3$-compactifications preserve any supersymmetry; thus, it is important to check the stability of the vacuum solutions. Previously, a careful analysis of the $S^4$-compactified PW holographic flows of \cite{Buchel:2013fpa} pointed to a discrepancy in the free energy of the solutions, compared with the localization prediction in \cite{Buchel:2013id}. This discrepancy was resolved by identifying a larger truncation \cite{Bobev:2013cja} (BEFP)\footnote{Of course, BEFP can itself be consistently truncated to PW.}, where it was pointed out that preservation of the $S^4$-supersymmetry necessitates turning on additional bulk scalar fields. Stability of the PW embedding inside BEFP was discussed in \cite{Balasubramanian:2013esa}. We verify here that the $S^3$-compactified PW vacua are stable within the BEFP truncation.
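(As an aside, the normalization of the semi-circle density \eqref{pwdistr} quoted above is straightforward to verify numerically; the sketch below, with arbitrary fiducial values for $m$ and $g_{YM}$, integrates $\rho(a)$ over the cut by a midpoint rule and recovers $N$.)

```python
import math

def moduli_norm(m, gYM, N, steps=200_000):
    """Midpoint-rule integral of rho(a) = 8*pi/(m^2 g^2) * sqrt(a0^2 - a^2)
    over the Coulomb-branch cut [-a0, a0], with a0^2 = m^2 g^2 N / (4 pi^2)."""
    a0 = math.sqrt(m * m * gYM * gYM * N) / (2.0 * math.pi)
    pref = 8.0 * math.pi / (m * m * gYM * gYM)
    h = 2.0 * a0 / steps
    return sum(pref * math.sqrt(max(a0 * a0 - (-a0 + (i + 0.5) * h) ** 2, 0.0)) * h
               for i in range(steps))

# moduli_norm(1.0, 2.0, 100.0) recovers N = 100 to high accuracy,
# independently of the fiducial values of m and gYM.
```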
Having constructed vacuum solutions, we move to the discussion of the black hole spectrum. We construct regular Schwarzschild black hole solutions in the PW effective action, and compute $\delta E\equiv \delta E(m\ell, \ell_{BH}/L)\equiv E-E_{vacuum}(m\ell)$. We argue that there is no obstruction to initializing arbitrarily low-energy excitations over the vacuum. Thus, one would expect no gap in the energy spectrum of PW black hole solutions, realizing equilibrium configurations of the strongly coupled ${\cal N}=2^*$ gauge theory in the planar limit. Indeed, we find strong numerical evidence that \begin{equation} \lim_{\ell_{BH}/L\to 0}\ \frac{\delta E(m\ell, \ell_{BH}/L)}{E_{vacuum}(m\ell=0)}\ =\ 0 \,. \eqlabel{result} \end{equation} The rest of the paper is organized as follows. In the next section we discuss the spectrum of black holes in five-dimensional EGB gravity with a negative cosmological constant. These gravitational backgrounds can be interpreted as holographic duals to equilibrium states of strongly coupled conformal gauge theories with non-equal central charges in the stress-energy tensor trace anomaly. We show that there is a gap in the spectrum of black holes. However, as one imposes constraints on EGB gravity coming from interpreting it as an effective description of a gauge theory/string theory correspondence, the claim about the gap becomes unreliable --- higher-derivative corrections, which are not under control, make order-one corrections to the gap. We follow up with the discussion of the ${\cal N}=2^*$ holographic example. In section \ref{action} we review the PW effective action and its embedding within the larger BEFP truncation. In section \ref{vacuum} we construct the gravitational dual to vacuum states of the ${\cal N}=2^*$ gauge theory on $S^3$. Stability of the latter states within the BEFP truncation is discussed in section \ref{vacuumstability}. In section \ref{bh} we study the spectrum of black holes in the PW effective action.
\section{Black hole spectrum in Einstein-Gauss-Bonnet gravity} The effective action of five-dimensional Einstein-Gauss-Bonnet gravity with a negative cosmological constant takes the form: \begin{equation} \begin{split} S=&\frac{1}{2\ell_p^3}\int_{{\cal M}_{5}}d^{5}z \sqrt{-g} \biggl(\frac{12}{L^2}+R+ \frac{\l_{GB}}{2} L^2\left(R^2-4 R_{\mu\nu}R^{\mu\nu} +R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\right) \biggr)\,. \end{split} \eqlabel{eq:aisnotc} \end{equation} When interpreted in the framework of the gauge theory/gravity correspondence\footnote{See \cite{Banerjee:2014oaa} for a recent review.}, the EGB action \eqref{eq:aisnotc} represents a holographic dual to a putative strongly coupled conformal theory with non-equal central charges, $c\ne a$, of the boundary stress-energy tensor, \begin{equation} \begin{split} &\langle T^\mu{}_\mu\rangle_{\rm CFT} =\frac{c}{16\pi^2} I_4-\frac{a}{16\pi^2} E_4\,,\\ &E_4= r_{\mu\nu\rho\lambda}r^{\mu\nu\rho\lambda}-4 r_{\mu\nu}r^{\mu\nu}+r^2 \,,\\ & I_4= r_{\mu\nu\rho\lambda}r^{\mu\nu\rho\lambda}-2 r_{\mu\nu}r^{\mu\nu} +\frac 13r^2\,, \end{split} \eqlabel{eq:anomaly} \end{equation} where $E_4$ and $I_4$ are the four-dimensional Euler density and the square of the Weyl curvature of the CFT background space-time. The precise identification of the central charges is as follows: \begin{equation} \begin{split} &c=\frac{\pi^2 \tilde{L}^3}{\ell_p^3}\left(1-2\frac{\l_{GB}}{\beta^2}\right)\,,\qquad a=\frac{\pi^2 \tilde{L}^3}{\ell_p^3}\left(1-6\frac{\l_{GB}}{\beta^2}\right)\,,\\ &\tilde{L}\equiv \beta L\,,\qquad \beta^2\equiv \frac 12 +\frac 12 \sqrt{1-4\l_{GB}}\,. \end{split} \eqlabel{eq:defca} \end{equation} The gravitational dual to the vacuum state of a CFT on a three-sphere $S^3$ is a global $AdS_5$, \begin{equation} ds^2 = \frac{L^2\beta^2}{\cos^2 x} \left( - dt^2 +{dx^2} +\sin^2x \, d\Omega^2_{3} \right) \, , \qquad x\in[0,\pi/2]\,, \eqlabel{eq:adsmetric} \end{equation} where $d\Omega_3^2$ is the metric of $S^3$.
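As a quick sanity check on \eqref{eq:defca} (a small numerical aside, not part of the holographic computation), one can verify that the two central charges coincide at $\l_{GB}=0$ and that their difference has the sign of $\l_{GB}$:

```python
import math

def central_charges(lam, L=1.0, lp=1.0):
    """Central charges c and a of eq. (eq:defca)."""
    beta2 = 0.5 + 0.5 * math.sqrt(1.0 - 4.0 * lam)   # beta^2, requires lam <= 1/4
    Lt = math.sqrt(beta2) * L                        # Ltilde = beta L
    pref = math.pi**2 * Lt**3 / lp**3
    c = pref * (1.0 - 2.0 * lam / beta2)
    a = pref * (1.0 - 6.0 * lam / beta2)
    return c, a

# lam = 0: two-derivative gravity, equal central charges c = a
c0, a0 = central_charges(0.0)
assert abs(c0 - a0) < 1e-12

# c - a = 4 lam/beta^2 * pref: positive for lam > 0, negative for lam < 0
cp, ap = central_charges(0.05)
cm, am = central_charges(-0.05)
assert cp > ap and cm < am
```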
Notice that $\l_{GB}$ is restricted to be \begin{equation} \l_{GB}\le \frac 14\,; \eqlabel{lgbconst1} \end{equation} otherwise, there is simply no asymptotic AdS solution. Following the holographic renormalization of EGB gravity developed in \cite{Liu:2008zf,Banerjee:2014oaa}, we find that the vacuum energy (the mass) of \eqref{eq:adsmetric}, or the Casimir energy from the boundary CFT perspective, is \begin{equation} E_{vacuum}=\frac{3a}{4\tilde{L}}\,. \eqlabel{eq:casimir} \end{equation} Black holes (equilibrium configurations of the EGB CFT) are found as regular horizon solutions within the metric ansatz, \begin{equation} ds^2 = \frac{L^2\beta^2}{\cos^2 x} \left( -A(x) dt^2 +\frac{dx^2}{A(x)} +\sin^2x \, d\Omega^2_{3} \right) \,. \eqlabel{gbbh} \end{equation} The most general solution of the equations of motion obtained from \eqref{eq:aisnotc} determines $A(x)$ in terms of a single parameter $M>0$, \begin{equation} \begin{split} A=&~1-\frac{1}{2 \l_{GB}} \biggl((2 \l_{GB}-\beta^2) \sin^2 x +\biggl(4 \l_{GB} (\beta^2-2 \l_{GB}) M \cos^4 x\\ &+(2 \l_{GB}-\beta^2)^2 \cos^4 x -\beta^4 (1-4 \l_{GB}) \cos(2 x)\biggr)^{1/2}\biggr)\,.\\ \end{split} \eqlabel{eq:statbh} \end{equation} Furthermore, using the machinery of the holographic renormalization, the energy of the boundary CFT is \begin{equation} E=\frac{3c}{4L \beta} \biggl(\frac{\beta^2-6\l_{GB}}{\beta^2-2\l_{GB}}+4 M\biggr) =\frac{3c}{4\tilde{L}} \biggl(\frac{a}{c}+4 M\biggr)\,. \eqlabel{eq:energyxi} \end{equation} It is remarkable that the regular Schwarzschild horizon in the geometry \eqref{gbbh}, \eqref{eq:statbh} exists only provided \cite{Cai:2001dz,Buchel:2014dba} \begin{equation} M \ge \begin{cases} \frac{1-\beta^2}{2\beta^2-1}\,, &{\rm if}\ \l_{GB}>0\,,\\ \frac{\beta^2-1}{2\beta^2-1}\,, &{\rm if}\ \l_{GB}<0\,. \end{cases} \eqlabel{eq:defm} \end{equation} For positive $\l_{GB}$, the bound comes from requiring that $S^3$ remains finite at the location of the horizon (otherwise the curvature at the horizon diverges).
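Two limits of \eqref{eq:statbh} provide quick consistency checks, sketched below as a numerical aside with arbitrary parameter values: at $M=0$ the solution collapses to global AdS$_5$ with $A\equiv 1$ (using that $\beta^2$ is a root of $u^2-u+\l_{GB}=0$), and for any $M>0$ the metric function still approaches $A\to 1$ at the boundary $x\to\pi/2$:

```python
import math

def A_EGB(x, M, lam):
    """Metric function of eq. (eq:statbh) for the EGB black hole."""
    b2 = 0.5 + 0.5 * math.sqrt(1.0 - 4.0 * lam)   # beta^2
    s2, c4 = math.sin(x)**2, math.cos(x)**4
    arg = (4.0 * lam * (b2 - 2.0 * lam) * M * c4
           + (2.0 * lam - b2)**2 * c4
           - b2**2 * (1.0 - 4.0 * lam) * math.cos(2.0 * x))
    arg = max(arg, 0.0)   # guard against tiny negative rounding errors
    return 1.0 - ((2.0 * lam - b2) * s2 + math.sqrt(arg)) / (2.0 * lam)

lam = 0.1
# M = 0: global AdS5, A(x) = 1 identically
for x in [0.1 * k for k in range(1, 15)]:
    assert abs(A_EGB(x, 0.0, lam) - 1.0) < 1e-9

# M > 0: asymptotically AdS, A -> 1 as x -> pi/2
assert abs(A_EGB(math.pi / 2 - 1e-5, 3.0, lam) - 1.0) < 1e-3
```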
For negative $\l_{GB}$, violating the bound would render the geometry complex (the expression inside the square root in \eqref{eq:statbh} would turn negative for some $x\in (0,\pi/2)$). Constraints \eqref{eq:defm} imply a gap in $\delta E\equiv E-E_{vacuum}$ in the spectrum of EGB black holes, \begin{equation} \frac{\delta E}{|E_{vacuum}|}\ \ge\ \epsilon_{gap}=\frac{4(1-\beta^2)}{|6\beta^2-5|}\times \begin{cases} 1\,,\qquad \l_{GB}>0\,,\\ -(2\beta^2-1)^2\,,\qquad \l_{GB}<0\,. \end{cases} \eqlabel{gbgap} \end{equation} With the only restriction \eqref{lgbconst1} on $\l_{GB}$, $\epsilon_{gap}$ is unbounded as $\l_{GB}\to -\infty$ and as $\l_{GB}\to 5/36$. We now argue that attempts to interpret EGB holography as an effective description of some gauge theory/string theory correspondence make the gap claim \eqref{gbgap} unreliable. First, causality of the holographic GB hydrodynamics requires \cite{Buchel:2009tt} \begin{equation} -\frac{7}{36}\le\l_{GB}\le\frac{9}{100}\qquad \Longrightarrow\qquad \epsilon_{gap}\le \begin{cases} 1\,,\ \l_{GB}>0\,,\\ \frac{16}{27}\,,\ \l_{GB}<0\,. \end{cases} \end{equation} Additionally, it was pointed out in \cite{Camanho:2014apa} that pure EGB gravity with a negative cosmological constant cannot arise as a low-energy limit of a gauge theory/string theory correspondence --- the difference of central charges $(c-a)/c$ is bounded by $\Delta_{gap}^{-2}$, where $\Delta_{gap}$ is the dimension of the lightest single-particle operator with spin $J>2$ in the holographically dual conformal gauge theory. Integrating out massive $J>2$ spin states generically produces new higher-curvature contributions, in addition to the Gauss-Bonnet term. These higher-curvature corrections are as important as the Einstein-Hilbert term and the GB term in \eqref{eq:aisnotc} when the size of a black hole becomes of order $\l_{GB} L$.
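Within the causality window the quoted bounds on $\epsilon_{gap}$ follow directly from \eqref{gbgap}; a small numerical sketch:

```python
import math

def eps_gap(lam):
    """Relative energy gap of eq. (gbgap) for EGB black holes."""
    b2 = 0.5 + 0.5 * math.sqrt(1.0 - 4.0 * lam)
    base = 4.0 * (1.0 - b2) / abs(6.0 * b2 - 5.0)
    return base if lam > 0 else base * (-(2.0 * b2 - 1.0)**2)

# causality window of holographic GB hydrodynamics: -7/36 <= lam <= 9/100
lam_pos = [9.0 / 100.0 * k / 200.0 for k in range(1, 201)]
lam_neg = [-7.0 / 36.0 * k / 200.0 for k in range(1, 201)]
assert all(eps_gap(l) <= 1.0 + 1e-9 for l in lam_pos)
assert all(eps_gap(l) <= 16.0 / 27.0 + 1e-9 for l in lam_neg)

# the bounds are saturated at the endpoints of the window
assert abs(eps_gap(9.0 / 100.0) - 1.0) < 1e-9
assert abs(eps_gap(-7.0 / 36.0) - 16.0 / 27.0) < 1e-9
```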
The latter statement holds even for $\l_{GB} \ll 1$, since the Ricci scalar evaluated at the horizon of a black hole \eqref{eq:statbh} of size $\sim \l_{GB} L$ diverges as $\frac{1}{\l_{GB}}$. \section{PW/BEFP effective actions}\label{action} We begin with a description of the PW effective action \cite{pw}. The action of the effective five-dimensional supergravity including the scalars $\alpha$ and $\chi$ (dual to mass terms for the bosonic and fermionic components of the hypermultiplet respectively) is given by \begin{equation} \begin{split} S=&\, \int_{{\cal M}_5} d\xi^5 \sqrt{-g}\ {\cal L}_{PW}\\ =&\frac{1}{4\pi G_5}\, \int_{{\cal M}_5} d\xi^5 \sqrt{-g}\left[\ft14 R-3 (\partial\alpha)^2-(\partial\chi)^2- {\cal P}\right]\,, \end{split} \eqlabel{action5} \end{equation} where the potential% \footnote{We set the five-dimensional supergravity coupling to one. This corresponds to setting the radius $L$ of the five-dimensional sphere in the undeformed metric to $2$.} \begin{equation} {\cal P}=\frac{1}{16}\left[\frac 13 \left(\frac{\partial W}{\partial \alpha}\right)^2+ \left(\frac{\partial W}{\partial \chi}\right)^2\right]-\frac 13 W^2\,, \eqlabel{pp} \end{equation} is a function of $\alpha$ and $\chi$, and is determined by the superpotential \begin{equation} W=- e^{-2\alpha} - \frac{1}{2} e^{4\alpha} \cosh(2\chi)\,. \eqlabel{supp} \end{equation} In our conventions, the five-dimensional Newton's constant is \begin{equation} G_5\equiv \frac{G_{10}}{2^5\ {\rm vol}_{S^5}}=\frac{4\pi}{N^2}\,.
\eqlabel{g5} \end{equation} The supersymmetric vacuum of ${\cal N}=2^*$ gauge theory in Minkowski space-time is given by \begin{equation} ds_5^2=e^{2 A}\left(-dt^2 +d\vec{x}^2\right)+dr^2\,,\qquad \rho=\rho(r)\equiv e^{\alpha(r)}\,,\qquad \chi=\chi(r)\,, \eqlabel{pwsusy} \end{equation} with \begin{equation} \begin{split} e^A&=\frac{k \rho^2}{\sinh(2\chi)}\,,\qquad \rho^6=\cosh(2\chi)+\sinh^2(2\chi)\,\ln\frac{\sinh(\chi)}{\cosh(\chi)}\,, \qquad \frac{dA}{dr}=-\frac 13 W\,, \end{split} \eqlabel{pwsolution} \end{equation} where the single integration constant $k$ is related to the hypermultiplet mass $m$ according to \cite{bpp} \begin{equation} k= m L =2 m\,. \eqlabel{kim} \end{equation} The BEFP effective action \cite{Bobev:2013cja} is given by \begin{equation} \begin{split} S_{BEFP}=&\, \int_{{\cal M}_5} d\xi^5 \sqrt{-g}\ {\cal L}_{BEFP}\\ =&\frac{1}{4\pi G_5}\, \int_{{\cal M}_5} d\xi^5 \sqrt{-g}\left[ R-12 \frac{(\partial\eta)^2}{\eta^2} -4 \frac{(\partial{\vec X})^2}{(1-\vec{X}^2)^2} -{\cal V}\right]\,, \end{split} \eqlabel{befp} \end{equation} with the potential \begin{equation} {\cal V}=-\left[\frac{1}{\eta^4}+2\eta^2\ \frac{1+\vec{X}^2}{1-\vec{X}^2} -\eta^8\ \frac{(X_1)^2+(X_2)^2}{(1-\vec{X}^2)^2} \right]\,, \eqlabel{pbefp} \end{equation} where $\vec{X}=\left(X_1,X_2,X_3,X_4,X_5\right)$ are five of the scalars and $\eta$ is the sixth. The symmetry of the action reflects the symmetries of the dual gauge theory: the two scalars $(X_1,X_2)$ form a doublet under the $U(1)_R$ part of the gauge group, while $(X_3,X_4,X_5)$ form a triplet under $SU(2)_V$ and $\eta$ is neutral. The PW effective action is recovered as a consistent truncation of \eqref{befp} with \begin{equation} X_2=X_3=X_4=X_5=0\,, \eqlabel{truncate} \end{equation} provided we identify the remaining BEFP scalars $(\eta,X_1)$ with the PW scalars $(\alpha,\chi)$ as follows \begin{equation} e^\alpha\equiv \eta\,,\qquad \cosh 2\chi =\frac{1+(X_1)^2}{1-(X_1)^2} \,.
\eqlabel{id} \end{equation} Note that once $m\ne 0$ (correspondingly $X_1\ne 0$), the $U(1)_R$ symmetry is explicitly broken; on the contrary, $SU(2)_V$ remains unbroken in the truncation to PW. \section{Holographic duals to ${\cal N}=2^*$ vacuum states on $S^3$}\label{vacuum} We derive the bulk equations of motion and specify the boundary conditions representing the gravitational dual to vacuum states of strongly coupled ${\cal N}=2^*$ gauge theory on $S^3$. We assume that the vacua are $SO(4)$-invariant. We argue that there is no obstruction to exciting these vacua by arbitrarily small perturbations of the bulk scalar fields $\alpha$ and $\chi$. We review the holographic renormalization of the theory and compute the vacuum energy. Next, we solve the static gravitational equations perturbatively in the mass deformation parameter $m\ell\ll 1$ --- this serves as an independent check of the numerical solutions at general $m\ell$. We conclude with a plot of $\epsilon\equiv E_{vacuum}(m\ell)/E_{vacuum}^{{\cal N}=4}$, \begin{equation} E_{vacuum}^{{\cal N}=4}\equiv E_{vacuum}(m\ell=0)= \frac{3N^2}{16\ell}\,, \eqlabel{en4} \end{equation} as a function of $m\ell$. Interestingly, while the vacuum energy of the ${\cal N}=4$ SYM is positive, it is negative\footnote{Prior to imposing causality constraints in EGB gravity, its vacuum energy becomes negative once $\l_{GB}> 5/36$. The vacuum energy of a different nonconformal gauge theory on $S^3$ was also observed to be negative in \cite{Buchel:2011cc}.} for ${\cal N}=2^*$ gauge theory once $m\ell\gtrsim 0.87$.
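As a consistency check of the truncation \eqref{truncate}, \eqref{id} of section \ref{action} (a numerical sketch: the identification $X_1=\tanh\chi$ solves \eqref{id}, and the overall factor of $4$ reflects the $\frac14 R$ versus $R$ normalizations of \eqref{action5} and \eqref{befp}), one can verify pointwise that the BEFP potential \eqref{pbefp} reduces to $4{\cal P}$:

```python
import math

def P_PW(alpha, chi):
    """PW potential (pp), built from the superpotential (supp)."""
    W  = -math.exp(-2 * alpha) - 0.5 * math.exp(4 * alpha) * math.cosh(2 * chi)
    Wa =  2 * math.exp(-2 * alpha) - 2 * math.exp(4 * alpha) * math.cosh(2 * chi)
    Wc = -math.exp(4 * alpha) * math.sinh(2 * chi)
    return (Wa**2 / 3 + Wc**2) / 16 - W**2 / 3

def V_BEFP(eta, X1):
    """BEFP potential (pbefp), truncated to X2 = ... = X5 = 0."""
    return -(1 / eta**4 + 2 * eta**2 * (1 + X1**2) / (1 - X1**2)
             - eta**8 * X1**2 / (1 - X1**2)**2)

for alpha, chi in [(0.0, 0.0), (0.3, -0.2), (-0.1, 0.45), (0.2, 0.1)]:
    eta, X1 = math.exp(alpha), math.tanh(chi)   # solves eq. (id)
    assert abs(V_BEFP(eta, X1) - 4 * P_PW(alpha, chi)) < 1e-10

# the AdS5 critical point alpha = chi = 0 has P = -3/4 in these conventions
assert abs(P_PW(0.0, 0.0) + 0.75) < 1e-14
```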
\subsection{Equations of motion and the boundary conditions} We consider the general time-dependent $SO(4)$-invariant ansatz for the metric and the scalar fields: \begin{equation} ds_5^2=\frac{4}{\cos^2 x} \left(-A e^{-2\delta} (dt)^2+\frac{(dx)^2}{A}+\sin^2 x (d\Omega_3)^2\right)\,, \eqlabel{geomdyn} \end{equation} where $(d\Omega_3)^2$ is the metric on a unit\footnote{We set $\ell=1$; the $\ell$ dependence can be easily recovered from dimensional analysis.} round $S^3$, and $\{A,\delta,\alpha,\chi\}$ are functions of the radial coordinate $x$ and time $t$. Introducing \begin{equation} \Phi_\alpha\equiv \partial_x \alpha\,,\ \ \Phi_\chi\equiv \partial_x \chi\,,\ \ \Pi_\alpha\equiv \frac{e^\delta}{A}\partial_t\alpha\,,\ \ \Pi_\chi\equiv \frac{e^\delta}{A}\partial_t\chi\,, \eqlabel{momenta} \end{equation} we obtain from \eqref{action5} the following equations of motion: \nxt the evolution equations, $ \dot{} = \partial_t$, \begin{equation} \begin{split} &\dot{\alpha}=A e^{-\delta} \Pi_\alpha\,,\qquad \dot{\chi}=A e^{-\delta} \Pi_\chi\,,\\ &\dot\Phi_\alpha=\left(A e^{-\delta} \Pi_\alpha\right)_{, x}\,,\qquad \dot\Phi_\chi=\left(A e^{-\delta} \Pi_\chi\right)_{, x}\,,\\ &\dot\Pi_\alpha=\frac{1}{\tan^3 x}\left(\tan^3x A e^{-\delta}\Phi_\alpha\right)_{,x}-\frac {2 }{3\cos^2 x}e^{-\delta} \frac{\partial{\cal P}}{\partial\alpha}\,,\\ &\dot\Pi_\chi=\frac{1}{\tan^3 x}\left(\tan^3x A e^{-\delta}\Phi_\chi\right)_{,x}-\frac {2 }{\cos^2 x}e^{-\delta} \frac{\partial{\cal P}}{\partial\chi}\,, \end{split} \eqlabel{evolution} \end{equation} \nxt the spatial constraint equations, \begin{equation} \begin{split} A_{,x}=&\frac{2+2\sin^2 x}{\sin x\cos x}(1-A)-2\sin (2x) A \left(\Phi_\alpha^2+\Pi_\alpha^2+\frac 13 \Phi_\chi^2 +\frac 13 \Pi_\chi^2\right)\\&-4\tan x \left(1+\frac 43 {\cal P}\right)\,,\\ \delta_{, x}=&-2\sin(2x) \left(\Phi_\alpha^2+\Pi_\alpha^2+\frac 13 \Phi_\chi^2 +\frac 13 \Pi_\chi^2\right)\,, \end{split} \eqlabel{sconstrant} \end{equation} \nxt and the momentum constraint
equation, \begin{equation} \begin{split} A_{,t}+4 \sin(2x) A^2 e^{-\delta} \left(\Phi_\alpha\Pi_\alpha+\frac 13 \Phi_\chi\Pi_\chi\right)=0\,. \end{split} \eqlabel{momconstraint} \end{equation} It is straightforward to verify that the spatial derivative of \eqref{momconstraint} is implied by \eqref{evolution} and \eqref{sconstrant}; thus it is sufficient to impose this equation at a single point. As $x\to 0_+$, the momentum constraint implies that $A(t,0)$ is a constant\footnote{In fact, the non-singularity of $A(t,x)$ in this limit automatically solves \eqref{momconstraint}.}, and as $x\to \frac{\pi}{2}_-$ the latter constraint is equivalent to the conservation of the boundary stress-energy tensor (see section \ref{holren} for details). The general non-singular solution of \eqref{evolution}, \eqref{sconstrant} at the origin takes the form \begin{equation} \begin{split} &A(t,x)=1+{\cal O}(x^2)\,,\qquad \delta(t,x)=d^h_0(t)+{\cal O}(x^2)\,,\\ &\alpha(t,x)=\alpha^h_0(t)+{\cal O}(x^2)\,,\qquad \chi(t,x)=\chi^h_0(t)+{\cal O}(x^2)\,. \end{split} \eqlabel{dynir} \end{equation} It is completely characterized by three time-dependent functions: \begin{equation} \{d^h_0\,, \alpha^h_0\,, \chi^h_0\}\,.
\eqlabel{ircoefficients} \end{equation} At the outer boundary $x=\frac \pi2$ we introduce $y\equiv \cos^2 x$ so that we have \begin{equation} \begin{split} A=&1+y\ \frac 23 c_{1,0}\ + y^2\ \biggl(a_{2,0}(t)+\biggl(\frac 23 c_{1,0}(c_{1,0}+1)+8\rho_{1,1}^2+ 16\rho_{1,1}\rho_{1,0}(t)\biggr)\ln y\\ &+8 \rho_{1,1}^2\ln^2 y\biggr)+ {\cal O}(y^3\ln^3 y)\,,\\ \delta=&y\ \frac13 c_{1,0}+y^2\ \biggl(\frac12 c_{2,0}(t)-\frac{1}{36} c_{1,0}^2+4 \rho_{1,0}^2(t)-\frac18 c_{1,0}+2 \rho_{1,1}^2 +4 \rho_{1,0}(t) \rho_{1,1}\\ &+\biggl(\frac 14 c_{1,0}+\frac13 c_{1,0}^2+4 \rho_{1,1}^2+8 \rho_{1,0}(t) \rho_{1,1}\biggr) \ln y +4 \rho_{1,1}^2 \ln^2 y\biggr)+ {\cal O}(y^3\ln^3 y)\,,\\ e^\alpha=&1+y\ \left(\rho_{1,0}(t)+\rho_{1,1} \ln y\right)+y^2\ \biggl( \frac{1}{12} c_{1,0}^2+\rho_{1,0}(t)-3 \rho_{1,1} c_{1,0}+6 \rho_{1,1}^2\\ &-4 \rho_{1,0}(t) \rho_{1,1}+\frac43 c_{1,0} \rho_{1,0}(t)+\frac32 \rho_{1,0}^2(t) +\frac14 \partial^2_{tt}\rho_{1,0}(t) +\biggl(\frac43 \rho_{1,1} c_{1,0}+\rho_{1,1}\\&-4 \rho_{1,1}^2 +3 \rho_{1,0}(t) \rho_{1,1}\biggr) \ln y+\frac32 \rho_{1,1}^2 \ln^2 y \biggr)+{\cal O}(y^3\ln^3 y)\,,\\ \cosh2\chi=&1+y\ c_{1,0}+y^2\ \biggl(c_{2,0}(t)+\biggl(\frac12 c_{1,0}+\frac23 c_{1,0}^2\biggr) \ln y\biggr)+ {\cal O}(y^3\ln^2 y)\,, \end{split} \eqlabel{dynuv} \end{equation} where we explicitly indicated time-dependence, {\it i.e.,}\ \begin{equation} \frac{d}{dt} c_{1,0}=0\,,\qquad \frac{d}{dt} \rho_{1,1}=0\,. 
\eqlabel{timedep} \end{equation} The asymptotic expansion \eqref{dynuv} is completely characterized by two constants\footnote{Prescribing time dependence to these coefficients amounts to studying quantum quenches in ${\cal N}=2^*$ gauge theory \cite{Buchel:2012gw}.} $\{\rho_{1,1},c_{1,0}\}$ and three time-dependent functions \begin{equation} \{a_{2,0}\,, \rho_{1,0}\,, c_{2,0}\}\,, \eqlabel{uvcoefficients} \end{equation} constrained by \eqref{momconstraint} to satisfy \begin{equation} 0=\frac{d}{dt}\biggl(a_{2,0}-8\rho_{1,0}^2(t)-16\rho_{1,0}(t)\rho_{1,1}-\frac 23 c_{2,0}(t)\biggr)\,. \eqlabel{uvmom} \end{equation} The non-normalizable coefficients $\rho_{1,1}$ and $c_{1,0}$ are related to the mass deformation parameters of the dual gauge theory. Following \cite{Buchel:2007vy}, the precise relation can be established by matching the asymptotics \eqref{dynuv} with the supersymmetric PW RG flow \eqref{pwsolution}, \begin{equation} \{\rho_{1,1},c_{1,0}\}\bigg|_{PW}=k^2\ \left\{\frac {1}{48},\frac 18\right\}=m^2\ \left\{\frac {1}{12},\frac 12\right\}\,. \eqlabel{matchPW} \end{equation} A specific relation between the non-normalizable coefficients of the bulk scalars $e^\alpha$ and $\cosh2\chi$, {\it i.e.,}\ \begin{equation} c_{1,0}=6\rho_{1,1}\,, \eqlabel{n2susy} \end{equation} realizes ${\cal N}=2$ supersymmetry of the boundary gauge theory in the UV. As in \cite{Buchel:2007vy}, it is possible to study the theory with explicitly broken supersymmetry, {\it i.e.,}\ \begin{equation} \rho_{1,1}\equiv\frac{1}{48}\ (m_b L)^2\qquad \ne\qquad \frac 16\ \times\ c_{1,0}\equiv \frac 16\ \times\ \frac 18\ (m_fL)^2 \,, \eqlabel{genflow} \end{equation} where $m_b$ and $m_f$ are the masses of the bosonic and the fermionic components of the ${\cal N}=2$ hypermultiplet of the boundary gauge theory.
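The numerical factors in \eqref{matchPW}--\eqref{genflow} fit together as elementary arithmetic; the following sketch (with an arbitrary sample mass) simply makes the consistency explicit:

```python
# rho_{1,1} = (m_b L)^2 / 48 and c_{1,0} = (m_f L)^2 / 8, cf. eq. (genflow)
def rho11(mbL):
    return mbL**2 / 48.0

def c10(mfL):
    return mfL**2 / 8.0

# supersymmetric point m_b = m_f: c_{1,0} = 6 rho_{1,1}, cf. eq. (n2susy)
mL = 1.7   # arbitrary sample value of m L
assert abs(c10(mL) - 6.0 * rho11(mL)) < 1e-14

# matching the PW flow, eq. (matchPW): k = m L = 2m, so k^2 {1/48, 1/8} = m^2 {1/12, 1/2}
k, m = 2.0, 1.0   # k = 2m with L = 2
assert abs(rho11(k) - m**2 / 12.0) < 1e-14
assert abs(c10(k) - m**2 / 2.0) < 1e-14
```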
A non-equilibrium state of the gauge theory can be specified with the following initial/boundary conditions: \begin{equation} \begin{split} &\alpha(0,x)=\alpha^{init}(x)\,,\qquad \chi(0,x)=\chi^{init}(x)\,,\qquad \Phi_\alpha(0,x)=\Phi_\alpha^{init}=\frac{d\alpha^{init}}{dx}\,,\\ & \Phi_\chi(0,x)=\Phi_\chi^{init}=\frac{d\chi^{init}}{dx}\,,\qquad \Pi_\alpha(0,x)=\Pi_\alpha^{init}(x)\,,\qquad \Pi_\chi(0,x)=\Pi_\chi^{init}(x)\,, \end{split} \eqlabel{init1} \end{equation} and as $y\equiv \cos^2 x\to 0$, \begin{equation} \begin{split} &\alpha^{init}(y)= \rho_{1,1}\ y\ln y +{\cal O}(y)\,,\qquad \cosh\left(2 \chi^{init}(y)\right) =1+ y\ c_{1,0}+{\cal O}(y^2 \ln y)\,,\\ &\Pi_\alpha^{init}(y)={\cal O}(y)\,,\qquad \Pi_\chi^{init}(y)={\cal O}(y^{3/2})\,, \end{split} \eqlabel{init2} \end{equation} \begin{equation} \begin{split} A(0,x)=&1+\frac{\cos^4 x}{\sin^2 x}\ \exp\biggl(-\frac 23 \int_0^x d\xi\ \sin(2\xi)\biggl( \left(\Pi_\chi^{init}(\xi)\right)^2+\left(\Phi_\chi^{init}(\xi)\right)^2\\ &+3 \left(\Pi_\alpha^{init}(\xi)\right)^2+3 \left(\Phi_\alpha^{init}(\xi)\right)^2 \biggr) \biggr)\ \times\ g(x)\,,\\ g(x)=&-\frac43\int_0^x d\xi \tan^3\xi \exp\biggl(\frac 23 \int_0^\xi d\eta\ \sin(2\eta)\biggl( \left(\Pi_\chi^{init}(\eta)\right)^2+\left(\Phi_\chi^{init}(\eta)\right)^2\\ &+3 \left(\Pi_\alpha^{init}(\eta)\right)^2+3 \left(\Phi_\alpha^{init}(\eta)\right)^2 \biggr) \biggr)\ \times\ \biggl(\frac{4{\cal P}^{init}(\xi)+3}{\cos^2\xi}+\left(\Pi_\chi^{init}(\xi)\right)^2\\ &+\left(\Phi_\chi^{init}(\xi)\right)^2 +3\left(\Pi_\alpha^{init}(\xi)\right)^2+3 \left(\Phi_\alpha^{init}(\xi)\right)^2\biggr)\\ &{\cal P}^{init}(\xi)={\cal P}\left(\alpha^{init}(\xi),\chi^{init}(\xi)\right)\,, \end{split} \eqlabel{init3} \end{equation} \begin{equation} \begin{split} &\delta(0,x)=-\frac 23 \int_0^x d\xi\ \sin(2\xi)\biggl( \left(\Pi_\chi^{init}(\xi)\right)^2+\left(\Phi_\chi^{init}(\xi)\right)^2 +3 \left(\Pi_\alpha^{init}(\xi)\right)^2+3 \left(\Phi_\alpha^{init}(\xi)\right)^2 \biggr)\,, \end{split} \eqlabel{init4}
\end{equation} where we explicitly solved for $A(0,x)$ and $\delta(0,x)$ using the constraint equations \eqref{sconstrant}. Notice that while $A(0,x)$ and $\delta(0,x)$ are free from singularities for arbitrary profiles \eqref{init1}, large-amplitude initial conditions might cause $A(0,x)$ to vanish for some $0<x_0<\frac {\pi}{2}$, {\it i.e.,}\ $A(0,x_0)=0$ --- this corresponds to ``putting a black hole in the initial data''. Clearly, initial conditions perturbed arbitrarily little about static gravitational solutions without a horizon (see below) are well defined. In particular, one can consider perturbations with \begin{equation} \alpha^{init}=\alpha^{v}\,,\quad \chi^{init}=\chi^{v}\,,\quad \Pi_{\alpha,\chi}^{init} =\lambda\ \pi_{\alpha,\chi}(x)\,,\qquad \lambda\to 0 \,, \eqlabel{momonly} \end{equation} where the superscript $ ^{v}$ stands for a static (vacuum) solution and $\lambda$ characterizes an overall amplitude of the perturbation with given initial profiles $\pi_\alpha$ and $\pi_\chi$. The $SO(4)$-invariant vacua of strongly coupled ${\cal N}=2^*$ gauge theory correspond to static solutions of \eqref{evolution}-\eqref{momconstraint}. To avoid unnecessary cluttering of the formulas, we omit the superscript $ ^v$, use a radial coordinate $y\equiv \cos^2 x$, and introduce \begin{equation} A(t,y)=a(y)\,,\qquad \delta(t,y)=d(y)\,,\qquad e^{\alpha(t,y)}=\rho(y)\,,\qquad \cosh(2\chi(t,y))=c(y) \,.
\eqlabel{static} \end{equation} We then find \begin{equation} \begin{split} &0=c''-\frac{c (c')^2}{c^2-1}+c' \left(\frac{a'}{a}-d'\right) -\frac{(y+1) c'}{y (1-y)}- \frac{\rho^2 (c^2-1) (\rho^6 c-4)}{4(1-y) y^2 a}\,,\\ &0=\rho''-\frac{(\rho')^2}{\rho}+\rho' \left(\frac{a'}{a}-d'\right)-\frac{(y+1) \rho'}{y (1-y)} -\frac{(c^2-1) \rho^9}{12(1-y) y^2 a}-\frac{1-\rho^6 c}{6\rho^3 y^2 a (1-y)}\,,\\ &0=d'-\frac{2 y (1-y) (c')^2}{3(c^2-1)}-\frac{8 (1-y) y (\rho')^2}{\rho^2}\,,\\ 0&=a'-(y-y^2)a\left(\frac{8 (\rho')^2}{\rho^2}+\frac{2 (c')^2}{3(c^2-1)}\right)+\frac{(y-2) a+y}{y (1-y)} -\frac{(c^2-1) \rho^8-8\rho^2 c}{6y}+\frac{2}{3 y \rho^4}\,, \end{split} \eqlabel{vaceoms} \end{equation} where $ '=\frac{d}{dy}$. The boundary conditions as $y\to 0$ are as in \eqref{dynuv}, once we neglect the time dependence. At the origin, using $z\equiv 1-y$ we have \begin{equation} \begin{split} &a=1+\left(-1+\frac{1}{3(\rho^h_0)^4}-\frac{(\rho^h_0)^8}{12}\left((c^h_0)^2-1\right)+\frac{2c^h_0(\rho^h_0)^2}{3}\right)\ z+{\cal O}(z^2)\,,\\ &d=d^h_0+{\cal O}(z^2)\,,\\ &\rho=\rho^h_0+\left(\frac{(\rho^h_0)^9}{24}\left((c^h_0)^2-1\right)+\frac{1-(\rho^h_0)^6c^h_0}{12 (\rho^h_0)^3}\right)\ z+{\cal O}(z^2)\,,\\ &c=c^h_0+\frac18(\rho^h_0)^2\left((c^h_0)^2-1\right)\left(c^h_0(\rho^h_0)^6-4\right)\ z+{\cal O}(z^2)\,. \end{split} \eqlabel{originz} \end{equation} We consider geometries with ${\cal N}=2$ supersymmetry in the ultraviolet, so we impose the constraint \eqref{matchPW}. Having fixed $m$, the complete set of normalizable coefficients in the UV/IR is given by: \begin{equation} \{a_{2,0}\,,\ \rho_{1,0}\,,\ c_{2,0}\,,\ \rho^h_0\,,\ c^h_0\,,\ d^h_0\}\,. \eqlabel{fullset} \end{equation} Note that the six integration constants \eqref{fullset} are exactly what is needed to uniquely fix a solution of a coupled system of two second-order and two first-order ODEs.
\subsection{Holographic renormalization and the vacuum energy}\label{holren} Holographic renormalization of RG flows in PW geometry was discussed in \cite{Buchel:2004hw}. Here we apply the analysis to the gravitational solutions dual to vacua of ${\cal N}=2^*$ gauge theory on $S^3$. The gravitational action \eqref{action5} evaluated on a static solution \eqref{vaceoms} diverges --- this divergence is a gravitational reflection of the standard UV divergence of the free energy in the interacting boundary gauge theory. It is regulated by cutting off the radial coordinate integration at $y=y_c\ll 1$. It is straightforward to verify that the regularized Euclidean gravitational Lagrangian, ${\cal L}_{reg}^E$, is a total derivative, \begin{equation} \begin{split} {\cal L}_{reg}^E=&\frac{1}{4\pi G_5} {\rm vol(\Omega_3)} \int_{1}^{y_c} dy\ \frac{d}{dy} \left(\frac{4(1-y)^2e^{-d}}{y^2}\ \left(a+2y a d'-y a' \right)\right)\\ =&\frac{\rm{vol} (\Omega_3)}{4\pi G_5}\ \left[\frac{4(1-y)^2e^{-d}}{y^2}\ \left(a+2y a d'-y a' \right)\right]\bigg|^{y_c}\,, \end{split} \eqlabel{totder} \end{equation} where in the second equality, using \eqref{originz}, we observe that the only contribution comes from the upper limit of integration.
Regularized Lagrangian \eqref{totder} has to be supplemented with contributions coming from the familiar Gibbons-Hawking term, ${\cal L}_{GH}^E$, \begin{equation} \begin{split} S_{GH}^E=&-\frac{1}{8\pi G_5}\int_{\partial{\cal M}_5}d\xi^4 \sqrt{h_E}\nabla_\mu n^\mu \equiv \int dt_E {\cal L}_{GH}^E\,,\\ {\cal L}_{GH}^E=&\frac{\rm{vol}(\Omega_3)}{4\pi G_5}\biggl[ \frac{4 (1-y) e^{-d}}{y^2} \left(a (y-4) -2 d' y(1-y) a +a' y(1-y)\right) \biggr]\bigg|^{y_c}\,, \end{split} \eqlabel{lgh} \end{equation} and the counterterm Lagrangian\footnote{We keep only the counterterms relevant for the $R\times S^3$ background geometry of the gauge theory.}, ${\cal L}_{counter}^E$, \begin{equation} \begin{split} S_{counter}^E\equiv& \int dt_E {\cal L}_{counter}^E\,,\\ {\cal L}_{counter}^E=&\frac{\rm{vol}\Omega_3}{4\pi G_5}\sqrt{h_E} \biggl[ \frac 34+\frac 14 R_4+\frac 12 \chi^2 +3\alpha^2-\frac 32 \frac{\alpha^2}{\ln\epsilon_c} \\&+\ln\epsilon_c \left(-\frac 13 \chi^2 R_4-\frac 23 \chi^4\right)+\frac 16\chi^4 \biggr]\bigg|^{y_c}\,, \end{split} \eqlabel{counter} \end{equation} where $R_4\equiv R_4(h_E)$ is the Ricci scalar constructed from $h_E$, and $\epsilon_c$ parameterizes conformal anomaly terms in terms of the $g_{t_Et_E}$ metric component, \begin{equation} R_4=\frac{3y}{2(1-y)}\,,\qquad \epsilon_c\equiv \sqrt{g_{t_Et_E}} =\frac{2\sqrt{a}e^{-d}}{\sqrt{y}}\,. 
\eqlabel{r4} \end{equation} The renormalized Lagrangian ${\cal L}_{renom}^E$, finite in the limit $y_c\to 0$, is identified with the free energy ${\cal F}$ of the boundary gauge theory, \begin{equation} \begin{split} &{\cal F}={\cal L}_{renom}^E=\lim_{y_c\to 0}\biggl( {\cal L}^E_{reg}+{\cal L}^E_{GH}+{\cal L}^E_{counter} \biggr)\\ =&\frac{\rm{vol}\Omega_3}{4\pi G_5}\ \frac 32\ \biggl(1+c_{1,0}^2 \left(\frac 49-\frac{16}{9}\ln 2\right) +c_{1,0}\left(-\frac 43-\frac83 \ln 2\right)+ 64 \rho_{1,1}^2\ln 2 \\&+\biggl\{64 \rho_{1,1}\rho_{1,0}+\frac 83 c_{2,0}+32 \rho_{1,0}^2-4 a_{2,0}\biggr\} \biggr)\\ =&\frac{3N^2}{16\ell}\biggl(1+\frac{(m\ell)^4}{9}-\frac 23 (1+2\ln 2)(m\ell)^2 +\biggl\{ 32 \rho_{1,0}^2+\frac{16}{3}(m\ell)^2 \rho_{1,0}+\frac 83 c_{2,0}-4a_{2,0} \biggr\} \biggr)\,, \end{split} \eqlabel{freeenergy} \end{equation} where in the second line we used the asymptotic expansion \eqref{dynuv} and expressed the last line in terms of gauge theory variables using \eqref{g5} and \eqref{matchPW} and restoring the size $\ell$ of the $S^3$. Several comments are in order: \nxt For static gravitational solutions without a Schwarzschild horizon (as discussed here), the free energy ${\cal F}$ must coincide with the energy $E$ of the boundary stress-energy tensor. We explicitly verified that, indeed, \begin{equation} {\cal F}=E\equiv E_{vacuum}(m\ell)\,. \end{equation} The latter is identified with the vacuum energy of ${\cal N}=2^*$ gauge theory on $S^3$. \nxt In the limit when all the (non-)normalizable coefficients vanish we recover the vacuum energy of the ${\cal N}=4$ SYM \eqref{en4}. \nxt It is easy to extend the discussion to general $SO(4)$-invariant non-equilibrium states of ${\cal N}=2^*$ gauge theory --- the final answer is as in \eqref{freeenergy}, except with $\{\rho_{1,0}\,, c_{2,0}\,, a_{2,0}\}$ now being functions of time.
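The substitution leading from the second to the third line of \eqref{freeenergy} can be verified symbolically; a sketch using sympy (here the symbol $m$ stands for $m\ell$, and the check is performed inside the overall $\frac32\,{\rm vol}\Omega_3/4\pi G_5$ prefactor, with $c_{1,0}=\frac12(m\ell)^2$, $\rho_{1,1}=\frac{1}{12}(m\ell)^2$ from \eqref{matchPW}):

```python
import sympy as sp

m, r10, c20, a20 = sp.symbols('m rho10 c20 a20')   # m stands for m*ell
c10, r11 = m**2 / 2, m**2 / 12                     # cf. eq. (matchPW)

# second line of (freeenergy), inside the overall prefactor
line2 = (1 + c10**2 * (sp.Rational(4, 9) - sp.Rational(16, 9) * sp.log(2))
         + c10 * (sp.Rational(-4, 3) - sp.Rational(8, 3) * sp.log(2))
         + 64 * r11**2 * sp.log(2)
         + 64 * r11 * r10 + sp.Rational(8, 3) * c20 + 32 * r10**2 - 4 * a20)

# third line, written in gauge-theory variables
line3 = (1 + m**4 / 9 - sp.Rational(2, 3) * (1 + 2 * sp.log(2)) * m**2
         + 32 * r10**2 + sp.Rational(16, 3) * m**2 * r10
         + sp.Rational(8, 3) * c20 - 4 * a20)

assert sp.expand(line2 - line3) == 0
```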
Note that \begin{equation} \frac{d{\cal E}}{dt}\propto \frac{d}{dt}\ \biggl(\ 4 \biggl\{16 \rho_{1,1}\rho_{1,0}(t)+\frac 23 c_{2,0}(t)+8 \rho_{1,0}^2(t)- a_{2,0}(t)\biggr\}\ \biggr)=0\,, \eqlabel{conserve} \end{equation} according to \eqref{uvmom}. That is, the boundary gauge theory energy conservation is enforced by the bulk momentum constraint \eqref{momconstraint}. \subsection{Vacuum states for $m\ell\ll 1$}\label{vsmall} In preparation for the full numerical solution of \eqref{vaceoms}, we discuss here its perturbative solution for $\rho_{1,1}\ll 1$. We introduce \begin{equation} \begin{split} &c=\cosh(2\lambda \chi_1(y)+{\cal O}(\lambda^3))\,,\qquad \rho=e^{\lambda^2 \alpha_2(y)+{\cal O}(\lambda^4)}\,,\\ &a=1+\lambda^2 a_2(y)+{\cal O}(\lambda^4)\,,\qquad d=\lambda^2 d_2(y)+{\cal O}(\lambda^4)\,, \end{split} \eqlabel{pertvacuum} \end{equation} where $\lambda$ is a small parameter. Substituting \eqref{pertvacuum} into \eqref{vaceoms} we find \begin{equation} \begin{split} &0=\chi_1''-\frac{1+y}{y(1-y)} \chi_1' +\frac {3}{4y^2(1-y)}\chi_1\,,\\ &0=\alpha_2''-\frac{1+y}{y(1-y)} \alpha_2' +\frac {1}{y^2(1-y)}\alpha_2\,,\\ &0=a_2'-\frac{2-y}{y(1-y)}a_2-\frac 83 y(1-y)(\chi_1')^2+\frac 2y(\chi_1)^2\,,\\ &0=d_2'-\frac 83 y(1-y)(\chi_1')^2\,. \end{split} \eqlabel{perteoms} \end{equation} Solutions to \eqref{perteoms} must satisfy the boundary conditions corresponding to \eqref{dynuv} and \eqref{originz}. We can solve the equation for $\alpha_2$ analytically, \begin{equation} \alpha_2=\rho_{1,1,(2)}\ \frac{y\ln y }{1-y}\,, \eqlabel{al2anal} \end{equation} where $\rho_{1,1,(2)}$ is the non-normalizable integration coefficient. The remaining equations in \eqref{perteoms} are solved with the ``shooting method'' developed in \cite{Aharony:2007vg}.
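A minimal version of the shooting procedure for the $\chi_1$ equation in \eqref{perteoms} can be sketched as follows (a hypothetical illustration, not the production code; the leading UV and IR series seeds are read off from the equation itself). Since the equation is linear, the UV-normalizable solution is a one-parameter family $f_0+c\,f_1$, and matching it against the IR-regular solution at an intermediate point determines $c$ and the value of $\chi_1$ at the origin by linear algebra:

```python
import numpy as np
from scipy.integrate import solve_ivp

# chi1'' = (1+y)/(y(1-y)) chi1' - 3/(4 y^2 (1-y)) chi1, first equation in (perteoms)
def rhs(y, u):
    chi, dchi = u
    return [dchi,
            (1 + y) / (y * (1 - y)) * dchi - 3.0 / (4 * y**2 * (1 - y)) * chi]

def integrate(y0, u0, y1):
    sol = solve_ivp(rhs, (y0, y1), u0, rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]

eps, ym = 1e-6, 0.5
s, lg = np.sqrt(eps), np.log(eps)

# UV seeds at y = eps: f0 = sqrt(y)(1 + y ln(y)/4), f1 = y^(3/2); the general
# normalizable solution is f0 + c*f1 with c = chi_{1,0,(1)}
f0 = integrate(eps, [s * (1 + eps * lg / 4),
                     1 / (2 * s) + s * (3 * lg / 8 + 0.25)], ym)
f1 = integrate(eps, [eps * s, 1.5 * s], ym)

# IR-regular solution: chi1 = 1 - 3z/8 + O(z^2), with z = 1 - y
g = integrate(1 - eps, [1 - 3 * eps / 8, 3.0 / 8], ym)

# match value and slope of f0 + c*f1 against g at y = ym
c = (f0[1] * g[0] - f0[0] * g[1]) / (f1[0] * g[1] - f1[1] * g[0])
chi_h = (f0[0] + c * f1[0]) / g[0]   # chi1 at the origin, z = 0

print(c, chi_h)   # to be compared with chi_{1,0,(1)} and chi^h_{0,(1)}
```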
In particular, given the asymptotic expansions in the UV, $y\to 0_+$, \begin{equation} \begin{split} \chi_1=&y^{1/2} \left(1+y\ \left(\chi_{1,0,(1)} +\frac 14 \ln y\right)+{\cal O}(y^2\ln y)\right)\,,\\ a_2=&\frac 43 y + y^2\ \left( a_{2,0,(2)}+\frac 43\ln y\right)+{\cal O}(y^3\ln^2 y)\,,\\ d_2=&\frac 23 y +y^2\ \left(-\frac 14 +2 \chi_{1,0,(1)} +\frac 12 \ln y\right)+{\cal O}(y^3\ln^2 y)\,, \end{split} \eqlabel{pertuv} \end{equation} and in the IR, $z\to 0_+$, \begin{equation} \begin{split} \chi_1=&\chi_{0,(1)}^{h}\left(1-\frac 38 z+{\cal O}(z^2)\right)\,,\\ a_2=&(\chi_{0,(1)}^{h})^2\ \left(z-\frac 58 z^2 +{\cal O}(z^3)\right)\,,\\ d_2=&d^h_{0,(2)}-\frac{3}{16}(\chi_{0,(1)}^{h})^2 z^2 +{\cal O}(z^3)\,, \end{split} \eqlabel{pertir} \end{equation} we find numerically, \begin{equation} \begin{tabular}{ c | c |c| c } $\chi_{1,0,(1)}$ & $a_{2,0,(2)}$ & $\chi_{0,(1)}^{h}$ & $d^h_{0,(2)}$\\ \hline 0.0568528 & -0.363452 & 0.785398 & 0.199266 \end{tabular}\,. \eqlabel{table1} \end{equation} To compare with the full numerical solution, we identify, to order ${\cal O}(\lambda^2)$, \begin{equation} \begin{split} &\rho_{1,1}=\rho_{1,1,(2)} \lambda^2\,,\qquad c_{1,0}=2 \lambda^2\,,\qquad \rho_{1,0}=0\,,\qquad c_{2,0}=4\chi_{1,0,(1)} \lambda^2\,, \\ &a_{2,0}=a_{2,0,(2)}\lambda^2\,,\qquad \rho^h_0=1-\rho_{1,1,(2)}\lambda^2\,,\qquad c^h_0=1+2(\chi_{0,(1)}^{h})^2\lambda^2\,,\qquad d^h_0=d^h_{0,(2)}\lambda^2\,. \end{split} \eqlabel{pertid} \end{equation} Note that ${\cal N}=2$ supersymmetry in the UV at ${\cal O}(\lambda^2)$ leads to (see \eqref{n2susy}) \begin{equation} \rho_{1,1,(2)}=\frac 13 \,. \eqlabel{r112} \end{equation} From \eqref{freeenergy}, \begin{equation} \begin{split} \epsilon\equiv \frac{E_{vacuum}}{E_{vacuum}^{{\cal N}=4}}=& 1+\left(\frac{32}{3}\chi_{1,0,(1)}-4a_{2,0,(2)}-\frac 83(1+2\ln 2)\right)\lambda^2+{\cal O}(\lambda^4)\\ =&1+\left(\frac{8}{3}\chi_{1,0,(1)}-a_{2,0,(2)}-\frac 23(1+2\ln 2)\right)(m\ell)^2+{\cal O}((m\ell)^4)\,. 
\end{split} \eqlabel{epvacuum} \end{equation} \subsection{Gravitational solution and $E_{vacuum}$ for general $m\ell$} \begin{figure}[t] \begin{center} \psfrag{x}{{$\rho_{1,1}$}} \psfrag{y1}{{$c_{2,0}$}} \psfrag{y2}{{$\rho_{1,0}$}} \psfrag{y3}{{$a_{2,0}$}} \psfrag{y4}{{$\rho_{0}^h$}} \psfrag{y5}{{$c_{0}^h$}} \psfrag{y6}{{$d_{0}^h$}} \includegraphics[width=2.6in]{c20.eps}\qquad \includegraphics[width=2.6in]{r10.eps} \vskip 0.2cm \includegraphics[width=2.6in]{a20.eps}\qquad \includegraphics[width=2.6in]{rh0.eps}\qquad \vskip 0.2cm \includegraphics[width=2.6in]{ch0.eps}\qquad \includegraphics[width=2.6in]{dh0.eps} \end{center} \caption{Normalizable coefficients \eqref{fullset} as functions of $\rho_{1,1}$. The dashed lines represent perturbative predictions \eqref{pertid} with \eqref{table1}.} \label{figure2} \end{figure} Using the shooting method of \cite{Aharony:2007vg}, we solve \eqref{vaceoms} and determine the normalizable coefficients \eqref{fullset} as functions of $m\ell\equiv (12\rho_{1,1})^{1/2}$. The results of the computations for small values of $\rho_{1,1}$ are collected, as a numerical test, in figure \ref{figure2}. The solid curves are obtained from the numerical solution of the full nonlinear equations \eqref{vaceoms}, and the dashed lines represent the perturbative predictions \eqref{pertid} with \eqref{table1}. \begin{figure}[t] \begin{center} \psfrag{x}{{$m\ell$}} \psfrag{y}{{$\epsilon$}} \includegraphics[width=2.6in]{vacuum_energyS.eps}\qquad \includegraphics[width=2.6in]{vacuum_energyF.eps} \end{center} \caption{Vacuum energy of the ${\cal N}=2^*$ gauge theory on $S^3$ relative to ${\cal N}=4$ SYM Casimir energy, see \eqref{energyfull}. The vertical red line marks vanishing of $\epsilon$, see \eqref{m0def}. } \label{figure3} \end{figure} In the full nonlinear numerical analysis we constructed vacua for $0< m\ell \lesssim 8.5 $.
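As a cross-check of the small-mass regime (a numerical aside), combining \eqref{epvacuum} with the coefficients \eqref{table1} gives the leading-order slope of the curve in figure \ref{figure3}, and a leading-order estimate for the zero of $\epsilon$ in rough agreement with the full nonlinear value:

```python
import math

chi101, a202 = 0.0568528, -0.363452   # from the perturbative table

# epsilon = 1 + k2 (m l)^2 + O((m l)^4), cf. eq. (epvacuum)
k2 = (8.0 / 3.0) * chi101 - a202 - (2.0 / 3.0) * (1.0 + 2.0 * math.log(2.0))
print(k2)                        # approx -1.0758: the vacuum energy decreases with m l

ml_zero = math.sqrt(-1.0 / k2)   # leading-order estimate of m_0 l
print(ml_zero)                   # approx 0.964, vs. 0.87031 from the full solution
```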
The vacuum energy of the ${\cal N}=2^*$ gauge theory on $S^3$ relative to the ${\cal N}=4$ SYM Casimir energy is given by \begin{equation} \begin{split} \epsilon\equiv \frac{E_{vacuum}(m\ell)}{E_{vacuum}^{{\cal N}=4}}=& 1+\frac{(m\ell)^4}{9}-\frac 23 (1+2\ln 2)(m\ell)^2\\ &+\biggl\{ 32 \rho_{1,0}^2+\frac{16}{3}(m\ell)^2 \rho_{1,0}+\frac 83 c_{2,0}-4a_{2,0} \biggr\}\,. \end{split} \eqlabel{energyfull} \end{equation} It is presented in figure \ref{figure3}. The vertical red line indicates the mass scale $m_0\ell$, \begin{equation} \epsilon(m_0\ell)=0\qquad \Longrightarrow\qquad m_0\ell \approx 0.87031\,, \eqlabel{m0def} \end{equation} at which the vacuum energy of the ${\cal N}=2^*$ gauge theory vanishes; it becomes negative for even larger values of $m\ell$. \section{Stability of ${\cal N}=2^*$ vacuum states within BEFP}\label{vacuumstability} In the previous section we constructed gravitational solutions within the PW effective action, identified as vacua of the ${\cal N}=2^*$ gauge theory on $S^3$. While the complete stability analysis of these solutions is beyond the scope of this paper, here we would like to analyze their stability within the BEFP effective action. The effective action describing the fluctuations of an arbitrary static PW solution within BEFP has been constructed in \cite{Balasubramanian:2013esa}, \begin{equation} \begin{split} &\delta {\cal L}\equiv {\cal L}_{BEFP}-{\cal L}_{PW}+{\cal O}(X_i^4)\equiv \delta{\cal L}_2+\delta{\cal L}_V\,,\\ &\delta{\cal L}_2=-(1+c)^2 (\partial X_2)^2-\frac {1+c}{4}\left((c^2+c) \rho_6^{4/3}-4 (1+c) \rho_6^{1/3} +\frac{4(\partial c)^2}{c^2-1}\right) (X_2)^2\,,\\ &\delta{\cal L}_V=-(1+c)^2 (\partial \vec{X}_V)^2-\frac {1+c}{4}\left((c^2-1) \rho_6^{4/3}-4 (1+c) \rho_6^{1/3} +\frac{4(\partial c)^2}{c^2-1}\right) (\vec{X}_V)^2\,, \end{split} \eqlabel{linear} \end{equation} where $\rho_6=\rho^6$ and $\vec{X}_V=(X_3,X_4,X_5)$ (see section \ref{action} for more details).
Note that $\delta {\cal L} $ is $SU(2)_V$ invariant; as a result it is enough to consider a spectrum of only one of $\vec{X}_V$ components. In what follows we choose the latter to be $X_3$. Introducing \begin{equation} X_2=e^{-i\omega t} F_2(y) \Omega_s(S^3)\,,\qquad X_3(t,y)=e^{-i\omega t} F_3(y)\Omega_s(S^3)\,, \eqlabel{radialeoms} \end{equation} where $\Omega_s(S^3)$ are $S^3$ Laplace-Beltrami operator eigenfunctions with eigenvalues $s=l (l+2)$ for integer $l$, \begin{equation} \Delta_{S^3}\ \Omega_s(S^3)=-s\ \Omega_s(S^3)=-l (l+2)\ \Omega_s(S^3)\,, \eqlabel{harmonics} \end{equation} we find from \eqref{linear} the following equations of motion \begin{equation} \begin{split} 0=&F_2''+F_2' \biggl(\frac{2cc'}{c+1}+\frac{(c^2-1) \rho^8}{6a y} -\frac{4c\rho^2}{3a y}+\frac{2 y-1}{y (y-1)}+\frac{1}{a (y-1)} -\frac{2}{3 a \rho^4 y}\biggr)\\ &+\frac{F_2}{4y (1-y) a} \left(\frac{e^{2 d} \omega^2}{a}-\frac{s}{1-y}\right) +F_2 \biggl(\frac{(c')^2}{(1-c^2) (c+1)}+\frac{\rho^2 (\rho^6 c-4)}{4a y^2 (y-1)}\biggr)\,, \end{split} \eqlabel{F2eom} \end{equation} \begin{equation} \begin{split} 0=&F_3''+F_3' \biggl(\frac{2cc'}{c+1}+\frac{(c^2-1) \rho^8}{6a y} -\frac{4c\rho^2}{3a y}+\frac{2 y-1}{y (y-1)}+\frac{1}{a (y-1)} -\frac{2}{3 a \rho^4 y}\biggr)\\ &+\frac{F_3}{4y (1-y) a} \left(\frac{e^{2 d} \omega^2}{a}-\frac{s}{1-y}\right) +F_3 \biggl(\frac{(c')^2}{(1-c^2) (c+1)}+\frac{\rho^2 (\rho^6 (c-1)-4)}{4a y^2 (y-1)}\biggr)\,. 
\end{split} \eqlabel{F3eom} \end{equation} The radial wavefunctions $F_{2,3}$ must be regular at the origin, {\it i.e.,}\ $z\to 0_+$, \begin{equation} F_{2}=z^{l/2}\ f_2^{h} \left( 1+ {\cal O}(z)\right)\,,\qquad F_{3}=z^{l/2}\ f_3^{h} \left( 1+ {\cal O}(z)\right)\,, \eqlabel{flir} \end{equation} and normalizable as $y\to 0_+$, \begin{equation} \begin{split} F_{2}=&y^{3/2}\left(1+y \left(\frac s8-\frac 12 c_{1,0}+\frac{9-\omega^2}{8} \right)+{\cal O}(y^2\ln y)\right)\,,\\ F_{3}=&y\left(1+y \left(\frac s4+\frac{4-\omega^2}{4} +4 \rho_{1,1}-2 \rho_{1,0} -\frac16 c_{1,0}-2 \rho_{1,1} \ln y \right)+{\cal O}(y^2\ln y)\right)\,. \end{split} \eqlabel{fluv} \end{equation} Note that we set the normalizable coefficient of $F_{2,3}$ in the UV to one. \begin{figure}[t] \begin{center} \psfrag{x}{{$m\ell$}} \psfrag{y}{{$\omega_{2,\{n,l\}}/\omega_{2,\{n,l\}}^{SYM}$}} \psfrag{z}{{$\omega_{3,\{n,l\}}/\omega_{3,\{n,l\}}^{SYM}$}} \includegraphics[width=2.6in]{w2.eps} \qquad \includegraphics[width=2.6in]{w3.eps} \end{center} \caption{Low energy states in the spectrum of BEFP fluctuations about PW vacua: $\{n,l\}=\{(0,0)\,;\, (0,1)\,;\, (1,0) \}$ (blue, red, green). See section \ref{vacuumstability}. } \label{figure4} \end{figure} When both scalars of the PW flow are set to zero, \eqref{F2eom}-\eqref{fluv} corresponds to fluctuations of gravitational modes dual to dimension-3 (for $F_2$) and dimension-2 (for $F_3$) operators of the ${\cal N}=4$ SYM on $S^3$. In this case the equations can be solved analytically. We find, \begin{equation} \begin{split} F_{2,\{n,l\}}^{SYM}=&y^{3/2} (1-y)^{l/2}\ _2F_1\biggl(-n\,,3+n+l\,\,; l+2\,\,; 1-y\biggr)\,,\\ \omega_{2,\{n,l\}}^{SYM}=&3+2 n +l\,, \end{split} \eqlabel{f2solve} \end{equation} \begin{equation} \begin{split} F_{3,\{n,l\}}^{SYM}=&y (1-y)^{l/2}\ _2F_1\biggl(-n\,,2+n+l\,\,; l+2\,\,; 1-y\biggr)\,,\\ \omega_{3,\{n,l\}}^{SYM}=&2+2n +l\,, \end{split} \eqlabel{f3solve} \end{equation} where $\{n,l\}$ are non-negative integers. 
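The closed-form spectrum \eqref{f2solve} is easy to verify directly: when both scalars are trivial one sets $c=1$, $\rho=1$, $d=0$ in \eqref{F2eom}, and with $a=1$ the metric \eqref{metricbh} reduces to global AdS$_5$, which we take as the background. The sketch below (Python; the terminating hypergeometric series is coded by hand, and derivatives are approximated by finite differences) checks that the residual of the radial equation vanishes for the first few modes.

```python
def hyp2f1_poly(n, b, c, z):
    # Terminating series 2F1(-n, b; c; z) = sum_{k=0}^{n} ((-n)_k (b)_k / (c)_k) z^k / k!
    total, term = 0.0, 1.0
    for k in range(n + 1):
        total += term
        term *= (-n + k) * (b + k) * z / ((c + k) * (k + 1))
    return total

def F2(y, n, l):
    # Radial profile F_{2,{n,l}}^{SYM} of eq. (f2solve).
    return y ** 1.5 * (1.0 - y) ** (l / 2.0) * hyp2f1_poly(n, 3 + n + l, l + 2, 1.0 - y)

def residual(y, n, l, h=1e-4):
    # Residual of eq. (F2eom) with c = 1, rho = 1, a = 1, d = 0 and omega = 3 + 2n + l.
    s, w = l * (l + 2), 3 + 2 * n + l
    f = F2(y, n, l)
    fp = (F2(y + h, n, l) - F2(y - h, n, l)) / (2.0 * h)
    fpp = (F2(y + h, n, l) - 2.0 * f + F2(y - h, n, l)) / h ** 2
    # Coefficients of F2' and F2 in this limit.
    P = -2.0 / y + (2.0 * y - 1.0) / (y * (y - 1.0)) + 1.0 / (y - 1.0)
    Q = (w ** 2 - s / (1.0 - y)) / (4.0 * y * (1.0 - y)) + 3.0 / (4.0 * y ** 2 * (1.0 - y))
    return fpp + P * fp + Q * f

residuals = [abs(residual(0.37, n, l)) for (n, l) in [(0, 0), (0, 1), (1, 0)]]
```

The analogous check for $F_3$ with $\omega=2+2n+l$ works the same way; per \eqref{F3eom}, the last potential term $3/(4y^2(1-y))$ is then replaced by $1/(y^2(1-y))$.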
For supersymmetric PW flows \eqref{n2susy} we have to resort to numerics. The results of the numerical analysis are presented in figure \ref{figure4}. We look at the states with $\{n,l\}=\{(0,0)\,;\, (0,1)\,;\, (1,0) \}$ for both the $F_2$ and $F_3$ radial functions. Over the range of parameters discussed, the embedding of PW flows within the BEFP effective action is stable. \section{Black hole spectrum in PW effective action}\label{bh} We begin with the metric ansatz and the boundary conditions representing regular Schwarzschild black hole solutions in the PW effective action with the $S^3$ horizon. We explain how the normalizable coefficients of the gravitational solution encode the thermodynamic properties of the black holes: the temperature $T_{BH}$, the energy $E_{BH}$, the entropy $S_{BH}$ and the free energy ${\cal F}_{BH}$. We define the size $\ell_{BH}$ of a black hole as \begin{equation} \left(\frac{\ell_{BH}}{L}\right)^3\equiv \frac{A_{horizon}}{L^3}\,. \eqlabel{ahor} \end{equation} We compute the excitation energy $\Delta(\ell_{BH}/L\,, (m\ell))$, \begin{equation} \Delta(\ell_{BH}/L\,, (m\ell)) = \frac{E_{BH}(\ell_{BH}/L\,, m\ell)-E_{vacuum}(m\ell)}{E_{vacuum}^{{\cal N}=4}}\,, \eqlabel{defDelta} \end{equation} as a function of $\ell_{BH}/L$, but for select values of $m\ell$: \nxt perturbatively in $m\ell$, to order ${\cal O}((m\ell)^2)$; \nxt for $\rho_{1,1}=\frac{1}{12}(m\ell)^2=\{1,1.5,2,\cdots 5,5.5,5.8\}$ (the last value corresponds to the largest value of $m\ell$ for which we computed $E_{vacuum}$);\\ and present strong numerical evidence that \begin{equation} \lim_{\ell_{BH}/L\to 0} \Delta(\ell_{BH}/L\,, (m\ell)) = 0\,.
\eqlabel{limigap} \end{equation} Thus, we conclude that there is no gap in the spectrum of black holes in PW geometry; correspondingly, there is no gap in $SO(4)$-invariant equilibrium states of the ${\cal N}=2^*$ gauge theory on $S^3$ in the planar limit and for large 't Hooft coupling, as there is no energy gap for generic $SO(4)$-invariant excitations in this theory. \subsection{Metric ansatz and the boundary conditions for black holes in PW} Recall that the vacuum solutions of section \ref{vacuum} were obtained within metric ansatz \eqref{geomdyn}, \begin{equation} \begin{split} ds_5^2\bigg|_{vacuum}=&\frac{4}{\cos^2 x} \left(-a e^{-2d} (dt)^2+\frac{(dx)^2}{a}+\sin^2 x (d\Omega_3)^2\right)\\ =&\frac{4}{y} \left(-a e^{-2d} (dt)^2+\frac{(dy)^2}{4y(1-y)a}+(1-y) (d\Omega_3)^2\right)\,, \end{split} \eqlabel{metricbh} \end{equation} where in the second line we recalled the radial coordinate $y=\cos^2 x$, $y\in[0,1]$. Regularity at the origin ($y\to 1_-$) required that the metric functions $a$ and $d$ remain finite and non-zero. Notice that the three-sphere shrinks to zero size in this limit. In close analogy to \eqref{metricbh}, to describe regular horizon black holes, we reparameterize the radial coordinate $y\to y_h y$, with a constant $0<y_h<1$, while keeping $y\in[0,1]$. We further require that $a$ has a simple zero and $d$ remains finite as $y\to 1_-$: \begin{equation} \begin{split} &ds_5^2\bigg|_{BH} =\frac{4}{y_h y} \left(-a e^{-2d} (dt)^2+\frac{y_h (dy)^2}{4y(1-y y_h)a}+(1-y y_h) (d\Omega_3)^2\right)\,,\\ &0<y_h<1\,,\qquad y\in[0,1]\,,\qquad \lim_{y\to 1_-} a=0\,,\\ &\lim_{y\to 1_-} a'={\rm finite}\ne 0\,,\qquad \lim_{y\to 1_-} d={\rm finite}\,. \end{split} \eqlabel{s5bh} \end{equation} Given \eqref{s5bh}, \begin{equation} A_{horizon}=16\pi^2\frac{(1-y_h)^{3/2}}{y_h^{3/2}}\qquad \Longrightarrow\qquad \frac{\ell_{BH}}{L}\equiv\frac{A_{horizon}^{1/3}}{L}=(2\pi^2)^{1/3}\frac{(1-y_h)^{1/2}}{y_h^{1/2}}\,. 
\eqlabel{lbh} \end{equation} The equations of motion describing black holes \eqref{s5bh} can be obtained from \eqref{vaceoms} with the simple change of variables\footnote{We used the last two equations to algebraically eliminate $a'$ and $d'$ from the first two.} $y\to y y_h$, \begin{equation} \begin{split} &0=c''-\frac{c (c')^2}{c^2-1}+c'\biggl(\frac{(c^2-1) \rho^8}{6a y}-\frac{4c \rho^2}{3a y}+\frac{a (2 y y_h-1)+y y_h}{y a (y y_h-1)} -\frac{2}{3 y a \rho^4}\biggr) \\ &- \frac{\rho^2 (c^2-1) (\rho^6 c-4)}{4(1-yy_h) y^2 a}\,,\\ &0=\rho''-\frac{(\rho')^2}{\rho}+\rho' \biggl(\frac{(c^2-1) \rho^8}{6a y}-\frac{4c \rho^2}{3a y}+\frac{a (2 y y_h-1)+y y_h}{y a (y y_h-1)} -\frac{2}{3 y a \rho^4}\biggr)\\ & -\frac{(c^2-1) \rho^9}{12(1-yy_h) y^2 a}-\frac{1-\rho^6 c}{6\rho^3 y^2 a (1-yy_h)}\,,\\ &0=d'-\frac{2 y (1-yy_h) (c')^2}{3(c^2-1)}-\frac{8 (1-yy_h) y (\rho')^2}{\rho^2}\,,\\ 0&=a'-(y-y^2y_h)a\left(\frac{8 (\rho')^2}{\rho^2}+\frac{2 (c')^2}{3(c^2-1)}\right)+\frac{(yy_h-2) a+yy_h}{y (1-yy_h)} -\frac{(c^2-1) \rho^8-8\rho^2 c}{6y}\\ &+\frac{2}{3 y \rho^4}\,. 
\end{split} \eqlabel{bheoms} \end{equation} The boundary conditions in the UV, {\it i.e.,}\ $y\to 0_+$, specify the asymptotic expansion \begin{equation} \begin{split} a=&1+y\ \frac 23 \hat{c}_{1,0}\ + y^2\ \biggl(\hat{a}_{2,0}+\biggl(\frac 23 \hat{c}_{1,0}(\hat{c}_{1,0}+y_h)+8\hat{\rho}_{1,1}^2+ 16\hat{\rho}_{1,1}\hat{\rho}_{1,0}\biggr)\ln y\\ &+8 \hat{\rho}_{1,1}^2\ln^2 y\biggr)+ {\cal O}(y^3\ln^3 y)\,,\\ d=&y\ \frac13 \hat{c}_{1,0}+y^2\ \biggl(\frac12 \hat{c}_{2,0}-\frac{1}{36} \hat{c}_{1,0}^2+4 \hat{\rho}_{1,0}^2-\frac18 \hat{c}_{1,0} y_h+2 \hat{\rho}_{1,1}^2 +4 \hat{\rho}_{1,0} \hat{\rho}_{1,1}\\ &+\biggl(\frac 14 \hat{c}_{1,0} y_h+\frac13 \hat{c}_{1,0}^2+4 \hat{\rho}_{1,1}^2+8 \hat{\rho}_{1,0} \hat{\rho}_{1,1}\biggr) \ln y +4 \hat{\rho}_{1,1}^2 \ln^2 y\biggr)+ {\cal O}(y^3\ln^3 y)\,,\\ \rho=&1+y\ \left(\hat{\rho}_{1,0}+\hat{\rho}_{1,1} \ln y\right)+y^2\ \biggl( \frac{1}{12} \hat{c}_{1,0}^2+\hat{\rho}_{1,0}y_h-3 \hat{\rho}_{1,1} \hat{c}_{1,0}+6 \hat{\rho}_{1,1}^2\\ &-4 \hat{\rho}_{1,0} \hat{\rho}_{1,1}+\frac43 \hat{c}_{1,0} \hat{\rho}_{1,0}+\frac32 \hat{\rho}_{1,0}^2 +\biggl(\frac43 \hat{\rho}_{1,1} \hat{c}_{1,0}+\hat{\rho}_{1,1} y_h-4 \hat{\rho}_{1,1}^2\\ &+3 \hat{\rho}_{1,0} \hat{\rho}_{1,1}\biggr) \ln y+\frac32 \hat{\rho}_{1,1}^2 \ln^2 y \biggr)+{\cal O}(y^3\ln^3 y)\,,\\ c=&1+y\ \hat{c}_{1,0}+y^2\ \biggl(\hat{c}_{2,0}+\biggl(\frac12 \hat{c}_{1,0}y_h+\frac23 \hat{c}_{1,0}^2\biggr) \ln y\biggr)+ {\cal O}(y^3\ln^2 y)\,. \end{split} \eqlabel{bhuv} \end{equation} In \eqref{bhuv} the non-normalizable coefficients $\hat{\rho}_{1,1}$ and $\hat{c}_{1,0}$ are related to corresponding coefficients of the vacuum solution as \begin{equation} \hat{\rho}_{1,1}=y_h \rho_{1,1}\,,\qquad \hat{c}_{1,0}=y_h c_{1,0}\,, \eqlabel{bhmatch} \end{equation} to be further matched with the mass parameters $\{m_b,m_f\}$ of the dual gauge theory as in \eqref{genflow}. The rest of the coefficients in \eqref{bhuv} are normalizable. 
The asymptotic expansion in the IR, {\it i.e.,}\ as $z=(1-y)\to 0_+$, is different from the one in \eqref{originz} --- here it reflects the presence of a regular horizon (see \eqref{s5bh}), \begin{equation} \begin{split} &a=\frac z6 \left(\left(1-(\hat{c}^h_0)^2\right)(\hat{\rho}_0^h)^8+8\hat{c}^h_0(\hat{\rho}^h_0)^2+\frac{4}{(\hat{\rho}^h_0)^4}+\frac{6y_h}{1-y_h}\right) +{\cal O}(z^2)\,,\\ &d=\hat{d}^h_0+{\cal O}(z)\,,\\ &\rho=\hat{\rho}^h_0+{\cal O}(z)\,,\\ &c=\hat{c}^h_0+{\cal O}(z)\,.\\ \end{split} \eqlabel{bhir} \end{equation} The full set of the normalizable coefficients is \begin{equation} \{\hat{a}_{2,0}\,,\ \hat{\rho}_{1,0}\,,\ \hat{c}_{2,0}\,,\ \hat{\rho}^h_0\,,\ \hat{c}^h_0\,,\ \hat{d}^h_0\}\,. \eqlabel{bhfullset} \end{equation} Note that we have the correct number of normalizable coefficients to uniquely specify a solution of two second-order and two first-order ODEs given a choice of \eqref{bhmatch}. \subsubsection{Perturbative black hole solutions} As in section \ref{vsmall}, we can construct solutions to \eqref{bheoms}-\eqref{bhir} perturbatively in $m\ell$ to order ${\cal O}((m\ell)^2)$. We introduce \begin{equation} \begin{split} &c=\cosh(2\lambda \hat{\chi}_1(y)+{\cal O}(\lambda^3))\,,\qquad \rho=e^{\lambda^2 \hat\alpha_2(y)+{\cal O}(\lambda^4)}\,,\\ &a=\frac{(1-y)(1+y(1-y_h))}{1-yy_h}+\lambda^2 \hat a_2(y)+{\cal O}(\lambda^4)\,,\qquad d=\lambda^2 \hat d_2(y)+{\cal O}(\lambda^4)\,, \end{split} \eqlabel{pertbh} \end{equation} where $\lambda$ is a small parameter.
Substituting \eqref{pertbh} into \eqref{bheoms} we find \begin{equation} \begin{split} &0=\hat\chi_1''-\frac{\hat\chi_1'}{y(1-y)}\biggl(1+y+\frac{y(1-y_h)((2-y)y y_h-2)}{(1-yy_h)(1+y(1-y_h))}\biggr)+\frac {3\hat\chi_1}{4y^2(1-y)(1+y(1-y_h))}\,,\\ &0=\hat\alpha_2''-\frac{\hat\alpha_2'}{y(1-y)}\biggl(1+y+\frac{y(1-y_h)((2-y)y y_h-2)}{(1-yy_h)(1+y(1-y_h))}\biggr)+\frac {\hat\alpha_2}{y^2(1-y)(1+y(1-y_h))}\,,\\ &0=\hat a_2'-\frac{2-yy_h}{y(1-yy_h)}\hat a_2-\frac 83 y(1-y)(1+y(1-y_h))(\hat\chi_1')^2+\frac 2y(\hat\chi_1)^2\,,\\ &0=\hat d_2'-\frac 83 y(1-y y_h)(\hat\chi_1')^2\,. \end{split} \eqlabel{perbh} \end{equation} For the asymptotic expansions we have: \nxt as $y\to 0_+$, \begin{equation} \begin{split} \hat\chi_1=&y^{1/2} \left(1+y\ \left(\hat\chi_{1,0,(1)} +\frac {y_h}{4} \ln y\right)+{\cal O}(y^2\ln y)\right)\,,\\ \hat\alpha_2=&\hat{\rho}_{1,1,(2)}\biggl(\left(\hat\alpha_{1,0,(2)}+\ln y\right)y+{\cal O}(y^2\ln y)\biggr)\,,\\ \hat a_2=&\frac 43 y + y^2\ \left( \hat a_{2,0,(2)}+\frac {4y_h}{3}\ln y\right)+{\cal O}(y^3\ln^2 y)\,,\\ \hat d_2=&\frac 23 y +y^2\ \left(-\frac {y_h}{4} +2 \hat\chi_{1,0,(1)} +\frac {y_h}{2} \ln y\right) +{\cal O}(y^3\ln^2 y)\,, \end{split} \eqlabel{pertuvbh} \end{equation} \nxt as $z\to 0_+$, \begin{equation} \begin{split} \hat\chi_1=&\hat\chi_{0,(1)}^{h}\left(1-\frac {3}{4(2-y_h)} z+{\cal O}(z^2)\right)\,,\\ \hat\alpha_2=&\hat{\rho}_{1,1,(2)}\biggl(\hat\alpha_{0,(2)}^{h}\left(1-\frac {1}{(2-y_h)} z+{\cal O}(z^2)\right)\biggr)\,,\\ \hat a_2=&2(\hat\chi_{0,(1)}^{h})^2 z +{\cal O}(z^2)\,,\\ \hat d_2=&\hat d^h_{0,(2)}-\frac{3(1-y_h)}{2(2-y_h)^2}(\hat\chi_{0,(1)}^{h})^2 z +{\cal O}(z^2)\,. \end{split} \eqlabel{pertirbh} \end{equation}\\ Equations \eqref{perbh}, subject to the boundary conditions \eqref{pertuvbh} and \eqref{pertirbh}, have to be solved numerically for different values of $y_h$.
To compare with the full numerical solution, we identify, to order ${\cal O}(\lambda^2)$, \begin{equation} \begin{split} &\hat{\rho}_{1,1}=\hat{\rho}_{1,1,(2)} \lambda^2\,,\qquad \hat{c}_{1,0}=2 \lambda^2\,,\qquad \hat{\rho}_{1,0}=\hat{\rho}_{1,1,(2)}\hat\alpha_{1,0,(2)} \lambda^2\,,\qquad \hat{c}_{2,0}=4\hat\chi_{1,0,(1)} \lambda^2\,, \\ &\hat{a}_{2,0}=y_h-1+\hat a_{2,0,(2)}\lambda^2\,,\qquad \hat{\rho}^h_0=1+\hat{\rho}_{1,1,(2)}\hat\alpha_{0,(2)}^{h}\lambda^2\,,\\ &\hat{c}^h_0=1+2(\hat\chi_{0,(1)}^{h})^2\lambda^2\,,\qquad \hat{d}^h_0=\hat d^h_{0,(2)}\lambda^2\,. \end{split} \eqlabel{pertidbh} \end{equation} Note that ${\cal N}=2$ supersymmetry in the UV at ${\cal O}(\lambda^2)$ leads to (see \eqref{n2susy}) \begin{equation} \hat{\rho}_{1,1,(2)}=\frac 13 \,. \eqlabel{r112bh} \end{equation} \subsection{Thermodynamic properties of black holes in PW} Requiring that there is no conical singularity in the analytical continuation $t\to i t_E$ of the metric \eqref{s5bh} as $y\to 1_-$ we compute the Hawking temperature $T_{BH}$ of the black hole using \eqref{bhir}, \begin{equation} \begin{split} T_{BH}=&\frac{e^{-\hat{d}^h_0}}{12 \pi y_h^{1/2}(1-y_h)^{1/2}} \biggl((1-y_h) (1-(\hat{c}^h_0)^2) (\hat{\rho}^h_0)^8 +8 \hat{c}^h_0 (1-y_h) (\hat{\rho}^h_0)^2+6 y_h\\ &+\frac{4 (1-y_h)}{(\hat{\rho}^h_0)^4}\biggr)\,. \end{split} \eqlabel{tbh} \end{equation} The Bekenstein-Hawking entropy of the black hole is given by \begin{equation} S_{BH} = \frac{A_{horizon}}{4 G_5}=\frac{4\pi^2}{G_5}\ \frac{(1-y_h)^{3/2}}{y_h^{3/2}}\,. \eqlabel{sbh} \end{equation} The free energy ${\cal F}_{BH}$ can be computed following holographic renormalization procedure discussed in section \ref{holren}. 
We find \begin{equation} \begin{split} &{\cal F}_{BH}=\frac{3\pi}{4 G_5}\ \biggl(1+\frac{\hat{c}_{1,0}^2}{y_h^2} \left(\frac 49-\frac{16}{9}\ln 2 +\frac 89\ln y_h\right) +\frac{\hat{c}_{1,0}}{y_h}\left(-\frac 43-\frac83 \ln 2+\frac 43 \ln y_h\right) \\ &+ 32 \frac{\hat{\rho}_{1,1}^2}{y_h^2}\left(2\ln 2-\ln y_h\right) +\frac{1}{y_h^2}\biggl\{64 \hat{\rho}_{1,1}\hat{\rho}_{1,0}+\frac 83 \hat{c}_{2,0}+32 \hat{\rho}_{1,0}^2 -4 \hat{a}_{2,0}\biggr\} \biggr)\\ &-\frac{(1-y_h)\pi e^{-\hat{d}^h_0}}{3y_h^2 G_5}\biggl((1-y_h) (1-(\hat{c}^h_0)^2) (\hat{\rho}^h_0)^8 +8 \hat{c}^h_0 (1-y_h) (\hat{\rho}^h_0)^2+6 y_h +\frac{4 (1-y_h)}{(\hat{\rho}^h_0)^4}\biggr)\,. \end{split} \eqlabel{fbh} \end{equation} The contribution in the last line of \eqref{fbh} comes from the lower limit of integration of the bulk contribution to the regularized free energy, \eqref{totder}; it equals precisely $(-S_{BH}T_{BH})$. Computing the holographic stress-energy tensor, as described in \cite{Buchel:2004hw}, we find \begin{equation} \begin{split} E_{BH}=&\frac{3\pi}{4 G_5}\ \biggl(1+\frac{\hat{c}_{1,0}^2}{y_h^2} \left(\frac 49-\frac{16}{9}\ln 2 +\frac 89\ln y_h\right) +\frac{\hat{c}_{1,0}}{y_h}\left(-\frac 43-\frac83 \ln 2+\frac 43 \ln y_h\right) \\ &+ 32 \frac{\hat{\rho}_{1,1}^2}{y_h^2}\left(2\ln 2-\ln y_h\right) +\frac{1}{y_h^2}\biggl\{64 \hat{\rho}_{1,1}\hat{\rho}_{1,0}+\frac 83 \hat{c}_{2,0}+32 \hat{\rho}_{1,0}^2 -4 \hat{a}_{2,0}\biggr\} \biggr)\\ =&\frac{3N^2}{16\ell}\biggl(1+\frac{(m\ell)^4}{9}-\frac 23 (1+2\ln 2-\ln y_h)(m\ell)^2 \\&+\frac{1}{y_h^2}\biggl\{ 32 \hat{\rho}_{1,0}^2+\frac{16}{3}(m\ell)^2 y_h\hat{\rho}_{1,0}+\frac 83 \hat{c}_{2,0}-4\hat{a}_{2,0} \biggr\} \biggr)\,, \end{split} \eqlabel{ebh} \end{equation} where in the last line we expressed the energy in terms of the dual gauge theory variables using \eqref{bhmatch} and \eqref{matchPW}.
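The statement that the last line of \eqref{fbh} equals $(-S_{BH}T_{BH})$ is straightforward to verify from \eqref{tbh} and \eqref{sbh}; the sketch below (Python, with $G_5=1$ and arbitrary sample horizon data $\{y_h,\hat c^h_0,\hat\rho^h_0,\hat d^h_0\}$, which need not solve the equations of motion for this algebraic identity to hold) does so numerically.

```python
import math

def T_BH(yh, ch, rh, dh):
    # Hawking temperature, eq. (tbh).
    B = ((1 - yh) * (1 - ch**2) * rh**8 + 8 * ch * (1 - yh) * rh**2
         + 6 * yh + 4 * (1 - yh) / rh**4)
    return math.exp(-dh) * B / (12 * math.pi * math.sqrt(yh * (1 - yh)))

def S_BH(yh, G5=1.0):
    # Bekenstein-Hawking entropy, eq. (sbh).
    return 4 * math.pi**2 / G5 * ((1 - yh) / yh) ** 1.5

def bulk_term(yh, ch, rh, dh, G5=1.0):
    # Last line of eq. (fbh).
    B = ((1 - yh) * (1 - ch**2) * rh**8 + 8 * ch * (1 - yh) * rh**2
         + 6 * yh + 4 * (1 - yh) / rh**4)
    return -(1 - yh) * math.pi * math.exp(-dh) * B / (3 * yh**2 * G5)

samples = [(0.3, 1.1, 0.9, 0.2), (0.7, 0.95, 1.05, -0.1)]
gaps = [bulk_term(*p) + S_BH(p[0]) * T_BH(*p) for p in samples]
```

The identity follows because $S_{BH}T_{BH}$ assembles exactly the same combination of horizon data, with the prefactors combining to $\pi(1-y_h)e^{-\hat d^h_0}/(3y_h^2G_5)$.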
Notice that the basic thermodynamic relation, \begin{equation} {\cal F}_{BH}=E_{BH}-S_{BH}T_{BH}\,, \eqlabel{brelation} \end{equation} is satisfied automatically. Using \eqref{pertidbh}, from \eqref{ebh} we have \begin{equation} \begin{split} \frac{E_{BH}}{E_{vacuum}^{{\cal N}=4}}=&1+\frac{4(1-y_h)}{y_h^2}+\left(\frac{8}{3y_h}\hat\chi_{1,0,(1)}-\frac{1}{y_h}\hat a_{2,0,(2)} -\frac 23(1+2\ln 2-\ln y_h)\right)(m\ell)^2\\ &+{\cal O}((m\ell)^4)\,. \end{split} \eqlabel{epbh} \end{equation} \subsection{$ \Delta(\ell_{BH}/L\,, (m\ell))$} We are now ready to present results for $ \Delta(\ell_{BH}/L\,, (m\ell))$ as defined by \eqref{defDelta}. \begin{figure}[t] \begin{center} \psfrag{x}{{$\ell_{BH}/L$}} \psfrag{y}{{$\Delta_2$}} \includegraphics[width=4in]{Delta2.eps} \end{center} \caption{Solid line represents $\Delta_2$ as defined in \eqref{perDelta}. The dotted red line represents the best quadratic fit to the first 10$\%$ of data points, see \eqref{delta2fit}.} \label{figure5} \end{figure} \begin{figure}[t] \begin{center} \psfrag{x}{{$\ell_{BH}/L$}} \psfrag{y}{{$\Delta$}} \psfrag{z}{{$\Delta(m\ell=8.34266)$}} \includegraphics[width=2.6in]{Deltaall.eps}\qquad \includegraphics[width=2.6in]{Deltalast.eps} \end{center} \caption{Left panel: Black hole mass gap relative to $E_{vacuum}^{{\cal N}=4}$, see \eqref{defDelta}, as a function of $\ell_{BH}/L$ for select values of $m\ell$. The green curve represents $\Delta(m\ell=0)$. 
Right panel: $\Delta$ for the largest value of $m\ell$ computed, $m\ell=8.34266$; the dotted red line represents the best quadratic fit to the first 10$\%$ of data points, see \eqref{deltafit}.} \label{figure6} \end{figure} To order ${\cal O}((m\ell)^2)$, using \eqref{epvacuum} and \eqref{epbh}, we find \begin{equation} \begin{split} \Delta=&\frac{4(1-y_h)}{y_h^2}+\Delta_2\ (m\ell)^2+{\cal O}((m\ell)^4)\,,\\ \Delta_2=&\Delta_2(y_h)=\frac83\left(\frac{\hat\chi_{1,0,(1)}}{y_h}-\chi_{1,0,(1)}\right)-\left(\frac{\hat a_{2,0,(2)}}{y_h}-a_{2,0,(2)}\right)+\frac 23 \ln y_h\,. \end{split} \eqlabel{perDelta} \end{equation} Results of numerical computations of $\Delta_2$ are presented in figure \ref{figure5}. The solid line represents the data points, and the red dotted line is the best quadratic fit using the first 10$\%$ of data points: \begin{equation} \Delta_2\bigg|_{fit}=-0.0269118 \biggl(\frac{\ell_{BH}}{L}\biggr)^2\,. \eqlabel{delta2fit} \end{equation} Our numerical results provide strong evidence that \begin{equation} \lim_{\ell_{BH}/L\to 0} \Delta_2=0\,, \eqlabel{limpert} \end{equation} as a result, $\Delta$ vanishes in this limit to order ${\cal O}((m\ell)^2)$. Using \eqref{freeenergy} and \eqref{ebh} we compute $\Delta$ for $\rho_{1,1}=\frac{1}{12}(m\ell)^2=\{1,1.5,2,\cdots 5,5.5,5.8\}$. The results are presented in the left panel of figure \ref{figure6} (the top-to-bottom blue curves correspond to the $\rho_{1,1}$ variation $1\to 5.8$). The green curve represents $\Delta(m\ell=0)$: \begin{equation} \Delta(m\ell=0)=\frac{2^{4/3}}{\pi^{4/3}}\ \biggl(\frac{\ell_{BH}}{L}\biggr)^2 +\frac{2^{2/3}}{\pi^{8/3}}\ \biggl(\frac{\ell_{BH}}{L}\biggr)^4\,.
\eqlabel{deltan4} \end{equation} The right panel represents $\Delta$ for the largest value of $m\ell$ computed: $m\ell=8.34266$, with the red dotted line indicating the best quadratic fit to the first $10\%$ of data points: \begin{equation} \Delta(m\ell=8.34266)\bigg|_{fit}=0.339765 \biggl(\frac{\ell_{BH}}{L}\biggr)^2\,. \eqlabel{deltafit} \end{equation} Note that for $m\ell=8.34266$, $\epsilon=-243.785$, implying that for the smallest-size black hole studied, $\ell_{BH}/L=0.0855056$, \begin{equation} \frac{E_{BH}-E_{vacuum}}{E_{vacuum}}=1.04285\times 10^{-5}\,. \eqlabel{smallest} \end{equation} We conclude that our numerical results strongly suggest \eqref{limigap}. ~\\ \section*{Acknowledgments} I would like to thank Colin Denniston, Martin Kruczenski, Luis Lehner, Steve Liebling and Volodya Miransky for valuable discussions. I thank the Galileo Galilei Institute for Theoretical Physics for the hospitality and the INFN for partial support during the completion of this work. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research \& Innovation. I gratefully acknowledge further support by the NSERC Discovery grant.
\section{Introduction} Fix positive integers $m$ and $n$. In \cite{KT21,KST21}, the first and third author introduced a family of $S_{mn}$ modules of cardinality $n^{mn-2}$ with the property that in the case $m=1$, the resulting representation restricted to $S_{n-1}$ is Haiman's well-known parking function representation \cite{Hai94}. The primary motivation for introducing these modules was to gain a deeper understanding of the work of Berget-Rhoades \cite{BR14}, which studies certain $S_n$-modules with dimension $n^{n-2}$ that also restrict to the parking function representation. In fact, Berget and Rhoades have a more general family of $S_n$-modules with dimension $m^{n-1}n^{n-2}$, which have the property that they carry actions of both $S_{n}$ and $S_{n-1}$. An explicit decomposition into irreducibles for these more general modules is determined for the $S_{n-1}$-action (but not the $S_{n}$-action) in \cite[Theorem 7]{BR14}. We remark here that, unlike in the $m=1$ case \cite[Theorem 2]{BR14}, the case of general $m$ does not explicitly mention any analogue of parking functions. Fix $G=K_n^m$, the complete multigraph on $n$ vertices with exactly $m$ edges between any two distinct vertices. Denote the set of break divisors on $G$ by $\breakd_{m,n}$, and the set of $G$-parking functions (essentially $q$-reduced divisors for some distinguished vertex $q$) by $\park_{m,n}$. The former naturally carries an $S_n$ action while the latter carries an $S_{n-1}$ action. The aim of this note, achieved in Theorem~\ref{thm:frob_permutahedron}, is two-fold. First we `amend' the modules in \cite{KST21} so that we obtain $S_n$-modules $\widehat{\p D}_{m,n}$ of dimension $m^{n-1}n^{n-2}$. These modules are then shown to be $S_n$-isomorphic to the module determined by $\breakd_{m,n}$, and furthermore allow us to show that their restriction to $S_{n-1}$ is isomorphic to the $S_{n-1}$-action on $\park_{m,n}$. This generalizes our main result in \cite{KST21}. 
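As a quick sanity check on these cardinalities: a break divisor on $K_n^m$ is an effective divisor of degree $g(K_n^m)=m\binom{n}{2}-n+1$ whose restriction to every induced $K_k^m$ has degree at least $m\binom{k}{2}-k+1$ (the inequalities recalled in Section~\ref{sec:setup}). The sketch below (Python, brute force, small parameters only) enumerates them and compares the count with the number $m^{n-1}n^{n-2}$ of spanning trees.

```python
from itertools import combinations, product
from math import comb

def is_break_divisor(d, m):
    # Check the break-divisor conditions on K_n^m for a tuple d of vertex values.
    n = len(d)
    if sum(d) != m * comb(n, 2) - n + 1:          # degree equals the genus g(K_n^m)
        return False
    # Every induced subgraph of K_n^m on a vertex set S is K_{|S|}^m, so
    # deg(D|_H) >= g(H) becomes sum_{i in S} d_i >= m*C(|S|,2) - |S| + 1.
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if sum(d[i] for i in S) < m * comb(k, 2) - k + 1:
                return False
    return True

def count_break_divisors(m, n):
    g = m * comb(n, 2) - n + 1
    return sum(is_break_divisor(d, m) for d in product(range(g + 1), repeat=n))

# Expected count: m**(n-1) * n**(n-2), the number of spanning trees of K_n^m.
```

Second, by exploiting the isomorphism $\widehat{\p D}_{m,n}\cong_{S_n} \breakd_{m,n}$, described next, this enumeration acquires representation-theoretic meaning.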
Second, by exploiting the isomorphism $\widehat{\p D}_{m,n}\cong_{S_n} \breakd_{m,n}$, we show that the number of $S_n$-orbits on $\breakd_{m,n}$ equals the \emph{(unquantized) Donaldson-Thomas invariants} $\mathrm{DT}_n^{m+1}$ of the $(m+1)$-loop quiver; see \cite{Rei12} for more on combinatorial and other aspects. Very briefly, Donaldson-Thomas invariants of quivers (with potential) were introduced in \cite{KS11} as a mathematical definition of the string-theoretic concept of BPS state count; they are defined formally via Euler product factorizations of motivic generating series. Realizing the latter as Poincar\'e series of so-called Cohomological Hall algebras, integrality and positivity of Donaldson-Thomas invariants of symmetric quivers were established in \cite{Efi12}. In the particular example of the $(m+1)$-loop quiver (and zero potential), the Donaldson-Thomas invariants ${\rm DT}_n^{m+1}$ can be defined concisely by factoring the generating series of $(m+1)$-ary trees with $n$ nodes $$F(t)=\sum_{n\geq 0}\frac{1}{mn+1}{{(m+1)n}\choose{n}}t^n$$ into a (signed) Euler product: $$F(t)=\prod_{n\geq 1}(1-((-1)^{m}t)^n)^{-(-1)^{mn}n{\rm DT}_n^{m+1}}.$$ Thus, from our main result, we obtain another combinatorial proof of the integrality of these numbers, and it is worthwhile to compare it with the earlier interpretation obtained by the second author. In \cite[\S 6]{Rei12}, the natural cyclic action of $\mathbb{Z}_n\coloneqq \mathbb{Z}/n\mathbb{Z}$ on lattice points of the $mn$-fold dilation of the standard simplex in $\mathbb{R}^n$ is used to obtain a combinatorial interpretation for the $\mathrm{DT}_n^{m+1}$. These numbers, in fact quantized analogues thereof, are shown to count \emph{primitive/nearly-primitive elements} under this cyclic action, and the parity of $m$ plays a role. In contrast, we utilize an $S_n\times \mathbb{Z}_n$ action on lattice points in a disjoint union of certain slices of the cube $[0,mn-1]^n$. 
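The Euler product above can be unwound order by order: taking logarithms, with $\sigma=(-1)^m$ one finds $N\,\sigma^N\,[t^N]\log F=\sum_{n\mid N}\sigma^n n^2\,\mathrm{DT}_n^{m+1}$, which determines the invariants recursively. The sketch below (Python, exact rational arithmetic) extracts the first few invariants from the Fuss-Catalan series; the reference values in the test are our own hand computation from this recursion, not quoted from the literature.

```python
from fractions import Fraction
from math import comb

def dt_invariants(m, N):
    # Extract DT_n^{m+1}, n = 1..N, from F(t) = sum_k C((m+1)k, k)/(mk+1) t^k via
    # the Euler product F(t) = prod_n (1 - ((-1)^m t)^n)^(-(-1)^(mn) n DT_n).
    c = [Fraction(comb((m + 1) * k, k), m * k + 1) for k in range(N + 1)]
    # log F(t) = sum_{j>=1} (-1)^(j+1) (F-1)^j / j, truncated at order N.
    logF = [Fraction(0)] * (N + 1)
    power = [Fraction(1)] + [Fraction(0)] * N      # holds (F-1)^j, starting at j = 0
    for j in range(1, N + 1):
        new = [Fraction(0)] * (N + 1)
        for a in range(N + 1):
            for b in range(1, N + 1 - a):
                new[a + b] += power[a] * c[b]
        power = new
        sign = Fraction((-1) ** (j + 1), j)
        for k in range(N + 1):
            logF[k] += sign * power[k]
    # Recursion: sigma^n n^2 DT_n = n sigma^n [t^n]logF - sum over proper divisors.
    sigma = (-1) ** m
    dt = {}
    for n in range(1, N + 1):
        rhs = n * sigma ** n * logF[n]
        rhs -= sum(sigma ** d * d * d * dt[d] for d in range(1, n) if n % d == 0)
        dt[n] = rhs / (sigma ** n * n * n)
    return [dt[n] for n in range(1, N + 1)]
```

Integrality of the output, guaranteed by \cite{Efi12}, is what the present note reproves combinatorially.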
Identifying lattice points that are in the same $\mathbb{Z}_n$ class then gives an $S_n$-module $\widehat{\p D}_{m,n}$. In fact, each such class contains a unique lattice point belonging to a usual permutahedron closely related to the $m$-fold dilation of the standard permutahedron in $\mathbb{R}^n$. Finally we note that (part of) Theorem~\ref{thm:frob_permutahedron} may be interpreted as saying that the invariants $\mathrm{DT}^{m+1}_n$ for $(m+1)$-loop quivers equal the dimension of the space of $S_n$-invariants of $\widehat{\p D}_{m,n}$. In a similar vein (and in the general setting of symmetric quivers), Efimov \cite{Efi12} interprets the quantized DT-invariants as dimensions of spaces of $S_n$-invariants in certain quotients, but his work does not offer an explicit combinatorial perspective. He raises the question of exploring the underlying combinatorial aspects in \cite[\S 4]{Efi12}. The results in this article suggest looking for a graded analogue of $\widehat{\p D}_{m,n}$ that is $S_n$-isomorphic to Efimov's modules, thereby providing a tantalizing link to Cohomological Hall algebras. This is work in progress. \section{Break divisors, $q$-reduced divisors, and symmetric group actions} \label{sec:setup} We fix positive integers $m$ and $n$ throughout. By $[n]$ we mean $\{1,\dots,n\}$. For all undefined terminology in the context of symmetric functions and symmetric group representations, we refer the reader to \cite[Chapter 7]{St99}. Throughout, given a $G$-set $X$ we refer to both the set and the corresponding $\mathbb{C}G$-module by $X$. We denote by $\mathrm{Frob}$ the \bemph{Frobenius characteristic} map assigning to the irreducible Specht module $V^{\lambda}$ indexed by a partition $\lambda$ the Schur function $s_{\lambda}$. We can extend $\mathrm{Frob}$ linearly and compute the image of any $S_n$-module $V$ by decomposing it into irreducibles. 
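For a permutation module the Frobenius image can be computed directly from the standard character formula $\mathrm{Frob}(V)=\frac{1}{n!}\sum_{\sigma\in S_n}\chi_V(\sigma)\,p_{\lambda(\sigma)}$, where $\lambda(\sigma)$ is the cycle type. The sketch below (Python) does this for the defining action of $S_3$ on $[3]$ and checks numerically, in three variables, that the result is $h_2h_1=s_3+s_{21}$; this is a standard fact, included only to fix conventions.

```python
from itertools import permutations, combinations_with_replacement
from math import prod, factorial

def cycle_type(perm):
    # Cycle lengths of perm, given as a tuple with perm[i] = image of i.
    seen, lengths = set(), []
    for i in range(len(perm)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j, length = perm[j], length + 1
            lengths.append(length)
    return lengths

def frob_set_action(n, x):
    # Frob(V) = (1/n!) sum_sigma chi(sigma) p_{cycle type(sigma)}, evaluated at x;
    # for the action on [n] the character chi(sigma) counts fixed points.
    total = 0.0
    for perm in permutations(range(n)):
        chi = sum(perm[i] == i for i in range(n))
        total += chi * prod(sum(xi ** L for xi in x) for L in cycle_type(perm))
    return total / factorial(n)

def h(k, x):
    # Complete homogeneous symmetric polynomial h_k in the variables x.
    return sum(prod(c) for c in combinations_with_replacement(x, k)) if k else 1.0

x = (0.3, 0.7, 1.1)
```

Here the permutation module decomposes as trivial plus standard, matching $s_3+s_{21}$.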
\subsection{Break divisors on connected graphs} \label{subsec:break divisors} Given a finite graph $G$ (with multiple edges between the same vertices allowed), we denote its sets of vertices and edges by $V(G)$ and $E(G)$ respectively. The \bemph{genus} $g(G)$ of a connected graph $G$ is defined to be $|E(G)|-|V(G)|+1$. We now briefly recall some notions from Baker-Norine's theory \cite{BN07}. A map $D:V(G)\to \mathbb{Z}$ is called a \bemph{divisor}. We say that $D$ is \bemph{effective} if $D(v) \geq 0$ for all $v\in V(G)$. The \bemph{degree} $\deg(D)$ of $D$ equals $\sum_{v\in V(G)}D(v)$. We write divisors either as tuples, say after identifying $V(G)$ with the set $[|V(G)|]$, or as formal sums $D = \sum_{v \in V(G)} D(v) (v)$. For any orientation $\p O$ of the edges of $G$, define the divisor $D_{\p O}$ by \begin{align} D_{\p O}=\sum_{v\in V(G)}(\mathrm{indeg}_{\p O}(v)-1)(v). \end{align} Such divisors are called \bemph{orientable}. Given $q\in V(G)$, we say that $\p O$ is $q$-connected if there exists a directed path from $q$ to any other vertex in $G$. A \bemph{$q$-orientable divisor} is a divisor of the form $D_{\p O}$ where $\p O$ is $q$-connected. A \bemph{break divisor} \cite{MZ08,ABKS14} on $G$ is an effective divisor $D$ of degree $g(G)$ such that for all induced subgraphs $H$ of $G$ the following holds: \begin{align} \label{ineq:condition for break} \deg(D|_{H})\geq g(H). \end{align} Here $\deg(D|_{H})$ denotes the degree of $D$ restricted to vertices in $H$. We denote the set of break divisors on $G$ by $\mathrm{Break}(G)$. We record a result next that we have not been able to locate in the literature, though undoubtedly it should be well known to experts. Let $e_1,\dots,e_n$ denote the standard basis vectors in $\mathbb{R}^n$. Let $G$ be a connected multigraph with $V(G)=[n]$. 
Then $G$ determines a zonotope $\p Z_G$ called the \bemph{graphical zonotope} obtained by taking the Minkowski sum of line segments $[e_i,e_j]$, one for each edge $\{i,j\}\in E(G)$. Suppose that $\Delta_{n-1,n}$ denotes the $(n-1)$th standard hypersimplex in $\mathbb{R}^n$ obtained by taking the convex hull of the $S_n$ orbit of the point $(1^{n-1},0)$. We define the Minkowski difference $P-Q$ of polytopes $P,Q\subset \mathbb{R}^n$ to be $\{x\in \mathbb{R}^n\;|\; x+q\in P\text{ for all }q\in Q\}$. We then have the following result. \begin{proposition} \label{prop:break and lattice points} For any connected multigraph $G$ with $V(G)=\{1,\dots,n\}$ we have \[ \mathrm{Break}(G)=(\p Z_G-\Delta_{n-1,n})\cap \mathbb{Z}^n. \] \end{proposition} \begin{proof} The proof that follows was outlined to us by Chi Ho Yuen. Pick an orientation $\p O$ of the edges of $G$. Let $\mathrm{indeg}_{{\p O}}(v)$ denote the number of edges directed into the vertex $v$. The map $\p O \mapsto (\mathrm{indeg}_{{\p O}}(v))_{v\in V(G)}$ sets up a surjection between orientations on $G$ and lattice points in $\p Z_G$. It follows that the lattice points in $\p Z_G -(1^n)$ are precisely the orientable divisors on $G$. Thus, to establish the claim it suffices to show that \begin{align} \label{eq:reinterpreted_claim} D\text{ is a break divisor} \Longleftrightarrow D-(q) \text{ is orientable } \forall q\in V(G). \end{align} First assume that $D$ is a break divisor. Then \cite[Lemma 3.3]{ABKS14} tells us that $D-(q)$ is $q$-orientable for any $q\in V(G)$. Thus the forward direction is established. Now assume that $D$ is a divisor such that $D-(q)$ is orientable for all $q\in V(G)$. We claim that $D-(q)$ is in fact $q$-orientable for all $q\in V(G)$. Having shown this, it will follow from \cite[Lemma 3.3]{ABKS14} that $D$ is a break divisor. Given $S\subset V(G)$, let $G[S]$ denote the subgraph of $G$ induced by $S$. Let $\chi(S)$ denote the topological Euler characteristic of $G[S]$, i.e. 
$\chi(S)=|V(G[S])|-|E(G[S])|$. Given any divisor $D$ define \begin{align} \chi(S,D)=\deg(D|_S)+\chi(S). \end{align} Fix $q\in V(G)$. Since $D-(q)$ is orientable, by \cite[Theorem 4.8]{ABKS14} we know that \begin{align} \chi(S,D-(q)) \geq 0 \end{align} for every nonempty subset $S\subset V(G)$. If $D-(q)$ is not $q$-orientable, then \cite[Lemma 4.11]{ABKS14} tells us that \begin{align} \chi(S,D-(q)) \leq 0 \end{align} for some nonempty subset $S\subset V(G)\setminus\{q\}$. Thus it must be the case that \begin{align} \label{eq:to be compared} \chi(S,D-(q))=\deg(D-(q)|_S)+\chi(S)=0 \end{align} for some nonempty subset $S\subset V(G)\setminus \{q\}$. For such an $S$, pick any $p\in S$ and consider $\chi(S,D-(p))$. Since $p\in S$, we have that \begin{align} \deg(D-(p)|_S)<\deg(D-(q)|_S). \end{align} On comparing with \eqref{eq:to be compared}, it follows that $\chi(S,D-(p))<0$. By \cite[Theorem 4.8]{ABKS14}, this contradicts our assumption that $D-(p)$ is orientable. It thus follows that $D-(q)$ is $q$-orientable, which concludes the proof. \end{proof} The above proof gives yet another perspective on the following result; see \cite{Yu17} for more fascinating insights on this matter. \begin{corollary} For a connected multigraph $G$, the number of break divisors on $G$ is the number of spanning trees. \end{corollary} \begin{proof} By \cite[Corollary 11.5]{Pos09} we know that $\p Z_G-\Delta_{n-1,n}$ has as many lattice points as the volume of $\p Z_G$, and the latter is well known to equal the number of spanning trees of $G$. \end{proof} \begin{remark}\emph{ Of course, one could take for granted the fact that the number of break divisors equals the number of spanning trees, and then use Proposition~\ref{prop:break and lattice points} to prove Postnikov's result \cite[Corollary 11.5]{Pos09}. This approach is quite different from that in \emph{loc. cit.}, which relies on mixed subdivisions and the Cayley trick.
} \end{remark} \subsection{The case of the complete multigraph} Recall that $K_{n}^m$ is the graph on the vertex set $[n]$ with $m$ edges between vertices $i$ and $j$ for all $1\leq i<j\leq n$. Its genus is given by \begin{align} \label{eq:def_genus_knm} g_{m,n}\coloneqq g(K_n^{m})=m\binom{n}{2}-n+1. \end{align} Our focus henceforth is primarily on $K_n^m$. Let $\breakd_{m,n}$ denote the set of break divisors on $K_n^m$. Let ${\p P}_{m,n}$ be the permutahedron in $\mathbb{R}^n$ obtained as the convex hull of the $S_n$-orbit of $(m(n-1)-1,m(n-2)-1,\dots,m-1,0)$. In the case $m=1$, this permutahedron is exactly the \emph{trimmed permutahedron} that plays a key role in \cite{KST21}. \begin{lemma} We have \[ \breakd_{m,n}=\p P_{m,n}\cap \mathbb{Z}^n. \] \end{lemma} \begin{proof} We give two arguments. For the first, note that the zonotope $\p Z_{K_n^m}$ is given by the $m$-fold dilation of the \emph{standard permutahedron}, i.e. its vertices are given by the $S_n$-orbit of $m\cdot(n-1,n-2,\dots,1,0)$. It follows that $\p Z_{K_n^m}-\Delta_{n-1,n}$ is indeed $\p P_{m,n}$. The claim now follows from Proposition~\ref{prop:break and lattice points}. Alternatively, identifying $V(K_n^m)$ with $[n]$ as usual, let $D=(d_1,\dots,d_n)\in \breakd_{m,n}$. Then $\sum_{1\leq i\leq n}d_i=g_{m,n}$. Since any induced connected subgraph $H$ of $K_{n}^m$ is isomorphic to $K_{j}^m$ for some positive integer $j$, the condition in \eqref{ineq:condition for break} translates to \begin{align} \sum_{i\in S}d_i \geq g_{m,|S|}=m\binom{|S|}{2}-|S|+1 \end{align} for every nonempty $S\subset [n]$. These inequalities define the permutahedron ${\p P}_{m,n}$; see \cite{Rad52}. \end{proof} Since $K_n^m$ has $m^{n-1}n^{n-2}$ spanning trees, we infer that \begin{align} |{\p P}_{m,n}\cap \mathbb{Z}^n|= m^{n-1}n^{n-2}. \end{align} \begin{example} \emph{ Let $m=2$ and $n=3$.
Then $\breakd_{2,3}$ contains 12 elements: the six permutations of $(3,1,0)$, as well as the three permutations each of $(2,2,0)$ and $(2,1,1)$. Note that these elements are exactly the lattice points in the permutahedron ${\p P}_{2,3}$.} \end{example} We now recall another notion of interest. Fix $q\in V(G)$. A \bemph{$q$-reduced divisor} \cite[\S~3.1]{BN07} is a divisor $D$ such that $D(v)\geq 0$ for $v\in V(G)\setminus\{q\}$, and additionally, for every nonempty $S\subset V(G)\setminus \{q\}$ there exists $v\in S$ satisfying $D(v)<\mathrm{outdeg}_S(v)$.\footnote{The chip-firing perspective is helpful here. This condition says that if all vertices in $S$ fire simultaneously, at least one of them will be in debt.} Here $\mathrm{outdeg}_S(v)$ is the number of edges in $G$ connecting $v$ to vertices in $V(G)\setminus S$. Since the quantity $D(q)$ does not play any role in these inequalities, one can ignore it. The function $D$ restricted to $V(G)\setminus \{q\}$ is exactly what is known as a \bemph{$G$-parking function} \cite{Pos04}. We immediately specialize to the case $G=K_n^m$ with vertex set $[n]$, and set $q=n$. By a result of Hopkins-Gaydarov \cite[Theorem 2.5]{GH16}, fortuitously, the set of $K_n^m$-parking functions may be characterized as a set of vector parking functions for an appropriate vector. This is also easy to observe from the characterization of $q$-reduced divisors in the preceding paragraph. Indeed, the sequence $(d_1,\dots,d_{n-1})$ is a $K_{n}^m$-parking function if and only if its weakly increasing rearrangement $(\tilde{d}_1,\dots,\tilde{d}_{n-1})$ satisfies \begin{align} \tilde{d}_i\leq mi-1. \end{align} In other words, $(d_1,\dots,d_{n-1})$ is a $K_{n}^m$-parking function if and only if there are at least $i$ entries $\leq mi-1$ for $1\leq i\leq n-1$. Denote the set of $K_n^{m}$-parking functions by $\park_{m,n}$. When $m=1$ this immediately reduces to the definition of classical parking functions.
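The inequality characterization above is easy to test by brute force. The following Python sketch (the function names are ours, not from the text) enumerates $\park_{m,n}$ and confirms the count $m^{n-1}n^{n-2}$ in small cases.

```python
from itertools import product

def is_parking(seq, m):
    # (d_1, ..., d_{n-1}) lies in Park_{m,n} iff its weakly increasing
    # rearrangement (t_1, ..., t_{n-1}) satisfies t_i <= m*i - 1
    return all(t <= m * (i + 1) - 1 for i, t in enumerate(sorted(seq)))

def park(m, n):
    # every entry of a K_n^m-parking function is at most m*(n-1) - 1,
    # so a finite search over tuples suffices
    return [s for s in product(range(m * (n - 1)), repeat=n - 1)
            if is_parking(s, m)]

# |Park_{m,n}| = m^(n-1) * n^(n-2); in particular |Park_{2,3}| = 12
assert len(park(2, 3)) == 12
assert len(park(3, 3)) == 3 ** 2 * 3 ** 1
```

For $m=1$ this recovers the classical parking-function counts, e.g. `len(park(1, 3))` returns the $3 = 1^{2}\cdot 3^{1}$ classical parking functions of length two.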
There is a notion of linear equivalence on divisors defined in \cite{BN07}. For any connected graph $G$, it turns out that each linear equivalence class of degree $g(G)$ divisors contains a unique break divisor. Furthermore, every linear equivalence class contains a unique $q$-reduced divisor \cite[Proposition 3.1]{BN07}, and so we infer that \[ |\breakd_{m,n}|=|\park_{m,n}|=m^{n-1}n^{n-2}. \] The characterization of $q$-reduced divisors as vector parking functions implies that $\park_{m,n}$ carries a permutation action of $S_{n-1}$. That break divisors are lattice points in a certain permutahedron implies that $\breakd_{m,n}$ carries a permutation action of $S_n$. {\sf Is there a relation between the resulting modules?} To motivate our main theorem we consider an example. \begin{example} \label{ex:demo_main} \emph{ Consider $m=2$ and $n=3$. On the one hand, the Frobenius characteristic of the $S_3$ action on $\breakd_{2,3}$ equals \[ \mathrm{Frob}(\breakd_{2,3})=h_{111}+2h_{21}=3s_3+4s_{21}+s_{111}, \] where $h$ denotes the complete homogeneous symmetric function. } \emph{On the other hand, the set $\park_{2,3}$ contains 12 elements: ordered pairs $(d_1,d_2)$ where $0\leq d_1\leq 1$ and $0\leq d_2\leq 3$, and their rearrangements. The Frobenius characteristic of the $S_2$ action on $\park_{2,3}$ is \[ \mathrm{Frob}(\mathrm{Park}_{2,3})=2h_2+5h_{11}=7s_2+5s_{11}. \] The reader may now verify that $\mathrm{Park}_{2,3}=\mathrm{Res}_{S_{2}}^{S_3}\breakd_{2,3}$ where $\mathrm{Res}$ denotes restriction. As we shall see in Theorem~\ref{thm:frob_permutahedron}, this phenomenon is part of a larger picture. } \end{example} Before we offer a unifying perspective on these two symmetric group representations, we make a remark. 
It is true that the equality $\mathrm{Park}_{m,n}=\mathrm{Res}_{S_{n-1}}^{S_n}\breakd_{m,n}$ can be deduced from the fact that linear equivalence classes on divisors of degree $g(K_n^m)$ contain a unique break divisor as well as a unique $q$-reduced divisor. That being said, gleaning any further information about the representation $\breakd_{m,n}$, say character values or the multiplicity of the trivial representation, is not immediate from the definition of break divisors. This opacity motivates the perspective we proceed to describe in what follows. \section{The modules \texorpdfstring{$\p D_{m,n}$}{D m,n} and \texorpdfstring{$\widehat{\p D}_{m,n}$}{dD m,n} } Set $N\coloneqq mn$. Consider the set of $n$-tuples defined as follows: \begin{align*} \p D_{m,n}\coloneqq \{(x_1,\dots,x_{n})\;|\; 0\leq x_i\leq N-1, \sum_{1\leq i\leq n}x_i=g_{m,n} \: (\md N)\}. \end{align*} The cardinality of $\p D_{m,n}$ is clearly $N^{n-1}$. The symmetric group $S_n$ acts by permutations and the orbits are indexed by partitions that fit in an $n\times (N-1)$ box and have size congruent to $g_{m,n}$ modulo $N$, or equivalently, multisets of size $n$ with entries drawn from $\{0,\dots,N-1\}$ and summing to $g_{m,n}$ modulo $N$. Let us denote the set of $S_n$-orbits by $S_n\backslash {\p D}_{m,n}$. We first compute the cardinality of this set, as we will need it subsequently. To this end, we recall a result of von Sterneck from the early 1900s; see \cite[Theorem 3]{Ra44} for a statement in English. Given positive integers $a$ and $b$, consider the \bemph{Ramanujan sum} \cite[Theorem 272]{HW08} \begin{align} \label{eq:ramanujan sum} C_b(a)\coloneqq \sum_{\substack{1\leq k\leq b\\ \gcd(k,b)=1}}\mathrm{exp}(2\pi ika/b)=\mu\left(\frac{b}{\gcd(a,b)}\right)\frac{\phi(b)}{\phi\left(\frac{b}{\gcd(a,b)}\right)}, \end{align} where $\mu$ is the number-theoretic M\"{o}bius function and $\phi$ is the Euler phi function.
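As a sanity check on \eqref{eq:ramanujan sum}, the closed form can be compared against the defining exponential sum by brute force. The Python sketch below (helper names are ours) does exactly this for small arguments.

```python
import cmath
from math import gcd

def mobius(n):
    # Moebius function via trial factorisation (adequate for small n)
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # a squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def phi(n):
    # Euler's totient by counting coprime residues (small n only)
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def ramanujan_sum(b, a):
    # C_b(a) as the exponential sum over k coprime to b
    s = sum(cmath.exp(2j * cmath.pi * k * a / b)
            for k in range(1, b + 1) if gcd(k, b) == 1)
    return round(s.real)  # the sum is a real integer

def ramanujan_closed(b, a):
    # the von Sterneck / Hoelder closed form on the right of the display
    g = gcd(a, b)
    return mobius(b // g) * phi(b) // phi(b // g)

# the two expressions agree, e.g. C_6(3) = -2
assert all(ramanujan_sum(b, a) == ramanujan_closed(b, a)
           for b in range(1, 12) for a in range(1, 12))
```

The division in `ramanujan_closed` is exact because $\phi(b/\gcd(a,b))$ divides $\phi(b)$ whenever $b/\gcd(a,b)$ divides $b$.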
\begin{lemma} The number of multisets of cardinality $k$ with entries drawn from $\{0,\dots, a-1\}$ and with sum congruent to $b$ modulo $a$ equals \[ \frac{1}{a}\sum_{d\mid \gcd(a,k)}\binom{\frac{a+k}{d}-1}{\frac{k}{d}}C_d(b). \] \end{lemma} We are interested in the case $a=N$, $k=n$, and $b=g_{m,n}$. We thus have \begin{align} |S_n\backslash {\p D}_{m,n}|=\frac{1}{mn}\sum_{d|n}\binom{\frac{(m+1)n}{d}-1}{\frac{n}{d}}C_d(g_{m,n}). \end{align} It remains to substitute the Ramanujan sum $C_d(g_{m,n})$ for $d|n$. Equation~\eqref{eq:ramanujan sum} tells us that \begin{align} C_d(g_{m,n})=\mu(d/r)\frac{\phi(d)}{\phi(d/r)}, \end{align} where $r=\gcd(d,g_{m,n})=\gcd(d,m\binom{n}{2}-n+1)$. If $m$ is even or $n$ is odd, we must have $r=1$. This leaves the case of $m$ odd and $n$ even. Taking cases based on $n\!\mod 4$ we find that if $n\!\mod 4=0$, then $r=1$. Otherwise \[ r=\left\lbrace\begin{array}{ll}1 & d \text{ odd,}\\ 2 & d \text{ even.}\end{array}\right. \] In summary, our calculations above imply that if $m$ is odd and $n\equiv 2\!\!\mod 4$, then \begin{align} |S_n\backslash \p D_{m,n}|= \frac{1}{mn}\displaystyle\sum_{\substack{d|n\\ d \text{ odd}}}\mu(d)\binom{\frac{(m+1)n}{d}-1}{\frac{n}{d}}-\frac{1}{mn}\displaystyle\sum_{\substack{d|n\\ d \text{ even}}}\mu(d)\binom{\frac{(m+1)n}{d}-1}{\frac{n}{d}}. \end{align} In all other cases we get \begin{align} |S_n\backslash \p D_{m,n}|= \frac{1}{mn}\displaystyle\sum_{d|n}\mu(d)\binom{\frac{(m+1)n}{d}-1}{\frac{n}{d}}. \end{align} Rewriting $d$ as $n/d$ and modifying appropriately, we record the preceding computation as \begin{proposition} \label{prop:orbits of D} The number of orbits of $\p D_{m,n}$ under the permutation action of $ S_n$ is \begin{align*} |S_n\backslash \p D_{m,n}|=\frac{1}{n}\sum_{d|n} (-1)^{m(n+d)}\mu(n/d)\binom{(m+1)d-1}{md}. \end{align*} \end{proposition} Up until this point we have only considered the $S_n$ action on ${\p D}_{m,n}$. There is a $\mathbb{Z}_n$ action on this set as well.
It does not arise from cyclic rotation of coordinates; instead, it is given by `translation' by the vector $m\cdot (1,\dots,1)$. We explore this action next. \subsection{The module \texorpdfstring{$\widehat{\p D}_{m,n}$}{DHat m,n}} We define a simple cyclic action on $\p D_{m,n}$ which groups its elements into $m^{n-1}n^{n-2}$ cyclic classes. This cyclic action commutes with the action of $S_n$, and thus the cyclic classes end up inheriting an $S_n$ action too. Define the \bemph{shift} map $\shift$ mapping $\{0,\dots,N-1\}^n$ to itself via \begin{align*} \shift(x_1,\dots,x_n)\coloneqq (x_1+m,\dots,x_n+m), \end{align*} where addition is performed modulo $N$. This gives an equivalence relation $\sim$ on $\p D_{m,n}$: two sequences are \emph{shift-equivalent} if one is obtained by applying $\shift^j$ to the other for some $j\in \mathbb{N}$. As mentioned above, $S_n$ acts on $\widehat{\p D}_{m,n}\coloneqq \p D_{m,n}/\sim$. In addition to having the right dimension, $\widehat{\p D}_{m,n}$ turns out to possess an additional desirable property: every equivalence class in $\p D_{m,n}/\!\sim$ contains a unique break divisor in $\breakd_{m,n}$ and, upon dropping the last coordinate, a unique $G$-parking function in $\park_{m,n}$. Indeed, going back to Example~\ref{ex:demo_main}, note for instance that the shift-equivalence class of $(2,2,0)\in \breakd_{2,3}$ is $\{(2,2,0),(4,4,2),(0,0,4)\}$. Omitting the last coordinates, we see that $(0,0)$ is the unique resulting element lying in $\park_{2,3}$. Let us henceforth, given $\mathbf{a}=(a_1,\dots,a_{n})$, denote by $\pi(\mathbf{a})$ the sequence $(a_1,\dots,a_{n-1})$ obtained by omitting the last coordinate. We now proceed toward establishing the claim in general. \begin{proposition} \label{prop:unique parking function} Given a shift equivalence class $\mathcal{C}$ in $\widehat{\p D}_{m,n}$, there exists a unique element $\mathbf{a}\in {\p C}$ such that $\pi(\mathbf{a})\in \park_{m,n}$.
\end{proposition} \begin{proof} We adapt a folklore argument, attributed to Pollak, for counting classical parking functions. Fix $\mathbf{b}\in {\p C}$. Consider $N$ parking spots labeled $0$ through $N-1$ arranged clockwise along a circle, with $N-1$ neighboring $0$. Consider $n-1$ cars labeled $1$ through $n-1$, and let $b_i$ denote the preferred parking spot for car $i$. The `usual' rules of parking apply: the cars come in the order $1$ through $n-1$, each car takes its preferred spot if it is free, otherwise continues clockwise and parks at the next free spot. Let $O_{\mathbf{b}}$ denote the set of occupied spots after all cars have parked. We consider the parking spots as $n$ contiguous blocks of size $m$ each. Define the sequence $c=(c_1,\dots,c_n)$ by setting \[ c_i=|O_{\mathbf{b}}\cap \{m(i-1),\dots,mi-1\}|. \] Clearly $c_1+\cdots+c_n=n-1$. A routine application of the cycle lemma implies that there is a unique rotation $\tilde{c}=(c_{j+1},\dots,c_n,c_1,\dots,c_j)$ of $c$ with the property that \[ \tilde{c}_1+\dots+\tilde{c}_k \geq k \] for all $1\leq k\leq n-1$. This in turn means that $\tilde{b}=\shift^{n-j}(\mathbf{b})$ has the property that $\pi(\tilde{b})\in\park_{m,n}$, and that $j$ is the only choice with this property. \end{proof} Note that the proof makes no use of the last coordinate of elements in ${\p D}_{m,n}$. \begin{example} \emph{ Consider $m=3$ and $n=5$. Pick $\mathbf{b}=(3,13,7,13,5)\in {\p D}_{3,5}$. The remaining elements in the shift-equivalence class of $\mathbf{b}$ are \[ \{(6,1,10,1,8),(9,4,13,4,11),(12,7,1,7,14),(0,10,4,10,2)\}. \] When we park cars in the spots given by $\pi(\mathbf{b})=(3,13,7,13)$, the occupied spots are given by $\{3,7,13,14\}$. Thus the sequence $c=(c_1,\dots,c_5)$ is given by $(0,1,1,0,2)$. The rotated version $\tilde{c}$ with the desired property is $(c_5,c_1,\dots,c_4)=(2,0,1,1,0)$. Now consider $\shift^{5-4}(\mathbf{b})=(6,1,10,1,8)$.
It is easily checked that $(6,1,10,1)\in \park_{3,5}$.} \end{example} Next we show that every shift-equivalence class in $\widehat{\p D}_{m,n}$ contains a break divisor. We will need a preliminary lemma. Let $\Lambda_n$ denote the set of partitions $\lambda=(\lambda_1\geq \cdots \geq \lambda_n\geq 0)$ in $\mathbb{N}^n$. Given $\mathbf{x}=(x_1,\dots,x_n)\in \mathbb{N}^n$, define $\sort(\mathbf{x})$ to be the partition obtained by sorting $\mathbf{x}$ in nonincreasing order. \begin{lemma}\label{lem:unique_rep_dominated} Given $\lambda=(\lambda_1,\dots,\lambda_n)\in \Lambda_n\cap \breakd_{m,n}$, no element in the set $\{\sort\circ \shift^j(\lambda)\;|\; 1\leq j\leq n-1\}$ belongs to $\Lambda_n\cap \breakd_{m,n}$. \end{lemma} \begin{proof} The argument is entirely similar to \cite[Lemma 4.2]{KST21}; one simply needs to incorporate the parameter $m$ carefully. For the sake of completeness, we give the argument. Any $\lambda$ sitting (in French notation) inside an $n\times N$ box can be viewed as a lattice path $L_{\lambda}$ going from $(N,0)$ to $(0,n)$. We extend this to a bi-infinite path $L_{\lambda}^{\infty}$ by repetition. Label the horizontal steps in $L_{\lambda}$ with integers $0$ through $N-1$ going right to left. Fix a $j$ such that $0\leq j\leq n-1$ and consider the fragment $L'$ of $L_{\lambda}^{\infty}$ of length $(m+1)n$ that starts with the horizontal step labeled $mj$ and proceeds northwest. Observe that $L'$ determines the partition $\sort\circ \shift^j(\lambda)$ when viewed in the $n\times N$ box it lives in. If we let $i$ denote the number of vertical steps in $L_{\lambda}$ preceding the horizontal step labeled $mj$, then we have \begin{align}\label{eqn:size_change_upon_shifting} |\sort\circ \shift^j(\lambda)|=|\lambda|+(j-i)N. \end{align} \medskip Now suppose there exists $1\leq j\leq n-1$ such that $\sort\circ \shift^j(\lambda)\in\Lambda_n\cap \breakd_{m,n}$. By \eqref{eqn:size_change_upon_shifting} we must have $j=i$.
Thus, the horizontal step labeled $mj$ must touch the diagonal $x+my=N$. Let $\nu=(\nu_1,\dots,\nu_{n-j})$ be the partition determined by the subpath of $L_{\lambda}$ restricted to the $(n-j)\times m(n-j)$ box in the top left. Let $\mu=(\mu_1,\dots,\mu_j)$ be the partition determined by the subpath of $L_{\lambda}$ restricted to the $j\times mj$ box in the bottom right. Observe that \begin{align} \lambda&=(m(n-j)+\mu_1,m(n-j)+\mu_2,\dots,m(n-j)+\mu_{j},\nu_1,\dots,\nu_{n-j})\\ \sort\circ \shift^j(\lambda)&=(mj+\nu_1,mj+\nu_2,\dots,mj+\nu_{n-j},\mu_1,\dots,\mu_j). \end{align} \medskip Since $\lambda\in\Lambda_n\cap \breakd_{m,n}$, by definition it is dominated by the partition $\delta_{m,n}=(m(n-1)-1,m(n-2)-1,\dots,m\cdot 1-1,0)$. By comparing the sum of the first $k$ parts of $\lambda$ with that of the first $k$ parts of $\delta_{m,n}$, we have \begin{align} \sum_{k=1}^{j}(m(n-j)+\mu_k) \leq (m(n-1)-1)+\cdots+ (m(n-j)-1). \end{align} Since the left-hand side is $|\lambda|-|\nu|=m\binom{n}{2}-n+1-|\nu|$, we may rewrite the above inequality as \begin{align}\label{eqn:initial_ineq} |\nu| \geq m\binom{n-j}{2}-(n-j)+1. \end{align} On the other hand, since our assumption is that $\sort\circ \shift^j(\lambda)$ is also dominated by $\delta_{m,n}$, by comparing the sum of the first $n-j$ parts we obtain \begin{align} \sum_{k=1}^{n-j}(mj+\nu_k)\leq (m(n-1)-1)+\cdots +(mj-1). \end{align} This in turn may be rewritten as \begin{align} \label{eqn:final_ineq} |\nu|\leq m\binom{n-j}{2}-(n-j), \end{align} which is in contradiction with the inequality in \eqref{eqn:initial_ineq}. \end{proof} \begin{proposition} \label{prop:unique break divisor} Given a shift equivalence class $\mathcal{C}$ in $\widehat{\p D}_{m,n}$, there exists a unique element $\mathbf{a}\in {\p C}$ such that $\mathbf{a}\in \breakd_{m,n}$. 
\end{proposition} \begin{proof} Consider the map $\phi:\breakd_{m,n} \to \widehat{\p D}_{m,n}$ sending a break divisor $\mathbf{b}$ to the unique shift equivalence class $[\mathbf{b}]$ that contains it. Since $|\breakd_{m,n}|=|\widehat{\p D}_{m,n}|$, to prove the claim we only need to show that $\phi$ is an injection. This is straightforward given Lemma~\ref{lem:unique_rep_dominated}, and the details are the same as the proof of \cite[Theorem 4.3]{KST21}. So we omit them. \end{proof} We are now ready to state our main theorem connecting the various pieces. \begin{theorem}\label{thm:frob_permutahedron} The representation $\breakd_{m,n}$ is isomorphic to the representation $\widehat{\p D}_{m,n}$. Furthermore, the number of $S_n$-orbits on $\breakd_{m,n}$ equals the numerical DT-invariant $\mathrm{DT}^{m+1}_n$ for the $(m+1)$-loop quiver \cite{Rei11,Rei12}. More precisely, \[ |S_n\backslash \widehat{\p D}_{m,n}|= \frac{1}{n^2}\sum_{d|n} (-1)^{m(n+d)}\mu(n/d)\binom{(m+1)d-1}{md}. \] Finally, we have \[ \mathrm{Res}_{S_{n-1}}^{S_n}(\breakd_{m,n})=\park_{m,n}. \] \end{theorem} \begin{proof} The claim $\breakd_{m,n}\cong_{S_n}\widehat{\p D}_{m,n}$ follows from the fact that the map $\phi$ in Proposition~\ref{prop:unique break divisor} is $S_n$-equivariant. In arriving at Proposition~\ref{prop:orbits of D}, we accounted for the $S_n$-action. To count $S_n\times\mathbb{Z}_n$-orbits on ${\p D}_{m,n}$, we observe that each shift-equivalence class has size $n$, so we only need to divide the expression in Proposition~\ref{prop:orbits of D} by $n$. The fact that the resulting expression is a DT-invariant follows by comparing to \cite[Theorem 3.2]{Rei12}.\footnote{There is a minor typo in the expression in \emph{loc. cit.}; the $n$ in the binomial coefficient should be replaced by a $d$.} This establishes the second claim. For the third claim, consider $S_{n-1}$ naturally as a subgroup of $S_n$ consisting of permutations that have $n$ as a fixed point.
Let $(a_1,\dots,a_{n-1})\in \park_{m,n}$, and consider the unique element $\mathbf{a}\in {\p D}_{m,n}$ such that $\pi(\mathbf{a})=(a_1,\dots,a_{n-1})$. Then we know that there is a unique break divisor $\mathbf{b}$ shift-equivalent to $\mathbf{a}$. Now let $\sigma\in S_{n-1}$ act by permuting the first $n-1$ coordinates. Since shifts commute with the $S_n$-action on ${\p D}_{m,n}$, we have that $\sigma\cdot \mathbf{b}$ is shift-equivalent to $\sigma\cdot \mathbf{a}$. We conclude that $\mathrm{Res}_{S_{n-1}}^{S_n}(\breakd_{m,n})=\park_{m,n}$. \end{proof} \begin{example} \emph{ Consider $m=2$ and $n=4$. The orbits of the $S_n$ action on $\breakd_{m,n}$ are indexed by partitions $\lambda=(\lambda_1,\dots,\lambda_4)\vdash 9$ dominated by $(5,3,1,0)$. It is easily checked that there are $10$ such partitions. Let us check that this count matches the value of $\mathrm{DT}^{m+1}_{n}$. The sum in Theorem~\ref{thm:frob_permutahedron} becomes \[ \frac{1}{16}\left(\binom{11}{8}-\binom{5}{4}\right)=\frac{1}{16}\left(165-5\right)=10. \] } \end{example} We let $\chi_{m,n}$ denote the character of $\breakd_{m,n}$. The following corollary tells us that $\chi_{m,n}$ has a succinct description, which we do not know how to arrive at without the isomorphism $\breakd_{m,n}\cong_{S_n}\widehat{\p D}_{m,n}$. The proof of this result follows the exact same route as laid out in the proof of \cite[Theorem 3.1]{KT21}, and we direct the reader to \emph{loc.\ cit.} for more details. \begin{corollary}\label{cor:lattice_points_fixed} Let $\sigma\in S_n$ have cycle type $\lambda=(\lambda_1,\dots,\lambda_{\ell})$ where $\lambda_{\ell}>0$. Let $d\coloneqq \GCD(\lambda_1,\dots,\lambda_{\ell})$. Then the number of break divisors in $\breakd_{m,n}$ fixed by $\sigma$, i.e. $\chi_{m,n}(\sigma)$, is given by \begin{align*} \chi_{m,n}(\sigma)=\left\lbrace\begin{array}{ll} m^{\ell-1}n^{\ell-2} & d=1,\\ 2m^{\ell-1}n^{\ell-2} & d=2, m \text{ odd, and } n=2 \: (\md 4),\\ 0 & \text{otherwise.} \end{array}\right. 
\end{align*} \end{corollary} \section*{Acknowledgements} V.T. is extremely grateful to Chi Ho Yuen for enlightening email correspondence. \bibliographystyle{alpha}
\section{Introduction} \label{sec:Intr} Gravitating matter tends to clusterise and form objects on different scales. It is suspected that, similar to stars, star clusters may form groups \citep{1996ApJ...466..802E, 1997AJ....113..249F, 2010MNRAS.403..996B, 2013MNRAS.434..313G} and physical pairs. The fraction of these {\it binary clusters} was first estimated at a level of 20\,\%~\citep{1976Ap.....12..204R} but was later revised down to 10\,\%~\citep{2009A&A...500L..13D}. The latter and some other works~\citep[see, e.g.,][]{2010A&A...511A..38V, 2017A&A...600A.106C} give lists of potential cluster binaries and even triples; however, many of them were eventually dismissed~\citep{2010A&A...511A..38V, 2018A&A...619A.155S}. Using the Gaia DR2 catalogue, \cite{2018A&A...619A.155S} have shortened the list to eleven pairs. The prime candidates in our Galaxy are $h$ and $\chi$~Per -- large and massive open clusters, close to each other on the sky, with nearly the same distance of $2.2\pm0.2$~kpc from the Sun \citep[e.g.,][]{2019A&A...624A..34Z}. This large distance, however, makes their detailed study difficult. Meanwhile, the Magellanic Clouds may contain a lot of binary clusters~\citep[e.g.,][]{1988MNRAS.230..215B, 1990A&A...230...11H, 1999AcA....49..165P, 2002A&A...391..547D}. Numerical simulations \citep[e.g.,][]{2007MNRAS.374..931P} and spectroscopic investigations \citep[e.g.,][]{2019A&A...622A..65M} prove the existence of physical pairs of clusters in the LMC. The difference in the number of binary clusters between the Magellanic Clouds and our own Galaxy can be explained either by some kind of observational bias \citep[e.g.][]{2010A&A...511A..38V} or by peculiarities of the formation and/or destruction processes in different types of galaxies.
Numerical simulations of a test binary cluster in the Galactic tidal field demonstrate a complicated dependence of its subsequent history on its initial properties before the components actually merge and produce a single rotating star cluster \citep{2016MNRAS.457.1339P}. These results were not tested on real Galactic clusters because of the lack of relevant observational data. The Gaia mission~\citep{2018A&A...616A...1G} resulted in the discovery of many previously unknown clusters \citep[e.g.][]{2018A&A...618A..59C, 2018A&A...618A..93C, 2020A&A...635A..45C}. One of them, UBC~7, was found at a distance of $\sim 300$~pc from the Sun near the well-known open cluster Collinder~135 (hereafter Cr~135) and mentioned as probably related to it \citep{2018A&A...618A..59C}. Before Gaia, the stars now attributed to UBC~7 were considered part of Cr~135. With the use of Gaia DR2, in this paper, we first aim at disentangling the stellar membership between the two clusters and at obtaining the most probable kinematic parameters of the clusters in 6D space. We use these data to recover plausible initial conditions of the clusters, enabling a future study of their detailed evolution with full-scale N-body simulations. \section{Characterising Cr~135 and UBC~7 with Gaia DR2} \label{sec:data} Cr~135 and UBC~7 are located at a distance of approximately 300~pc in the Vela-Puppis star formation region ($245\,{\rm deg} \lesssim l \lesssim 265$ deg, $-15\,{\rm deg} \lesssim b \lesssim -5$ deg), which was recently thoroughly investigated with respect to its large-scale structure and kinematics using Gaia DR2 data \citep{2019A&A...626A..17C, 2019A&A...621A.115C, 2020MNRAS.491.2205B}. In particular, the region at distances between 250 and 500~pc hosts several young open clusters divided into groups of similar age. The oldest group \citep[30 to 50 Myr according to different authors, e.g.][]{2019A&A...626A..17C, 2020MNRAS.491.2205B} includes Cr~135 and UBC~7.
\subsection{Cluster membership} \label{sec:MWSC} As a compromise that allows us to study the outer regions of the clusters and simultaneously avoid contamination from nearby groups, we used all sources of Gaia DR2 within a radius of $6.5$ deg around the center of Cr~135 ($\alpha=108.3$\,deg, $\delta=-37.35$\,deg, also applicable to UBC~7) that satisfy requirements for ``astrometrically pure'' solutions according to \cite{2018A&A...616A...2L} and technical note GAIA-C3-TN-LU-LL-124-01\,\footnote{\url{http://www.rssd.esa.int/doc\_fetch.php?id=3757412}}. These requirements include $ruwe<1.4$, limits on the flux excess factor \citep{2018A&A...616A...2L}, and the selection of sources with $\sigma_\varpi / \varpi \leq 10\%$. The number of sources satisfying these requirements is 411,153. For each of these sources, we calculate the cluster membership probability (MP) for Cr~135 or UBC~7 following the principles formulated in \cite{2012A&A...543A.156K} with specific adjustments to use Gaia DR2 data described below. Initial estimates of basic parameters $\overline{\mu}^k_l$, $\overline{\mu}^k_b$, $\overline{\varpi}^k$, age $T^k$, $E^k({\rm BP-RP})$ of Cr~135 ($k=1$) and UBC~7 ($k=2$) are obtained for a subsample of evident members of the two clusters based on a visual analysis of astrometric and photometric diagrams: the vector point diagram (VPD), a parallax vs.\ magnitude plot ($\varpi,\,G$), and a Gaia colour--magnitude diagram (CMD). The parameters are further adjusted along with a list of cluster members in an iterative procedure \citep[see details in][]{2012A&A...543A.156K}. Isochrones for Gaia DR2 passbands from \cite{2018A&A...619A.180M} are obtained from the Padova webserver CMD3.3\,\footnote{\url{http://stev.oapd.inaf.it/cmd}}, based on the calculations by \cite{2012MNRAS.427..127B} for solar metallicity $Z=0.0152$.
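The source-selection cuts above can be sketched as a simple Python filter. This is only an illustrative sketch: the function name is ours, and the flux-excess-factor criterion (which in reality depends on colour) is abstracted as a boolean flag.

```python
def astrometrically_pure(ruwe, parallax, parallax_error, flux_excess_ok=True):
    """Sketch of the quality cuts of Sec. 2.1: ruwe < 1.4, relative
    parallax error at most 10%, and the BP/RP flux-excess criterion
    of Lindegren et al. (2018), abstracted here as a boolean."""
    if parallax <= 0:
        # negative or zero parallaxes cannot satisfy the relative-error cut
        return False
    return (ruwe < 1.4
            and parallax_error / parallax <= 0.10
            and flux_excess_ok)

# a source at ~300 pc (parallax ~3.3 mas) with a 0.05 mas error passes
assert astrometrically_pure(1.1, 3.3, 0.05)
# a source failing the ruwe cut is rejected
assert not astrometrically_pure(1.5, 3.3, 0.05)
```

Applying such a filter to the 6.5 deg field is what reduces the sample to the 411,153 sources quoted in the text.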
We apply systematic corrections for the $G$ magnitudes of sources\footnote{\url{https://www.cosmos.esa.int/web/gaia/dr2-known-issues\#PhotometrySystematicEffectsAndResponseCurves}} to adjust for the use of these passbands. We use a relation between $E({\rm BP-RP})$ and $A_{G}$ based on coefficients provided at CMD3.3 for $A_\lambda/A_V$ for Gaia photometric bands following the relations by \cite{1989ApJ...345..245C} and \citet{1994ApJ...422..158O}, which leads to $A_{G}/E({\rm BP-RP}) \approx 2.05$. We find that the ages of the two clusters cannot be distinguished, and neither can their reddenings, so we use a single value of each for both clusters. In contrast, the proper motions and mean parallaxes of the selected groups of stars are clearly different. The modification to derive the mean parameters of the clusters with respect to that described in \cite{2012A&A...543A.156K} involves taking into account the parallax probability $P^{i,k}_{\rm \varpi}$ and the photometric probability $P^{i,k}_{\rm ph}$ based on the $(G, {\rm BP-RP})$ CMD to derive the MP. The resulting MP is given by: \begin{equation} P^{i,k}=\min(P^{i,k}_{\rm kin}, P^{i,k}_{\rm \varpi}, P^{i,k}_{\rm ph}). \end{equation} The values of the parameters $\varepsilon_{\mu_l}, \varepsilon_{\mu_b}, \varepsilon^i_{\rm \varpi}, \varepsilon^i_{\rm ph}$, characterising the expected dispersion of cluster member parameters in the calculation of the probabilities $P^{i,k}_{\rm kin}$, $P^{i,k}_{\rm \varpi}$, and $P^{i,k}_{\rm ph}$, are estimated as follows. The dispersion of proper motions in the clusters at a 300~pc distance is mainly due to the actual velocity dispersion of cluster members rather than due to the proper motion errors of individual stars (mean value of 0.15~mas/yr, maximum of 0.45~mas/yr for cluster member candidates). We set $\varepsilon_{\mu_l}, \varepsilon_{\mu_b}= 1.8$~mas/yr. In turn, the dispersion of parallaxes is dominated by the accuracy of the observations rather than by the actual dispersion of the distances.
The pre-defined limit of a relative error of $10\%$ corresponds to approximately 25~pc at the given distance, which is definitely more than the spatial dispersion of members of moderate-size open clusters. For the allowances for parallaxes of possible cluster members, we use an expression for the external calibration of the parallax error\,\footnote{\label{Lindegren}\url{https://www.cosmos.esa.int/documents/29201/1770596/Lindegren\_GaiaDR2\_Astrometry\_extended.pdf/1ebddb25-f010-6437-cb14-0e360e2d9f09}}. The value of $\varepsilon^i_{\rm ph}$ is defined from the dispersion of mean parallaxes obtained at the initial selection of reliable cluster members, equal to $0.12$~mas for each cluster, and the individual photometric error $\sigma^i_G$ estimated from the flux and flux error in $G$. Fig.~\ref{fig:obsdata} shows the distribution of the selected members over the map of the considered region (a), the VPD (b), the parallax-magnitude diagram (c), and the CMD (d). One can see that the dispersion of the observational data of probable members allows one to distinguish the clusters in panels (a,b,c) but not in (d). \begin{figure*}[!htb] \centering \includegraphics[width=0.9\textwidth]{obs5corr5.eps} \caption{Observational data for Cr~135 and UBC~7: Red dots represent sources having larger Cr~135 MPs, and blue dots sources having larger UBC~7 MPs. Shaded dots are for Gaia DR2 sources having $P^{i,k}<0.01$ for both clusters. (a) Location of probable members of Cr~135 and UBC~7 in the Galactic $l,b$-plane; (b) VPD; (c)~magnitude -- parallax diagram; (d) CMD. In (d) red and blue lines represent isochrones for 40~Myr. The green line is for the zero-age main sequence built in the present study as the hot envelope of the related set of Padova isochrones of different ages. Absolute magnitudes and colours are reduced to the apparent scale using mean parallaxes and reddening of central probable members. Large circles in (a) mark central regions of the clusters (see Sec.~\ref{sec:param}).
} \label{fig:obsdata} \end{figure*} \subsection{Parameters of the clusters} \label{sec:param} Having determined the MPs with respect to the two clusters, we estimate their ages to be between 40 and 50~Myr. We select sources with $P^{i,k}>0.6$ as the most probable members of Cr~135 and UBC~7. In total, 244 and 184 stars are identified as probable members of Cr~135 and of UBC~7, respectively. Twelve stars are probable members of both clusters. The distribution of probable cluster members in the sky looks like two relatively compact cores surrounded by a halo of stars extending to more than 5~deg from the centres of the clusters. We estimate the probability of a random contamination of the dataset with field stars by applying the search procedure to stars satisfying the same conditions as those selected as probable members of Cr~135 and UBC~7 in eight equal areas at the perimeter of the Vela-Puppis region. The resulting selections yield 0 to 7 false members per area. Thus, the extended structures hosting tens of probable members around Cr~135 and UBC~7 are not due to occasional field contamination and may be a part of the extended common halo of the two clusters, or a signature of the filamentary structures discovered by \cite{2020MNRAS.491.2205B}. Sometimes, it is difficult to attribute a star satisfying the membership conditions to one cluster or the other, or to the outer structure. We select only the probable members belonging to the central parts of the two clusters to assess the motion of the cluster centres. The coordinates and parallaxes of the cluster centres are derived from the position of the maxima of the distribution of stellar density of probable members of Cr~135 and UBC~7, respectively. The radii of the central parts of the clusters are selected as estimates of their apparent half-mass radii \citep[HMR, see, e.g., ][]{2011A&A...531A..92R}. For this purpose, masses are attributed to probable members by placing them onto the isochrone for 40~Myr (unresolved binaries neglected).
These lower-boundary estimates of the total mass, based on the most probable members with purely astrometric data, are $126 M_\odot$ and $87 M_\odot$ for Cr~135 and UBC~7, respectively. The corresponding angular HMRs are $1.20$~deg and $1.13$~deg. These estimates agree within $10\%$ to $30\%$ with the tidal radii according to King's formula \citep{1962AJ.....67..471K} and with the half-number radii. Further, we use the HMR to define the central parts of the clusters (shown with large circles in Fig.~\ref{fig:obsdata}\,a) to obtain estimates for the mean basic parameters of Cr~135 and UBC~7. The central parts contain 91 stars for Cr~135 and 80 stars for UBC~7. In Table~\ref{tab:data}, we quote cluster parameters computed by averaging the individual data on $l,b,\varpi, \mu_l, \mu_b$ for probable members residing within the central areas. For $l, b$ the accuracies are computed as standard deviations. For $\varpi, \mu_l, \mu_b$ the accuracies are computed as a combination of the error of the mean and the expected systematic error estimate\,\textsuperscript{\ref{Lindegren}}. The number of stars with radial (line-of-sight, LOS) velocity measurements in the central parts of the clusters is low: only 14 and 5 sources for Cr~135 and UBC~7, respectively. To improve the statistics, we take into account not only the astrometrically pure members but all probable cluster members with LOS velocity measurements (30 and 24 sources, respectively), and quote their median values. The accuracies are evaluated as $(V_r^{Q3}-V_r^{Q1})/2$, where $V_r^{Q1},V_r^{Q3}$ are the LOS velocities corresponding to the lower and upper quartiles.
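The median and its quartile-based accuracy can be computed as follows (a minimal sketch; the function and variable names are ours):

```python
import numpy as np

def median_with_quartile_error(v_los):
    """Median LOS velocity and its accuracy (Q3 - Q1)/2, as adopted in the text."""
    v = np.asarray(v_los, dtype=float)
    q1, med, q3 = np.percentile(v, [25, 50, 75])
    return med, 0.5 * (q3 - q1)
```

Note that `np.percentile` interpolates linearly between order statistics, which is adequate for samples of a few dozen stars.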
\begin{table*}[tbp] \setlength{\tabcolsep}{4pt} \centering \caption{The cluster main parameters evaluated for the most reliable members.} \label{tab:data} \begin{tabular}{ccccccccccc} \hline \hline Cluster & $N$ & mass, $M_\odot$ & $N_c$ & $l$, deg & $b$, deg & $\varpi$, mas & $\mu_l$, mas/yr & $\mu_b$, mas/yr & $N_{V_r}$ & $V_r$, km/s \\ \hline Cr~135 & 244 & 126 & 91 & 248.98 $\pm$ 0.06 & $-$11.10 $\pm$ 0.05 & 3.31 $\pm$ 0.02 & $-$9.92 $\pm$ 0.05 & $-$6.47 $\pm$ 0.06 & 30 & 17.4 $\pm$ 1.3 \\ UBC~7 & 184 & 87 & 80 & 248.62 $\pm$ 0.04 & $-$13.37 $\pm$ 0.05 & 3.56 $\pm$ 0.02 & $-$10.25 $\pm$ 0.05 & $-$5.98 $\pm$ 0.05 & 24 & 16.7 $\pm$ 1.5 \\ \hline \end{tabular} \vspace{6pt} \end{table*} The kinematics and ages obtained for the two considered clusters are very similar. The separation between the centres of Cr~135 and UBC~7 is $24.2 \pm 2.1$~pc, and the difference in proper motion is $0.6 \pm 0.1$~mas/yr (the relative tangential velocity is $1.42 \pm 0.15$~km/s). A simple model of the local vicinity based on the parameters by \cite{2019A&A...626A..17C} hosts six clusters in a region with dimensions [100, 50, 200]~pc, with $\mu_l, \mu_b$ ranging within 6~mas/yr. If their positions and proper motions are distributed randomly, the likelihood of some pair of clusters simultaneously having a spatial distance less than 25~pc and a difference in proper motion less than 0.7~mas/yr is quite small ($P_r=2.4\%$), so it seems unlikely that Cr~135 and UBC~7 are located so close together by chance. \section{Orbital integration of the star clusters in the Milky Way potential} \label{sec:dyn} In this paper, we restrict ourselves to the simplest model of the star clusters as attracting point masses orbiting in the fixed Milky Way external potential~\citep{2011A&A...536A..64E}. We carry out the integration backwards in time up to $-50$\,Myr using our own high-order Hermite4 code $\varphi$-GRAPE~\citep{HGM2007}.
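For illustration, one time step of a fourth-order Hermite predictor-corrector scheme of the kind used in such codes can be sketched as below (a minimal Python sketch in arbitrary units, not the actual $\varphi$-GRAPE implementation; a negative {\tt dt} integrates backwards in time, and the external Galactic potential would enter as an additional contribution to the acceleration and jerk):

```python
import numpy as np

def accel_jerk(x, v, m, G=1.0):
    """Newtonian accelerations and jerks for a small N-body system."""
    n = len(m)
    a = np.zeros_like(x)
    j = np.zeros_like(x)
    for i in range(n):
        for k in range(n):
            if i == k:
                continue
            dr = x[k] - x[i]          # relative position
            dv = v[k] - v[i]          # relative velocity
            r2 = dr @ dr
            r3 = r2 * np.sqrt(r2)
            a[i] += G * m[k] * dr / r3
            j[i] += G * m[k] * (dv / r3 - 3.0 * (dr @ dv) * dr / (r3 * r2))
    return a, j

def hermite4_step(x, v, m, dt, G=1.0):
    """One predictor-corrector step; a negative dt integrates backwards in time."""
    a0, j0 = accel_jerk(x, v, m, G)
    # predictor: Taylor expansion including acceleration and jerk
    xp = x + v * dt + a0 * dt**2 / 2 + j0 * dt**3 / 6
    vp = v + a0 * dt + j0 * dt**2 / 2
    # re-evaluate forces at the predicted state
    a1, j1 = accel_jerk(xp, vp, m, G)
    # fourth-order Hermite corrector
    vn = v + (a0 + a1) * dt / 2 + (j0 - j1) * dt**2 / 12
    xn = x + (v + vn) * dt / 2 + (a0 - a1) * dt**2 / 12
    return xn, vn
```

For a circular two-body orbit this step conserves the total energy to fourth order in the time step.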
The current version of the $\varphi$-GRAPE\footnote{\url{ftp://ftp.mao.kiev.ua/pub/berczik/phi-GRAPE/}} code uses the GPU/CUDA-based GRAPE emulation library YEBISU \citep{NM2008}; it was tested and successfully applied in our previous large-scale simulations~\citep{2020MNRAS.492.4819P, 2016MNRAS.460..240K, 2014ApJ...780..164W, 2014ApJ...792..137Z, 2012ApJ...748...65L, 2012ApJ...758...51J}. The largest uncertainty of the kinematic data resides in the LOS velocities, so we probe these velocities on a uniform mesh of $101\times101$ runs covering the $\pm3\sigma$ confidence intervals, while the other parameters are kept fixed at the averaged values. Each pair of initial ($t=0$) LOS velocities yields coordinates ${\mathbf r}_k(t)$ and velocities ${\mathbf v}_k(t)$ of the clusters. Our fiducial series of runs assumes a cluster mass ratio close to that from Table\,\ref{tab:data}, but the actual values $M_1 = 465\,M_\odot$ and $M_2 = 302\,M_\odot$ are larger because the table data contain the present-day masses of the most reliable members only. First, we look for runs with small primordial separations $S(-T)$, where $S(t) = |{\mathbf r}_1(t) - {\mathbf r}_2(t)|$ is the distance between the clusters. Small colour squares in Fig.\,\ref{fig:sim}\,a) mark runs with separations $S$ below 15 pc at age $T=50$ Myr (for $T=40$ Myr the plot is similar). They settle in a narrow band near the lines of equal LOS velocities. The rimmed circles mark `bound' runs with negative (specific) energy \begin{equation} {E}_{\rm K} \equiv \frac{|{\mathbf v}_1-{\mathbf v}_2|^2}{2} - \frac{GM}{S}\,,\quad M \equiv M_1+M_2\,, \end{equation} which does not account for the external Galactic field. At the age of $40$\,Myr, none of these runs are bound, although some with positive $E_{\rm K}$ lead to primordial separations below 10 pc. At the age of 50\,Myr, the runs show a minimum separation of 8.8~pc (9.1~pc for the bound runs).
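Numerically, the boundedness check reduces to a one-line computation (an illustrative sketch; $G$ is expressed in pc\,(km/s)$^2\,M_\odot^{-1}$, and, as in the definition of $E_{\rm K}$, the external Galactic field is ignored):

```python
G_PC_KMS2_MSUN = 4.30091e-3  # gravitational constant in pc (km/s)^2 / Msun

def pair_specific_energy(dv_kms, sep_pc, total_mass_msun):
    """Specific two-body energy E_K = |v1 - v2|^2 / 2 - G M / S."""
    return 0.5 * dv_kms ** 2 - G_PC_KMS2_MSUN * total_mass_msun / sep_pc
```

With the present-day values quoted above ($|\Delta v| \ge 1.42$~km/s, $S = 24.2$~pc, $M = 465 + 302\,M_\odot$), the pair is formally unbound as an isolated two-body system today, which is why bound configurations have to be sought in the past.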
\begin{figure} \centering \includegraphics[width=\columnwidth]{sim_q2.eps} \caption{Numerical backward integration of two point masses representing the clusters in the external Milky Way potential. \newline (a) The run mesh (fragment) for cluster age $T=50$ Myr: small squares represent individual runs with different present-day ($t=0$) LOS velocities. Coloured squares mark runs with primordial separations smaller than 15 pc. The black plus and star denote the central $V_r$ values and run {\tt \#(53,61)} given in panel (b). Rimmed circles mark runs with negative primordial energy $E_{\rm K}$. The dashed black line and the orange rectangle depict the line of equal LOS velocities and the runs within $\pm 1\sigma$ from the mid values. \newline (b) Typical curves of separation $S(t)$. \newline (c, d) The cumulative number of runs vs. the primordial separation below 15 pc for 40 and 50\,Myr. The dashed lines show all runs within the $1\sigma$-rectangle, the solid lines show only runs with negative energy $E_{\rm K}$ (bound). Colours of the lines code the total mass $M$ of the clusters. } \label{fig:sim} \end{figure} Fig.\,\ref{fig:sim}\,b) presents typical separation curves. Runs in the grey zone and some on both sides of the colour zone have curves similar to {\tt \#(52,58)}, showing an initial convergence followed by an increase of the separation. Monotonic separation growth occurs in runs with very similar LOS velocities (with either negative or positive primordial energy), see, e.g., runs {\tt \#(53,61)} and {\tt \#(53,63)}. The dashed line indicates the position of the tidal (Jacobi) radius for the system of two attracting points. Clusters formed closer than the Jacobi radius with a total energy below the Jacobi energy will stay close forever. In contrast, clusters formed with energy larger than the Jacobi energy can overcome their mutual attraction either because of the tidal force ($E_{\rm K}<0$) or because of a high relative velocity ($E_{\rm K}>0$).
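The location of the Jacobi radius can be estimated from the standard epicyclic expression $r_{\rm J} = \left[GM/\big(4A(A-B)\big)\right]^{1/3}$; the sketch below assumes solar-neighbourhood Oort constants, whereas the dashed line in the figure follows from the adopted potential of \citet{2011A&A...536A..64E}:

```python
G_PC_KMS2_MSUN = 4.30091e-3   # G in pc (km/s)^2 / Msun
A_OORT = 15.3e-3              # assumed Oort constant A, (km/s)/pc
B_OORT = -11.9e-3             # assumed Oort constant B, (km/s)/pc

def jacobi_radius_pc(total_mass_msun):
    """Tidal (Jacobi) radius r_J = [G M / (4 A (A - B))]^{1/3} in pc."""
    return (G_PC_KMS2_MSUN * total_mass_msun
            / (4.0 * A_OORT * (A_OORT - B_OORT))) ** (1.0 / 3.0)
```

For $M \approx 1500\,M_\odot$ this gives $r_{\rm J} \approx 16$~pc, comparable to the primordial separations discussed in the text.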
To account for the mass uncertainty, in addition to the fiducial series we explored ten more series, increasing and decreasing the masses proportionally in the range of $M$ between 380 and 2300 $M_\odot$. Similarly to the fiducial series, the runs with small separations always settle near the lines of equal LOS velocities. Fig.\,\ref{fig:sim}\,c),\,d) present the cumulative number of runs within the 1$\sigma$-rectangle vs. the primordial separation at 40 and 50~Myr, respectively. The fiducial and lighter series give no bound runs at 40~Myr, although there are some runs with $E_{\rm K}>0$. Meanwhile, the 50~Myr plots contain bound runs at any explored value of the total mass $M$. At the high-mass end, all runs with primordial separation $S$ smaller than 15~pc are bound at 50~Myr, while only 46\% are bound at 40~Myr. The separation is smaller and the width of the colour bands is narrower in the $T=50$~Myr plots. The latter explains the smaller cumulative numbers at $S\sim 15$ pc compared to $T=40$ Myr. If we look at the bound runs only, clusters of age 40~Myr require a total mass $M$ above $\sim 1500$ $M_\odot$ to obtain small primordial separations starting from reasonable values of the LOS velocities. The 50~Myr clusters are less restrictive in this respect. \section{Conclusions} \label{sec:conc} Based on Gaia DR2 data, we have selected probable members of Cr~135 and UBC~7 and determined their parameters in the 6D space ($l, b, \varpi, \mu_l, \mu_b, V_r$). The clusters are shown to be close but distinctly separated, while their CMDs are indistinguishable. We assume the cluster ages to be virtually equal and estimate them to be between 40 and 50~Myr. A model with randomized spatial and kinematic parameters shows a likelihood of only $P_r=2.4\%$ for their chance coincidence. In addition, the observations show coronae enveloping both clusters. This suggests their possible physical binarity. The clusters may have formed closer together than they appear now.
In order to show this, we use a simple model in which the clusters are replaced by point masses and integrated backwards in time in the fixed external Galactic potential. The masses of the points were constant during the integration, and their values also accounted for the unresolved stars and possible mass loss. Given the uncertainty in the observational data, we performed an optimisation over the initial LOS velocities, looking for desirable runs with small primordial separations. Such runs are obtained only in the case of very similar LOS velocities. We then explore how many desirable runs with plausible LOS velocities (within the 1$\sigma$-confidence rectangle) occur in series with different total mass and age. We report that, independent of age, clusters with a total mass $M \gtrsim 1500\,M_\odot$ are favourable for the scenario in which the clusters formed close together and were then tidally separated. On the other hand, relatively young clusters with a total mass $M \lesssim 750\,M_\odot$ require LOS velocities above the confidence intervals determined from the observations. The initial masses of the clusters are very uncertain. Accounting for incompleteness and mass loss by stellar evolution may result in initial masses a factor of two larger than the observed masses given in this paper, which is insufficient to reach the high-mass regime of our investigations. But since clusters are formed in molecular clouds with low star formation efficiency, they are most probably supervirial after gas expulsion. This leads to a significant dynamical mass loss on a dynamical timescale of 10--20\,Myr. In the case of a centrally peaked star formation efficiency, the mass of the surviving cluster can be as low as 5\% of the initial mass \citep{2017A&A...605A.119S,2018ApJ...863..171S}. The observed extended corona of cluster stars around Cr~135 and UBC~7 hints at that kind of strong cluster mass loss by violent relaxation in the first 20\,Myr.
In our simple model for the cluster orbits, the mass loss of the clusters was completely ignored. As a continuation of this work, in Paper~II (in preparation) we shall extend our numerical simulations using realistic star cluster N-body modelling by forward integrating star-by-star cluster models to the present day and make a direct comparison of the stellar populations to the observations, including selection effects and binary stars. \begin{acknowledgements} This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. The use of TOPCAT, an interactive graphical viewer and editor for tabular data \citep{2005ASPC..347...29T}, is acknowledged. The reported study was partly funded by RFBR and DFG according to the research project No. 20-52-12009. The work of PB and MI was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project-ID 138713538, SFB 881 ("The Milky Way System") and by the Volkswagen Foundation under the Trilateral Partnerships grant No. 97778. PB acknowledges support by the Chinese Academy of Sciences (CAS) through the Silk Road Project at NAOC, the President’s International Fellowship (PIFI) for Visiting Scientists program of CAS and the National Science Foundation of China (NSFC) under grant No. 11673032. MI acknowledges support by the National Academy of Sciences of Ukraine under the Young Scientists Grant No. 0119U102399. The work of PB was also partially supported under the special program of the National Academy of Sciences of Ukraine "Support for the development of priority fields of scientific research" (CPCEL 6541230). We thank the referee for the helpful comments. 
\end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} The covariant open superstring description of D-branes \cite{Lambert:1999id,Bain:2002tq} has been a useful way of classifying supersymmetric D-branes, especially 1/2-BPS D-branes, in a given supersymmetric background. It has been successfully applied to some important backgrounds in superstring theories such as the flat spacetime \cite{Lambert:1999id}, IIB plane wave \cite{Bain:2002tq}, IIA plane wave \cite{Hyun:2002xe}, AdS$_5 \times$ S$^5$ \cite{Sakaguchi:2003py,Sakaguchi:2004md, ChangYoung:2012gi,Hanazawa:2016lvo}, and AdS$_4 \times \mathbf{CP}^3$ \cite{Park:2018gop} backgrounds. The data obtained after the classification of supersymmetric D-branes are however `primitive' in the sense that they do not tell us which configuration of a given D-brane is really supersymmetric and which part of the background supersymmetry is preserved on the D-brane worldvolume. If the background spacetime geometry is, for example, given by a product of some subspaces, the only information contained in the data is the possible numbers of Neumann directions of the open superstring end points in each of the subspaces. In spite of this fact, the data provide us with an efficient guideline for further exploration of supersymmetric D-branes, which allows us to avoid a brute-force approach. Indeed, in our previous work \cite{Park:2017ttx}, this has been illustrated in the investigation of 1/2-BPS D-brane configurations in the AdS$_5 \times$ S$^5$ background. The work of \cite{Park:2017ttx} has been based on the result of the covariant open superstring description \cite{Sakaguchi:2003py,Sakaguchi:2004md,ChangYoung:2012gi,Hanazawa:2016lvo} given in Table \ref{tablebps} and has focused on the Lorentzian D-branes. As one can see from the table, however, there are two purely Euclidean or instantonic D-branes.
One is the well-known D(-1)-brane or D-instanton \cite{Chu:1998in, Kogan:1998re, Bianchi:1998nk}, which has played an important role in the study of nonperturbative aspects of IIB superstring theory in the AdS$_5 \times$ S$^5$ background. The other is the D1-brane labeled by (0,2), which spans a two-dimensional subspace inside S$^5$. Compared to the D(-1)-brane, the 1/2-BPS instantonic D1-brane looks somewhat exceptional at present, and its role or nature is not obvious. Since instantons are always important in understanding the nonperturbative structure of a given theory, such an object certainly deserves investigation for the further development of superstring theory in the AdS$_5 \times$ S$^5$ background. In this paper, having the information (0,2) of Table \ref{tablebps}, we will try to identify the 1/2-BPS instantonic D1-brane configurations and their supersymmetry structures. In part, the present work may be regarded as a continuation of the study carried out in \cite{Park:2020hgt}, where 1/2-BPS instantonic membrane configurations in the AdS$_4 \times$ S$^7 / \mathbf{Z}_k$ background have been successfully classified. In the next section, we give a brief description of the AdS$_5 \times$S$^5$ background and its Killing spinor. The Euclidean action for the D1-brane with its basic structure is discussed in Sec.~\ref{d1-action}. The identification of 1/2-BPS instantonic D1-brane configurations together with their supersymmetry structures is worked out in Sec.~\ref{bpsd1}. Finally, we evaluate the action values of the 1/2-BPS configurations and discuss the possibility of supersymmetric multiple D1-instantons in Sec.~\ref{concl}.
\begin{table} \begin{center} \begin{tabular}{c|cccccc} \hline & D(-1) &D1 & D3 & D5 & D7 & D9 \\ \hline\hline ($n$,$n'$) & (0,0) & \begin{tabular}{c} (2,0) \\ (0,2) \end{tabular} & \begin{tabular}{c} (3,1) \\ (1,3) \end{tabular} & \begin{tabular}{c} (4,2) \\ (2,4) \end{tabular} & \begin{tabular}{c} (5,3) \\ (3,5) \end{tabular} & -- \\ \hline \end{tabular} \caption{\label{tablebps} 1/2-BPS D-branes in the AdS$_5\times$S$^5$ background. $n$ ($n'$) represents the number of Neumann directions in AdS$_5$ (S$^5$).} \end{center} \end{table} \section{AdS$_5 \times$S$^5$ background and Killing spinor} \label{bg} In this section, we briefly describe the AdS$_5 \times$S$^5$ background and its Killing spinor. In the Poincar\'{e} patch coordinate system, the metric of the AdS$_5 \times$S$^5$ geometry is given by \begin{align} \label{metric} ds^2 = g_{\mu\nu} dX^\mu dX^\nu = \frac{R^2}{z^2} \left[ - (dx^{\hat{0}})^2 + (dx^{\hat{1}})^2 + (dx^{\hat{2}})^2 + (dx^{\hat{3}})^2 + dz^2 \right] + ds_{S^5}^2 \,, \end{align} where $ds_{S^5}^2$ is the metric of five sphere S$^5$ of radius $R$. As usual, the common radius $R$ is written as \begin{align} R^4 = 4 \pi g_s N \ell_s^4 \,, \label{adsrad} \end{align} where $g_s$ is the string coupling constant, $\ell_s$ the string length scale, and $N$ the number of D3-branes leading to the above geometry in the near-horizon limit. As for the five sphere part, one may parametrize it by using the usual four polar angles and one azimuthal angle. 
However, because instantonic D1-brane configurations spanning a two-dimensional subspace inside S$^5$ are of interest to us, it is more convenient to take another parametrization in which two-dimensional substructures are manifest, as follows: \begin{align} \label{s5metric} ds_{S^5}^2 = & R^2 \left[ d \alpha^2 + \cos^2 \alpha (d \theta_1^2 + \sin^2 \theta_1 d \phi_1^2) + \sin^2 \alpha (d \theta_2^2 + \sin^2 \theta_2 d \phi_2^2) \right] \,, \end{align} where the ranges of the angles are $0 \le \alpha \le \pi/2$, $0 \le \theta_{1,2} \le \pi$, and $0 \le \phi_{1,2} \le 2\pi$. From the metric (\ref{metric}) with (\ref{s5metric}), we choose the zehnbein to be \begin{align} & e^{\hat{0}, \hat{1}, \hat{2}, \hat{3}} = \frac{1}{z} d x^{\hat{0}, \hat{1}, \hat{2}, \hat{3}} \,, \quad e^z = \frac{dz}{z} \,, \notag \\ & e^1 = R d \alpha \,, \notag \\ & e^2 = R \cos \alpha d \theta_1 \,, \quad e^3 = R \cos \alpha \sin \theta_1 d \phi_1 \,, \notag \\ & e^4 = R \sin \alpha d \theta_2 \,, \quad e^5 = R \sin \alpha \sin \theta_2 d \phi_2 \,. \label{zehn} \end{align} Here, we have adopted hatted numbers and $z$ as the tangent space index values for the AdS$_5$ part to distinguish them from those of the S$^5$ part. With this convention, the ten-dimensional tangent space index (denoted by $A$, $B$, $\dots$) is denoted as \begin{align} A = ( m, z, a) \,, \quad m= \hat{0}, \hat{1}, \hat{2},\hat{3}, \quad a = 1,2,3,4,5 \,, \end{align} and the Ramond-Ramond (RR) five-form field strength, another constituent of the AdS$_5 \times$S$^5$ background in addition to the metric (\ref{metric}), is written as \begin{align} F_5 = 4 e^{\hat{0}} \wedge e^{\hat{1}} \wedge e^{\hat{2}} \wedge e^{\hat{3}} \wedge e^z + 4 e^1 \wedge e^2 \wedge e^3 \wedge e^4 \wedge e^5 \,. \label{rr5} \end{align} The AdS$_5 \times$S$^5$ background composed of (\ref{metric}) and (\ref{rr5}) with (\ref{s5metric}) is maximally supersymmetric. Its supersymmetry structure is encoded in the solution of the spacetime Killing spinor equation.
The Killing spinor equation for the AdS$_5 \times$S$^5$ background is given by \begin{align} \left( \nabla \delta^{IJ} + \frac{1}{2R} e^A \hat{\gamma} \Gamma_z \Gamma_A \tau_2^{IJ} \right) \eta^J =0 \,, \label{kse} \end{align} where $\nabla = d + \frac{1}{4} \omega^{AB} \Gamma_{AB}$, \begin{align} \tau_1 = \sigma_1 \,, \quad \tau_2 = i \sigma_2\,, \end{align} ($\sigma_{1,2}$ are the usual Pauli matrices), and \begin{align} \hat{\gamma} \equiv \Gamma^{\hat{0}\hat{1}\hat{2}\hat{3}} \,. \label{ghat} \end{align} The spacetime Killing spinors $\eta^I$ ($I=1,2$), as the solution of (\ref{kse}), are two Majorana-Weyl spinors and are taken to have ten-dimensional positive chirality in this work, $\Gamma^{11} \eta^I = \eta^I$. As demonstrated in \cite{Claus:1998yw,Skenderis:2002vf}, the Killing spinor equation (\ref{kse}) is solved rather easily if we split $\eta^I$ as \begin{align} \eta^I = \eta^I_+ + \eta^I_- \,, \end{align} where $\eta^I_\pm$ are defined by \begin{align} \eta^I_\pm = P^{IJ}_\pm \eta^J \label{projeta} \end{align} with the projection operator \begin{align} P^{IJ}_\pm = \frac{1}{2} (\delta^{IJ} \pm \hat{\gamma} \tau_2^{IJ}) \,. \label{proj} \end{align} In this splitting, we note that $\eta^1_\pm$ and $\eta^2_\pm$ are not independent of each other because \begin{align} \eta^2_\pm = \mp \hat{\gamma} \eta^1_\pm \,. \label{eta12} \end{align} Thus, to avoid this redundancy, it is convenient to define \begin{align} \eta_\pm \equiv \eta^1_\pm \,, \label{eta} \end{align} to which $\eta^1$ and $\eta^2$ are related by \begin{align} \eta^1 = \eta_+ + \eta_- \,, \quad \eta^2 = - \hat{\gamma} (\eta_+ - \eta_-) \,.
\end{align} If we now use $\eta_\pm$, the solution of the Killing spinor equation (\ref{kse}) is obtained as \begin{align} \eta_+ &= z^{-1/2} U (\epsilon_+ + \Gamma_m x^m \epsilon_-) \,, \notag \\ \eta_- &= z^{1/2} \Gamma_z U \epsilon_- \,, \label{ks} \end{align} where $\epsilon_\pm$ are constant spinors and $U$ is a spinorial function of the five angles of S$^5$. The function $U$ satisfies \begin{align} \left( d + \frac{1}{4} \omega^{ab} \Gamma_{ab} + \frac{1}{2R} e^a \Gamma_{za} \right) U = 0 \,, \label{ueq} \end{align} which can be read off from the Killing spinor equation (\ref{kse}). If we took the standard parametrization of S$^5$, we could simply adopt the solution of this equation obtained in \cite{Lu:1998nu}. However, since we take the different parametrization (\ref{s5metric}), we must solve the equation ourselves. The necessary ingredients for solving Eq.~(\ref{ueq}) are the S$^5$ part of the zehnbein (\ref{zehn}) and the corresponding spin connections, computed as \begin{gather} \omega^{12} = \sin \alpha d \theta_1 \,, \quad \omega^{13} = \sin \alpha \sin \theta_1 d \phi_1 \,, \notag \\ \omega^{14} = - \cos \alpha d \theta_2 \,, \quad \omega^{15} = - \cos \alpha \sin \theta_2 d \phi_2 \,, \notag \\ \omega^{23} = - \cos \theta_1 d \phi_1 \,, \quad \omega^{45} = - \cos \theta_2 d \phi_2 \,. \end{gather} Since Eq.~(\ref{ueq}) is a first-order differential equation, it is not difficult to solve, and the resulting solution $U$ is obtained as \begin{align} U = e^{ -\frac{1}{2} \alpha \Gamma_{z1} } e^{ - \frac{1}{2} \theta_1 \Gamma_{z2} } e^{ \frac{1}{2} \phi_1 \Gamma_{23} } e^{ \frac{1}{2} \theta_2 \Gamma_{14} } e^{ \frac{1}{2} \phi_2 \Gamma_{45} } \,. \label{usol} \end{align} \section{Euclidean D1-brane action} \label{d1-action} Before moving on to the investigation of 1/2-BPS instantonic D1-branes in the AdS$_5 \times$S$^5$ background, let us consider the D1-brane action and its basic structure.
Since the instantonic D1-brane we are concerned with is an object in the Euclidean spacetime, its action is the Euclidean one given by\footnote{We note that the bosonic action is sufficient for our purpose.} \begin{align} S = T \int d^2 \zeta \sqrt{ | G + \mathcal{F} | } - i T \int ( C_{(2)} + \mathcal{F} C_{(0)} ) \,. \label{action} \end{align} Here, $T$ is the D1-brane tension given in terms of the string coupling constant $g_s$ and the string length scale $\ell_s$ ($\ell_s^2 = \alpha'$), \begin{align} T = \frac{1}{2 \pi g_s \ell_s^2} \,, \label{d1tension} \end{align} and $| G + \mathcal{F} |$ is the determinant of the sum of two objects: the induced worldvolume metric\footnote{The worldvolume indices $i,j$ take the values 1 and 2.} \begin{align} G_{ij} = \partial_i X^\mu \partial_j X^\nu g_{\mu\nu} \,, \end{align} and the combination of the worldvolume gauge field strength $F_{ij}$ and the induced NS-NS two-form gauge field $B^{NS}_{ij}$, \begin{align} \mathcal{F}_{ij} = 2 \pi \alpha' F_{ij} - B^{NS}_{ij} \,. \end{align} The background fields $C_{(0)}$ and $C_{(2)}$ are the induced R-R zero- and two-form gauge fields, respectively. As described in the last section, the AdS$_5 \times$S$^5$ background does not have nontrivial profiles for $C_{(0)}$, $C_{(2)}$ and $B^{NS}$. Thus we can drop these fields from the action (\ref{action}). From the resulting action, the equation of motion for the worldvolume gauge field is obtained as \begin{align} \partial_i \left( \frac{\mathcal{F}_{ij}}{\sqrt{ |G+\mathcal{F}|}} \right) = 0 \,. \end{align} This equation is easily solved, and we see that the solution can be parametrized by a constant $\beta$ as \begin{align} \frac{B}{\sqrt{ |G+\mathcal{F}|}} = \sin \beta \,, \label{emsol0} \end{align} where we have defined \begin{align} B \equiv \mathcal{F}_{12} = 2 \pi \alpha' F_{12} \,, \end{align} and $\beta$ has the range $-\pi/2 \le \beta \le \pi/2$.
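For a two-dimensional worldvolume, the determinant entering (\ref{emsol0}) takes a simple form, which is worth spelling out (here $G_{ij}$ is symmetric and $\mathcal{F}_{ij}$ antisymmetric with $\mathcal{F}_{12}=B$):
\begin{align}
| G + \mathcal{F} |
= \det \begin{pmatrix} G_{11} & G_{12} + B \\ G_{12} - B & G_{22} \end{pmatrix}
= G_{11} G_{22} - G_{12}^2 + B^2
= |G| + B^2 \,.
\end{align}
Thus (\ref{emsol0}) is a purely algebraic relation between $B$ and $\sqrt{|G|}$.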
The solution (\ref{emsol0}) can be rewritten as \begin{align} B = \sqrt{|G|} \tan \beta \,, \label{emsol} \end{align} and enables us to express the D1-brane action as \begin{align} S = \frac{T}{\cos \beta} \int d^2 \zeta \sqrt{|G|} \,. \label{d1} \end{align} \section{1/2-BPS D1-instantons} \label{bpsd1} In this section, we identify the 1/2-BPS instantonic D1-brane configurations based on the data from the covariant open string description of D-branes in the AdS$_5 \times$S$^5$ background. For the investigation of an instantonic object, the spacetime is taken to be Euclidean. However, the Killing spinors $\eta_\pm$ of (\ref{ks}) with (\ref{usol}) have been obtained in the Lorentzian signature. Thus, for the expressions in the previous section, we should take the Wick rotation $x^{\hat{0}} \rightarrow -i x^{\hat{0}}$, under which\footnote{We use the same definition for $\hat{\gamma}$ given in (\ref{ghat}). Thus $\hat{\gamma}^2 = -1$ in the Lorentzian signature, while $\hat{\gamma}^2 = +1$ in the Euclidean one.} \begin{align} \Gamma^{\hat{0}} \longrightarrow - i \Gamma^{\hat{0}} \,, \quad \hat{\gamma} \longrightarrow - i \hat{\gamma} \,. \label{wick} \end{align} In the ten-dimensional Euclidean spacetime, $\eta_\pm$ and $\epsilon_\pm$ are no longer Majorana-Weyl and become Weyl spinors. This change of nature, however, does not cause any complications in the present work, because our concern is with the consistent projection operators acting on $\eta_\pm$ (strictly speaking, $\epsilon_\pm$) which identify the 1/2-BPS configurations. Having the Killing spinor, the 1/2-BPS instantonic D1-brane configurations can be investigated by using the usual equation \cite{Bergshoeff:1997kr} \begin{align} \eta^I = \Gamma^{IJ} \eta^J \,, \label{branesusy} \end{align} which is obtained by combining the spacetime supersymmetry and the worldvolume $\kappa$-symmetry transformations.
The symbol $\Gamma$ represents the spinorial matrix appearing in the $\kappa$-symmetry projection operator and satisfies $\Gamma^2 =1$ and $\mathrm{Tr} \, \Gamma = 0$. Its explicit form for the D1-brane in the Euclidean spacetime is \begin{align} \Gamma^{IJ} = \frac{i}{2 \sqrt{ |G + \mathcal{F}|}} \epsilon^{ij} ( \gamma_{ij} \tau_1^{IJ} + \mathcal{F}_{ij} \tau_2^{IJ} )\,, \end{align} where $\gamma_{ij} = \partial_i X^\mu \partial_j X^\nu e_\mu^A e_\nu^B \Gamma_{AB}$. Now, by acting with the projection operator $P^{IJ}_\pm$ of (\ref{proj}) on the above equation (\ref{branesusy}), we can express (\ref{branesusy}) in terms of $\eta_\pm$ of (\ref{eta}) as\footnote{Due to Eq.~(\ref{wick}), the projector $P^{IJ}_\pm$ of (\ref{proj}) changes to $P^{IJ}_\pm = \frac{1}{2} (\delta^{IJ} \mp i \hat{\gamma} \tau_2^{IJ})$. Because of this, the relation between $\eta^1_\pm$ and $\eta^2_\pm$ of (\ref{eta12}) becomes $\eta^2_\pm = \pm i\hat{\gamma} \eta^1_\pm$.} \begin{align} \begin{pmatrix} \eta_+ \\ \eta_- \end{pmatrix} = \frac{\hat{\gamma}}{2 \sqrt{ |G + \mathcal{F}|}} \epsilon^{ij} \begin{pmatrix} - \mathcal{F}_{ij} & \gamma_{ij} \\ - \gamma_{ij} & \mathcal{F}_{ij} \end{pmatrix} \begin{pmatrix} \eta_+ \\ \eta_- \end{pmatrix} \,. \end{align} We thus have two equations. However, they are actually equivalent, because the equation from the first row is obtained by acting with \begin{align} 1 + \frac{\hat{\gamma}}{2 \sqrt{ |G + \mathcal{F}|}} \epsilon^{ij} \mathcal{F}_{ij} \end{align} on the equation from the second row. Thus it is enough to consider only one of the two equations. Here, we will take the equation from the first row, \begin{align} \eta_+ = \frac{\hat{\gamma}}{2 \sqrt{ |G + \mathcal{F}|}} \epsilon^{ij} ( \gamma_{ij} \eta_- -\mathcal{F}_{ij} \eta_+ ) \,.
\end{align} Then, by plugging the Killing spinor (\ref{ks}) into this equation, we get \begin{align} \frac{1}{\sqrt{z}}( 1 + \hat{\gamma} \sin \beta) ( \epsilon_+ + \Gamma_m x^m \epsilon_- ) = \frac{\sqrt{z} \cos \beta}{2 \sqrt{|G |} } \hat{\gamma} U^{-1} \epsilon^{ij} \gamma_{ij} \Gamma_z U \epsilon_- \,, \label{susychk} \end{align} where $U$ is given in (\ref{usol}) and we have used (\ref{emsol0}) and (\ref{emsol}). Eq.~(\ref{susychk}) is the key to identifying the 1/2-BPS instantonic D1-branes. Since the covariant open string description tells us that only the instantonic D1-branes embedded in the S$^5$ of the AdS$_5 \times$ S$^5$ geometry may have the possibility of being 1/2-BPS, we will consider D1-brane configurations each of which spans a certain two-dimensional subspace of the S$^5$ and is a point in the AdS$_5$ space. Thus the coordinates $(z, x^m)$ in (\ref{susychk}), the position of the D1-brane in the AdS$_5$ space, are taken to be constants, and $U$ has a specific expression corresponding to a given configuration. If we evaluate the right-hand side of (\ref{susychk}) for a given D1-brane configuration and obtain a result that does not depend on any worldvolume coordinate, then the configuration is confirmed to be 1/2-BPS.\footnote{One may argue that any D1-brane embedded in the S$^5$ is always 1/2-BPS if it is placed at $z=0$ or $\infty$. However, it seems that an object at such a position should be handled with care, because such a supersymmetry structure may not be obtained by taking the limit $z \rightarrow 0$ or $\infty$ after solving (\ref{susychk}). In our study, it is presumed that the position of the D1-brane in the AdS$_5$ space is generic and Eq.~(\ref{susychk}) is solved preferentially before taking any limit.} Specifying a D1-brane configuration or embedding in S$^5$ amounts to choosing a static gauge for the worldvolume reparametrization.
From the five sphere metric (\ref{s5metric}), we can identify broadly two types of static gauge fixing conditions, according to whether or not the coordinate $\alpha$ is transverse to the D1-brane worldvolume. We first investigate the cases where $\alpha$ is a transverse direction. Perhaps the most immediate one is the D1-brane that wraps a two sphere parametrized by ($\theta_1, \phi_1$) or ($\theta_2, \phi_2$). If we choose the static gauge as \begin{align} \zeta^1 = \theta_1 \,, \quad \zeta^2 = \phi_1 \,, \label{config1} \end{align} with constant $\alpha$ and $\theta_2 = \phi_2 = 0$, then $\sqrt{|G|} = R^2 \cos^2 \alpha \sin \theta_1$ and $U$ of (\ref{usol}) reduces to \begin{align} U = e^{ -\frac{1}{2} \alpha \Gamma_{z1} } e^{ - \frac{1}{2} \theta_1 \Gamma_{z2} } e^{ \frac{1}{2} \phi_1 \Gamma_{23} } \,. \end{align} The right hand side of (\ref{susychk}) is evaluated by using the following computation: \begin{align} \frac{1}{2 \sqrt{|G |} } U^{-1} \epsilon^{ij} \gamma_{ij} \Gamma_z U &= U^{-1} \Gamma_{23} \Gamma_z U \notag \\ &= \left( \cos \alpha + \sin \alpha e^{ - \frac{1}{2} \phi_1 \Gamma_{23} } e^{ \theta_1 \Gamma_{z2} } e^{ \frac{1}{2} \phi_1 \Gamma_{23} } \right) \Gamma_{23} \Gamma_z \,. \end{align} We see that this expression depends explicitly on the worldvolume coordinates $\theta_1$ and $\phi_1$. Though this seems to make the present configuration non-supersymmetric, we can remove the dependence by taking the transverse position of the D1-brane in the $\alpha$ direction to be $\alpha = 0$, at which the size of the S$^2$ parametrized by ($\theta_1, \phi_1$) is maximized, as one can see from (\ref{s5metric}). Then, Eq.~(\ref{susychk}) becomes \begin{align} ( 1 + \hat{\gamma} \sin \beta) ( \epsilon_+ + \Gamma_m x^m \epsilon_- ) = z \cos \beta \hat{\gamma} \Gamma_{23} \Gamma_z \epsilon_- \,.
\label{d1s2} \end{align} If we now introduce a projection operator \begin{align} P^x_{\pm} = \frac{1}{2} ( 1 \pm \hat{\gamma} ) \,, \label{xproj} \end{align} and act it on (\ref{d1s2}), we finally get \begin{align} \epsilon_{++} &= \frac{\cos \beta}{1 + \sin \beta} z \Gamma_{23} \Gamma_z \epsilon_{-+} - \Gamma_m x^m \epsilon_{--} \,, \notag \\ \epsilon_{+-} &= - \frac{\cos \beta}{1 - \sin \beta} z \Gamma_{23} \Gamma_z \epsilon_{--} - \Gamma_m x^m \epsilon_{-+} \,, \label{d1s2res} \end{align} where we have defined \begin{align} \epsilon_{+ \pm} \equiv P^x_\pm \epsilon_+ \,, \quad \epsilon_{- \pm} \equiv P^x_\pm \epsilon_- \,. \label{epm} \end{align} Eq.~(\ref{d1s2res}) clearly shows that $\epsilon_{+ \pm}$ is given in terms of $\epsilon_{- \pm}$, while $\epsilon_{- \pm}$ remain free parameters. Thus it is concluded that the D1-brane wrapping the S$^2$ parametrized by ($\theta_1$, $\phi_1$) and placed at $\alpha = 0$ is 1/2-BPS. We note that this result continues to hold for the D1-brane wrapping the other S$^2$ parametrized by ($\theta_2$, $\phi_2$). The differences are that $\Gamma_{23}$ in (\ref{d1s2res}) is to be replaced by $\Gamma_{45}$ and the position in the $\alpha$ direction should be $\alpha = \pi/2$. The second case is the configuration corresponding to the following static gauge, \begin{align} \zeta^1 = \phi_1 \,, \quad \zeta^2 = \phi_2 \,, \end{align} with constant $\alpha$, $\theta_1$ and $\theta_2$. In this gauge choice, $\sqrt{|G|} = R^2 \cos^2 \alpha \sin \alpha \sin \theta_1 \sin \theta_2$ and $U$ is just given by (\ref{usol}).
Then, the important part of the right hand side of (\ref{susychk}) is computed as \begin{align} \frac{1}{2 \sqrt{|G |} } U^{-1} \epsilon^{ij} \gamma_{ij} \Gamma_z U &= U^{-1} \Gamma_{35} \Gamma_z U \notag \\ &= \bigg( \cos \alpha e^{ - \phi_2 \Gamma_{45} } e^{ -\frac{1}{2} \phi_1 \Gamma_{23} } e^{ \theta_1 \Gamma_{z2} } e^{ -\frac{1}{2} \phi_1 \Gamma_{23} } \notag \\ & + \sin \alpha \Gamma_{z1} e^{ - \frac{1}{2} \phi_2 \Gamma_{45} } e^{ \theta_2 \Gamma_{14} } e^{ - \frac{1}{2} \phi_2 \Gamma_{45} } e^{ - \phi_1 \Gamma_{23} } \bigg) \Gamma_{35} \Gamma_z \,. \end{align} We see that there are explicit dependencies on the worldvolume coordinates $\phi_1$ and $\phi_2$. Unlike the previous case, such dependencies cannot be eliminated by any choice of $\alpha$, $\theta_1$ and $\theta_2$. Therefore, the only solution of (\ref{susychk}) is $\epsilon_\pm = 0$ and thus the D1-brane wrapping $\phi_1$ and $\phi_2$ is not supersymmetric. The third case, the last one in which $\alpha$ is a transverse direction, is the D1-brane wrapping the `diagonal' S$^2$ composed of two S$^2$'s parametrized by $\theta_{1,2}$ and $\phi_{1,2}$.\footnote{This configuration is inspired by that of the instantonic D2-brane \cite{Drukker:2011zy} in the AdS$_4 \times \mathbf{CP}^3$ background.} If we let \begin{align} \vartheta = \theta_1 = \theta_2 \,, \quad \varphi_\pm = \phi_1 = \pm \phi_2 \,, \label{diagvar} \end{align} the corresponding static gauge is \begin{align} \zeta^1 = \vartheta \,, \quad \zeta^2 = \varphi_+ \; ( \mathrm{or} \; \varphi_-) \,, \label{gaugediag} \end{align} with constant $\alpha$. Let us first consider the configuration where $\zeta^2 = \varphi_+$.
This gauge leads to $\sqrt{ |G| } = R^2 \sin \vartheta$ and \begin{align} U = e^{ -\frac{1}{2} \alpha \Gamma_{z1} } e^{ - \frac{1}{2} \vartheta ( \Gamma_{z2} - \Gamma_{14} ) } e^{ \frac{1}{2} \varphi_+ ( \Gamma_{23} + \Gamma_{45} ) } \,, \end{align} from (\ref{usol}), which are then used to obtain \begin{align} \frac{1}{2 \sqrt{|G |} } U^{-1} \epsilon^{ij} \gamma_{ij} \Gamma_z U &= U^{-1} \Gamma_{23} e^{\alpha (\Gamma_{24} + \Gamma_{35})} \Gamma_z U \notag \\ &= \Gamma_z \Gamma_{23} e^{\alpha (\Gamma_{24} + \Gamma_{35} - \Gamma_{z1} ) } \,. \label{diap} \end{align} Obviously, the last expression in (\ref{diap}) is independent of the worldvolume coordinates and hence implies that the present configuration is 1/2-BPS. Then, after a bit of manipulation with (\ref{diap}), Eq.~(\ref{susychk}) determining the supersymmetry structure becomes \begin{align} \epsilon_{++} &= \frac{\cos \beta}{1 + \sin \beta} z \Gamma_z \Gamma_{23} e^{\alpha (\Gamma_{24} + \Gamma_{35} - \Gamma_{z1} ) } \epsilon_{-+} - \Gamma_m x^m \epsilon_{--} \,, \notag \\ \epsilon_{+-} &= - \frac{\cos \beta}{1 - \sin \beta} z\Gamma_z \Gamma_{23} e^{\alpha (\Gamma_{24} + \Gamma_{35} - \Gamma_{z1} ) } \epsilon_{--} - \Gamma_m x^m \epsilon_{-+} \,, \label{d1diap} \end{align} where $\epsilon_{+\pm}$ and $\epsilon_{-\pm}$ are defined in (\ref{epm}) in terms of the projection operator (\ref{xproj}). These equations clearly show that the D1-brane wrapping the `diagonal' S$^2$ is 1/2-BPS. Compared to the previous 1/2-BPS configuration (\ref{config1}), one distinguishing feature of this configuration is that it is 1/2-BPS for any transverse position in $\alpha$ and its size is fixed. However, we should notice from (\ref{d1diap}) that the dependence of $\epsilon_{+\pm}$ on $\epsilon_{-\pm}$ changes continuously with $\alpha$.
On the other hand, the configuration corresponding to the static gauge $\zeta^2 = \varphi_-$ in (\ref{gaugediag}) leads to the same result as (\ref{d1diap}) but with the replacement $\Gamma_{35} \rightarrow - \Gamma_{35}$. Therefore, it is also 1/2-BPS. We now turn to the configurations where $\alpha$ is a worldvolume coordinate, not a transverse one. Then the other worldvolume coordinate is along a circle embedded in the space composed of the two S$^2$'s parametrized by $\theta_{1,2}$ and $\phi_{1,2}$. The possible candidates for such a circle are the circle given by $\phi_1$ or $\phi_2$ and the `diagonal' one given by $\varphi_+$ or $\varphi_-$ defined in (\ref{diagvar}). For the first case, the static gauge is chosen to be \begin{align} \zeta^1 = \alpha \,, \quad \zeta^2 = \phi_1 \,, \label{hs1} \end{align} with constant $\theta_1$, for which we have $\sqrt{|G|} = R^2 \cos \alpha \sin \theta_1$ and \begin{align} U = e^{ -\frac{1}{2} \alpha \Gamma_{z1} } e^{ - \frac{1}{2} \theta_1 \Gamma_{z2} } e^{ \frac{1}{2} \phi_1 \Gamma_{23} } \end{align} from (\ref{usol}). The next step is to compute \begin{align} \frac{1}{2 \sqrt{|G |} } U^{-1} \epsilon^{ij} \gamma_{ij} \Gamma_z U &= U^{-1} \Gamma_{13} \Gamma_z U \notag \\ &= \left( \cos \theta_1 e^{-\phi_1 \Gamma_{23}} + \sin \theta_1 \Gamma_{z2} \right) \Gamma_{13} \Gamma_z \,. \end{align} Although the last expression explicitly depends on the worldvolume coordinate $\phi_1$, the dependence can be removed by taking $\theta_1 = \pi/2$, corresponding to a great circle in the S$^2$ parametrized by ($\theta_1$, $\phi_1$).
At this position in $\theta_1$, Eq.~(\ref{susychk}) gives the supersymmetry structure of the configuration as \begin{align} \epsilon_{++} &= \frac{\cos \beta}{1 + \sin \beta} z \Gamma_{123} \epsilon_{-+} - \Gamma_m x^m \epsilon_{--} \,, \notag \\ \epsilon_{+-} &= - \frac{\cos \beta}{1 - \sin \beta} z \Gamma_{123} \epsilon_{--} - \Gamma_m x^m \epsilon_{-+} \,, \label{d1s1} \end{align} where $\epsilon_{+\pm}$ and $\epsilon_{-\pm}$ are defined in (\ref{epm}) in terms of the projection operator (\ref{xproj}). This result shows that the configuration (\ref{hs1}) at $\theta_1 = \pi/2$ is 1/2-BPS. Similarly, we can confirm that the other configuration ($\zeta^1 = \alpha$, $\zeta^2 = \phi_2$) at $\theta_2 = \pi/2$ is also 1/2-BPS. Its supersymmetry structure is also given by (\ref{d1s1}) but with the replacement $\Gamma_{123} \rightarrow \Gamma_{z45}$. Finally, as for the second type of configuration, where $\varphi_+$ or $\varphi_-$ is a worldvolume coordinate, the corresponding static gauge is ($\zeta^1 = \alpha$, $\zeta^2 = \varphi_+$ or $\varphi_-$) at constant $\theta_1$ and $\theta_2$. However, this configuration turns out to be non-supersymmetric since there is no way to eliminate the dependence on the worldvolume coordinates in the calculation of $ U^{-1} \epsilon^{ij} \gamma_{ij} \Gamma_z U / 2 \sqrt{|G |}$, even by choosing particular values of $\theta_1$ and $\theta_2$.
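Before moving on, it is worth recording a consistency check on the induced metrics used in this section. As a sketch under the assumption that the five sphere metric (\ref{s5metric}) takes the standard round form,
\begin{align}
ds^2_{\mathrm{S}^5} = R^2 \left[ d\alpha^2 + \cos^2 \alpha \, ( d\theta_1^2 + \sin^2 \theta_1 \, d\phi_1^2 ) + \sin^2 \alpha \, ( d\theta_2^2 + \sin^2 \theta_2 \, d\phi_2^2 ) \right] \,,
\end{align}
the configuration (\ref{hs1}) at constant $\theta_1$ has the diagonal induced metric $G_{\alpha\alpha} = R^2$ and $G_{\phi_1 \phi_1} = R^2 \cos^2 \alpha \sin^2 \theta_1$, so that $\sqrt{|G|} = R^2 \cos \alpha \sin \theta_1$, as quoted above. Likewise, in the diagonal gauge (\ref{gaugediag}) the two S$^2$ contributions combine as $G_{\vartheta \vartheta} = R^2 ( \cos^2 \alpha + \sin^2 \alpha ) = R^2$ and $G_{\varphi_+ \varphi_+} = R^2 \sin^2 \vartheta ( \cos^2 \alpha + \sin^2 \alpha ) = R^2 \sin^2 \vartheta$, reproducing $\sqrt{|G|} = R^2 \sin \vartheta$ with no $\alpha$ dependence, in accord with the observation that the diagonal configuration is 1/2-BPS at any transverse position in $\alpha$.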
\section{Conclusion and Discussion} \label{concl} \begin{table} \begin{center} \begin{tabular}{c|cc} \hline & ($\zeta^1$, $\zeta^2$) & position \\ \hline\hline \begin{tabular}{c} (i) \\ (ii) \\ (iii) \\ (iv) \end{tabular} & \begin{tabular}{c} ($\theta_1$, $\phi_1$) \\ ($\theta_2$, $\phi_2$) \\ ($\vartheta$, $\varphi_+$) \\ ($\vartheta$, $\varphi_-$) \end{tabular} & \begin{tabular}{c} $\alpha = 0$ \\ $\alpha = \pi/2$ \\ any $\alpha$ \\ any $\alpha$ \end{tabular} \\ \hline \begin{tabular}{c} (v) \\ (vi) \end{tabular} & \begin{tabular}{c} ($\alpha$, $\phi_1$) \\ ($\alpha$, $\phi_2$) \end{tabular} & \begin{tabular}{c} $\theta_1 = \pi/2 $ \\ $\theta_2 = \pi/2 $ \end{tabular} \\ \hline \end{tabular} \caption{\label{configs} 1/2-BPS D1-instanton configurations. ($\zeta^1$, $\zeta^2$): static gauge for a given D1-brane configuration. position: transverse position. The definitions for $\vartheta$ and $\varphi_\pm$ are given in (\ref{diagvar}).} \end{center} \end{table} So far, we have identified the 1/2-BPS instantonic D1-brane configurations in the AdS$_5\times$S$^5$ background, which are summarized in Table \ref{configs}. If we evaluate the Euclidean action for all the configurations listed in Table \ref{configs} by using the D1-brane action (\ref{d1}), with the $\sqrt{|G|}$'s computed for the configurations in the last section, we see that the configurations from (i) through (iv) have the same action value,\footnote{For the evaluation, the explicit expressions for the AdS radius (\ref{adsrad}) and the D1-brane tension (\ref{d1tension}) have been utilized.} \begin{align} S_{\mathrm{D1}} = \frac{4}{\cos \beta} \sqrt{ \frac{\pi N}{g_s} } \,. \label{s2value} \end{align} As for the remaining configurations, (v) and (vi), at first glance the action values seem to be half of $S_{\mathrm{D1}}$, basically because the range of $\alpha$ is $ 0 \le \alpha \le \pi/2$ and thus the configurations look like hemispheres.
However, this is not the case and the action values are still given by $S_{\mathrm{D1}}$. Let us explain the reason for the configuration (v) by following the prescription of \cite{Hofman:2006xt} given in a similar parametrization of S$^5$. If we pick a pair of antipodal points on the S$^2$ parametrized by ($\theta_2, \phi_2$), which are $\theta_2 = 0, \pi$ in the present case, then the pair together with the coordinates $\alpha$ and $\phi_1$ forms an S$^2$.\footnote{We note that the coordinate $\theta_1$ is fixed at $\pi/2$ for the configuration (v).} This S$^2$ is the actual space that gives the shape of configuration (v). Thus the configuration (v) has the action value of (\ref{s2value}), not half of it, and importantly its supersymmetry structure (\ref{d1s1}) is not spoiled by the spherical shape. The same argument applies to the configuration (vi). As a result, all the configurations in Table \ref{configs} have a spherical shape and the action value of (\ref{s2value}). Additionally, one notable fact is that they have the maximum radius, equal to that of the S$^5$. By using six Cartesian coordinates, the S$^5$ of unit radius is described as $\sum_{i=1}^6 X_i^2=1$. Associated with the S$^5$ metric (\ref{s5metric}), we may let $X_1 = \cos \alpha \cos \theta_1$, $X_2 = \cos \alpha \sin \theta_1 \cos \phi_1$, $X_3 = \cos \alpha \sin \theta_1 \sin \phi_1$, $X_4 = \sin \alpha \cos \theta_2$, $X_5 = \sin \alpha \sin \theta_2 \cos \phi_2$, and $X_6 = \sin \alpha \sin \theta_2 \sin \phi_2$. Then each configuration appearing in Table \ref{configs} satisfies an algebraic equation describing a sphere of unit radius, for example, $X_1^2 + X_2^2 + X_3^2 = 1$ for the configuration (i) and $X_2^2 + X_3^2 + X_4^2 = 1$ for (v). This means that all the configurations appearing in Table \ref{configs} are related by SO(6) rotations, the symmetry group of S$^5$. The action value of (\ref{s2value}) allows us to estimate the saddle-point contribution of the D1-instanton to any physical process.
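For concreteness, let us sketch how the value (\ref{s2value}) arises for the configuration (i), under the assumption that (\ref{adsrad}) and (\ref{d1tension}) take the standard forms $R^4 = 4 \pi g_s N \alpha'^2$ and $T_1 = 1/(2\pi \alpha' g_s)$, and that the worldvolume gauge field strength enters the volume element as $\sqrt{ |G + \mathcal{F}| } = \sqrt{|G|} / \cos \beta$. Keeping only the Dirac--Born--Infeld part of (\ref{d1}),
\begin{align}
S_{\mathrm{D1}} = T_1 \int d^2 \zeta \, \sqrt{ |G + \mathcal{F}| } = \frac{1}{2\pi \alpha' g_s \cos \beta} \int_0^{\pi} d\theta_1 \int_0^{2\pi} d\phi_1 \, R^2 \sin \theta_1 = \frac{2 R^2}{\alpha' g_s \cos \beta} = \frac{4}{\cos \beta} \sqrt{ \frac{\pi N}{g_s} } \,,
\end{align}
where $R^2 = 2 \sqrt{\pi g_s N} \, \alpha'$ has been used in the last step. The same area, $4 \pi R^2$, is obtained for each of the spherical configurations, which is why they all share the single value (\ref{s2value}).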
For such estimation, it is instructive to rewrite the D1-instanton action in terms of the well known D(-1)-brane or D-instanton action in the AdS$_5\times$S$^5$ background \cite{Chu:1998in, Kogan:1998re, Bianchi:1998nk}. Namely, by using the D(-1)-brane action given by \begin{align} S_{\mathrm{D(-1)}} = \frac{2 \pi}{g_s} = \frac{8 \pi^2 N}{\lambda} \,, \end{align} where $\lambda = 4 \pi g_s N$ is the usual 't~Hooft parameter, let us rewrite (\ref{s2value}) as \begin{align} S_{\mathrm{D1}} = \frac{\sqrt{\lambda}}{\pi} S_{\mathrm{D(-1)}} \,, \label{d1vsd-1} \end{align} where the worldvolume gauge field strength has been turned off for simplicity ($\beta = 0$). Because the D(-1)-brane action is proportional to $N$ and thus its contribution is suppressed by $e^{-N}$, it is now clear that the contribution of the D1-instanton also leads to the same suppression factor. If we turn on the worldvolume gauge field strength and increase it, the suppression becomes much stronger, as one can see from (\ref{s2value}). In this sense, the D1-instanton with vanishing or weak worldvolume gauge field is preferable. The single 1/2-BPS D1-instanton has been our main interest. As a next step, we may ask whether a configuration of multiple D1-instantons is supersymmetric. The simplest case would be that of two D1-instantons of the same type, one of the six types of configurations in Table \ref{configs}, at different positions in the AdS$_5$ space. However, it turns out that such a D1-instanton configuration breaks all the supersymmetry. For example, let us take two D1-instantons of the configuration (i) in Table \ref{configs} as a representative, whose positions in the AdS$_5$ space are $(z_{(1)}, x^m_{(1)})$ and $(z_{(2)}, x^m_{(2)})$ respectively.
Then we have two sets of equations from (\ref{d1s2res}): \begin{align} \epsilon_{++} &= z_{(1)} \Gamma_{23} \Gamma_z \epsilon_{-+} - \Gamma_m x_{(1)}^m \epsilon_{--} \,, \notag \\ \epsilon_{++} &= z_{(2)} \Gamma_{23} \Gamma_z \epsilon_{-+} - \Gamma_m x_{(2)}^m \epsilon_{--} \,, \notag \\ \epsilon_{+-} &= - z_{(1)} \Gamma_{23} \Gamma_z \epsilon_{--} - \Gamma_m x_{(1)}^m \epsilon_{-+} \,, \notag \\ \epsilon_{+-} &= - z_{(2)} \Gamma_{23} \Gamma_z \epsilon_{--} - \Gamma_m x_{(2)}^m \epsilon_{-+} \,, \end{align} where the worldvolume gauge field strength has been turned off ($\beta = 0$) because it does not play any crucial role in the consideration of supersymmetry. We note that the two $\epsilon_{++}$'s ($\epsilon_{+-}$'s) on the left hand side should be the same if some fraction of supersymmetry is preserved. This allows us to obtain two equations by subtracting the first (last) two equations: \begin{align} 0 &= \left(z_{(1)} - z_{(2)} \right) \Gamma_{23} \Gamma_z \epsilon_{-+} - \Gamma_m \left(x_{(1)}^m - x_{(2)}^m \right)\epsilon_{--} \,, \notag \\ 0 &= - \left(z_{(1)} - z_{(2)} \right) \Gamma_{23} \Gamma_z \epsilon_{--} - \Gamma_m \left(x_{(1)}^m - x_{(2)}^m \right) \epsilon_{-+} \,. \end{align} By using these, one can check supersymmetry for the three cases, ($z_{(1)} \neq z_{(2)}$, $x_{(1)}^m = x_{(2)}^m$), ($z_{(1)} = z_{(2)}$, $x_{(1)}^m \neq x_{(2)}^m$), and ($z_{(1)} \neq z_{(2)}$, $x_{(1)}^m \neq x_{(2)}^m$). However, as one can easily see, none of them is supersymmetric. Thus, two or more D1-instantons of the same type are supersymmetric (1/2-BPS) only if they are coincident in the AdS$_5$ space. This is in contrast to the D(-1)-brane case, where multiple D(-1)-branes do not spoil the supersymmetry structure even if they are apart. A more general case is that of two different types of D1-instantons. From Table \ref{configs}, we can think of fifteen combinations.
However, we will not investigate all of them but instead just show the possibility of a supersymmetric configuration by taking one example. It is the combination composed of D1-instantons (i) and (ii). If the D1-instantons are taken to be coincident in the AdS$_5$ space\footnote{Actually, as in the previous case, they do not form a supersymmetric object if they are separated in the AdS$_5$ space.} and the worldvolume gauge field strengths on both of them are turned off for simplicity, Eq.~(\ref{d1s2res}) for (i) becomes \begin{align} \epsilon_{++} = z \Gamma_{23} \Gamma_z \epsilon_{-+} \,, \quad \epsilon_{+-} = - z \Gamma_{23} \Gamma_z \epsilon_{--} \,, \label{d1-1} \end{align} and the corresponding equation for (ii) is \begin{align} \epsilon_{++} = z \Gamma_{45} \Gamma_z \epsilon_{-+} \,, \quad \epsilon_{+-} = - z \Gamma_{45} \Gamma_z \epsilon_{--} \,, \label{d1-2} \end{align} which follows from (\ref{d1s2res}) with the replacement $\Gamma_{23} \rightarrow \Gamma_{45}$, as mentioned below (\ref{d1s2res}). Because $\Gamma_{23}$ and $\Gamma_{45}$ commute with each other,\footnote{They also commute with $\hat{\gamma}$ and $\Gamma^{11}$ measuring the ten dimensional chirality.} it is convenient to split $\epsilon_{ab}$ ($a, b = \pm$) according to their eigenvalues as $\epsilon_{abss'}$; \begin{align} i\Gamma_{23} \epsilon_{ab \pm s'} = \pm \epsilon_{ab \pm s'} \,, \quad i\Gamma_{45} \epsilon_{ab s \pm} = \pm \epsilon_{ab s \pm} \,. \label{extproj} \end{align} Although, generically, the constant spinors $\epsilon_{++}$ ($\epsilon_{+-}$) appearing in (\ref{d1-1}) and (\ref{d1-2}) have different dependences on $\epsilon_{-+}$ ($\epsilon_{--}$), they should be the same if the present D1-instanton combination is supersymmetric.
If we now subtract the first (second) equation of (\ref{d1-2}) from that of (\ref{d1-1}) and project the resulting equation according to the eigenvalues of $\Gamma_{23}$ and $\Gamma_{45}$ by using (\ref{extproj}), then we get four equations as \begin{align} 0 = 2 i z \Gamma_z \epsilon_{-++-} \,, \quad 0 = - 2 i z \Gamma_z \epsilon_{-+-+} \,, \notag \\ 0 = - 2 i z \Gamma_z \epsilon_{--+-} \,, \quad 0 = 2 i z \Gamma_z \epsilon_{---+} \,. \end{align} This shows that the four constant spinors on the right hand sides, one half of $\epsilon_{-\pm}$, should vanish for a generic $z$ position. In the end, the following ones, the other half of $\epsilon_{-\pm}$, remain free parameters, \begin{align} \epsilon_{-+++} \,, \quad \epsilon_{-+--} \,, \quad \epsilon_{--++} \,, \quad \epsilon_{----} \,, \end{align} and thus the combination of D1-instantons (i) and (ii) coincident in the AdS$_5$ space turns out to be 1/4-BPS. Having an explicit example of supersymmetric multiple D1-instantons, we may expect other possibilities from the remaining combinations of two D1-instantons. However, we will not pursue them further since our primary concern is the single 1/2-BPS D1-instanton. We hope to have an opportunity to deal with them in the near future. \section*{Acknowledgments} This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education with Grant No.~NRF-2018R1A2B6007159, NRF-2021R1A6A1A10042944 and NRF-2021R1A2C1012440 (JP), and NRF-2018R1D1A1B07045425 (HS).
\section{Introduction} \label{sec:intro} Voice-only, spoken conversational systems such as Google Home, Amazon Echo, or Apple Homepod, are becoming widely used. These systems can answer factoid questions. However, they are not yet able to engage in complex information seeking tasks where multiple turns are needed to exchange information, reformulate queries, or proactively recommend different search strategies. \citet{trippas2018informing} suggested that existing information seeking models do not suffice for the increased interactivity, complexity, or agency of spoken conversational search (SCS) systems. To the best of our knowledge, only a few models include the system as an integral part of the search process~\cite{azzopardi2018conceptualizing,reichman1985getting, sitter1992modeling, vakulenko2019qrfa}. Recently, \citet{azzopardi2018conceptualizing} created a conceptual framework of the probable action and interaction space for conversational agents as a first step, acknowledging that their initial framework would need expansion and empirical evidence. Understanding the communication behaviours of dialogue is crucial to SCS, and many different annotation schemas have been developed~\cite{bunt2009dit++}. These schemas are classifications of dialogues and consider an utterance as an action inside the information exchange. Even though many different domain-independent annotation schemas exist, such as DAMSL~\cite{allen1997draft}, these were mainly used for the creation of \change{spoken dialogue systems} (SDS) or for the general understanding of dialogue. Information seeking actions were not covered in depth; instead, broad categories such as ``answer'' or ``info-request'' are presented. Recent work has made initial steps towards understanding such actions~\cite{radlinski2017theoretical, trippas2017people, trippas2019}; however, no complete set has been developed so far. We create a first annotation schema for SCS: the \change{spoken conversational search annotation schema} (SCoSAS).
The schema reveals the different atomic actions or utterance functions and interactions taken by a prospective user and system in the stages of an information seeking process. Thematic analysis was used to identify and summarise communicative activities, strategies, and challenges~\cite{braun2013successful}. We provide a schematic overview of the possible interactions by either actor, which allows us to understand a seeker's conversational patterns~\cite{lai2009conversational}. The analysis is based on an experimental lab study simulating a natural dialogue to understand how a user and system may interact. Since no existing SCS systems can reliably manage multiple turns, we used conversations between two people, one acting as the Seeker and the other as the Intermediary. The information seeking dialogues between the two actors were filmed, transcribed, and annotated. The annotated data is publicly available.\footnote{Transcripts together with codes/labels of the experiment are available from the corresponding author upon request.} The contributions of this work include: \begin{enumerate}[noitemsep] \item We create the first fully labelled dataset for SCS, the SCSdata; \item We define a multi-level annotation schema for SCS, SCoSAS, to identify the interaction choices for SCS; \item We create a new model based on multi-turn activities and multi-move utterances; \item We validate the proposed annotation schema by re-annotating the SCSdata and by annotating another conversational dataset; \item We suggest new design recommendations and hypotheses for SCS. \end{enumerate} The aim of this qualitative research is to explore SCS as a new search paradigm and to understand the behaviours exhibited in an ideal scenario for SCS. Thus, we aim to gain a deeper understanding and overview of the interaction behaviours of a group of participants through qualitative analysis.
We first create a rich and detailed dataset through a \change{natural dialogue study} (NDS)~\cite{yankelovich2008using}, which we refer to as our observational study, to explore SCS interaction behaviours and to seek patterns within this data through an inductive method. \change{In particular, we use our observational study dataset to better understand the communication behaviours at first-hand instead of relying on questionnaires or self-report.} The strength of our qualitative analysis is that it provides an in-depth and detailed account of complex interactions during the information seeking process. The remainder of the paper is organised as follows. In Section~\ref{sec:relatedwork}, we present related work. In Section~\ref{sec:methodology}, we describe our methodological approach to creating the SCSdata and our analysis. In Section~\ref{sec:Results}, we present all the identified themes and sub-themes, and we then validate the coding consistency within the SCSdata. We further validate our coding schema in Section~\ref{sec:Validation of SCoSAS}. Then, discussion, implications, and design recommendations for SCS are presented in Section~\ref{sec:discussion}, before concluding in Section~\ref{sec:conclusions}. \section{Related Work} \label{sec:relatedwork} We organise previous work into four sections: Spoken Dialogue Systems (Section~\ref{subsec:Spoken Dialogue Systems}), annotating dialogues (Section~\ref{subsec:Annotating Dialogues}), interaction space in conversational search (Section~\ref{subsec:Interaction Space in Conversational Search}), and natural dialogue studies (Section~\ref{subsec:Natural Dialogue Study}). \subsection{Spoken Dialogue Systems} \label{subsec:Spoken Dialogue Systems} SDS provide a platform for people to interact with computer applications, such as databases, with the use of spoken natural language.
These systems exchange information on a turn-by-turn basis, providing an interface between the user and the computer~\cite{gibbon1997handbook}. In recent years, interest in SCS has grown, as speech technology~\cite{xiong2018microsoft} and machine learning for spoken systems~\citep{yang2018response} have developed. A range of SDS are available, from question answering to semi-conversational systems~\citep{mctear2016conversational}. Research has been devoted to task-oriented SDS, which have defined search boundaries, such as travel planning or route planning, and can be developed with slot-filling approaches~\citep{walker2001quantitative}. Thus, task-oriented dialogue systems are created for a particular closed domain. However, non-task-oriented dialogue systems or open-domain conversations, such as search for SCS systems, may not benefit from a rigid plan-based dialogue approach and introduce many new challenges~\citep{higashinaka2014towards}. These challenges include how to deal with the variety of user utterances and how answers or replies could be simplified or abstracted to generate appropriate system responses~\citep{sugiyama2013open}. \subsection{Annotating Dialogues} \label{subsec:Annotating Dialogues} Research interest in SCS has increased the recording of spoken search interactions~\cite{thomas2017MISC, vakulenko2019qrfa}. Such records are a valuable source of data to understand how users interact and which tactics are used for driving effective search performance in this new search paradigm. Thus, this data is useful for understanding the characteristics of a search conversation in order to build SCS systems acting as a dialogue participant~\cite{gibbon1997handbook}. \change{The spoken data recordings themselves need to be appropriately transcribed and ``annotated''~\cite{larson2012spoken}.} Thus, exposing the structure of the conversations by annotating the actions taken is one of the first steps towards analysing these spoken interactions~\cite{zarisheva2015dialog}.
Previously, much research has been devoted to creating annotation schemas and classifying taxonomies for dialogue and SDS~\citep{allen1997draft, bunt2009dit++, searle1969speech}. Annotating these dialogues has been based on the understanding that classifying utterances provides insight into the dialogue behaviour~\citep{reithinger1995utilizing}. Additionally, research on dialogue is often based on the assumption that dialogue acts provide a useful way of characterising dialogue behaviours in human--human dialogue, and potentially in human--computer dialogue as well~\cite{allen1997draft, belkin1995cases, bunt1999dynamic}. For example, annotated conversations can help to identify answers in texts or characterise user intents~\citep{qu2018analyzing}. \change{Annotating dialogue transcriptions has been explored by sociologists (via {\em conversation analysis\/}, e.g., \cite{schegloff2000overlapping}) and socio-psychologists (e.g., \cite{clark1991grounding}) for the purpose of understanding the organisation and communicative purpose of dialogue contributions. Within Computational Linguistics, annotation via {\em dialogue acts} -- which extend Searle's {\em speech acts} \cite{searle1969speech} by adding the social-communicative purpose -- has been used to analyse dialogue transcriptions for the purpose of designing computational models of dialogue management. While Allen and Core's original DAMSL framework was designed for task-based dialogue, subsequent formulations have been designed for specific types of dialogues~\cite{allen1997draft}.} Thus, several different annotation schemas have been proposed which cover general speech interactions. Some of these schemas emphasised information seeking, such as the \change{dynamic interpretation theory} (DIT) by~\citet{bunt1999dynamic}. The DIT was based on the empirical investigation of spoken human--human information dialogues.
\citeauthor{bunt1999dynamic} suggested that these information dialogues have two motivational sources, namely, to proceed in the task and to exchange communicative functions to drive the conversation~\cite{bunt1999dynamic}. He noticed that an information dialogue consisted of the expected greetings, apologies, and acknowledgements but also included information-exchange utterances such as questions, answers, checks, and confirmations. Later, \citeauthor{bunt2009dit++} developed an annotation schema called DIT++ for these information dialogues~\cite{bunt2009dit++}. Nevertheless, DIT++ lacks the detailed distinctions made when a user interacts with a search system while satisfying their information need, for example, the techniques used to represent documents or information units. \subsection{Interaction Space in Conversational Search} \label{subsec:Interaction Space in Conversational Search} Different schemas have been proposed for information-seeking dialogues based on dialogue acts (DAs)~\cite{searle1969speech}, which try to capture the role of an utterance. In particular, schemas such as the COnversational Roles (COR)~\cite{stein1995structuring} and Query Request Feedback Answer (QRFA)~\cite{vakulenko2019qrfa} aim to provide the structure of a single dialogue contribution or move. In our study, we are interested in interactions between a user and an SCS system in a more exhaustive manner: for example, utterances such as relevance feedback statements or physical actions (e.g., a mouse click to open a document). These are not covered by general-purpose DA models. A more relevant conceptual framework was recently created by~\citet{azzopardi2018conceptualizing}. This framework combined the action and interaction space discussed in~\citet{radlinski2017theoretical} and~\citet{trippas2018informing}. The conceptual framework, therefore, is not restricted to DAs but provides an overview of the possible actions taken by either actor.
We develop the action and interaction space while enriching the current frameworks. \subsection{Natural Dialogue Study} \label{subsec:Natural Dialogue Study} A first step to conceptualising SCS is to explore how people interact or speak in the SCS task they are trying to accomplish~\citep{lai2009conversational}. In the case of a SCS system, one could investigate the reference interview techniques or record elicitation processes librarians undertake with information seekers~\citep{dervin1986neutral, belkin1987knowledge}. However, a more direct approach is to record a situation where people are acting as closely as possible to the task of interest~\citep{lai2009conversational}. A natural setting will encourage participants to converse more intuitively and thus provide insights into the language or vocabulary people use, their turn-taking behaviours, and the information flow~\citep{bunt1999dynamic, yankelovich2008using}. A natural dialogue study (NDS) supports an understanding of the accepted conversational patterns in human dialogue. Thus, more natural and usable conversational systems can be created by studying human dialogue~\cite{yankelovich2008using}. In other words, NDS helps to explore the behavioural patterns and provides insights to improve the design of the system while creating a conceptual understanding of human dialogue behaviour~\cite{bunt1999dynamic}. NDS is not a Wizard of Oz (WOZ) technique. In a WOZ setting, a human acts as a system while the user thinks they are interacting with a live system~\citep{gould1983composing}. \section{Methodology} \label{sec:methodology} We conducted a laboratory study to collect utterances and search interactions to develop the SCSdata. This dataset captures the utterances of two participants or actors communicating to fulfil an information need. 
\change{In particular, the purpose of the SCSdata is to understand how users communicate in an audio-only search setting where no screens are available to exchange information, and to highlight the issues one could encounter when using such a search system.} Thus, observing how people search in this setting provides initial insight into the interactions taken~\citep{trippas2018informing}. To this end, we conducted a study to collect a set of utterances and search interactions from two actors communicating to fulfil an information need: \textit{SCSdata}~\citep{trippas2018informing}. We developed an annotation schema for SCS, the \textit{SCoSAS}, validated it with inter-rater reliability, and further tested it with an independent dataset, the Microsoft Information-Seeking Conversation data (MISC)~\citep{thomas2017MISC, thomas2018style, trippas2019data}. Our analysis provides insight into the interaction space and design recommendations for further research into SCS. \subsection{Approach} \label{subsec:Approach} The development of spoken language datasets is a work-intensive and time-consuming process. Nevertheless, these datasets are invaluable for conversational modelling, as a resource for system development, or for defining vocabulary coverage~\cite{gibbon1997handbook}. The development and evaluation of SDS is a well-studied problem and has shown that iterative analysis and assessment are needed. To enhance our understanding of SCS, we adopt NDS as a well-established technique used in SDS to develop a spoken language dataset and utilise qualitative analysis to identify meaningful patterns in our dataset~\cite{gibbon1997handbook, braun2013successful}. The purpose of our experimental setup is to specify the interaction possibilities in SCS. By outlining these different interactions, we provide the first step towards uncovering the details of the SCS process~\cite{gibbon1997handbook}. 
Our observational study consisted of a number of \change{sessions} with two participants, where one participant acted as the \textit{Seeker} and the other participant as the \textit{Intermediary} as illustrated in Figure~\ref{fig:experimental_setup}. \begin{figure}[htbp] \centering \includegraphics[trim={0 0.5cm 0 0cm},clip, width=.85\textwidth]{Image/experimental_setup_SCoSAS.pdf} \caption{Experimental setup.} \label{fig:experimental_setup} \end{figure} The Seeker received a \textit{backstory}: a short information need statement, to motivate and contextualise the search need.\footnote{Information needs and backstories used in our experiments are listed in~\ref{sec:appendixB}.} The Seeker had to read the backstory and verbalise the information need without reading it out verbatim to the Intermediary; instead, the Seeker had to formulate the information need in their own words to convey it to the Intermediary. The Intermediary had access to a search engine through a desktop computer. In effect, the Seeker acted as the searcher and the Intermediary simulated the audio-only interface and search system. Participants could not access each other's tasks or search engine, were not able to see each other's facial expressions, and could only verbally communicate. All backstories were randomised and the participant roles were randomly assigned. Participants completed pre-test, pre-task, post-task, and exit questionnaires, as well as a semi-structured interview. Sessions took around 90~minutes. \subsubsection{Thematic Analysis} \label{subsubsec:Thematic Analysis} Thematic analysis involves identifying, analysing, and reporting patterns (themes) within qualitative data~\citep{braun2013successful}. This method allows qualitative data to be analysed in an accessible and theoretically flexible manner, and it is often seen as a fundamental way of examining this kind of data~\citep{braun2013successful}. 
We adopted the six-step process as outlined by Braun and Clarke~\citep{braun2013successful}: (Step~1) familiarising self with data, (Step~2) generating initial codes, (Step~3) searching for themes, (Step~4) reviewing themes, (Step~5) defining and naming themes, and (Step~6) producing the report. \change{All steps were completed by the first author with continuous systematic feedback sessions with two other authors from Step~2 onward and a second independent annotator as validation of the full schema.} We illustrate the two-tiered coding process in the following example from Participant~8 (Seeker). The example utterance was coded as ``Intent clarification'' as it describes the Seeker further explaining their query intent. This code is then grouped with similar codes into the ``Information Request'' sub-theme in the Task Level theme. We describe all themes and sub-themes in Section~\ref{sec:Results}. \begin{description}[style=multiline, labelwidth=\widthof{Intermediary long : }, font=\normalfont\textsc , leftmargin=\labelwidth, align=right] \item[P8 -Seeker:] Yeah, so I just want to know where it comes from \\ \textit{\small [Intent clarification]} \end{description} \change{To the best of our knowledge, we are the first to use thematic analysis to create an annotation schema for SCS.} \subsubsection{Validation of the SCoSAS schema} \label{subsubsec:Validation of SCoSAS} To reduce the possibility of missing important data points, we validated our coding schema in two ways. We computed (1) inter-rater reliability and code overlap, and (2) overlap and coverage based on the coding of a different dataset, the MISC\footnote{The MISC data was accessed at \url{http://aka.ms/MISCv1}.}, with our predefined codes~\cite{thomas2017MISC}. 
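Cohen's Kappa, used in this validation, corrects raw agreement between two annotators for agreement expected by chance. A minimal sketch of the computation (the label sequences below are hypothetical, not from the SCSdata):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical utterance-level codes from two assessors.
a = ["Information request", "Results presentation", "Grounding", "Navigation"]
b = ["Information request", "Results presentation", "Grounding", "Grounding"]
print(round(cohens_kappa(a, b), 2))  # → 0.67
```

In practice a library implementation (e.g., scikit-learn's `cohen_kappa_score`) gives the same result; the sketch only makes the chance-correction explicit.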
A second independent annotator, who is familiar with information seeking and information retrieval research, recoded all utterances in the SCSdata to obtain the inter-rater reliability with Cohen's Kappa and code overlap~\cite{landis1977measurement}. \change{The second annotator used the codebook for closed coding (i.e., the categories were already determined).} Identifying useful actions for SCS which have not been covered in the SCoSAS provides an understanding of the scope of our coding schema. Therefore, we applied the SCoSAS to a second, similar dataset, the MISC, to calculate the overlap and coverage~\citep{thomas2017MISC}. We took a random sample from the MISC and coded the utterances with our predefined codes. Nevertheless, it may not be possible to achieve complete coverage with our annotations given the complexity and unexplored interactivity of a SCS information seeking dialogue~\cite{stent2000rhetorical}; indeed, full coverage is difficult and rarely attained in practice~\cite{stent2000annotating}. Hence, utterances which were not covered by the SCSdata codes received new codes according to the steps of thematic analysis. \subsection{Data Collection Setup} \label{subsec:Data Collection Setup} This section introduces the experimental setup by describing the tasks used in the experiment, an overview of the participants, and the annotation steps. \subsubsection{Task Design} \label{subsubsec:Task Design} We used nine search tasks and backstories from~\citet*{bailey2015user} (\ref{sec:appendixB}). These tasks covered three levels of the Taxonomy of Learning~\citep{anderson2001taxonomy}: \textit{Remember}, \textit{Understand}, and \textit{Analyse}. \subsubsection{Participants} \label{subsubsec:Participants} The study involved 26 participants recruited through a mailing list.\footnote{ The protocol was reviewed and approved by RMIT University's Ethics Board (ASEHAPP 08-16). 
The mailing list is created and maintained by the Behavioural Business Lab at RMIT University: \url{https://orsee.bf.rmit.edu.au/public/index.php}.} Fifteen participants were female and 11 were male with a mean age of 30 years ($SD$=11, range 18--54). Twenty-two participants reported being native English speakers, and four participants said they had a high level of English proficiency. The highest level of degree held was a Master's degree. Eighteen participants reported that they were awarded a Bachelor's degree or higher and eight participants said their highest level of degree awarded was High School graduation. The majority of participants were students (73\%), 19\% were employed, and 7\% were unemployed. The most common fields of education were Science and Engineering (19\% each) and Law (11\%). Participants reported that they had been using a computer for more than ten years (85\%) and 15\% reported using a computer for 5--10 years. All participants said that they used search engines daily with the majority of participants reporting that they used a search engine more than eight times per day (54\%). Participants rated their search skills on a 5-point scale, where 1=novice and 5=expert. Participants' mean search skills were 3.9 ($SD$=0.5), with a minimum score of 3 and a maximum of 5. Participants' search self-efficacy was measured with the Search Self-Efficacy scale~\citep{brennan2016factor}, which contains 14 items describing different search activities. Participants indicated their confidence in completing each activity using a 10-point scale, where 1=totally unconfident and 10=totally confident. Participants' average Search Self-Efficacy was 7.3 ($SD$=1.51 and Cronbach's alpha=0.93). Participants reported their usage of intelligent personal assistants, such as Google Now, Apple's Siri, Amazon Alexa or Microsoft Cortana. 
Four participants had never used an intelligent assistant and eight had used one a couple of times but did not use them anymore. The majority (54\%) of the participants said they used an assistant: five participants used one at least once a month and nine participants used one at least weekly. \subsection{Data Analysis and Annotation Schema Creation} \label{subsec:Annotation Schema Creation} \subsubsection{SCS Dataset} \label{subsubsec:SCS Dataset} The SCSdata consists of 1044 turns between the 13 pairs of actors. Seekers took a total of 528 turns and Intermediaries 516. (Seekers instigated and could conclude the search, so they took 12~turns more than Intermediaries.) We recorded an average of 80 turns per pair and 26.76 turns per task. \change{Participants exchanged 15.82 words per utterance on average with a minimum of one word per turn and a maximum of 359 words per turn. This maximum involved an Intermediary reading out a document, an action which is unusual for the dataset, where the median number of words per turn was 9.} \change{The SCSdata was manually transcribed and subjected to the three-pass-per-tape policy~\citep{mclellan2003beyond}. An editor then proofread the SCSdata transcription~\citep{trippas2017protocol}.} \change{To mitigate automatic speech recognition (ASR) problems such as out-of-vocabulary utterances, we transcribed the SCSdata manually, allowing us to conceptualise the user-system interactions. For future systems, investigation will be necessary to understand the impact of ASR transcriptions on the user-system interactions.} \subsubsection{Coding Transcriptions With Thematic Analysis to Develop SCoSAS} \label{subsubsec:Coding of Transcriptions} \change{We coded (i.e., labelled) our transcriptions using thematic analysis as described previously in Section~\ref{subsubsec:Thematic Analysis}. The labels of the SCSdata form the annotation schema, SCoSAS.} We recorded both participants, and the Intermediary's screen. 
The recordings were synchronised and merged for transcription. We adopted the following steps: \begin{description}[noitemsep] \item [Step 1:] Identifying when each participant spoke, i.e., identifying turns. We used the approach of \textit{taking the initiative equals taking the turn}, as described by~\citet{hagen1999approach}. This means that one turn can consist of multiple moves, actions, or communication goals~\citep{tracy1990multiple}. \item [Step 2:] Transcribing each turn of the full dataset. However, we deliberately did not eliminate any errors, false starts, or confirmations, since these occur in real voice search scenarios and we wanted to preserve the naturalness of the transcription and its structure~\cite{mclellan2003beyond, trippas2017protocol}. Instances where either Seeker or Intermediary was unintelligible were not transcribed but were coded as \textit{[inaudible segment]}. We assumed that if the audio recording was not clear, it was probably not clear to the other participant either. \item [Step 3:] Designing and assigning codes to each turn with ELAN~\cite{trippas2017protocol, lausberg2009coding}. Observational notes were added. The full dataset was coded with each utterance receiving equal attention. We classified concepts from the recordings and devised a coding scheme according to the similarities across different actors. \change{The codes were designed to identify the action(s) of that particular turn, describing features of the data and defining the function of the turn (i.e., one turn/utterance could consist of multiple codes).} Thus, turns were annotated with the actions taking place. Consequently, meaningful labels were developed from the original annotations. \textit{Controlled Vocabulary} was added to a \textit{dictionary} which was created during coding. This dictionary was then developed into a \textit{codebook}. \item [Step 4:] Combining codes to themes for further analysis. 
Themes may consist of \textit{sub-themes} which capture specific concepts as illustrated in Figure~\ref{fig:utterance_coding_example}. \item [Step 5:] Checking quality assurance. Transcriptions and codes were exported from ELAN to a text file. Spelling and codes were checked. \item [Step 6:] Importing files into R and aggregating codes to check whether codes within a theme conceptually belonged to that theme. \item [Note:] Steps 3--6 were conducted iteratively. This process reduced the initial 100 codes to 84 through the identification of overlapping codes. To preserve the nuanced action described in the codes for future information seeking research, distinctions between closely defined codes were retained. For example, the codes ``Information request'' or ``Information request within document'' were retained to identify in which section of the interaction particular information was requested. \newline Steps 3--4 were conducted iteratively by Trippas with feedback sessions with two other authors. Random samples were investigated and compared against the coding schema, and feedback was incorporated in the next coding iteration. \end{description} \begin{figure*} \centering \begin{tabular}{@{}c@{}} \includegraphics[trim={0 0.5cm 0 0.5cm}, clip, width=.95\linewidth]{Image/example1_Seeker.png} \\[\abovecaptionskip] \end{tabular} \vspace*{-0.7cm} \vspace{\floatsep} \begin{tabular}{@{}c@{}} \includegraphics[trim={0 1cm 0 1cm}, clip, width=.95\linewidth]{Image/example2_intermediary.png} \\[\abovecaptionskip] \end{tabular} \caption{Example of coding utterances.} \label{fig:utterance_coding_example} \end{figure*} \section{Results} \label{sec:Results} In this section, we present the themes derived from the thematic analysis together with the sub-themes which are based on the constructed codes/labels (Sections~\ref{subsec:Themes for SCS}--\ref{subsec:Theme 3}). 
These themes provide the characteristics of information seeking dialogues in a conversational setting, the actor's role, and the actor's relationship with the conversation. Then, we focus on the inter-rater reliability and code overlap calculations addressing the consistency of our coding schema (Section~\ref{subsec:Inter-rater Reliability}). \subsection{Themes for Spoken Conversational Search} \label{subsec:Themes for SCS} Every utterance received one or more codes, based on the action taken in that utterance. For example, when an Intermediary read out a document to the Seeker, this utterance was coded with ``Scanning document''. However, when there were two actions present in one utterance -- such as when an Intermediary read out a document and then asked the Seeker whether that was useful for them -- the utterance received two codes: ``Scanning document'' and ``Asking about usefulness''. Other examples of multiple actions and the coding of these actions are provided in Figure~\ref{fig:utterance_coding_example}. To understand which actions are taken, we split all utterances where more than one code was attached --- thus creating atomic actions per utterance for a more natural grouping of these actions into themes and sub-themes. We present the three themes and their corresponding sub-themes \change{and codes} as follows. The first theme, \textit{Task Level}, is related to search interactions and the topical investigation. The second theme, \textit{Discourse Level}, is associated with communicative functions between the Intermediary and Seeker for smooth collaboration. The third theme, \textit{Other}, consists of utterances that belong to neither the Task nor the Discourse levels. Example utterances are provided for each sub-theme. 
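The splitting of multi-code utterances into atomic actions, and the grouping of those actions into sub-themes, can be sketched as a simple lookup. The code-to-sub-theme mapping below is a small illustrative excerpt, not the full SCoSAS codebook, and the assignment of ``Asking about usefulness'' to Search Assistance is an assumption for the example:

```python
from collections import Counter

# Illustrative excerpt of a code -> sub-theme mapping (hypothetical subset).
CODE_TO_SUBTHEME = {
    "Scanning document": "Results Presentation",
    "Asking about usefulness": "Search Assistance",
    "Information request": "Information Request",
}

def atomic_actions(annotated_utterances):
    """Flatten multi-code utterances into one atomic action per code."""
    return [code for codes in annotated_utterances for code in codes]

# One utterance may carry several codes, as in the reading-out example.
utterances = [
    ["Scanning document", "Asking about usefulness"],
    ["Information request"],
]
actions = atomic_actions(utterances)
subtheme_counts = Counter(CODE_TO_SUBTHEME[a] for a in actions)
print(subtheme_counts)
```

Counting over atomic actions rather than whole utterances is what allows each action to be tallied under exactly one sub-theme.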
Tables of all the themes, corresponding sub-themes, participants (or actors), and codes are included in~\ref{sec:appendixA}.\change{\footnote{The data provided with this paper supplies all the transcripts and corresponding sub-themes and codes.}} \subsection{Theme 1: The Task Level} \label{subsec:Theme 1} The Task Level theme covers search actions such as queries and search results presentation. In other words, this theme is related to the performed search task. The theme includes four sub-themes: \paragraph*{\textbf{Information Request}} This sub-theme covers utterances which are associated with topical information requests. It includes all utterances \change{with codes} which are related to forming, suggesting, refining, confirming, repeating, spelling, or embellishing information requests. The following example is of two information request sub-theme utterances: \begin{description}[noitemsep,style=multiline, labelwidth=\widthof{Intermediary extra long: }, font=\normalfont\textsc , leftmargin=\labelwidth, align=right] \item[P13 -Seeker:] So which state in Australia consumes the most alcohol per person? \\ \textit{\small [Information request]} \item[P14 -Intermediary:] Again 2016 or the most recent information?\\ \textit{\small [Information request]} \end{description} Information requests from Seekers could be expressed at any time, and they often asked for information \emph{from} a document itself, asked for meta-information \emph{about} a document or search engine results page (SERP), or provided clarification about their search intent. Intermediaries were more likely to provide support in (re)forming the information request, for example by providing information request refinements, suggesting query expansions (i.e., whereby the initial query is augmented with query terms), or eliciting extra information. 
\paragraph*{\textbf{Results Presentation}} These sub-theme utterances convey the results from the search engine or documents: reading, interpreting, or providing an overview of a SERP or document. Only Intermediaries use this sub-theme, and the majority of Intermediary actions are linked to this sub-theme. In the next example, the Intermediary reads out the results exactly as they were displayed in a document: \begin{description}[style=multiline, labelwidth=\widthof{Intermediary extra long: }, font=\normalfont\textsc, leftmargin=\labelwidth, align=right] \item[P6 -Intermediary:] The history of valuable cinnamon. The first mention of cinnamon is in Chinese documents dating from 2800 BC. The ancient Egyptians logged cinnamon as a spice used in the embalming process... \\ \textit{\small [Results presentation]} \end{description} Other categories of utterances where Intermediaries conveyed the documents or search engine results but modified them (i.e., interpreting the results so that they would be most beneficial for the user) are also grouped in this sub-theme. Intermediaries modified SERPs or documents via synthesis, interpretation, paraphrasing, summarisation, clarification, and comparison. \paragraph*{\textbf{Search Assistance}} This sub-theme captures interactions where the Intermediary assisted the search process by providing explicit search suggestions, advice, or relevance judgements: \begin{description}[style=multiline, labelwidth=\widthof{Intermediary extra long: }, font=\normalfont\textsc , leftmargin=\labelwidth, align=right] \item[P2 -Intermediary:] there is a lot on health benefits conversation uhm [long pause] I don't see how some of these are relevant \\ \change{\textit{\small [Results presentation + search assistance]}} \end{description} In contrast to directly providing assistance, Intermediaries also asked how to help the Seekers in their search process. 
This took the form of asking about the usefulness of a result, requesting spelling, or suggesting a different search engine. Additionally, this sub-theme captures the Seeker explicitly asking for assistance during their search session: for example, by asking for recommendations or judgements on whether they covered enough of the information space. \paragraph*{\textbf{Search Progression}} This sub-theme is only used by the Seeker to provide feedback on progress: for example, by giving performance feedback, rejecting search results, or informing whether they found enough information for a topic: \begin{description}[style=multiline, labelwidth=\widthof{Intermediary extra long: }, font=\normalfont\textsc , leftmargin=\labelwidth, align=right] \item[P15 -Seeker:] OK that's probably enough information \\ \textit{\small [Search progression]} \end{description} In summary, within this first theme the Seeker evoked all sub-themes except the Results Presentation sub-theme. The Results Presentation sub-theme was only used by the Intermediary, allowing them to present information found with the search engine to the Seeker. The Intermediary also evoked all sub-themes except the Search Progression sub-theme, which was used by the Seeker to provide feedback to the Intermediary. \subsection{Theme 2: The Discourse Level} \label{subsec:Theme 2} The Discourse Level theme covers aspects which are not linked to performing a topical (search) task but instead are concerned with the audio channel between participants. 
The Discourse Level theme consists of four sub-themes: \textit{Discourse Management}, which allows the conversation to take place between the actors; \textit{Grounding}, which captures interactions for creating mutual knowledge, beliefs, and assumptions between the two actors~\citep{traum1999speech, clark1991grounding}; \textit{Navigation}, which covers the communications of moving around web pages, documents, and browser tabs; and \textit{Visibility of System Status}, which allows actors to provide feedback on what is happening throughout interactions. \paragraph*{\textbf{Discourse Management}} This sub-theme includes conversational coherence and cohesion between the actors~\citep{schiffrin1985conversational}. In other words, the utterances in this sub-theme are part of the communication between the actors to check whether the other actor has understood a message. In our dataset, these discourse building utterances are independent of the participant role. For example, both Seeker and Intermediary confirmed, checked, asked to repeat, or repeated utterances, as illustrated in the snippet below. \begin{description}[noitemsep, style=multiline, labelwidth=\widthof{Intermediary extra long: }, font=\normalfont\textsc , leftmargin=\labelwidth, align=right] \item[P1 -Seeker:] So uhm can you go and change the search question to effectiveness of uhm... passenger and baggage screenings at airport \item[P2 -Intermediary:] Passenger and \\ \textit{\small [Discourse management]} \item[P1 -Seeker:] Baggage \\ \textit{\small [Discourse management]} \end{description} Often an information request was echoed or either actor confirmed a command. These discourse actions are crucial for a meaningful conversation, for example by indicating that one actor has understood the other. 
\paragraph*{\textbf{Grounding}} \textit{Grounding in communication} as described by~\citeauthor{clark1991grounding} is ``sharing and synchronising mutual beliefs and assumptions'' and is fundamental for communication between actors~\cite{clark1991grounding}. The two actors' mental models of each other's beliefs need to be continuously updated to build a mutual understanding. We observed utterances belonging to this sub-theme, which was used by Seekers to coordinate the shared information or common ground~\cite{clark1991grounding}. Seekers summarised or paraphrased the information given to them and created a bigger picture of the search results as a way of synchronising. Through this dynamic process, Seekers provided insight on what they understood from the information provided; Intermediaries then knew whether the information was correctly conveyed. \begin{description}[style=multiline, labelwidth=\widthof{Intermediary extra long: }, font=\normalfont\textsc , leftmargin=\labelwidth, align=right] \item[P14 -Intermediary:] [...] yeah 20 to 29 is the most high risk drinking people in Australia for alcohol related harm... I don't know what that means about consumption \item[P13 -Seeker:] Yeah so they consume a lot \\ \textit{\small [Grounding]} \end{description} Grounding differs from Search Progression and Discourse Management. While Grounding involves sharing the beliefs and values of the information, Search Progression is concerned with the feedback on the search task progress and Discourse Management is related to effective information transfer. The Grounding sub-theme was only seen in Seekers' utterances. This is because Intermediaries, having the information to hand, summarised the results presented and did not need to confirm or share their beliefs or the meaning of the content. As such, their utterances are captured by Results Presentation. 
\paragraph*{\textbf{Navigation}} Navigational utterances allow \change{actors to progress} the task by manoeuvring around the online information space. We observed Seekers navigating the search results by instructing the Intermediaries. Seekers asked to access specific sources, to navigate between documents, to single out particular documents, and to hear more from a document or the next document: \begin{description}[style=multiline, labelwidth=\widthof{Intermediary extra long: }, font=\normalfont\textsc , leftmargin=\labelwidth, align=right] \item[P9 -Seeker:] Uhm maybe uhm can you go into the result [...] that mentions how uhm outsourcing damages the industry \\ \textit{\small [Navigation]} \end{description} \paragraph*{\textbf{Visibility of System Status}} Seekers asked the Intermediaries to provide information on what was occurring throughout the interactions: for example, whether what they asked for was fulfilled, or what the results were. Intermediaries provided feedback on what was taking place on their side of the conversation by reporting what was happening (i.e., keeping each other informed~\cite{nielsen2005ten}), by noting whether they had seen certain items before, or by way-finding (i.e., orienting where they were positioned). For example: \begin{description}[noitemsep, style=multiline, labelwidth=\widthof{Intermediary extra long: }, font=\normalfont\textsc, leftmargin=\labelwidth, align=right] \item[P25 -Seeker:] Oh TIBER sorry Tiber yeah\\ \textit{\small [Discourse management]} \item[P26 -Intermediary:] Yeah uhm just searching just one second \\ \textit{\small [Visibility of system status]} \item[P25 -Seeker:] Any luck?\\ \textit{\small [Visibility of system status]} \end{description} \subsection{Theme 3: Other} \label{subsec:Theme 3} Five utterances from the Seeker were not classified in any of the above \mbox{(sub-)themes}. 
Two of these utterances were disfluencies from the Seeker, one utterance was where the Seeker provided information about the search engine, one utterance was asking if the Seeker was allowed to embellish a query, and the last unclassified utterance involved the Seeker offering to spell a word. After much deliberation, these five utterances were left unclassified and assigned the theme ``Other'' instead. \hspace{\parskip} To conclude the examination of themes, an overview of the themes and sub-themes used by each actor is presented in Table~\ref{tab:Themes and sub-themes}. The development of the classifications into themes, sub-themes, and codes forms the basis of the Spoken Conversational Search Annotation Schema (SCoSAS). {\renewcommand{\arraystretch}{.8} \begin{table}[htp] \centering \smaller{} \caption{Themes and sub-themes used by different actors} \label{tab:Themes and sub-themes} \begin{tabular}{llcc} \toprule \textbf{Theme} & \textbf{Sub-theme} & \multicolumn{1}{l}{\textbf{Seeker}} & \multicolumn{1}{l}{\textbf{Intermediary}} \\ \midrule \multirow{4}{*}{Task Level} & Information Request & \checkmark & \checkmark \\ & Results Presentation & & \checkmark \\ & Search Assistance & \checkmark & \checkmark \\ & Search Progression \ignore{or Meta-discussion} & \checkmark & \\\midrule \multirow{4}{*}{Discourse Level} & Discourse Management & \checkmark & \checkmark \\ & Grounding & \checkmark & \\ & Navigation & \checkmark & \\ & Visibility of System Status & \checkmark & \checkmark \\ \midrule Other & & \checkmark & \\ \bottomrule \end{tabular} \end{table} } \subsection{Inter-rater Reliability and Code Overlap} \label{subsec:Inter-rater Reliability} \change{As part of the validation of the SCoSAS, we calculated the inter-rater reliability and code overlap (i.e., the utterance code overlap and code usage overlap).} The first author (Assessor~1) created the codes as described above. 
A second independent researcher (Assessor~2) used the codebook for closed coding of all utterances in the SCSdata. \change{The inter-rater reliability on code level (i.e., atomic action identified on an utterance) was moderate (Cohen's $\kappa=0.59$)~\citep*{landis1977measurement}, and substantial at the sub-theme level (i.e., classification based on the code) ($\kappa=0.71$).} The overlap of codes was high with 90\% of the predefined codes being used by both assessors. More precisely, Assessor~1 applied 84 different codes consisting of 41 codes for the Seeker and 43 for the Intermediary. Assessor~2 used 76 codes, 38 codes for the Seeker and 38 for the Intermediary as seen in Table~\ref{tab:Independent Assessors' Code Overlap}. {\renewcommand{\arraystretch}{.8} \begin{table}[htp] \centering \smaller \caption{Independent Assessors' Code Overlap} \label{tab:Independent Assessors' Code Overlap} \begin{tabular}{lcc} \toprule & \textbf{Assessor 1} & \textbf{Assessor 2} \\ \midrule Total number of utterances & 1,044 & 1,044 \\ Total number of codes used & 84 & 76 \\ Total number of codes for Seeker & 41 & 38 \\ Total number of codes for Intermediary & 43 & 38 \\ Unused codes & 0 & 8 (10\%) \\ \bottomrule \end{tabular} \end{table}} The eight codes used by Assessor~1 but not Assessor~2 could potentially be consolidated in a future refinement. \section{Validation of SCoSAS} \label{sec:Validation of SCoSAS} \change{To explore the extent to which SCoSAS covers SCS interactions, we applied the SCoSAS to a subset of interactions from a second dataset, the MISC~\citep*{thomas2017MISC}.} \subsection{Using the MISC dataset to validate SCoSAS} \label{subsec:Validation of SCoSAS with MISC} As with the SCSdata, the MISC dataset~\citep{thomas2017MISC} is a collection of recorded information-seeking conversations between a Seeker and an Intermediary. 
MISC contains audio and video recordings with \change{automatic speech recognition (ASR) transcriptions of these recordings.} \change{We coded the MISC dataset according to our predefined codes to investigate which actions were covered or not covered by our coding schema.} Thus, by using our predefined codes, we validate the coverage (i.e., is there an action applicable for every situation) and overlap (i.e., is there a situation where more than one action could be relevant). We selected a random set of four participant pairs for the labelling: participants 1--2, 7--8, 19--20, and 27--28. The MISC setup has five tasks for each pair, of which one is a practice. We labelled the four remaining tasks per participant pair for a total of 16 task-instances. \change{The labelling was completed by Trippas.} \change{Across} the four pairs, we have a total of 701 turns with an average of 175.25 turns per pair and an average of 43.81 turns per task. However, 5\% of the total turns in the MISC transcriptions were inserted by the ASR and were not present in the audio. These turns were ignored, which means that a total of 666 turns were labelled on code-level with an average of 166.5 turns per pair and an average of 41.62 turns per task.
\subsection{Differences Between the SCSdata and MISC}
\label{subsec:Differences Between the SCS and MISC Datasets}
The setup and instructions between the SCSdata and MISC protocols were marginally different, which led to differences in the data. We provide an overview of the differences in this section; a fuller account is in~\citet{trippas2019data}.
\paragraph{Setup of SCSdata and MISC}
\label{subsubsec:Setup of MISC and SCS}
As with SCSdata, MISC Seekers did not have access to any information source, but received an information need which they relayed to an Intermediary over an audio connection. Unlike SCSdata, MISC Seekers were allowed to read out the need as given, but were also asked to record an answer.
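The turn counts above are simple derived quantities; as a sanity check, the following sketch re-derives them from the stated totals:

```python
# Re-deriving the turn-count arithmetic reported for the MISC subset.
total_turns = 701      # turns across the four participant pairs
labelled_turns = 666   # after dropping turns the ASR inserted
pairs, tasks = 4, 16

asr_inserted = total_turns - labelled_turns   # turns not present in the audio
asr_share = asr_inserted / total_turns        # fraction of ASR-inserted turns
per_pair = labelled_turns / pairs             # average labelled turns per pair
per_task = labelled_turns / tasks             # average labelled turns per task

print(asr_inserted, asr_share, per_pair, per_task)
```

This reproduces the reported figures: 35 ASR-inserted turns (about 5\%), 166.5 labelled turns per pair, and 41.62 (41.625) per task.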
\paragraph{Transcription Differences}
\label{subsubsec:Transcription Differences}
The MISC dataset was transcribed using ASR, in contrast to the SCSdata, which was manually transcribed, subjected to the three-pass-per-tape policy, and proofread by a professional editor. \change{The ASR was prone to error, in particular ``recognising'' utterances such as ``thank you'' that were not in the audio. The following conversation snippet illustrates this: the transcript makes the speakers appear more polite than they actually were.}
\begin{description}[noitemsep, style=multiline, labelwidth=\widthof{Intermediary extra long: }, font=\normalfont\textsc, leftmargin=\labelwidth, align=right, itemsep=0mm]
\item[P20 -Intermediary:] [...] She wanted them to donate to charity
\item[P19 -Seeker:] Thanks \\ \textit{\small [Utterance not present in audio]}
\item[P20 -Intermediary:] To provide clean water // and she um
\item[P19 -Seeker:] Thank you \\ \textit{\small [Utterance not present in audio]}
\end{description}
We encountered an occasion where the researcher intervened due to a technical issue, and sections where the ASR created many unnecessary turns between the actors because it falsely believed that someone was talking. We excluded these utterances from this analysis.
\subsection{Creating Comparable Datasets}
\label{subsec:Creating Comparable Datasets}
\label{subsec:Utterance Labelling}
The MISC dataset contains ASR errors, and the subset we used did not include screen capture video. We labelled MISC at the code level. However, subtleties such as whether an Intermediary was reading from a SERP or a document could not be distinguished without screen captures, so all Results Presentation utterances were labelled with that sub-theme only.
\if0 \begin{table}[ht] \centering \smaller \caption{SCSdata and MISC Dataset Descriptives} \label{tab:SCS and MISC Dataset Descriptives} \begin{tabular}{lcc} \toprule & \textbf{SCSdata} & \textbf{MISC subset} \\ \midrule Total number of utterances & 1044 & 666 \\ Total number of unique codes* & 66* & 49* \\ Unique codes Seeker & 41 & 25 \\ Unique codes Intermediary & 31 & 18 \\ Number of code instances & 1158 & 746 \\ Code instances Seeker & 570 & 366 \\ Code instances Intermediary & 588 & 380 \\ \bottomrule \end{tabular} \\ \small{*NOTE: All utterances related to results presentation did not receive their own code but instead were aggregated to the sub-theme level ``Results Presentation'' due to insufficient details. The SCSdata's unique number of codes without aggregation of the Results Presentation is 135.} \end{table} \fi \subsection{Results: Overlap and Coverage Between SCSdata and MISC} \label{subsec:Overlap and Coverage Between SCS and MISC Data} We are interested in the number of actions shared between the SCSdata and MISC, and where actions are different. After collapsing all Results Presentation utterances to the sub-theme level, for compatibility with MISC, SCSdata used 66~distinct codes: 41 from Seekers and 25 from Intermediaries. MISC used 31 for Seekers and 18 for Intermediaries (Table~\ref{tab:SCS and MISC Dataset Descriptives}). 
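The overlap and coverage comparison between two code inventories is essentially a set computation. The following is a minimal sketch; the tiny inventories here are invented for the example (the full inventories appear in the appendix tables):

```python
# Hypothetical code inventories (illustrative names, not the full SCoSAS).
scs_codes = {"Confirms", "Asks to repeat", "Intent clarification",
             "Query embellishment", "Interpretation"}
misc_codes = {"Confirms", "Asks to repeat", "Interpretation",
              "Chitchat", "Negotiation"}

shared = scs_codes & misc_codes     # codes used in both datasets
misc_only = misc_codes - scs_codes  # additional codes MISC needed
coverage = len(shared) / len(misc_codes)

# 'Chitchat' and 'Negotiation' are among the codes MISC actually required.
print(sorted(misc_only), coverage)
```

The same intersection/difference logic, applied to the real inventories, yields the 35 shared and 14 MISC-only codes reported below the table.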
{\renewcommand{\arraystretch}{.8}
\begin{table}[ht]
\centering
\smaller
\begin{threeparttable}
\caption{SCSdata and MISC Descriptives}
\label{tab:SCS and MISC Dataset Descriptives}
\begin{tabular}{lcc}
\toprule
& \textbf{SCSdata} & \textbf{MISC subset} \\
\midrule
Total number of utterances & 1044 & 666 \\
Total number of unique codes* & 66* & 49* \\
Unique codes Seeker & 41 & 31 \\
Unique codes Intermediary & 25 & 18 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\smaller
\item *NOTE: Due to insufficient details, utterances which were related to presenting results were aggregated to the Results Presentation sub-theme level. The SCSdata's unique number of codes without aggregation of the Results Presentation is 84.
\end{tablenotes}
\end{threeparttable}
\end{table}
}
To label MISC, we needed 49 codes, of which 35~were used in both sets; 14~additional codes were needed to cover actions not seen in SCSdata. These additional codes, however, were infrequently used, and 94\% of MISC utterances could be coded with SCoSAS. The 14~additional codes, covering 6\% of utterances, are summarised in Table~\ref{tab:MISC codes which are not present in SCS}, and we discuss these below.
\if0
\begin{figure}[htbp]
\centering
\includegraphics[trim={0 0 0 5.5cm}, clip, width=.32\textwidth]{Image/SCS_MISC_action_overlap.png}
\caption{Overlap between atomic actions in SCSdata and MISC.
Number within each circle is the total number of actions reported per dataset, in isolation or combination.}
\label{fig:SCS_MISC_action_overlap}
\end{figure}
\fi
{\renewcommand{\arraystretch}{.8}
\begin{table}[ht]
\centering
\smaller
\caption{Set difference between SCSdata and MISC.}
\label{tab:MISC codes which are not present in SCS}
\begin{tabular}{lcc}
\toprule
\textbf{Code} & \textbf{Actor} & \textbf{Frequency} \\
\midrule
Chitchat & \multirow{8}{*}{Seeker} & 1 \\
Communication about the task & & 2 \\
Decision offloading & & 1 \\
Feedback on writing down the answer for the given task & & 3 \\
Negotiation & & 7 \\
Rejects spelling offer & & 1 \\
Requests spelling & & 1 \\
Uncertainty expression of what to search & & 2 \\
\midrule
Chitchat & \multirow{6}{*}{Intermediary} & 5 \\
Enough information? & & 9 \\
Negotiation & & 6 \\
Offers to spell & & 1 \\
Spells & & 5 \\
Too many results to sum up & & 1 \\
\midrule
\begin{tabular}[c]{@{}l@{}}Total number of instances of code \\ used by MISC and not by SCSdata\end{tabular} & & 45 (6\%) \\
\bottomrule
\end{tabular}
\end{table}
}
\paragraph{\textbf{Chitchat or Negotiation}}
\label{subsubsec:Chitchat or Negotiation}
We encountered new types of utterances in the MISC where the actors were negotiating or chitchatting. The negotiation utterances were used to bridge differences and reach agreements~\citep*{zuckerman2015first}. Examples include instances where actors share their own experiences about particular topics or subjects. However, this is not to be confused with the already defined Grounding sub-theme, which covers utterances in which the Seeker expresses their beliefs about the value of the information provided by the Intermediary. Chitchat and negotiation utterances have greater overlap between speakers, meaning that more than one actor at a time is speaking~\cite{schegloff2000overlapping}.
For example, the following utterances overlapped while the Seeker and Intermediary negotiated their shared understanding of non-traditional medicine:
\begin{description}[noitemsep, style=multiline, labelwidth=\widthof{Intermediary extra long}, font=\normalfont\textsc, leftmargin=\labelwidth, align=right, itemsep=0mm]
\item[P1 -Seeker:] I think herb sounds more like // not \\ \textit{\small [Negotiation]}
\item[P2 -Intermediary:] More like medicine \\ \textit{\small [Negotiation]}
\item[P1 -Seeker:] I think it sounds more like naturopathic but that fits it \\ \textit{\small [Negotiation]}
\end{description}
Participants seemed forthcoming in sharing their own opinions and experiences. The following example is from an Intermediary who shares her own travel experiences which are related to the task:
\begin{description}[noitemsep, style=multiline, labelwidth=\widthof{Intermediary extra long}, font=\normalfont\textsc , leftmargin=\labelwidth, align=right]
\item[P8 -Intermediary:] That's what I love to do actually when I traveled all the public transportation and all sorts of continents \\ \textit{\small [Chitchat]}
\end{description}
\paragraph{\textbf{Communication about the task}}
\label{subsubsec:Communication about the task}
SCSdata participants were instructed not to share the given search task but instead to rephrase the request. However, for MISC, participants were allowed to read out their search task. This led to Seekers talking informally about the task itself. For example,
\begin{description}[style=multiline, labelwidth=\widthof{Intermediary long : }, font=\normalfont\textsc , leftmargin=\labelwidth, align=right]
\item[P1 -Seeker:] Yeah the task is a bit // um very generalised so um
\end{description}
\paragraph{\textbf{Agency and Decision Offloading or Taking Control}}
\label{subsubsec:Agency and Decision Offloading or Taking Control}
In MISC, both Seeker and Intermediary share the same information and underlying ideas of what they need to search for.
This created an equal level of collaboration between the two actors. However, it also allowed the Intermediary to exercise more agency. \change{In contrast, Intermediaries in the SCSdata acted more as the interface between the Seeker and the found information.} We noticed this idea of agency throughout the subset of the MISC in actions resulting in the following codes: ``Enough information?'' (Intermediary), ``Too many results to sum up'' (Intermediary), and ``Decision offloading'' (Seeker). For example, the Intermediaries suggested that a search task had been finished \textit{``excellent, so we are finished...''} (P8) or that they were not going to sum up all the results. The Seekers also handed over the decision making to Intermediaries: e.g. \textit{``it's up to you [ed.\ if we look at the other site or not]''} (P20). \change{\citet{trippas2018informing} suggested decision offloading and taking control may be artefacts of the linear audio channel. The system thus makes a cost-benefit estimation of whether further information from the Seeker is required, and therefore receives more autonomy. Simultaneously, the Seeker can let the system make the decision, given the limited amount of knowledge that can be transferred over the audio channel.}
\paragraph{\textbf{Writing Down the Answer}}
\label{subsubsec:Feedback on Writing Down Answer for the Given Task}
Seekers in the MISC setup were asked to write down an answer to their given information seeking task. Seekers' utterances therefore include how they were progressing with the writing. We also observed several instances of spelling actions in MISC which we had not encountered in the SCSdata.
\paragraph{\textbf{Uncertainty in What to Search}}
\label{subsubsec:Uncertainty expression of what to search}
As mentioned, MISC Seekers were allowed to read their search task aloud.
Here, the Seeker is expressing their confusion with the task:
\begin{description}[style=multiline, labelwidth=\widthof{Intermediary extra long: }, font=\normalfont\textsc , leftmargin=\labelwidth, align=right]
\item[P19 -Seeker:] I am not sure what you're supposed search
\end{description}
This could be interpreted as identifying a gap in the Seeker's knowledge. However, the information need expression is not formalised~\citep{taylor1962process}. Recently, \citet{trippas2018informing} suggested that formulations of needs in SCS do not conform to the typical textual query. In a voice environment, users can use natural language to describe their search, and the information request may not go through~\citeauthor{taylor1962process}'s four stages of information need~\cite{taylor1962process}. In the above example, we could even argue that users may now have the freedom to tell the system that they have identified a gap in their knowledge before having formalised their problem.
\subsection{Discussion of SCoSAS Validation}
\label{subsec:Discussion of SCoSAS Validation}
The majority of the codes (71\%) seen in MISC overlapped with the SCoSAS, and the novel codes only covered 6\% of utterances. Some of the new codes were not encountered in the SCSdata due to the difference in experimental setups, such as the array of possible spelling requests, suggestions, or declines. These would be valuable expansions to the SCoSAS.
\section{Discussion}
\label{sec:discussion}
In this work, we used a qualitative analysis approach to uncover the range of possible interaction moves for Seekers and Intermediaries in a SCS setting. We developed insight into the conversational structure of information seeking processes. To do so, we first created a spoken dataset, the SCSdata, and then derived an annotation schema for conversational search via a thematic analysis approach. Finally, we validated these actions against a second, similar dataset.
\subsection{Schematic SCS Themes Model}
\label{subsec:Schematic SCS Themes Model}
The SCoSAS presents the Task Level at the centre of conversations. The Discourse Level surrounds this, representing the statements which are about the mechanism, not the task (see Figure~\ref{fig:Schematic Model of Themes and Sub-themes}). The Discourse Level would still exist if the search task were changed to a different, unrelated task. Previous research in communication goal studies suggests a similar two-tiered model~\citep{tracy1990multiple,bunt1999dynamic}. \change{Furthermore, the goal studies community argues that ordinary discourse is segmented into different types of goals, such as communicative functions or interaction outcomes, which is similar to our two themes of Task and Discourse. \citet{bunt1999dynamic} provided a two-tiered model in which general information dialogues serve two motivations: one tier is concerned with communication about the task, and the second with the control and management of the dialogue itself.}
\begin{figure}[t!]
\centering
\includegraphics[width=.84\textwidth, trim=4cm 1.7cm 4.2cm 1.5cm, clip]{Image/SCS_themes_usecase_colour.pdf}
\caption{Schematic Model of Themes and Sub-themes used by each actor.}
\label{fig:Schematic Model of Themes and Sub-themes}
\end{figure}
Our results highlight the importance and need for integrating discourse in SCS systems; to the best of our knowledge, discourse functions are yet to be integrated into information seeking models~\cite{stein1995structuring, vakulenko2019qrfa}. Furthermore, including these discourse utterances inherently creates a system which interacts in a mixed-initiative information seeking communication (the system can ask for clarification and thus takes initiative). Such mixed-initiative dialogue is a requirement of what makes a SCS system truly conversational~\cite{culpepper2018research, trippas2018informing}. This first attempt at creating an interaction model of two actors in a SCS setting may not have included all possible future actions.
One action we believe may be observed in a real system is for the user to test the abilities of the system or access the settings of the system itself. This might be coded as a System Level theme, overlapping both Discourse and Task Levels.
\subsection{Design Recommendations for SCS Systems}
\label{subsec:Design Suggestions}
Our analysis leads to some design recommendations for SCS systems.
\paragraph{\textbf{Integrating Search Assistance}}
\label{subsubsec:Integrating Search Assistance}
Search assistance is integrated in many different ways in browser-based search, for example by query or spelling suggestions. We could extend these assistance functions to include the system providing relevance feedback to the user about a given document, suggesting to move on, or even asking about the usefulness of a given result. These pro-active features can become part of a model of the user's preferences, given the interaction history~\citep[c.f.][]{radlinski2017theoretical}.
\paragraph{\textbf{Grounding as Relevance Feedback}}
\label{subsubsec:Grounding as a Relevance Feedback}
\change{Grounding (i.e., discourse for the creation of mutual knowledge and beliefs) occurs when participants in a conversation engage in a specific discourse activity to share their mutually understood utterances~\cite{clark1991grounding}. We observed grounding actions in the SCSdata. For example, Seekers provided indirect feedback by reciting their interpretation of the found results. This grounding process could enable a future SCS system to better understand a user's awareness of the results or information space, including helping the SCS system to disambiguate a user's information need.}
\paragraph{\textbf{Visibility of System Status}}
\label{subsubsec:Visibility of System Status}
Visibility of system status enables greater control, explainability, and transparency of the system processes and outputs~\citep{culpepper2018research}.
However, providing constant feedback on what is happening in a system is not practical in a spoken environment and risks overwhelming the user with unnecessary information. It will be essential to understand which aspects should be surfaced to the user. At any point in time, the system should be able to disclose how it retrieved or computed specific information.
\paragraph{\textbf{Navigation}}
\label{subsubsec:Navigation}
Navigational interactions often contain a mix of selecting links on a web page or using backtracking techniques such as the back button or history list~\citep{fu2007snif}. These navigation actions have been extensively studied in a text environment~\citep{catledge1995characterizing,kellar2007field}. Recently~\citet{azzopardi2018conceptualizing} and~\citet{trippas2018informing} recognised the importance of these actions in a conversational search setting. Instead of interacting with lists in a spoken environment, as is often done in a spoken dialogue system (SDS), users can freely navigate in a multi-dimensional information space. \change{Navigational interactions in this study may have been influenced by users' experience of how existing systems work, with back buttons and links. In the future, we expect the navigation space to expand together with the adoption and creation of conversational systems. Being able to present a traceable history also provides further transparency for the user and supports the explainability of the system.
For example, breadcrumbs could refer to previous information spaces or provide summaries of information the user visited instead of titles of documents as in a browser-based back-button action.}
\subsection{Evaluating Existing Search Behaviour Models with SCoSAS}
\label{subsec:Contrasting Existing Search Behaviour Models with SCoSAS}
\change{To our knowledge, many well-known models, such as~\citeauthor{belkin1980anomalous}'s ASK~\cite{belkin1980anomalous} or~\citeauthor{marchionini1997information}'s ISP~\cite{marchionini1997information}, do not include the system's ``responsibility'' of interacting with the user and thus do not capture all SCS behaviours.} \change{Other models, such as~\citeauthor{sitter1992modeling}'s COR model~\cite{sitter1992modeling}, \citeauthor{belkin1995cases}'s scripts~\cite{belkin1995cases}, or the recently proposed QRFA model by~\citet{vakulenko2019qrfa}, encompass the interaction between two actors. However, these models either lack the flexibility of the speech aspect, such as multiple moves in one turn, or are based on broad dialogue act (DA) categorisations. Additionally, the broad DA categorisation only provides high-level insight into the actions users take, while the SCoSAS discloses more refined details of the users' and systems' state in each turn.} \change{Finally, \citeauthor{saracevic1997stratified}'s stratified model includes the system as an active participant in the information seeking process~\cite{saracevic1997stratified}. Furthermore, Saracevic specifies that the process consists of a dialogue between the two actors. He also mentions that the dialogue can be used for not only ``searching'' utterances but also for a number of ``other engagements'' beyond the searching, for example, obtaining and providing different types of feedback, judgements, or states. In the SCS model, we also identify the system as an active participant throughout the search process, which is in itself a conversation.
In addition, the ``other engagements''~\citeauthor{saracevic1997stratified} mentions could be interpreted as our Discourse Level interactions, such as our identified grounding utterances. Furthermore, the stratified model could be used to illustrate the effect of the audio-only interaction channel limitation. That is, \citeauthor{saracevic1997stratified} says that a weak point in the system could hamper the desirable outcome for the search process~\cite{saracevic1997stratified}. The stratified model and the schematic SCS themes model may be complementary for the abstraction of a SCS process.}
\subsection{Future Extensions}
\label{subsec:Limitations}
\textit{Human to human interaction: }Human--human interaction may differ from the human--machine interactions we really want to model. We plan to conduct further studies to test our hypotheses in a human--machine interaction setting. \change{Thus, further research will investigate whether the mindless projection of users' own beliefs and expectations onto computers transfers to SCS~\cite{nass2000machines}.}
\textit{Laboratory setting: }Participating in a laboratory setting \change{influences} the participants' behaviour. Even though this study was conducted in a laboratory setting, we believe the findings will apply to a day-to-day environment. Investigating the information needs for SCS which arise in a natural setting will be crucial to develop natural systems. This will include understanding the different information needs and creating new taxonomies for these needs.
\textit{Taking initiative equals one turn: }Our coding schema allows for coding per turn since we segmented the users' utterances with the idea that taking the initiative equals one turn. This means that slight subtleties inside a turn may be lost, such as long pauses. However, we believe this was necessary to understand the broader context of SCS. \change{This study should be interpreted within the context of its limitations.
Firstly, it is possible that cross-coding a larger dataset from the MISC may have added further sub-themes. However, we feel that the discrepancies identified through the current cross-coding are due to differences in experimental set-up rather than to substantial content differences.} \subsection{Informing Wider Research Agendas} \label{subsec:Informing Wider Research Agendas} Existing systems and models have difficulties with multi-turn actions, utterances which consist of multiple moves, or intent extraction. In this paper, we attempted to better understand these unique features of SCS by creating a labelling schema and schematic model of these labels. Our model and annotation schema are a more in-depth study than prior preliminary models~\citep{trippas2017people} or conceptual framework~\citep{azzopardi2018conceptualizing}. While the labelling schema developed in this paper focuses on the interaction space of conversational search, we expect it will be useful for non-search related or discourse actions. \change{The implications of this analysis are many. Firstly, this analysis can support the feature extraction of particular utterance-types, or assist with the engineering and evaluation of conversational retrieval. The analysis can also be used for language modelling of information seeking conversations and the development of results presentation strategies.} \section{Conclusions} \label{sec:conclusions} In this paper, we address the challenge of \change{spoken conversational search} (SCS), where no screens are available and user--system interactions are entirely voice-based. After identifying the limitations of existing information seeking models, we used a qualitative analysis approach to explore how people interact in an audio-only communication search setting. 
We created the first dataset for SCS (SCSdata), defined a labelling set identifying the interaction choices for this dataset (i.e., the SCoSAS annotation schema), and translated these interactions into a schematic model. The schema treats both actors in the seeking process, the Seeker and the Intermediary, as equals, capturing multi-turn activities and multi-move utterances. The validation of SCoSAS using an independent dataset demonstrated high overlap and coverage. \change{Furthermore, our transparent annotation process contributes by strengthening the analysis and the methodological foundations of annotation schema development.} The significance of this paper is threefold: we (i) develop a classification schema, (ii) test and validate this schema, and (iii) provide a transparent annotation schema process. Furthermore, our contributions highlight the need for new models for SCS, especially integrating discourse. The resources described and validated in this paper -- including the SCoSAS annotation schema -- also allow us to suggest possible extensions of the schematic model and inform the design of SCS systems in the future.
\section{Themes, Sub-themes, and Codes in SCoSAS}
\label{sec:appendixA}
\subsection{Theme 1: Task Level}
\label{appendix:Theme 1: Task Level}
{\renewcommand{\arraystretch}{.8}
\begin{table}[ht]
\centering
\caption{Information Request Codes (Seeker and Intermediary)}
\label{tab:Information Request Codes (Seeker)}
\adjustbox{max width=1\textwidth}{
\begin{tabular}{lllp{6cm}c}
\toprule
\textbf{Theme} & \textbf{Sub-theme} & \textbf{Actor} & \textbf{Code} & \textbf{Frequency} \\
\midrule
\multirow{21}{*}{Task Level} & \multirow{21}{*}{Information Request} & \multirow{12}{*}{Seeker} & Automated repetitive search\change{~\cite{trippas2018informing}} & 3 \\
& & & Definition explanation \change{(or clarification, i.e., intent clarification)} & 1 \\
& & & \change{(Information request for a)} Definition lookup or person & 1 \\
& & & \change{(Requesting)} Information about document\change{~\cite{trippas2018informing}} & 6 \\
& & & Information about SERP overview & 2 \\
& & & Information request & 67 \\
& & & Information request within document & 80 \\
& & & Information request within SERP & 15 \\
& & & Initial information request & 39 \\
& & & Intent clarification & 52 \\
& & & Query embellishment\change{~\cite{trippas2017people}} & 20 \\
& & & Spells (query or query word) & 2 \\
\cmidrule{3-5}
& & \multirow{9}{*}{Intermediary} & \change{(Requests)} Definition clarification \change{(i.e., requests more details about the information request)} & 1 \\
& & & Enquiry for further information & 11 \\
& & & Google query expansion suggestion & 3 \\
& & & Query refinement offer & 57 \\
& & & Query rephrase & 12 \\
& & & Requests more details about information request & 5 \\
& & & Query formulation for information found in document & 1 \\
& & & Asking what they \change{(i.e., the Seeker)} are looking for & 2 \\
& & & Within-Document search result entity lookup request & 1 \\
\bottomrule
\end{tabular}
}
\end{table}
}
{\renewcommand{\arraystretch}{.8}
\begin{table}[ht]
\centering
\caption{Result
Presentation Codes (Intermediary)}
\label{tab:Result Presentation Codes (Intermediary)}
\adjustbox{max width=1\textwidth}{
\begin{tabular}{llllc}
\toprule
\textbf{Theme} & \textbf{Sub-theme} & \textbf{Actor} & \textbf{Code} & \textbf{Frequency} \\
\midrule
\multirow{18}{*}{Task Level} & \multirow{18}{*}{Results Presentation} & \multirow{18}{*}{Intermediary} & Source information & 8 \\
& & & Image overview on SERP & 2 \\
& & & Interpretation of photos & 1 \\
& & & Multi-document summary & 3 \\
& & & \begin{tabular}[c]{@{}l@{}}Paraphrasing from document which is \\ not in front of them\end{tabular} & 1 \\
& & & Scanning document with modification & 51 \\
& & & Scanning document without modification & 79 \\
& & & \begin{tabular}[c]{@{}l@{}}Scanning document without modification \\ but with interpretation of photos\end{tabular} & 1 \\
& & & SERP Card & 16 \\
& & & SERP overview without modification & 1 \\
& & & SERP with modification & 19 \\
& & & SERP without modification & 72 \\
& & & Within SERP search result & 4 \\
& & & Within-Document command response & 1 \\
& & & Within-Document search result & 60 \\
& & & \begin{tabular}[c]{@{}l@{}}Interpretation biased towards information \\ request or clarification given by the User\end{tabular} & 1 \\
& & & Comparing results against each other & 1 \\
& & & Interpretation & 22 \\
\bottomrule
\end{tabular}
}
\end{table}
}
{\renewcommand{\arraystretch}{.8}
\begin{table}[htp]
\centering
\caption{Search Assistance (Seeker and Intermediary)}
\label{tab:Search Assistance (Seeker and Intermediary)}
\adjustbox{max width=1\textwidth}{
\begin{tabular}{lllp{7cm}c}
\toprule
\textbf{Theme} & \textbf{Sub-theme} & \textbf{Actor} & \textbf{Code} & \textbf{Frequency} \\
\midrule
\multirow{9}{*}{Task Level} & \multirow{9}{*}{Search Assistance} & \multirow{2}{*}{Seeker} & \change{(Requests further search)} Recommendations & 1 \\
& & & Requests ``enough information'' judgement & 1 \\
\cmidrule{3-5}
& & \multirow{7}{*}{Intermediary} & Asking about usefulness \change{(of presented
result)} & 4 \\ & & & Requests spelling & 2 \\ & & & Suggestion to move on & 2 \\ & & & Relevance judgement & 6 \\ & & & Suggestion to search more & 1 \\ & & & Requests to access search engine & 1 \\ & & & Search suggestion based on info encountered in document & 1\\ \bottomrule \end{tabular} } \end{table} } {\renewcommand{\arraystretch}{.8} \begin{table}[htp] \centering \smaller{} \caption{Search Progression (Seeker)} \label{tab:Search Progression or Meta-discussion (Seeker)} \adjustbox{max width=1\textwidth}{ \begin{tabular}{llllc} \toprule \textbf{Theme} & \textbf{Sub-theme} & \textbf{Actor} & \textbf{Code} & \textbf{Frequency} \\ \midrule \multirow{3}{*}{Task Level} & \multirow{3}{*}{Search Progression} & \multirow{3}{*}{Seeker} & Enough information & 6 \\ & & & Performance feedback & 18 \\ & & & Rejects \change{(suggestion from Intermediary)} & 9 \\ \bottomrule \end{tabular} } \end{table} } \clearpage \subsection{Theme 2: Discourse Level} \label{appendix:Theme 2: Discourse Level} {\renewcommand{\arraystretch}{.8} \begin{table}[H] \centering \caption{Discourse Management (Seeker and Intermediary)} \label{Discourse Management (Seeker and Intermediary)} \adjustbox{max width=1\textwidth}{ \begin{tabular}{llllc} \toprule \textbf{Theme} & \textbf{Sub-theme} & \textbf{Actor} & \textbf{Code} & \textbf{Frequency} \\ \midrule \multirow{10}{*}{Discourse Level} & \multirow{10}{*}{Discourse Management} & \multirow{5}{*}{Seeker} & Asks to repeat & 31 \\ & & & Asks to repeat first search result & 6 \\ & & & Asks to repeat Nth search result & 1 \\ & & & Confirms & 114 \\ & & & Query repeat & 14 \\ \cmidrule{3-5} & & \multirow{5}{*}{Intermediary} & Asks to repeat & 38 \\ & & & Checks navigational command & 13 \\ & & & Confirms & 46 \\ & & & Repeats & 12 \\ & & & Repeats the query back & 9 \\ \bottomrule \end{tabular} } \end{table} } {\renewcommand{\arraystretch}{.8} \begin{table}[H] \centering \smaller{} \caption{Grounding (Seeker)} \label{Grounding (Seeker)} 
\begin{tabular}{llllc} \toprule \textbf{Theme} & \textbf{Sub-theme} & \textbf{Actor} & \textbf{Code} & \textbf{Frequency} \\ \midrule \multirow{2}{*}{Discourse Level} & \multirow{2}{*}{Grounding} & \multirow{2}{*}{Seeker} & Creating bigger picture & 1 \\ & & & Interpretation & 12 \\ \bottomrule \end{tabular} \end{table} } {\renewcommand{\arraystretch}{.8} \begin{table}[H] \centering \smaller{} \caption{Navigation (Seeker)} \label{Navigation (Seeker)} \adjustbox{max width=1\textwidth}{ \begin{tabular}{llllc} \toprule \textbf{Theme} & \textbf{Sub-theme} & \textbf{Actor} & \textbf{Code} & \textbf{Frequency} \\ \midrule \multirow{10}{*}{Discourse Level} & \multirow{10}{*}{Navigation} & \multirow{10}{*}{Seeker} & Access link within document & 1 \\ & & & Access search engine & 2 \\ & & & Access source & 29 \\ & & & Access source (implicit) & 2 \\ & & & Between-document navigation & 1 \\ & & & Is there more information & 6 \\ & & & Leave document & 1 \\ & & & Next & 3 \\ & & & Read more from the document & 1 \\ & & & Within-document command & 3 \\ \bottomrule \end{tabular} } \end{table} } {\renewcommand{\arraystretch}{.8} \begin{table}[H] \centering \caption{Visibility of System Status (Seeker and Intermediary)} \label{Visibility of System Status (Seeker and Intermediary)} \adjustbox{max width=1\textwidth}{ \begin{tabular}{lp{2cm}lp{5cm}c} \toprule \textbf{Theme} & \textbf{Sub-theme} & \textbf{Actor} & \textbf{Code} & \textbf{Frequency} \\ \midrule \multirow{7}{*}{Discourse Level} & \multirow{7}{2cm}{Visibility of system status} & \multirow{3}{*}{Seeker} & Access source feedback-request & 3 \\ & & & Feedback on what is happening & 1 \\ & & & Results? 
& 10 \\ \cmidrule{3-5} & & \multirow{4}{*}{Intermediary} & Feedback on what is happening & 13 \\ & & & Misheard & 1 \\ & & & Previously seen results & 2 \\ & & & Wayfinding & 3 \\ \bottomrule \end{tabular} } \end{table} } \subsection{Theme 4: Other Level} \label{subsec:Theme 4: Other Level} {\renewcommand{\arraystretch}{.8} \begin{table}[H] \centering \caption{Other Level (Seeker)} \label{Other Level (Seeker)} \adjustbox{max width=1\textwidth}{ \begin{tabular}{llp{11cm}c} \toprule \textbf{Theme} & \textbf{Actor} & \textbf{Code} & \textbf{Frequency} \\ \midrule \multirow{6}{*}{Other Level} & \multirow{6}{*}{Seeker} & Utter (``So I'm'' and ``Well so they are saying'') & 2 \\ & & Provides information about the Search Engine (``So it's [a] search engine'') & 1 \\ & & Asks if allowed to query embellish (``Actually can I add something else to that?'') & 1 \\ & & Offers to spell (``[...] would you like me to spell it?'') & 1 \\ \bottomrule \end{tabular} } \end{table} } \section{Tasks and Backstories} \label{sec:appendixB} {\renewcommand{\arraystretch}{.8} \begin{table}[H] \centering \smaller \caption{Example Search Tasks\label{tab:Example Search Tasks}} \adjustbox{max width=1\textwidth}{ \begin{tabular}{p{1.5cm}p{11.5cm}} \toprule \textbf{Dimension} & \textbf{Query and Example Backstory} \\ \midrule \multirow{13}{*}{Remember} & \texttt{What river runs through Rome, Italy?} \\ & Many great cities have rivers running through them, as rivers facilitated trade and commerce as well as supplying fresh water to drink. You remember that Paris has the Seine, London has the Thames, but what does Rome have? \\ \cmidrule{2-2} & \texttt{What language do they speak in New Caledonia?} \\ & You and your partner are thinking of places to go on holiday. New Caledonia is an option, but you realize you don't know what language is spoken there and you decide to find out. 
\\ \cmidrule{2-2} & \texttt{Where does cinnamon come from?} \\ & The other day you were eating some spiced biscuits from Europe, when it occurred to you that cinnamon probably isn't native to that part of the world. You would like to know where it comes from. \\ \midrule \multirow{12}{*}{Understand} & \texttt{recycle, automobile tires } \\ & You need to buy new tires for your car, and the local dealer has offered to take the old ones for recycling. You didn't know tires could be recycled and you wonder what new uses they are being put to. \\ \cmidrule{2-2} & \texttt{Outsource job India } \\ & A recent report on the radio quoted a politician as saying that one of the causes of rising unemployment in the U.S. was the outsourcing of jobs to India. This has made you interested in finding out what jobs that used to be in the U.S. have been outsourced to India. \\ \cmidrule{2-2} & \texttt{Marine Vegetation } \\ & You recently heard a commercial about the health benefits of eating algae, seaweed and kelp. This made you interested in finding out about the positive uses of marine vegetation, both as a source of food, and as a potentially useful drug. \\ \midrule \multirow{16}{*}{Analyse} & \texttt{Turkey Iraq Water} \\ & Looking at a map, you realize that there are several rivers that commence in Turkey and then flow over the border into Iraq. You wonder if Turkish river control projects, including dams and irrigation schemes, have affected Iraqi water resources. \\ \cmidrule{2-2} & \texttt{Airport Security } \\ & Every time you go through the security screening at an airport, you wonder whether it is making any difference. Find out how effective the many new measures (beyond just standard screening) at airports actually are, both for scrutinizing of passengers and their checked and carry-on baggage. 
\\ \cmidrule{2-2} & \texttt{per capita alcohol consumption } \\ & You recently attended a big party and woke up with a hangover, and have decided to learn more about the average consumption of alcohol. You are particularly interested in any information that reports per capita consumption, and want to compare across groups, for example at the country, state, or province level. \\ \bottomrule \end{tabular} } \end{table} } \if0 \subsection{Theme 3: System (Capability Discovery) Level} \label{subsec:Theme 3: System (Capability Discovery) Level} \begin{table}[ht] \centering \caption{System (Capability Discovery) Level (Seeker)} \label{System (Capability Discovery) Level (Seeker)} \adjustbox{max width=1\textwidth}{ \begin{tabular}{llllc} \toprule \textbf{Theme} & \textbf{Sub-theme} & \textbf{Actor} & \textbf{Code} & \textbf{Frequency} \\ \hline System Level & & Seeker & Asks if allowed to query embellish & 1\\ \hline \end{tabular} } \end{table} \fi \section{Acknowledgements} The authors would like to thank the participants who took part in the study. This research was partially supported by Australian Research Council Projects LP130100563 and LP150100252, Real Thing Entertainment Pty Ltd, and JSPS KAKENHI JP19H04418. \def\bibsection{\section*{References}} \setlength{\bibsep}{0pt plus 0.5ex} \bibliographystyle{abbrvnat}
\section{Introduction}\label{sec:introduction} One of the challenging aspects in the design of autonomous vehicles is their communication with other, non-autonomous participants in traffic. Specifically the interaction with pedestrians requires clear communication of intent to allow for safe interactions \citep{rasouli2017agreeing}. If autonomous vehicles become more prevalent in the future, yielding to pedestrians under all circumstances (i.e. conservative driving behavior) may no longer be feasible as an interaction strategy. It has been shown that communicating the intention not to yield to pedestrians in certain traffic situations can significantly increase traffic flow \citep{gupta2018negotiation}. Finding ways to communicate such intentions to pedestrians in a way that is easy to understand and assertive but \textit{safe} for the pedestrian remains an open challenge of autonomous driving. In this paper we investigate how vehicle kinematics can be ``hacked'' to project intent and manufacture non-verbal communication cues that are actionable and interpretable by the interacting pedestrian. \section{Related Work}\label{sec:related_work} Pedestrian-vehicle interactions in the form of road crossings have thus far mostly been studied as a problem of gap size and time to arrival; among the methods used are two-dimensional as well as curved screens \citep{oxley2005crossing}, announcing crossing intent while observing actual intersections \citep{schwebel2008validation} and immersive Virtual Reality (VR) \citep{clancy2006road, simpson2003investigation}. While these studies do of course consider vehicle movement, it is taken in a physical context and explored in terms of remaining distance or time for the pedestrian to reach the other side of the road. Current research regarding the general interaction between \emph{autonomous} vehicles and pedestrians has been focused on external Human Machine Interfaces (EHMIs). 
These concepts revolve around variations of displays, lights or projections placed inside or outside of the vehicle \citep{mahadevan2018communicating, clamann2017evaluation, risto2017human, deb2018investigating, dey2018interface}. Such mechanisms are intended to replace explicit gestures from the driver towards pedestrians intending to cross \citep{mahadevan2018communicating, risto2017human}. Such mechanisms have previously also been studied using virtual reality \citep{deb2018investigating}. As EHMIs are a novel concept in driver-pedestrian interactions, they bring with them various issues and design challenges which have yet to be overcome. Such challenges include for instance the design of interfaces which are discernible at the distance of an approaching vehicle \citep{clamann2017evaluation}, as well as visible and understandable in the context of busy intersections \citep{risto2017human}. In addition, the extent to which the driver cues they are intended to replace actually aid in pedestrian-vehicle interactions as they occur today is questionable \citep{dey2017pedestrian, rothenbucher2016ghost}. Our work aims to explore vehicle kinematics as an alternative form of vehicle-pedestrian communication under special consideration of Autonomous Vehicles (AVs). It has been shown that the way non-humanoid robots use shared space in a ``passive'' or ``assertive'' manner when interacting with humans is perceived as giving social cues conveying the ``emotional state'' and consequently the intentions of said robots. This holds true ``regardless of whether that robot is capable of having emotional states or not.''\citep{fiore2013toward} The role of vehicle kinematics in particular as a means of social communication has previously been studied by means of observation, for instance the concept of ``motion in context'' in \citep{risto2017human}, as well as the importance of ``motion patterns and vehicle behavior'' as observed in \citep{dey2017pedestrian}. 
Specifically interactions between pedestrians and (seemingly) autonomous vehicles have been investigated as a Wizard-of-Oz study in absence of EHMIs \citep{rothenbucher2016ghost}. As is apparent from the previous paragraphs, virtual reality has already been established as a tool for studying vehicle-pedestrian interactions \citep{clancy2006road, simpson2003investigation, deb2018investigating, bhagavathula2018reality}. VR is successfully used in various fields, including psychology and visual perception experiments \citep{wilson2015use}. The use of screen-based, two-dimensional virtual interactions for studying pedestrian interactions in particular has been validated by multiple studies \citep{oxley2005crossing, schwebel2008validation}. While an objective measurement of immersion (the overall realism and fidelity of a virtual environment) is difficult, it has been established that increased immersion is a desirable trait in experiment design \citep{wilson2015use} and beneficial towards the spatial understanding of the simulated environment \citep{bowman2007virtual}. Such effects are aided by stereoscopic rendering (providing a distinct image to each eye of the VR user, allowing for life-like depth perception), head tracking (translating the visual virtual perception according to the actual head movements of the VR user) and a large field of regard (FOR) (the overall size of the visual field a VR user can cover by means of head movement) \citep{bowman2007virtual}. While some previous studies have found that the scale of virtual worlds is not always perceived correctly, it has been shown that this effect can be mitigated if participants are allowed to traverse such environments on foot \citep{wilson2015use, kelly2013more}. 
An investigation into the applicability of immersive virtual reality for studying road crossing decisions based on time to arrival found that, while there are differences in the estimated vehicle speed between real-world and virtual scenarios, these did not have a measurable effect on pedestrian crossing decisions \citep{bhagavathula2018reality}. \input{trajectories} \section{Methods}\label{sec:methods} \begin{figure} \begin{center} \includegraphics[width=0.8\columnwidth]{images/vive_with_tpcast} \caption{HTC Vive Virtual Reality Headset with TPCast Wireless Transceiver} \label{fig:vive} \end{center} \end{figure} \begin{figure*} \centering \begin{subfigure}[b]{0.33\paperwidth} \includegraphics[width=0.33\paperwidth]{images/setup_photo} \caption{Participant with HMD stepping into virtual street} \label{fig:setup_photo} \end{subfigure} \hspace{0.01in} \begin{subfigure}[b]{0.47\paperwidth} \includegraphics[width=0.47\paperwidth]{images/setup_illustration_all} \caption{Diagram of the experimental setup} \label{fig:setup_illustration} \end{subfigure} \caption{Experimental Setup: (1) Participant with HMD, Wireless Transceiver and Battery, (2) Experimenter, (3) Walkable VR space (6 m x 2 m), (4) Virtual Curb, (5) End of Lane, (6) Simulation Computer, (7) VR Transceiver, (8) Tracking Base-Station} \label{fig:setup} \end{figure*} To understand the potential for social cues in vehicle kinematics, we studied the reaction of pedestrians towards vehicles exhibiting different kinds of behaviors in a road crossing situation. We engineered these behaviors to juxtapose interactions which comply with what we expected to be the social convention of such interactions (with reference also to \citep{risto2017human}) with behaviors which would be unexpected. For both scenarios the time available to the pedestrian to cross is kept identical. 
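The constraint of an identical time to arrival (TTA) across visibly different motion profiles follows from elementary kinematics. The sketch below is a hypothetical simplification for the predetermined (non-interactive) case, not the actual trajectory generator used in the study: a vehicle accelerates uniformly from an initial speed to its final approach speed over a fixed time, then holds that speed, and the required starting distance is the area under the speed-time curve.

```python
def start_distance(v0_kph, vf_kph, t_acc, tta):
    """Starting distance (m) such that the vehicle reaches the pedestrian's
    crossing line exactly `tta` seconds after the trajectory is triggered:
    uniform acceleration from v0 to vf over `t_acc` s, then constant vf."""
    v0, vf = v0_kph / 3.6, vf_kph / 3.6  # convert km/h to m/s
    assert 0 <= t_acc <= tta
    # area under the speed-time curve: trapezoid (acceleration phase)
    # plus rectangle (constant-speed phase)
    return (v0 + vf) / 2 * t_acc + vf * (tta - t_acc)

# Three hypothetical profiles sharing a 15 km/h final speed and an 8 s TTA,
# analogous to an accelerating / constant / decelerating variant:
d_acc   = start_distance(5, 15, 4, 8)   # starts slow, speeds up
d_const = start_distance(15, 15, 0, 8)  # uniform speed throughout
d_dec   = start_distance(25, 15, 4, 8)  # starts fast, slows down
```

Each profile delivers the same 8\,s crossing window, only from a different starting distance, so any systematic difference in pedestrian response can be attributed to the kinematic cue rather than to the available time.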
Observing a difference in reaction between the regular and subversive vehicle behaviors would then allow us to conclude that participants derive cues towards the intentions of the vehicle from the vehicle kinematics, as our testing environment features no other means of communication from the vehicle. ``Social cues'' in the context of this paper refer to the presence of information aiding pedestrians in inferring the current and future behavior of the vehicle, beyond the pure physicality of the executed movement. The question is whether pedestrians view the movement of the vehicle simply as a function of distance over time or as decisions of an intelligent entity whose goals need to be aligned with their own. To study these interactions between pedestrians and vehicles we created an immersive virtual reality environment: \subsection{Setup and Virtual Environment}\label{sec:setup} Virtual reality offers multiple benefits in this situation: It allows us to explore edge cases in human-vehicle communication without any risk to our human participants in cases where the communication fails. The simulation inside a virtual environment further provides precise experimental control over the vehicle movements and repeatability of scenarios across participants, as well as precise data-recording mechanisms. We created our virtual reality setup using the \textit{Unity3D} game engine, which allows for quick prototyping and easy integration of virtual reality. The \textit{Head Mounted Display (HMD)} we chose for this experiment is the \textit{HTC Vive} (\figref{vive}). The tracking of the HMD allows the participant to traverse our virtual environment on foot with a natural range of motion. Our experiment makes use of a virtual staging environment, depicted in \figref{staging_environment}, where the participants remain between crossing attempts, with a marker for the crossing starting position. 
During crossing attempts participants are placed in an alleyway, 3 meters wide, 2 meters from the curb of the road. The walls of the alleyway extend up to 0.5 m from the road, preventing the participant from seeing any approaching vehicles until they have stepped out of their initial starting position. The road is 6 meters wide with a continuous yellow lane marking down the middle. \figref{topdown} shows an overview of the virtual environment, \figref{setup} shows the physical setup. As we only had 6 meters of total physical distance available for both the alley and the road (\figref{setup_illustration} - 3), we returned participants to the staging environment after crossing the first 2.5 m of the first lane (\figref{setup_illustration} - 5), giving them 1.5 meters of buffer space to decelerate. \figref{walker} illustrates the interaction between a participant and a virtual test-vehicle via a visual mockup. \figref{fpv} shows how the participant perceives this interaction in the HMD. \subsection{Procedure}\label{sec:procedure} Participants were informed that the intention of the experiment would be to study how the behavior of oncoming vehicles would affect the decisions of pedestrians to cross the road. They were instructed to treat the virtual interactions as they would treat interactions in reality. They were specifically reminded to avoid any risks they would not take with real cars. They were further instructed to act as if in a hurry, to cross ``rather sooner than later'', however not at the risk of bodily harm. After the instructions the participants put on the Head Mounted Display and were familiarized with the virtual environment. We demonstrated the mechanism which warns VR users when they are about to approach the limits of the VR space and encouraged participants to explore the limits of the virtual environment before beginning the trial. 
Once they felt comfortable walking inside the environment wearing the HMD, we began the actual study with two introductory interactions. We demonstrated to the participants what would happen if they were to come into contact with the virtual vehicle (an acoustic signal and the immediate return to the staging environment), to discourage them from provoking a ``collision'' out of curiosity. In the second scenario we allowed the participants to cross the street in front of a stopped car to introduce them to the mechanism which would return them to the staging environment after traversing the first lane. For each of the crossing attempts, participants would go through the following steps: \begin{enumerate} \item The participant stands in a marked position in the staging environment, gazing at a second marker placed in the direction of our virtual street. \item The scene switches to the street environment, placing the participant in the alley with a limited view of the street. \item The participant walks out of the alley and sees a vehicle approach from the left. \item The moment the car becomes visible to the pedestrian, the trajectory is triggered. Due to this mechanism all participants experience the same \textit{time to arrival (TTA)} for each trajectory. The vehicle approaches the intersection in a straight line in the middle of the lane, with speed, starting distance and acceleration at any point in time being determined by the trajectory under test in the given attempt. \item The participant has to assess whether they want to try to physically walk across the first lane (3 m). \item The result and timing of all crossing events is logged automatically. Additionally the participant is requested to provide feedback on a series of questions. 
\end{enumerate} Participants were further asked to answer the following questions after each attempt: \begin{itemize} \item ``Describe briefly, what did the car do?'' \textit{(open question)} \item ``Would you say the car was accelerating, decelerating, going at a constant speed or doing something else?'' \textit{(4\,options)} \item ``How safe did you feel in this situation?'' \textit{(Likert Item)} \item ``Did the actions of the car surprise you?'' \textit{(yes/no)} \item ``How much trust did you have in this car?'' \textit{(Likert Item)} \item ``Do you believe the car reacted to your presence?'' \textit{(yes/no)} \item ``Would you have acted the same way in the real world?'' \textit{(yes/no)} \end{itemize} \begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth]{images/staging_environment} \caption{Virtual Staging Environment with Starting Position (red) and View Direction Indicator} \label{fig:staging_environment} \end{center} \end{figure} \subsection{Trajectories}\label{sec:trajectories} As stated before, our crossing scenarios were designed to gauge participant reactions towards different kinds of vehicle behaviors, with the goal of identifying a difference in participant reactions between vehicle behaviors designed to comply with social expectations and vehicle behaviors designed to subvert social expectations. To achieve this, the vehicles in our crossing scenarios followed different \textit{trajectories}. For our purposes a trajectory describes the behavior of an approaching vehicle by determining the vehicle speed and acceleration for any given point in time. Some of these trajectories were interactive, while others were following a predetermined acceleration curve. For the purpose of the aforementioned comparison we created two distinct groups of trajectories: \begin{description} \item[\tgroup{Yield} (\textcolor{tgreen}{\textbf{green}}):] Trajectories intended to comply with social expectations. 
These trajectories were designed to encourage pedestrians to cross the street. The vehicle slows down aggressively at a certain distance from the pedestrian but keeps rolling at a slow speed in order to elicit a decision for or against crossing. \item[\tgroup{subversion} (\textcolor{tred}{\textbf{red}}):] Trajectories in this category were designed with the intention to subvert social expectations. The trajectories display varying degrees of unusual vehicle behaviors, some are just confusing while others are outright malicious. Trajectories in this set are dynamic and react to the actions of the pedestrian, in many cases by accelerating towards them. \end{description} In addition to these basic attempts at communication we included two sets of trajectories to study if basic changes in acceleration would yield different reactions. Each of these two sets consists of three trajectories with a common final approach velocity and identical TTA. One trajectory starts at a lower velocity and accelerates towards the terminal velocity, one trajectory starts at a higher velocity and decelerates towards the terminal velocity, and finally one trajectory has no acceleration change for comparison. \begin{description} \item[\tgroup{15 kph Set} (\textcolor{tlightblue}{\textbf{light blue}}):] Three trajectories with 15 km/h as the final approach velocity of the vehicle, all with a TTA of 8s. \item[\tgroup{40 kph Set} (\textcolor{tdarkblue}{\textbf{dark blue}}):] Three trajectories with the final approach speed of 40 km/h and a TTA of 8s. \end{description} All trajectories up to this point shared a time to arrival between 8s and 9s, in order to make crossing decisions comparable between them. In addition to these we tested some trajectories with a lower TTA: \begin{description} \item[\tgroup{deterrent} (\textcolor{tgrey}{\textbf{grey}}):] Trajectories designed to be challenging to impossible to cross safely, with a time to arrival as low as two seconds. 
As trajectories from almost all other groups have a TTA of 8s or more, these are interspersed to prevent participants from believing that crossing the street is possible for all interactions, forcing them to carefully consider the decision to cross each time. \item[\tgroup{other} (\textcolor{tpurple}{\textbf{purple}}):] This group consists only of the trajectory \traj{braking\_on\_enter}. Vehicles following this trajectory have a comparatively low TTA of 4.8s, but will slow down if the participant steps into the lane of travel. \end{description} Excluding our introductory scenarios we tested a total of 15 trajectories. The individual trajectories are described in Table \ref{tab:trajectories}. Participants completed each trajectory once. The number of trajectories was limited to keep the duration of one session within thirty minutes. \subsection{Participants}\label{sec:participants} Participants were recruited from the immediate surroundings of our lab, members of the MIT Center for Transportation and Logistics not involved in the project. All participants reported living around the greater Cambridge and Boston area. Participants ranged from 22 to 55 years of age, the average age being 32.96, with a standard deviation of 9.15. The total number of participants was 22, 9 female and 13 male. Participants were compensated with bananas and donuts. \section{Results and Discussion}\label{sec:results} \paragraph{Road Crossing Decisions} \begin{figure} \includegraphics[width=\columnwidth]{images/graphs/all_trajectories} \caption{Results of crossing attempts. 
Label color indicates trajectory group.} \label{fig:trajectory_results} \end{figure} \begin{figure*}[tp] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{images/graphs/reacted} \caption{Participants' perception of the vehicle's reaction to their presence.} \label{fig:reacted} \end{subfigure} \hspace{0.01in} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{images/graphs/surprised} \caption{Participants surprised by vehicle behavior.} \label{fig:surprise} \end{subfigure} \caption{Participant reaction towards trajectories.} \label{fig:reactions} \end{figure*} We recorded a total of 328 individual crossing attempts, excluding two training attempts per participant. Two crossing attempts could not be recorded due to technical issues and were excluded from analysis. Excluding trajectories from the \tgroup{deterrent} (\textcolor{tgrey}{\textbf{grey}}) group as well as the trajectory \traj{conf\_distance\_mirr}, as those trajectories were designed to inhibit road-crossing, left 263 individual crossing opportunities to study crossing decisions. Out of those 263 attempts participants crossed in front of the approaching vehicle 81.75\% of the time. Four of the remaining cases resulted in collisions, the remainder are cases where participants decided not to cross or crossed after the vehicle. In the following, ``successful crossing'' will refer to crossing attempts completed by entering the street in front of the approaching vehicle without any collisions. This high success-rate for crossing opportunities fits the circumstances, as for all of these interactions the TTA was 8s and participants were primed to cross if possible. It is further consistent with the real-world observations in \citep{rothenbucher2016ghost} where the majority of pedestrians crossed in front of a seemingly autonomous vehicle even if it had shown a transgression towards them during its approach. 
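As a quick arithmetic check on these figures (the absolute count of 215 is our inference from the rounded percentage, not a number reported in the text), 81.75\% of the 263 opportunities corresponds to 215 successful crossings:

```python
attempts = 263    # crossing opportunities after exclusions
crossed = 215     # inferred: the only integer count that rounds to 81.75 %
rate = crossed / attempts
assert round(rate * 100, 2) == 81.75

collisions = 4    # reported collisions among the remaining attempts
not_crossed = attempts - crossed - collisions  # waited, or crossed behind
```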
\figref{trajectory_results} provides the success-rate for each trajectory, showing which percentage of participants crossed in front of the approaching vehicle, which percentage crossed after the vehicle had passed (or not at all) and which percentage of participants collided with the vehicle. Crossing decisions are an important metric given the long-term goal of influencing pedestrian crossing decisions as stated in \ref{sec:introduction}. Furthermore, deciding not to cross despite a sufficient gap-distance could be interpreted as a strong signal of a participant's reaction to the vehicle behavior in the given trajectory. Looking at trajectories with a lower TTA (see \tabref{trajectories}) in \figref{trajectory_results}, we can see that observations of previous studies regarding crossing decisions hold true in our environment, as these trajectories with a low TTA (five seconds or less), such as the \tgroup{deterrent} (\textcolor{tgrey}{\textbf{grey}}) trajectories as well as \traj{braking\_on\_enter}, show the least amount of crossings completed successfully. This is an argument towards the perceived realism of our simulation. \traj{conf\_distance\_mirr} has a high number of ``collisions'' as this trajectory did not offer any other solution to the scenario except waiting for the time limit to pass. \paragraph{Reacting to Presence} \begin{figure*} \centering \includegraphics[width=\textwidth]{images/transgression} \captionsetup{width=0.9\linewidth} \captionof{figure}{Participants Reacting Strongly To Transgressions From The Simulated Vehicle} \label{fig:transgression} \end{figure*} Given the overall goal of using vehicle kinematics as a means for communicating with pedestrians it is important that pedestrians perceive actions taken by the vehicle as a reaction to their presence, otherwise communication cannot occur, at least on a conscious level. 
\figref{reacted} shows which percentage of participants believed the actions of the vehicle were a reaction to their presence for each trajectory. This was self-reported by participants after each crossing attempt. It can be observed that the trajectories belonging to the two sets designed to communicate with pedestrians, the \tgroup{subversion} (\textcolor{tred}{\textbf{red}}) set as well as the \tgroup{yield} (\textcolor{tgreen}{\textbf{green}}) set, were indeed perceived as interactive by the largest percentage of participants. Furthermore we see that trajectories designed without the intention to communicate, such as the \tgroup{deterrent} (\textcolor{tgrey}{\textbf{grey}}) trajectories as well as trajectories featuring a ``uniform speed'', rank a lot lower in comparison. This strongly supports the possibility that trajectories can be used to intentionally convey information. Looking closer at the four trajectories belonging to the \tgroup{subversion} (\textcolor{tred}{\textbf{red}}) set, we see a difference between the trajectories meant to be irritating (\traj{conf\_jump\_stopped} and \traj{conf\_jump\_moving}) and the hostile trajectories (\traj{conf\_distance\_mirr} and \traj{conf\_malicious\_acc}), with the latter ones ranking lower in perceived interactivity. This is consistent with comments made by some participants who did not consider malicious behavior to be a possibility, providing statements such as \textit{``The fact that it accelerated into my path made me believe that was [originally] stopping for a factor that was not me''} (\traj{conf\_distance\_mirr}). Instead, such behavior was often attributed to negligence. In terms of breaking social conventions this would imply that the malicious behavior is so far removed from the expected norm that it is not even considered as a possibility for these interactions, which points towards the existence of a social norm. 
\paragraph{Subverted Expectations} \label{par:subverted_expectations} To determine if we succeeded in subverting the expectations of street-crossing interactions we queried our participants after every attempt if they were surprised by the behavior of the vehicle. \figref{surprise} shows for each trajectory which percentage of participants were surprised by the actions of the vehicle. The trajectories from the \tgroup{subversion} (\textcolor{tred}{\textbf{red}}) set were perceived as surprising by a greater percentage of participants than all other trajectories. It can therefore be stated that the \tgroup{subversion} (\textcolor{tred}{\textbf{red}}) trajectories succeeded in their design goal of subverting pedestrian expectations, which, in combination with the participant feedback we received, suggests that a social component in the interpretation of vehicle kinematics exists. \traj{15\_kph\_acceleration} was perceived as surprising by twice as many participants as the other two trajectories from the same set (\tgroup{15 kph Set}, \textcolor{tlightblue}{\textbf{light blue}}), suggesting that accelerating in the presence of pedestrians might be considered to be outside of the social norm; however, multiple participants also cited the slow initial speed of the vehicle as being unusual and the reason for their confusion. \begin{figure*} \centering \begin{minipage}[t]{.5\textwidth} \centering \includegraphics[width=\textwidth]{images/graphs/safety} \captionsetup{width=0.9\linewidth} \captionof{figure}[3in]{Perceived safety during interaction. Label color indicates trajectory group.} \label{fig:safety} \end{minipage}% \begin{minipage}[t]{.5\textwidth} \centering \includegraphics[width=\textwidth]{images/graphs/trust} \captionsetup{width=0.9\linewidth} \captionof{figure}{Trust in vehicle trajectory. 
Label color indicates trajectory group.} \label{fig:trust} \end{minipage} \end{figure*} \paragraph{Interpreting the Vehicle Behavior in a Social Context} \label{par:social_context} In searching for a social context in the interpretation of vehicle kinematics, the open feedback provided by participants was very instructive. Looking specifically at the subversive trajectories \traj{conf\_jump\_stopped} and \traj{conf\_jump\_moving}, we observed two different interpretations of the vehicle behavior by the participants. For both trajectories the car is either stopped or moving very slowly and then accelerates briefly when a participant approaches the curb while looking at the vehicle, before returning to the initial speed. Depending on the behavior of the participant this can be repeated multiple times (see also \tabref{trajectories}). Our participants were split in their interpretation of this behavior: The first group of participants believed the vehicle started to accelerate as a reaction to their presence, which is in accordance with the design of the trajectory. Some of these participants were perplexed by this behavior as we had intended, with their evaluation of the situation ranging from \textit{``they were kind of being annoying''}, \textit{``so weird''}, \textit{``unclear''} and \textit{``unpredictable''} to \textit{``it was intentionally trying to make me scared''}. The second group of participants assumed that the acceleration of the vehicle happened because the vehicle was not aware of their presence (\textit{``a failure of attention''}), while the deceleration was seen as a reaction to the vehicle registering their presence, with one participant explaining: \textit{``[I felt] high trust, because [the car] immediately braked when it saw me''}.
The following statements were of particular interest: \begin{itemize} \item \textit{``Call me paranoid, but the way it stopped I wasn't sure it wasn't going to accelerate as I started to cross.''} \item \textit{``He first accelerated like he wanted to be first but then stopped.''} \item \textit{``[It] felt like it was trying to intimidate me or something [it then stopped] to let me go, after he thought about possibly not letting me go.''} \item \textit{``It appeared the driver was not sure if they wanted to let me go or not''} \item \textit{``It was a \textbf{social thing} - you go, no, you go''} \item \textit{``The driver clearly saw me, but he did not see me right away so I did not know how much attention he was paying to me.''} \item \textit{``One strike for not seeing me in the beginning, but it then compensated for that by stopping.''} \end{itemize} It is important to emphasize that our questions always explicitly referred to ``the car'', meaning any mention of a driver as well as personifications using ``he'' are an unprompted choice by the individual participants. The previous exemplary statements not only show that the participants perceived a social component in the trajectory, but also that they were reflecting on the intentions of the vehicle as an entity in the context of their own actions and intentions. These responses strongly support the surprise metric as seen in \figref{surprise}. We believe the comments given are another strong indication of the sense of presence experienced by the participants in the simulation. This is further supported by the fact that some participants were gesturing towards the virtual vehicle and reacted very strongly to the ``physical'' presence of the vehicle, especially during the transgressions induced by the subversive trajectories, as can be seen in \figref{transgression}.
\paragraph{Trust and Safety} Our expectation was that the adherence of vehicles to a potential social construct would affect the predictability of their behavior and, by extension, how much pedestrians trust them in an interaction. \figref{trust} shows a Likert-item rating of the trust participants felt towards vehicles following the different trajectories, on a scale of ``1'' - ``no trust at all'' to ``5'' - ``complete trust''. The trajectories in \figref{trust} are ordered based on the total number of Likert-item responses given which are less than ``three''. We can see that the trajectories designed to subvert social expectations, \tgroup{subversion} (\textcolor{tred}{\textbf{red}}), and to discourage crossing altogether, \tgroup{deterrent} (\textcolor{tgrey}{\textbf{grey}}), did in fact receive the lowest trust ratings from our participants. The trajectory mirroring pedestrian behavior, \traj{conf\_distance\_mirr}, received a distinctly negative rating with the highest number of ``no trust at all'' ratings out of all trajectories. Several participants described the car as \textit{``playing a game''}, with one person labeling the vehicle a \textit{``psychopath''}. We can see that all of our subversive trajectories were in fact perceived as irritating. Since we prompted our participants ``to cross if possible'' as if they were in a hurry, it is hard to tell if under other conditions the diminished trust in the vehicle would have led to fewer decisions to cross in front of it. Looking back at \figref{trajectory_results}, it is an interesting observation that a majority of pedestrians still crossed despite feeling uneasy in the case of the \tgroup{subversion} (\textcolor{tred}{\textbf{red}}) trajectories.
In any case, the lower rating of trust compared to other trajectories not designed to subvert social expectations, such as the \tgroup{yield} (\textcolor{tgreen}{\textbf{green}}) trajectories, further supports the notion that vehicle kinematics are used to judge these interactions as having a social component. Besides asking about trust, we also asked participants to rate their feeling of safety in the interactions as a Likert-item (\figref{safety}). We were interested in whether the unpredictable nature of some of our trajectories would affect how safe participants would feel in these interactions. With reference to \figref{safety} we can see that participants reported feeling safe for a great majority of the interactions. This is not particularly surprising, as participants were instructed not to attempt a road crossing if the situation could result in injury were it to happen outside of our simulation. Nevertheless, it is interesting that one of our openly malicious trajectories, \traj{conf\_distance\_mirr}, received only ratings below a neutral ``3''. \paragraph{Judging Acceleration} The ability to communicate by means of vehicle kinematics requires the ability of pedestrians to perceive and identify how the vehicle moves, particularly if it is changing its velocity. To test this ability, we queried our participants after each interaction to sort the movement into one of the following categories: ``accelerating'', ``decelerating'' or ``going at a constant speed''. \figref{guess_answers} shows the responses per category given by participants for each trajectory. The figure features only those trajectories containing a single acceleration change, as labeling multiple sequential changes as occurred in some of the interactive trajectories would be significantly more complicated.
It can be observed that at higher speeds and greater distances a majority of participants default to ``constant speed'' independent of the presence of acceleration changes in the trajectory, as those changes become harder to observe, while at lower speeds, with the \tgroup{YIELD} (\textcolor{tgreen}{\textbf{green}}) set and the \tgroup{15\_kph\_set} (\textcolor{tlightblue}{\textbf{light blue}}), the majority of participants identify acceleration and deceleration correctly. This limit of perception constrains the situations in which communication via kinematics could be applied and requires further study. \section{Conclusion}\label{sec:conclusion} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{images/graphs/guess_answers} \caption{Vehicle acceleration behavior as observed by participants.} \label{fig:guess_answers} \end{center} \end{figure} Our goal was to study if pedestrians derive social cues from vehicle kinematics, if such interactions could be studied in virtual reality, and to estimate the potential of using vehicle kinematics for effective communication in autonomous vehicles. We confronted our participants with different vehicle kinematics, some of which were designed to subvert social expectations while others were intended to conform with expectations. We were able to show that our participants perceived the changes in vehicle motion as a direct reaction to their presence. We were also able to show that vehicles following intentionally atypical trajectories led to confusion and in some cases mistrust among participants, while more conventional trajectories did not. Previously, vehicle kinematics in the context of pedestrian interactions have been viewed as a matter of physics, with pedestrians assessing if the approaching vehicle leaves them enough time to cross its path of travel (evaluation of gap distance).
The data we collected and the remarks we received from our participants show that pedestrians evaluate vehicle kinematics beyond a consideration of time to arrival, as a social interaction from which they derive cues, going so far as to reflect on the driving entity's perception of their own intentions. We were able to make these observations in an immersive virtual reality simulation, which leads us to conclude that VR is a valid tool for further exploration of this concept. We believe that future work will enable the use of vehicle kinematics to communicate driving intentions to pedestrians. \section*{Acknowledgment} This work was in part supported by the Toyota Collaborative Safety Research Center. The views and conclusions expressed are those of the authors and do not necessarily reflect those of Toyota. \balance
\section{Introduction} The Boltzmann equation describes the underlying microscopic dynamics of dilute classical gases \cite{cercignanibook}. It is widely employed to model a variety of nonequilibrium phenomena in several areas of physics such as the dynamics of the hot hadronic matter produced in the late stages of ultrarelativistic heavy ion collisions \cite{Bass:1998ca,Petersen:2014yqa}, some aspects of the expansion of our universe in cosmology applications \cite{bernstein}, and the description of micro and nano-flows \cite{Struchtrup}, among others. In addition to these applications, exact solutions of the relativistic generalization of the Boltzmann equation \cite{degroot,cercignanikremer} in the relaxation time approximation \cite{BGK,AW} have recently been employed to improve our understanding of the domain of applicability of relativistic dissipative fluid dynamics in the context of relativistic heavy ion collisions \cite{Florkowski:2013lza,Florkowski:2013lya,Denicol:2014mca,Denicol:2014xca,Denicol:2014tha}. Even though it is less complete, the Anderson-Witting-Boltzmann (AWB) equation and its solutions can be used to understand certain properties of solutions of the Boltzmann equation itself, as well as its hydrodynamic limit. Analytic solutions of the relativistic Boltzmann equation are extremely rare (see \cite{Bazow:2015dha} for the first analytical solution in an expanding background). The same can be said even for simplified versions of the relativistic Boltzmann equation, such as the AWB equation. Recently, an exact solution of the AWB equation \cite{AW} was derived in \cite{Denicol:2014xca,Denicol:2014tha} for a conformal system undergoing simultaneous longitudinal and transverse expansion (for an extension involving anisotropic hydrodynamics see \cite{Nopoush:2014qba}).
The remarkable agreement between these solutions and those of relativistic dissipative fluid dynamics (under the same symmetries) has brought great insight into the validity of the hydrodynamic description of the evolution of the quark-gluon plasma. However, even in this case the solutions of the AWB equation were obtained using iterative numerical methods, and it was not known how to obtain analytic expressions for the momentum dependence of the single particle distribution function, $f$, and the spatial dependence of its moments. In this paper, we expand on the arguments developed in Refs.~\cite{Denicol:2014xca,Denicol:2014tha} to obtain a new fully analytical solution for the single particle distribution function of the AWB equation for conformal kinetic systems. The key difference with respect to the exact solutions previously derived in \cite{Denicol:2014xca,Denicol:2014tha} involves the global symmetries imposed on the conformal system. The symmetry assumptions \cite{Gubser:2010ze,Gubser:2010ui,Marrochio:2013wla} previously employed in \cite{Denicol:2014xca,Denicol:2014tha} were more applicable to the matter created in ultracentral relativistic heavy ion collisions, while in this work we broaden our focus and consider symmetries more appropriate for conformal systems undergoing three-dimensional radial expansion, such as the early universe\footnote{An important distinction with respect to the physics of the early universe is that here we still consider an underlying flat spacetime.}. We note that the same set of symmetries has already been imposed on conformal fluids in \cite{Hatta:2014gqa,Hatta:2014gga} in order to find the first analytical solutions of second order conformal fluid dynamics. Possessing an analytical solution for $f$ allows us to directly explore important technical aspects of kinetic theory, such as the imposition of matching conditions, the decomposition of $f$ in its moments in a nontrivial setting, as well as its positivity.
More importantly, this analytical solution has also revealed a new feature of conformally invariant, radially expanding systems described by the AWB equation: the ability to flow as a perfect fluid even though the overall dynamics is intrinsically dissipative (e.g., the non-equilibrium entropy component is nonzero). In fact, we show that in this solution the energy-momentum tensor is exactly that of an ideal fluid at any spacetime point (even though the shear viscosity coefficient is nonzero) while the entropy density, computed directly using the full distribution function, is different than its ideal limit. In this case, this non-equilibrium contribution to the entropy density is due to higher order scalar moments (which possess no hydrodynamical interpretation) of the Boltzmann equation \cite{Denicol:2012cn} that remain out of equilibrium while the energy-momentum tensor retains its local equilibrium form. Therefore, in the system considered here, slowly moving hydrodynamic degrees of freedom can exhibit true perfect fluidity while being totally decoupled from the fast moving, non-hydrodynamical microscopic scalar degrees of freedom that lead to entropy production. This paper is organized as follows. In the next section we briefly review how $\mathrm{AdS}_{2}\otimes \mathrm{S}_{2}$ invariant solutions of fluid dynamics were obtained in Refs.~\cite{Hatta:2014gqa,Hatta:2014gga}. In Sec.\ \ref{SecII} we derive the main results of this paper and solve the Anderson-Witting-Boltzmann equation for a conformal system in $\mathrm{AdS}_{2}\otimes \mathrm{S}_{2}$ geometry. We show in Sec.\ \ref{SecIII} how these solutions appear from the perspective of the method of moments. We then conclude with a summary of our results. Throughout this paper, we use natural units $\hbar =c=k_{B}=1$. 
\section{Relativistic hydrodynamics in $\mathrm{AdS}_{2}\otimes \mathrm{S}_{2}$} \label{SecI} We follow \cite{Hatta:2014gga} and consider the out-of-equilibrium dynamics of a conformal system in $\mathrm{AdS}_{2}\otimes \mathrm{S}_{2}$ geometry. This curved geometry is conformally equivalent to 4-dimensional Minkowski spacetime (in spherical coordinates), \begin{equation} d\hat{s}^{2}=\frac{-dt^{2}+dr^{2}+r^{2}d\Omega ^{2}}{r^{2}}=-\cosh ^{2}\rho \,d\tau ^{2}+d\rho ^{2}+d\Omega ^{2}\,, \end{equation} where $d\Omega ^{2}=d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}$ is the usual angular piece involving the angles $\theta \in \lbrack 0,\pi ]$ and $\phi \in \lbrack 0,2\pi ]$, while $\tau $ and $\rho $ are global $\mathrm{AdS}_{2}$ coordinates defined using the Minkowski time, $t$, and 3-dimensional spatial radius, $r$, in the following way \cite{Hatta:2014gga} \begin{equation} \tan \tau =\frac{L^{2}+r^{2}-t^{2}}{2Lt},\text{ \ \ \ \ \ \ }\cosh \rho =\frac{1}{2Lr}\sqrt{\left( L^{2}+(r+t)^{2}\right) \left( L^{2}+(r-t)^{2}\right) }\,, \label{definetaurho} \end{equation} with $L$ being the radius of $\mathrm{AdS}_{2}$. In this curved space, quantities evolve in $\tau$ while $\rho$ plays the role of a spatial radial coordinate. In this Weyl rescaled coordinate system the nonzero Christoffel symbols are \begin{equation} \Gamma _{\theta \phi }^{\phi }=\frac{1}{\tan \theta }\,,\qquad \Gamma _{\phi \phi }^{\theta }=-\cos \theta \,\sin \theta \,,\qquad \Gamma _{\tau \tau }^{\rho }=\cosh \rho \,\sinh \rho \,,\qquad \Gamma _{\tau \rho }^{\tau }=\tanh \rho \,.
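The coordinate map of Eq.\ (\ref{definetaurho}) can be cross-checked numerically. The following script is our own illustrative sketch (not part of the original derivation); it picks an arbitrary test point with $L=1$ and confirms by central finite differences that the pullback of $-\cosh^{2}\rho \,d\tau^{2}+d\rho^{2}$ reproduces $(-dt^{2}+dr^{2})/r^{2}$:

```python
# Numerical check (illustrative, not from the paper) that the map (t, r) -> (tau, rho)
# pulls -cosh^2(rho) dtau^2 + drho^2 back to (-dt^2 + dr^2)/r^2.
import numpy as np

L = 1.0  # AdS2 radius (arbitrary choice for this test)

def tau(t, r):
    return np.arctan((L**2 + r**2 - t**2) / (2.0 * L * t))

def rho(t, r):
    # arccosh returns |rho|; the metric involves only even powers of rho,
    # so the sign ambiguity does not affect the check away from rho = 0.
    return np.arccosh(np.sqrt((L**2 + (r + t)**2) * (L**2 + (r - t)**2)) / (2.0 * L * r))

t0, r0, h = 0.3, 0.7, 1e-6  # test point and step for central differences

def grad(f):
    ft = (f(t0 + h, r0) - f(t0 - h, r0)) / (2 * h)
    fr = (f(t0, r0 + h) - f(t0, r0 - h)) / (2 * h)
    return ft, fr

tau_t, tau_r = grad(tau)
rho_t, rho_r = grad(rho)
c2 = np.cosh(rho(t0, r0))**2

g_tt = -c2 * tau_t**2 + rho_t**2            # expected: -1/r0^2
g_tr = -c2 * tau_t * tau_r + rho_t * rho_r  # expected: 0
g_rr = -c2 * tau_r**2 + rho_r**2            # expected: +1/r0^2
```

The three components agree with $-1/r^{2}$, $0$, and $+1/r^{2}$ to finite-difference accuracy; the angular part matches trivially since $d\Omega^{2}$ is untouched by the map.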
\end{equation} The energy-momentum tensor, $T^{\mu \nu }$, of a relativistic conformal fluid is usually decomposed in terms of the time-like (normalized) local velocity field, $u^{\mu }$, as \begin{equation*} T^{\mu \nu }=\varepsilon u^{\mu }u^{\nu }+P\Delta ^{\mu \nu }+\pi ^{\mu \nu }\text{.} \end{equation*} Above, we introduced the energy density $\varepsilon \equiv u_{\mu }u_{\nu }T^{\mu \nu }$, the thermodynamic pressure $P\left( \varepsilon \right) =\varepsilon /3$, and the shear stress tensor $\pi ^{\mu \nu }\equiv \Delta _{\alpha \beta }^{\mu \nu }T^{\alpha \beta }$. We further defined the projection operator onto the space orthogonal to $u^{\mu }$, $\Delta ^{\mu \nu }=g^{\mu \nu }+u^{\mu }u^{\nu }$, and the double, symmetric, traceless projection operator $\Delta _{\alpha \beta }^{\mu \nu }=\left( \Delta _{\alpha }^{\mu }\Delta _{\beta }^{\nu }+\Delta _{\alpha }^{\nu }\Delta _{\beta }^{\mu }\right) /2-\Delta ^{\mu \nu }\Delta _{\alpha \beta }/3$. Our convention is to define the fluid velocity using the Landau picture, $T^{\mu \nu }u_{\nu }=-\varepsilon u^{\mu }$, which implies that the energy diffusion is always zero. The bulk viscous pressure of a conformal fluid is always zero, which means that the dissipative processes involving energy and momentum in such systems are solely governed by the shear stress tensor. The main equations of motion satisfied by this fluid are given by the conservation laws of energy-momentum, which we decompose in the following form, \begin{eqnarray} u_{\nu }D_{\mu }T^{\mu \nu } &=&u^{\mu }D_{\mu }\ln T+\frac{1}{3}D_{\mu }u^{\mu }+\frac{1}{3}\frac{\pi ^{\mu \nu }}{Ts}D_{\mu }u_{\nu }=0, \label{eq1} \\ \Delta _{\nu }^{\lambda }D_{\mu }T^{\mu \nu } &=&u^{\mu }D_{\mu }u^{\lambda }+\Delta ^{\lambda \mu }\partial _{\mu }\ln T+\Delta _{\nu }^{\lambda }D_{\mu }\pi ^{\mu \nu }=0, \label{eq2} \end{eqnarray} where $D_{\mu }$ is the general relativistic covariant derivative.
The equations above are then complemented by the equations of motion for the shear-stress tensor, $\pi ^{\mu \nu }$, which, at second order in gradients \cite{IS,Baier:2007ix,Denicol:2012cn}, correspond to a relaxation-type equation \begin{equation} \tau _{\pi }\Delta _{\alpha }^{\mu }\Delta _{\beta }^{\nu }u^{\lambda }D_{\lambda }\pi ^{\alpha \beta }+\pi ^{\mu \nu }=-2\eta \sigma ^{\mu \nu }-\frac{4}{3}\tau _{\pi }\pi ^{\mu \nu }D_{\lambda }u^{\lambda }+\frac{10}{7}\tau _{\pi }\pi ^{\lambda \left\langle \mu \right. }\sigma _{\lambda }^{\left. \nu \right\rangle }+\text{higher-order terms}, \label{eq3} \end{equation} where $\eta $ is the shear viscosity and $\tau _{\pi }$ is the shear relaxation time. For a conformal fluid, the shear viscosity must be proportional to the entropy density, $\eta \sim s$, while the shear relaxation time must be inversely proportional to the temperature, $\tau _{\pi }\sim 1/T$. Above, we introduced the shear tensor of the fluid, $\sigma ^{\mu \nu }=D^{\left\langle \mu \right. }u^{\left. \nu \right\rangle }$. The brackets $\left\langle {}\right\rangle $ denote the transverse and traceless projection of a tensor, $A^{\left\langle \mu \nu \right\rangle }=\Delta _{\alpha \beta }^{\mu \nu }A^{\alpha \beta }$. The hydrodynamical solution studied in \cite{Hatta:2014gga} was constructed using a static though non-uniform local velocity, $u_{\mu }=(-\cosh \rho ,0,0,0)$, in $\mathrm{AdS}_{2}\otimes \mathrm{S}_{2}$ space with coordinates $\left( \tau ,\rho ,\theta ,\phi \right) $. This implies that the system is undergoing a certain type of spherically symmetric radial flow in the usual Minkowski coordinates that is equivalent to the conformal soliton flow first introduced in \cite{Friess:2006kw} in the context of the gauge/gravity duality \cite{Maldacena:1997re} (see, e.g., \cite{Hatta:2014gga} for more details about our flow velocity in Minkowski coordinates).
With this static flow configuration in $\mathrm{AdS}_{2}\otimes \mathrm{S}_{2}$, the expansion rate of the fluid vanishes, i.e., $D_\mu u^\mu =0$, and so does the shear tensor, $\sigma ^{\mu \nu }=0$. Thus, Eqs.\ (\ref{eq1}) and (\ref{eq2}) can only be satisfied if the temperature and $\pi^{\mu\nu}$ depend solely on the spatial coordinate $\rho$, e.g., $T\left( \tau ,\rho ,\theta ,\phi \right) \rightarrow T\left( \rho \right) $. Moreover, note that in this space $\pi^{\mu\nu}$ is trivial: a quick look at Eq.\ \eqref{eq3} (and its generalization including terms involving higher order derivatives of the flow) reveals that in this problem $\pi^{\mu\nu}$ is identically zero. In fact, since here the flow is static and $\sigma^{\mu\nu}=0$, $D_\mu u^\mu =0$, and $\pi^{\mu\nu}=\pi^{\mu\nu}(\rho)$, in our conformal theory there are no dynamical sources available to induce a nontrivial spatial profile for the shear stress tensor, which must then vanish everywhere. If nonlinear terms quadratic (or of higher order) in $\pi^{\mu \nu }$ were present in (\ref{eq3}), nontrivial solutions of these homogeneous algebraic equations for $\pi^{\mu\nu}$ could be found \cite{Hatta:2014gqa,Hatta:2014gga}, but those would necessarily assume that $\pi^{\mu \nu}$ is nonzero for any value of $\rho$. Therefore, this nontrivial branch of solutions is not smoothly connected to the usual hydrodynamic gradient expansion, for which, in this problem, the first-order Navier-Stokes contribution vanishes. In any case, this type of solution is not going to play a role in our discussion since nonlinear terms in $\pi^{\mu\nu}$ cannot appear in an effective hydrodynamic theory obtained from the Boltzmann equation with a linearized collision term \cite{Denicol:2010xn}, such as the AWB equation. Thus, one can safely set $\pi^{\mu\nu}=0$ in the following.
Also, since $\pi _{\mu \nu }$ transforms covariantly under Weyl transformations \cite{Baier:2007ix}, the fact that this quantity vanishes in $\mathrm{AdS}_{2}\otimes \mathrm{S}_{2}$ implies that it will also vanish in Minkowski coordinates. In this case, the momentum equation \eqref{eq2} leads to an equation of motion for the temperature that can be easily solved \cite{Hatta:2014gga} \begin{equation} \partial _{\rho }\ln T=-\tanh \rho \Longrightarrow T\left( \rho \right) \sim (\cosh \rho)^{-1}\,. \label{defineT} \end{equation} The interesting feature of this solution is that it corresponds to the solution of an ideal fluid. This happened without making any assumptions about the magnitude of the shear viscosity coefficient -- it simply appeared as a feature of this highly symmetrical flow configuration. That is, even though the system in principle has a nonzero shear viscosity coefficient, its hydrodynamic degrees of freedom cannot dissipate since all gradients are exactly zero in $\mathrm{AdS}_{2}\otimes \mathrm{S}_{2}$ (note that dissipation via bulk viscosity is forbidden due to exact conformal invariance). In Minkowski space, the temperature evolves in time as it would in a genuine, dissipationless fluid. In the next sections we investigate the same problem of out-of-equilibrium $\mathrm{AdS}_{2}\otimes \mathrm{S}_{2}$ dynamics from a kinetic theory perspective using the AWB equation. We then clarify which non-hydrodynamic degrees of freedom of the microscopic theory are responsible for dissipation in this case and why such degrees of freedom do not couple with the hydrodynamic modes.
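Equation \eqref{defineT} can be verified symbolically in one line; the following sketch is our own consistency check, not part of the original derivation:

```python
# Symbolic check that T(rho) = T0 / cosh(rho) satisfies d(ln T)/drho = -tanh(rho),
# i.e., Eq. (defineT). Illustrative sketch, not from the paper.
import sympy as sp

rho, T0 = sp.symbols('rho T_0', positive=True)
T = T0 / sp.cosh(rho)
residual = sp.simplify(sp.diff(sp.log(T), rho) + sp.tanh(rho))
# residual simplifies to 0, confirming the solution
```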
\section{Anderson-Witting-Boltzmann equation} \label{SecII} The \textit{on-shell} AWB equation in curved spacetime is \cite{Denicol:2014xca,Denicol:2014tha} \begin{equation} p^{\mu }\partial _{\mu }f+\Gamma _{\mu i}^{\lambda }p_{\lambda }p^{\mu }\,\frac{\partial f}{\partial p_{i}}=\frac{p^{\mu }u_{\mu }}{\tau _{\mathrm{rel}}}\left( f-f_{\mathrm{eq}}\right) \,, \label{AWBeq} \end{equation} where the distribution function $f=f(x^{\mu },p_{i})$ is defined in a 7-dimensional phase space \cite{debbasch} in which each point is described by seven coordinates, i.e., the $\mathrm{AdS}_{2}\otimes \mathrm{S}_{2}$ spacetime coordinates $x^{\mu }=(\tau ,\rho ,\theta ,\phi )$ and the three spatial covariant momentum components $p_{i}=(p_{\rho },p_{\theta },p_{\phi })$. The zeroth component of the momentum is obtained from the on-shell condition for massless particles, $p_{\mu }p^{\mu }=0$. Moreover, $f_{\mathrm{eq}}=\exp \left( p^{\mu }u_{\mu }/T\right) $ is the local equilibrium distribution function for massless particles with Boltzmann statistics, $T$ is the local temperature, $u^{\mu }$ is the local velocity of the system, and $\tau _{\mathrm{rel}}$ is the relaxation time associated with the collision operator. Conformal invariance imposes that the relaxation time must be inversely proportional to the temperature, $\tau _{\mathrm{rel}}=c/T$, with $c$ being a constant that is directly related to the shear viscosity to entropy density ratio, $\eta /s=c/5$ \cite{Denicol:2010xn,Denicol:2011fa} (thus, the free streaming limit corresponds to $c\rightarrow \infty $). At first glance, it may appear that the AWB equation is a linear equation in $f$. However, we note that Eq.\ \eqref{AWBeq} must be solved simultaneously with the equations of motion for the temperature and velocity, Eqs.\ (\ref{eq1}) and (\ref{eq2}).
In these, one must also use the definition of the shear stress tensor of a dilute single component gas, \begin{equation*} \pi ^{\mu \nu }=T^{\left\langle \mu \nu \right\rangle }=\int \frac{d^{3}p}{\left( 2\pi \right) ^{3}}\frac{p^{\left\langle \mu \right. }p^{\left. \nu \right\rangle }}{p^{\tau }\sqrt{-g}}f. \end{equation*} In the end, one has a coupled set of nonlinear integro-differential equations for $f$, $T$, and $u^{\mu }$. It is usually very challenging to solve these types of equations, even numerically. However, as mentioned above, exact solutions of this system of equations have been recently obtained using iterative numerical methods \cite{Florkowski:2013lza,Denicol:2014xca,Denicol:2014tha}. For the type of flow and symmetries considered in this paper, we demonstrate in the following sections that it is possible to obtain analytic solutions of this system of equations. We note that the collisionless limit of a system with a flow equivalent to ours in Minkowski space was previously studied in \cite{Nagy:2009eq} using very different techniques than the ones used below. \subsection{Analytic Solution} As mentioned in the previous section when we discussed the fluid dynamical equations, the symmetry of the static flow imposes that $u_{\mu }=(-\cosh \rho ,0,0,0)$. Also, for this type of static flow $f$ may depend only on the spatial coordinates $\rho $, $\theta $, and $\phi $ (though we shall see that $f$ does not depend on $\phi $ in the end) and their corresponding momenta. Since in the AWB equation the collision term is approximated to be linear in $f-f_{\mathrm{eq}}$, it is impossible for terms quadratic or quartic in $\pi ^{\mu \nu }$ to appear in the equation of motion for $\pi ^{\mu \nu }$ at any order in the hydrodynamic series \cite{Denicol:2010xn,Denicol:2012cn}.
Such terms can only originate from the nonlinear terms of the collision operator and, assuming that higher order tensorial moments \cite{Denicol:2012cn} initially vanish, the shear stress tensor constructed using the solution $f$ of Eq.\ \eqref{AWBeq} must be zero. Therefore, since the bulk viscous pressure has to be zero due to the underlying conformal invariance, the temperature that enters the AWB equation will satisfy Eq.\ \eqref{defineT} with solution \begin{equation} T\left( \rho \right) =\frac{T_{0}}{\cosh \rho }\,, \end{equation} where $T_0$ is a constant. Note that this is not usually the case and in general the temperature has to be solved simultaneously with the AWB equation \cite{Denicol:2014xca,Denicol:2014tha}. The fact that the velocity profile is static and the temperature profile can be solved for analytically will be extremely useful here, since it allows us to find analytical solutions of the AWB equation for this system. These solutions for $T$ and $u^{\mu }$ considerably simplify the expressions for the local equilibrium distribution function and the relaxation time, which take the following form \begin{eqnarray} f_{\mathrm{eq}} &=&\exp \left[ -p^{\tau }\cosh \rho /T\left( \rho \right) \right] \,, \label{important} \\ \tau _{\mathrm{rel}} &=&\frac{c}{T\left( \rho \right) }=\frac{c}{T_{0}}\cosh \rho \,, \label{important2} \end{eqnarray} where $p^{\tau }=\sqrt{p_{\rho }^{2}+p_{\theta }^{2}+\left( p_{\phi }^{2}/\sin ^{2}\theta \right) }/\cosh \rho $.
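The quoted expression for $p^{\tau}$ follows from the massless on-shell condition in the metric $g_{\mu\nu}=\mathrm{diag}(-\cosh^{2}\rho ,1,1,\sin^{2}\theta )$. A small symbolic check (our own sketch, not part of the paper) reads:

```python
# Verify symbolically that p^tau = sqrt(p_rho^2 + p_theta^2 + p_phi^2/sin^2(theta)) / cosh(rho)
# solves the massless on-shell condition p_mu p^mu = 0 in AdS2 x S2, with the metric
# diag(-cosh^2 rho, 1, 1, sin^2 theta). Illustrative sketch, not from the paper.
import sympy as sp

rho, theta = sp.symbols('rho theta', positive=True)
p_rho, p_theta, p_phi = sp.symbols('p_rho p_theta p_phi', real=True)

ptau = sp.sqrt(p_rho**2 + p_theta**2 + p_phi**2 / sp.sin(theta)**2) / sp.cosh(rho)
p_tau = -sp.cosh(rho)**2 * ptau  # covariant component: p_tau = g_{tau tau} p^tau

# p_mu p^mu = g^{tau tau} p_tau^2 + p_rho^2 + p_theta^2 + p_phi^2 / sin^2(theta)
mass_shell = sp.simplify(-p_tau**2 / sp.cosh(rho)**2
                         + p_rho**2 + p_theta**2 + p_phi**2 / sp.sin(theta)**2)
# mass_shell evaluates to 0
```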
The AWB equation then becomes \begin{eqnarray} &&p_{\rho }\partial _{\rho }f-\tanh \rho \left( p_{\rho }^{2}+p_{\theta }^{2}+\frac{p_{\phi }^{2}}{\sin ^{2}\theta }\right) \,\frac{\partial f}{\partial p_{\rho }}+p_{\theta }\partial _{\theta }f \notag \\ &+&\frac{1}{\tan \theta }\frac{p_{\phi }^{2}}{\sin ^{2}\theta }\frac{\partial f}{\partial p_{\theta }}=-\frac{T_{0}}{c}\frac{1}{\cosh \rho }\sqrt{p_{\rho }^{2}+p_{\theta }^{2}+\frac{p_{\phi }^{2}}{\sin ^{2}\theta }}\,\left( f-f_{\mathrm{eq}}\right) \,, \label{Great} \end{eqnarray} where we used Eq.\ (\ref{important2}). We note that $f_{\mathrm{eq}}$ itself satisfies this equation, as is expected for a stationary solution (see also the collisionless study of \cite{Nagy:2009eq}). We also remark that there are no terms including $\partial f/\partial p_{\phi }$, which is consistent with spherical symmetry in these coordinates and, thus, $f$ does not depend on $\phi $. It is then easy to see that the general solution of this equation can be written as a sum of an equilibrium piece and a non-equilibrium part as follows: $f(\rho ,\theta ;p_{\rho },p_{\theta },p_{\phi })=f_{\mathrm{eq}}+f_{\mathrm{eq}}\Phi (\rho ,\theta ;p_{\rho },p_{\theta },p_{\phi })$, where the non-equilibrium piece is \begin{equation} \Phi (\rho ;p_{\rho },p_{\Omega })=\mathcal{J}\left( \frac{\sqrt{p_{\rho }^{2}+p_{\Omega }^{2}}\cosh \rho }{T_{0}}\right) \,\exp \left[ -\frac{T_{0}}{c}\,\frac{p_{\rho }}{|p_{\rho }|}\,\mathrm{arctan}\left( \sinh \rho \sqrt{1+\frac{p_{\Omega }^{2}}{p_{\rho }^{2}}}\right) \right] . \label{phisolution} \end{equation} Here, $\mathcal{J}(\gamma )$ is an arbitrary function of its argument $\gamma $ and we have defined the short-hand notation $p_{\Omega }^{2}\equiv p_{\theta }^{2}+\left( p_{\phi }^{2}/\sin ^{2}\theta \right) $. By taking $c\rightarrow \infty $, one can see that $\mathcal{J}$ is actually the solution of this equation in the free-streaming limit, $\mathcal{J}=\Phi _{\mathrm{free-streaming}}$.
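The statement that $f_{\mathrm{eq}}$ satisfies Eq.\ (\ref{Great}) can be verified symbolically; the sketch below is our own check (not part of the paper) that the left-hand side vanishes on $f_{\mathrm{eq}}$, while the collision term on the right-hand side vanishes trivially for $f=f_{\mathrm{eq}}$:

```python
# Check that f_eq = exp(-E cosh(rho)/T0), with E = sqrt(p_rho^2 + p_theta^2 + p_phi^2/sin^2 theta),
# annihilates the left-hand side of Eq. (Great). Illustrative sketch, not from the paper;
# positivity assumptions on the momenta are only for branch-free symbolic manipulation.
import sympy as sp

rho, theta, T0 = sp.symbols('rho theta T_0', positive=True)
p_rho, p_theta, p_phi = sp.symbols('p_rho p_theta p_phi', positive=True)

E = sp.sqrt(p_rho**2 + p_theta**2 + p_phi**2 / sp.sin(theta)**2)
feq = sp.exp(-E * sp.cosh(rho) / T0)

lhs = (p_rho * sp.diff(feq, rho)
       - sp.tanh(rho) * E**2 * sp.diff(feq, p_rho)
       + p_theta * sp.diff(feq, theta)
       + (p_phi**2 / (sp.tan(theta) * sp.sin(theta)**2)) * sp.diff(feq, p_theta))

lhs = sp.simplify(lhs)  # the derivative terms cancel pairwise, giving 0
```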
As will be discussed in the following, the functional form of $\mathcal{J}$ can be determined by using the matching condition for the energy density while requiring that $f$ is positive-definite at any point of phase space. As far as we are aware, this is the first analytical solution of the AWB equation that describes a radially expanding system. \subsection{Matching condition and positivity} In kinetic theory it is quite common to define the temperature of the system by requiring that the energy density of the system is solely determined by its equilibrium value, \begin{equation*} \varepsilon =u_{\mu }u_{\nu }T^{\mu \nu }=\varepsilon _{\mathrm{eq}}\left( T\right) . \end{equation*} This condition implies that the following integral must always vanish \begin{equation} \int \frac{\,d^{3}p}{(2\pi )^{3}}\,\frac{p^{\tau }\cosh \rho }{\,\sin \theta }\,f_{\mathrm{eq}}\,\Phi (\rho ,\theta ;p_{\rho },p_{\theta },p_{\phi })\,=0. \end{equation} Using the analytic solution derived in the previous section, Eq.\ (\ref{phisolution}), it is possible to reduce this integral to a considerably simpler form \begin{equation} \int_{0}^{\infty }d\gamma \,\gamma ^{3}\mathcal{J}(\gamma )\exp \left( -\gamma \right) =0\,. \label{energycondition} \end{equation} Now, the condition (\ref{energycondition}) can be used to determine $\mathcal{J}(\gamma )$. For simplicity, in this work we consider a polynomial \textit{Ansatz} \begin{equation} \mathcal{J}(\gamma )\sim a\gamma -1\,, \end{equation} and one can easily find that condition (\ref{energycondition}) is met as long as $a=1/4$. Therefore, \begin{equation} \mathcal{J}(\gamma )\sim \frac{\gamma }{4}-1\,. \end{equation} Note that this function is not positive-definite for $\gamma \in \lbrack 0,4]$. However, we still have the freedom to fix the overall multiplicative constant. A mandatory physical constraint is that, in the end, the distribution function must be a non-negative real-valued function of its arguments.
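The determination of $a$ amounts to elementary Gamma-function integrals, $\int_{0}^{\infty }d\gamma \,\gamma ^{n}e^{-\gamma }=n!$; a short symbolic check (our own addition) reads:

```python
# Check that the matching condition, Eq. (energycondition), with the Ansatz
# J(gamma) = a*gamma - 1 fixes a = 1/4. Illustrative sketch, not from the paper.
import sympy as sp

gamma, a = sp.symbols('gamma a', positive=True)
I = sp.integrate(gamma**3 * (a * gamma - 1) * sp.exp(-gamma), (gamma, 0, sp.oo))
# I = 24*a - 6, since Gamma(5) = 24 and Gamma(4) = 6
sol = sp.solve(sp.Eq(I, 0), a)  # -> [1/4]
```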
In fact, positivity can be obtained as follows. First, note that the sign of the exponent in our solution for $\Phi $, in Eq.\ (\ref{phisolution}), is determined by the sign of $p_{\rho }$: in the limit of $\rho \rightarrow \infty $, the solution is bounded by $\exp \left[ -\pi T_{0}/(2c)\right] $, when $p_{\rho }>0$, and by $\exp \left[ \pi T_{0}/(2c)\right] $, when $% p_{\rho }<0$. To make sure that $f$ is positive-definite and, at the same time, that $\lim_{c\rightarrow 0}f=f_{\mathrm{eq}}$ (i.e., for a vanishing relaxation time one must recover the local equilibrium) we fix the overall multiplicative constant to be $\exp \left[ -T_{0}\pi (1+\xi )/(2c)\right] $ with $\xi >0$ and, thus, \begin{equation} \mathcal{J}(\gamma )=\left( \frac{\gamma }{4}-1\right) \exp \left[ -\frac{% \pi T_{0}}{2c}(1+\xi )\right] \,. \label{defineJ} \end{equation}% In principle, other forms of $\mathcal{J}(\gamma )$ may be used in order to achieve the same outcome, which would then generate a class of solutions of the AWB equation. In this work, however, we limit our discussion to the form % \eqref{defineJ} for $\mathcal{J}(\gamma )$. It is instructive to study the dependence of $f$ on some of its arguments. For instance, for $\rho =0$ \begin{equation} \frac{f}{f_{\mathrm{eq}}}\Big|_{\rho =0}=1+\exp \left[ -\frac{\pi T_{0}}{2c}% (\xi +1)\right] \left( \frac{1}{4T_{0}}\sqrt{p_{\rho }^{2}+p_{\theta }^{2}+% \frac{p_{\phi }^{2}}{\sin ^{2}\theta }}-1\right) \label{dontcare} \end{equation}% while for $p_{\theta }=p_{\phi }=0$ \begin{equation} \frac{f}{f_{\mathrm{eq}}}\Big|_{p_{\theta },p_{\phi }=0}=1+\exp \left\{ -% \frac{T_{0}}{2c}\left[ \pi (1+\xi )+2\,\frac{p_{\rho }\,}{|p_{\rho }|\,}% \mathrm{Gd}(p_{\rho })\right] \right\} \left( \frac{1}{4T_{0}}|p_{\rho }|\cosh \rho -1\right) \,, \end{equation}% where $\mathrm{Gd}(x)=2\tan ^{-1}\left( \exp x\right) -\pi /2$ is the Gudermannian function. 
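Both observations are easy to probe numerically: the ratio in Eq.\ (\ref{dontcare}) is minimized at vanishing momentum, where it equals $1-\exp\left[-\pi T_{0}(1+\xi)/(2c)\right]>0$, and the Gudermannian function obeys $\mathrm{Gd}(x)=\int_{0}^{x}dt/\cosh t$ with $|\mathrm{Gd}(x)|<\pi/2$. A small check (Python, with illustrative parameter values):

```python
import math

T0, C, XI = 1.0, 0.5, 0.01   # illustrative parameter values

def ratio_at_origin(p):
    # f/f_eq at rho = 0, Eq. (dontcare), as a function of p = |momentum|
    return 1.0 + math.exp(-math.pi * T0 * (1 + XI) / (2 * C)) * (p / (4 * T0) - 1.0)

def gd(x):
    # Gudermannian function, Gd(x) = 2 arctan(e^x) - pi/2
    return 2.0 * math.atan(math.exp(x)) - math.pi / 2.0

def sech_integral(x, n=4000):
    # midpoint-rule approximation of the integral of sech(t) on [0, x]
    h = x / n
    return sum(h / math.cosh((i + 0.5) * h) for i in range(n))

# the minimum of f/f_eq at rho = 0 sits at p = 0 and is strictly positive
assert min(ratio_at_origin(0.1 * k) for k in range(500)) > 0.0
```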
One can see that these expressions are positive-definite and that they reduce to the equilibrium distribution in the zero mean free path limit $c\rightarrow 0$. \subsection{Non-equilibrium entropy} The local entropy density is computed using the solution for $f$ as follows \cite{degroot} \begin{equation} s=\frac{1}{(2\pi )^{3}}\int \frac{d^{3}p}{\sqrt{-g}\,p^{\tau }}u_{\mu }p^{\mu }\,\,f\left( \ln \,f-1\right) \,. \end{equation}% It is easy to show that the equilibrium result is $s_{\mathrm{eq}% }=4T^{3}(\rho )/\pi ^{2}$, which is what one would expect for an ideal conformal gas with degeneracy factor equal to one. From the form of $f$ assumed in this paper, $f=f_{\mathrm{eq}}\left( 1+\Phi \right) $, one can write the nonequilibrium correction to the entropy as \begin{equation} \Delta s\equiv s-s_{\mathrm{eq}}=\frac{1}{(2\pi )^{3}}\int \frac{d^{3}p}{\sqrt{-g}\,p^{\tau }}% u_{\mu }p^{\mu }\,f_{\mathrm{eq}}\left\{ \left( 1+\Phi \right) \ln \left[ 1+\Phi \right] +\Phi \left( \ln f_{\mathrm{eq}}-1\right) \right\} \,. \end{equation}% The second term can be reduced to \begin{eqnarray} \frac{1}{(2\pi )^{3}}\int \frac{d^{3}p}{\sqrt{-g}\,p^{\tau }}u_{\mu }p^{\mu }\,f_{\mathrm{eq}% }\Phi \left( \ln f_{\mathrm{eq}}-1\right) \,\, &=&\frac{T^{3}}{4\pi ^{2}}% \mathcal{H}(\rho )\left[ \int_{0}^{\infty }d\gamma \,\gamma ^{2}e^{-\gamma }\left( 1+\gamma \right) \mathcal{J}(\gamma )\right] \\ &=&-\frac{T^{3}}{8\pi ^{2}}\mathcal{H}(\rho )\,\exp \left[ -\frac{\pi T_{0}}{% 2c}\left( 1+\xi \right) \right] \,, \notag \end{eqnarray}% where \begin{equation} \mathcal{H}(\rho )=2\int_{0}^{1}dx\,\cosh \left[ \frac{T_{0}}{c}\mathrm{% arctan}\left( \frac{\sinh \rho }{x}\right) \right] \, \label{defineHfinal} \end{equation}% is a positive-definite function. When going from the first line to the second line above, we inserted the form of $\mathcal{J}(\gamma )$ obtained in the previous sections.
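The reduction used in the last step is elementary: stripping the constant factor $\exp \left[ -\pi T_{0}(1+\xi )/(2c)\right] $ from $\mathcal{J}$, the remaining integral is $\int_{0}^{\infty }d\gamma \,\gamma ^{2}e^{-\gamma }(1+\gamma )(\gamma /4-1)=\tfrac14 \Gamma (4)+\tfrac14 \Gamma (5)-\Gamma (3)-\Gamma (4)=-\tfrac12$, which produces the quoted factor $-T^{3}\mathcal{H}(\rho )/(8\pi ^{2})$. A numerical sketch (Python):

```python
import math

def entropy_moment(upper=60.0, n=20000):
    # Simpson quadrature of gamma^2 e^{-gamma} (1 + gamma)(gamma/4 - 1)
    h = upper / n
    g = lambda x: x**2 * math.exp(-x) * (1.0 + x) * (x / 4.0 - 1.0)
    s = g(0.0) + g(upper)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3.0

# term by term: Gamma(4)/4 + Gamma(5)/4 - Gamma(3) - Gamma(4) = 1.5 + 6 - 2 - 6
assert abs(entropy_moment() - (-0.5)) < 1e-6
```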
The full result is \begin{eqnarray} \frac{\Delta s}{s_{eq}} &=&-\int_{0}^{\infty }\frac{d\gamma }{16}\,\gamma ^{2}\,e^{-\gamma }\int_{0}^{1}dx\,\left\{ 1+\mathcal{J}(\gamma )\,\exp \left[ -\frac{T_{0}}{c}\tan ^{-1}\left( \frac{\sinh \rho }{x}\right) \right] \right\} \notag \\ &\times &\ln \left\{ 1+\mathcal{J}(\gamma )\,\exp \left[ -\frac{T_{0}}{c}% \tan ^{-1}\left( \frac{\sinh \rho }{x}\right) \right] \right\} \notag \\ &&-\int_{0}^{\infty }\frac{d\gamma }{16}\,\gamma ^{2}\,e^{-\gamma }\int_{0}^{1}dx\,\left\{ 1+\mathcal{J}(\gamma )\,\exp \left[ \frac{T_{0}}{c}% \tan ^{-1}\left( \frac{\sinh \rho }{x}\right) \right] \right\} \notag \\ &\times &\ln \left\{ 1+\mathcal{J}(\gamma )\,\exp \left[ \frac{T_{0}}{c}\tan ^{-1}\left( \frac{\sinh \rho }{x}\right) \right] \right\} -\frac{\mathcal{H}% (\rho )}{32}\,\exp \left[ -\frac{\pi T_{0}}{2c}(1+\xi )\right] . \label{entropyprod} \end{eqnarray} It is easy to see that $\Delta s(\rho )$ is even in $\rho$ and that $\Delta s(\rho )<0$, as expected on physical grounds. We show in Fig.\ \ref{fig:1} a plot of $\Delta s/s_{eq}$ as a function of $\rho$ for different values of $% T_0/c$. For small values of $T_0/c$ one can see that the full entropy density becomes different from the equilibrium one (though by a small amount) and that this effect becomes more pronounced for large values of $% \rho$, where it reaches a stationary value that depends on the parameters $c$ and $T_0$. \begin{figure}[t] \includegraphics[width=0.6\linewidth]{difentropy.eps} \caption{(Color online) Relative entropy production $\Delta s/s_{eq}$ in Eq.\ \eqref{entropyprod} for different values of $T_0/c$ (with fixed $% \protect\xi=0.01$).
The solid black line was computed using $T_0/c=10$, the dashed red curve is for $T_0/c=1$, while the dotted-dashed blue curve is for $T_0/c=0.1$.} \label{fig:1} \end{figure} Note that $\Delta s/s_{eq}$ in Eq.\ \eqref{entropyprod} does not change under Weyl transformations and, thus, one can find its value in flat spacetime via the simple substitution $\frac{\Delta s}{s_{eq}}(\rho) = \frac{\Delta s}{s_{eq}}(\rho(t,r))$. Also, since large values of $\rho$ correspond to large values of $t$ for fixed $r$ (see Eq.\ \eqref{definetaurho}), this quantity approaches a constant at large times in flat spacetime. From the result shown in Fig.\ \ref{fig:1}, one can see that the spatial integral of $\Delta s$ in flat spacetime \begin{equation} \frac{128 L^3 T_0^3}{\pi} \int_0^\infty dr\,\frac{r^2}{\left[ L^2 +(r+t)^2 \right]^{3/2}\left[ L^2 +(r-t)^2 \right]^{3/2}}\,\frac{\Delta s}{s_{eq}}\Big|_{\rho=\rho(t,r)} \end{equation} goes to zero when $t \to \infty$, which indicates that the entropy approaches its equilibrium value as time increases. The equation above was obtained using the fact that the equilibrium entropy density in flat spacetime is $s_{eq}(t,r) = 4 T^3(t,r)/(\pi^2 r^3)$. \section{Comparison to the method of moments} \label{SecIII} In order to better understand some features of the solution derived in the previous sections it is convenient to expand $\Phi =\left( f-f_{\mathrm{eq}% }\right) /f_{\mathrm{eq}}$ in terms of its moments, using irreducible tensors and a complete basis of polynomials \cite{Denicol:2012cn}. The irreducible tensors, $1$, $k^{\left\langle \mu \right\rangle }$, $% k^{\left\langle \mu \right. }k^{\left. \nu \right\rangle }$, $% k^{\left\langle \mu \right. }k^{\nu }k^{\left. \lambda \right\rangle }$, $% \cdots $, are used to expand the angular part of the single-particle distribution function. They form a complete and orthogonal set, analogously to the spherical harmonics \cite{degroot}, and are defined as $% k^{\left\langle \mu _{1}\right. }...k^{\left.
\mu _{m}\right\rangle }\equiv \Delta _{\nu _{1}...\nu _{m}}^{\mu _{1}...\mu _{m}}k^{\nu _{1}}...k^{\nu _{m}}$, where the transverse, symmetric, and traceless projectors $\Delta _{\nu _{1}...\nu _{m}}^{\mu _{1}...\mu _{m}}$ are defined in \cite{degroot}. Our solution in Eq.\ \eqref{phisolution} is anisotropic in momentum space and hence it possesses both scalar and higher-rank moments. For the sake of illustration, in this section we focus on the scalar moments of our solution. The scalar part of the distribution function is expanded using a set of orthogonal polynomials, $P_{\mathbf{k}n}^{(\ell )}=\sum_{r=0}^{n}a_{nr}^{(\ell )}\left( -u_{\mu }k^{\mu }\right) ^{r}$, where the coefficients $a_{nr}^{(\ell )}$ were calculated so that% \begin{equation} \frac{N_{\ell }}{\left( 2\ell +1\right) !!}\int \frac{dK}{\sqrt{-g}}\left( u_{\mu }k^{\mu }\right) ^{2\ell }P_{\mathbf{k}n}^{(\ell )}P_{\mathbf{k}% m}^{(\ell )}=\delta _{nm}, \end{equation}% using the Gram-Schmidt orthogonalization method as demonstrated in \cite% {Denicol:2012cn}. Here, we defined $dK=d^{3}k/\left[ \left( 2\pi \right) ^{3}k^{\tau}\right] $ and $N_{\ell }=(-1)^{\ell }/I_{2\ell ,\ell }$ where, for a nondegenerate massless gas of particles, \begin{equation*} I_{nq}=\frac{\left( n+1\right) !}{\left( 2q+1\right) !!}\frac{T^{n+2}}{2\pi ^{2}}. \end{equation*}% The irreducible tensors also satisfy orthogonality conditions,% \begin{equation} \int \frac{dK}{\sqrt{-g}}F_{\mathbf{k}}\,k^{\left\langle \mu _{1}\right. }\cdots k^{\left. \mu _{m}\right\rangle }\,k_{\left\langle \nu _{1}\right. }\cdots k_{\left. \nu _{n}\right\rangle }=\frac{m!\,\delta _{mn}}{\left( 2m+1\right) !!}\,\Delta _{\nu _{1}\cdots \nu _{m}}^{\mu _{1}\cdots \mu _{m}}\int \frac{dK}{\sqrt{-g}}\frac{N^{\ell }}{\left( 2\ell +1\right) !!}\,F_{\mathbf{k}}\left( u_{\mu }k^{\mu }\right) ^{2m}, \label{orthogonality1} \end{equation}% where $F_{\mathbf{k}}$ is an arbitrary function of $u_{\mu }k^{\mu }$.
Using this basis, the moment expansion of $\Phi $ is% \begin{equation*} \Phi =\sum_{\ell =0}^{\infty }\sum_{n=0}^{\infty }\mathcal{P}_{\mathbf{k}% n}^{(\ell )}\Theta _{n}^{\mu _{1}\cdots \mu _{\ell }}k_{\left\langle \mu _{1}\right. }\cdots k_{\left. \mu _{\ell }\right\rangle }, \end{equation*}% where the moments can be obtained using the orthogonality relations satisfied by the basis elements and are given by \begin{equation} \Theta _{n}^{\mu _{1}\cdots \mu _{\ell }}=\int \frac{d^{3}k}{(2\pi )^{3}\,\sqrt{-g}}\,\frac{(-k\cdot u)^{n}}{k^\tau}k^{\left\langle \mu _{1}\right. }\,\cdots k^{\left. \mu _{\ell }\right\rangle }f_{\mathrm{eq}}\Phi \,. \label{moment} \end{equation}% For the sake of convenience, we defined above% \begin{equation*} \mathcal{P}_{\mathbf{k}n}^{(\ell )}\equiv \frac{N_{\ell }}{\ell !}% \sum_{m=n}^{\infty }a_{mn}^{(\ell )}P_{\mathbf{k}}^{\left( n\ell \right) }. \end{equation*} We note that the scalar moments can also be calculated analytically, by inserting Eq.\ (\ref{phisolution}) into Eq.\ (\ref% {moment}). The solution is \begin{equation} \Theta _{n}=\frac{n-2}{16\pi ^{2}}\,T^{n+2}(\rho )\,\Gamma (n+2)\,\mathcal{H}% (\rho )\,\exp \left[ -\frac{\pi T_{0}}{2c}\left( 1+\xi \right) \right] \,, \label{resultscalarmoments} \end{equation}% where $\Gamma (n)$ is the Gamma function. Note that this quantity vanishes for $n=2$, as expected from the energy matching condition, and that $\Theta_0/T^2=\Theta_1/T^3 <0$ while $\Theta_n >0$ for $n>2$. For the sake of completeness, in Fig.\ \ref{fig:2} we plot $\Theta_3/T^5$ for different values of $T_0/c$ with $\xi =0.01$. \begin{figure}[t] \includegraphics[width=0.6\linewidth]{plotscalar.eps} \caption{(Color online) Normalized scalar moment $\Theta_3/T^{5}$ for different values of $T_0/c$ (with fixed $\protect\xi=0.01$).
The solid black line was computed using $T_0/c=10$, the dashed red curve is for $T_0/c=1$, while the dotted-dashed blue curve is for $T_0/c=0.1$.} \label{fig:2} \end{figure} The actual moment expansion of $\Phi $ then becomes% \begin{equation*} \Phi =\sum_{n=0}^{\infty }\mathcal{P}_{\mathbf{k}n}^{(0)}\Theta _{n}\text{ }. \end{equation*}% Truncating this expression at $n=2$ (note that the matching condition fixes $% \Theta _{2}=0$), we obtain something analogous to the 14-moment approximation \cite{IS}, \begin{equation*} \Phi =\mathcal{P}_{\mathbf{k}0}^{(0)}\Theta _{0}+\mathcal{P}_{\mathbf{k}% 1}^{(0)}\Theta _{1}. \end{equation*}% For a gas of nondegenerate massless particles, it is easy to show that% \begin{equation*} \mathcal{P}_{\mathbf{k}0}^{(0)}=\frac{2\pi ^{2}}{T^{2}}\left( 3+\frac{1}{T}% u_{\mu }k^{\mu }\right) \text{, }\mathcal{P}_{\mathbf{k}1}^{(0)}=-\frac{2\pi ^{2}}{T^{3}}\left( 1+\frac{1}{2T}u_{\mu }k^{\mu }\right) \text{ }. \end{equation*}% where we used that% \begin{equation*} a_{00}^{(0)}=1\text{, }\left[ a_{11}^{(0)}\right] ^{2}=\frac{1}{2T^{4}}\text{% , \ }\frac{a_{10}^{(0)}}{a_{11}^{(0)}}=-2T\text{.} \end{equation*} In this truncation scheme, the distribution function is then approximated to be% \begin{equation*} \Phi =-\frac{1}{2}\,\mathcal{H}(\rho )\,\exp \left[ -\frac{\pi T_{0}}{2c}% \left( 1+\xi \right) \right] \left( 1-\frac{1}{4T_{0}}\sqrt{p_{\rho }^{2}+p_{\theta }^{2}+\left( p_{\phi }^{2}/\sin ^{2}\theta \right) }% \,\right) \text{ }. \end{equation*}% For $\rho =0$, this is exactly the same as our analytical solution, see Eq.\ (% \ref{dontcare}). This shows that a finite number of scalar moments are able to provide a reasonable description of this system at least when $\rho=0$. \section{Conclusions} \label{SecIV} In this paper we derived the first analytical solution of the Anderson-Witting-Boltzmann equation for a radially expanding system (known as conformal soliton flow) of massless particles. 
We further demonstrated how the matching conditions, commonly used to define temperature in kinetic theory, restrict the form of the solution of the single-particle distribution function. The solution we found has some very interesting features. In this system the slowly moving hydrodynamic degrees of freedom do not see dissipation, e.g., sound waves propagate without any distortion from viscosity. However, faster degrees of freedom are still present, and they produce a finite amount of entropy. This may be the first example of a kinetic system that does not exhibit viscous hydrodynamic behavior: between its ideal fluid and free-streaming limits, there is no region in space and time where a viscous fluid dynamical description is valid. This conclusion regarding the perfect fluidity of the conformal soliton flow, studied here at weak coupling in the context of kinetic theory, was also found in the case of an infinitely coupled $\mathcal{N}=4$ Supersymmetric Yang-Mills plasma \cite{Friess:2006kw}. In fact, even though this strongly-coupled system has nonzero shear viscosity $\eta/s=1/(4\pi)$ \cite{Kovtun:2004de}, the underlying symmetries of the flow together with conformal invariance impose that the energy-momentum tensor of the system retains its perfect fluid form. This shows that the exact cancellation of shear viscous effects in the energy-momentum tensor discussed here also happens in strongly coupled systems. For any finite value of $c$ in the relaxation time \eqref{important2}, our solution for the distribution function does not return to local thermal equilibrium even at sufficiently large times. In fact, one can see from \eqref{definetaurho} that large times (for fixed radius $r$) correspond to large $\rho$'s and, in this case, the non-equilibrium contribution given by \eqref{phisolution} and \eqref{defineJ} does not vanish if $c\neq 0$.
Thus, in our system the effects of the expansion overcome the collision term and the distribution function does not relax to its equilibrium form. We note that a similar conclusion was found for a different type of rapidly expanding gas in \cite{Bazow:2015dha}, which went beyond the relaxation time approximation and took into account the full nonlinearities of the collision term of the relativistic Boltzmann equation. The essential approximations made here to find this novel many-body effect were: relaxation time approximation, conformal dynamics, and spherical symmetry (implemented via the $\mathrm{AdS}_{2}\otimes \mathrm{S}_{2}$ construction). The effects discussed in this paper may appear when describing a perfectly radially symmetric and homogeneous droplet of quark-gluon plasma, at very high temperatures and vanishing chemical potentials, expanding in vacuum. In this limit, QCD is approximately conformal and the flow configuration should resemble the one discussed in this paper. \section*{Acknowledgements} The authors thank Y.~Hatta, B.~Xiao, and M.~Martinez for collaboration in the early stage of this work. G.~S.~Denicol is currently supported under DOE Contract No. DE-SC0012704 and acknowledges previous support of a Banting fellowship provided by the Natural Sciences and Engineering Research Council of Canada. J.~N.\ thanks Columbia University's Physics Department for the hospitality and Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico (CNPq) and Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo (FAPESP) for financial support.
\section{Introduction} The tunneling phenomenon is a salient feature of quantum physics that even physicists sometimes find peculiar. One such example is resonant tunneling. The tunneling effect is usually characterized by an exponential suppression by a potential barrier; however, if a pair of potential barriers exists and certain conditions are met, tunneling occurs with a probability of unity, as we will explain in the next section. Resonant tunneling has been extensively studied, but interesting applications and research directions still exist. This paper presents one such application \cite{Tye:2011xp}\cite{Tye:2009rb}\cite{Saffin:2008vi}\cite{Copeland:2007qf}\cite{Sarangi:2007jb}. In the following sections, we investigate resonant tunneling in the context of matrix models and explore its consequences. Matrix models may seem an unlikely arena for the tunneling phenomenon because quantum fluctuations in the models described by $N \times N$ matrices are suppressed when one takes a large-$N$ limit, in which almost all the useful analyses can be conducted. There, the system usually stays at a stable vacuum whose nature is well understood, until a change in the parameters sets some of the degrees of freedom unstable and causes a divergence. This is the critical point of usual matrix models. Our point is that new instabilities caused by resonant tunneling unfold, while ordinary tunneling effects are much more subtle in matrix models \footnote{The tunneling of an eigenvalue of matrix models is attributed to a non-perturbative effect of non-critical string theory \cite{Shenker:1990uf,Eynard:1992sg,David:1990sk,David:1992za,Lechtenfeld:1991kc}. It can be understood as a non-critical version of D-branes\cite{Polchinski:1994fq, Zamolodchikov:2001ah,Fukuma:1999tj,Neves:1997xt}\cite{Klebanov:2003km,Martinec:2003ka,McGreevy:2003kb,Alexandrov:2003nn}. This perspective prompted further investigations \cite{Takayanagi:2003sm,Douglas:2003up,Hanada:2004im}.}.
Since we need to incorporate the tunneling phenomenon, our current analysis is limited, among the various matrix models, to the one-dimensional one, where the tunneling effect is most straightforward to calculate. The other cases are left for future study. \section{Resonant tunneling}\label{rt} Let us first illustrate resonant tunneling \cite{Merzbacher:1997}\cite{Bohm:1989}. Consider the one-dimensional Schr\"odinger equation \begin{equation} \frac{d^2 \psi}{dx^2}+\frac{2m(E-V(x))}{\hbar^2}\psi=0, \end{equation} where, for the sake of simplicity, we take $V(x)$ to be symmetric under reflection at the origin $x \leftrightarrow -x$ as shown in Fig. \ref{fig}. \begin{figure}[htbp] \begin{center} \includegraphics[width=12cm]{potential1.pdf} \caption{Potential $V(x)$ divided into five regions} \label{fig} \end{center} \end{figure} Now suppose a situation in which an incident wave enters from the left at $x<-b$ (region $I$ in Fig. \ref{fig}) and eventually exits from the right at $x>b$ (region $V$ in Fig. \ref{fig}). Then, within the WKB (Wentzel-Kramers-Brillouin) approximation, the incident wave function in region $I$ can be expressed as \begin{equation} \psi_{I}= \frac{A}{k} e^{i \int^x_{-b}k dx} +\frac{B}{k} e^{-i \int^x_{-b}k dx}, \end{equation} where \begin{equation} k(x)\equiv \sqrt{\frac{2m}{\hbar}(E-V(x))}. \end{equation} The outgoing wave function in the region $V$ is given by \begin{equation} \psi_{V}=\frac{F}{k} e^{i \int^x_{a}k dx} , \end{equation} and the wave function in region $III$ $(-a < x <a)$ is given by \begin{equation} \psi_{III}= \frac{C}{k} e^{i \int^x_{a}k dx} +\frac{D}{k} e^{-i \int^x_{a}k dx} . \end{equation} The relationships among the coefficients $A, B, \dots , F$ in the above expressions can be calculated using the connection formula.
In fact, these relationships can be conveniently cast into a matrix form; for example, \begin{equation} \left(\begin{array}{c}C \\D\end{array}\right) =\frac12 \left(\begin{array}{cc}2\theta+\frac{1}{2\theta} & i(2\theta-\frac{1}{2\theta}) \\-i(2\theta-\frac{1}{2\theta}) & 2\theta+\frac{1}{2\theta}\end{array}\right)\left(\begin{array}{c}F \\0\end{array}\right), \end{equation} where \begin{equation} \theta=e^{\int^b_a \kappa dx}, \ \ \kappa(x)\equiv\sqrt{\frac{2m}{\hbar}\left(V\left(x\right)-E\right)}. \label{thetadef} \end{equation} Then, the transmission coefficient between regions $III$ and $V$ can be determined as follows: \begin{equation} T_{III\rightarrow V}=\frac{|F|^2}{|C|^2}= \frac{4}{\left(2\theta +\frac{1}{2\theta}\right)^2}. \end{equation} Note that $\theta$ contains the exponential factor, as shown in Eq. (\ref{thetadef}); therefore, the transmission from regions $III$ to $V$ is exponentially suppressed as expected. In addition, one can apply the connection formula between regions $I$ and $III$ and combine the result with the above calculation to obtain \begin{eqnarray} \left(\begin{array}{c}A \\B\end{array}\right)&=&\frac12 \left(\begin{array}{cc}2\theta+\frac{1}{2\theta} & i(2\theta-\frac{1}{2\theta}) \\-i(2\theta-\frac{1}{2\theta}) & 2\theta+\frac{1}{2\theta}\end{array}\right) \left(\begin{array}{c}e^{-\frac{J}{2\hbar}i}\frac12\left(2\theta+\frac{1}{2\theta}\right) F \\-ie^{\frac{J}{2\hbar}i}\frac12\left(2\theta-\frac{1}{2\theta}\right)F\end{array}\right), \\ &=&\frac{F}{4}\left(\begin{array}{c}2\cos \frac{J}{2\hbar} \left(4\theta^2+\frac{1}{4\theta^2}\right)-4i\sin \frac{J}{2\hbar} \\-2i\cos \frac{J}{2\hbar} \left( 4\theta^2-\frac{1}{4\theta^2}\right)\end{array}\right), \end{eqnarray} where \begin{equation} J\equiv 2\int^a_{-a}\sqrt{\frac{2m}{\hbar}(E-V(x))} dx =\oint p dq.\label{Jdef} \end{equation} Then, the total transmission coefficient is \begin{equation} T_{I\rightarrow V}=\frac{|F|^2}{|A|^2}= \frac{4}{\left(4\theta^2
+\frac{1}{4\theta^2}\right)^2\cos^2 \frac{J}{2\hbar}+4\sin^2\frac{J}{2\hbar}}. \label{totaltransmission} \end{equation} Note that the transmission coefficient given in Eq. (\ref{totaltransmission}) also exhibits an exponential suppression through its $\theta$ dependence. However, a special situation occurs if the following condition is met: \begin{equation} J=2\pi\hbar\left(n+\frac12\right), \ \ n \hbox{: integer}. \label{Jcond} \end{equation} In this case, the cosine in the denominator of Eq. (\ref{totaltransmission}) vanishes and the sine becomes unity, yielding a transmission coefficient $T_{I\rightarrow V}$ of exactly unity: the exponential suppression of tunneling vanishes completely, and the transmission occurs with a probability of unity. This remarkable phenomenon is known as resonant tunneling \cite{Merzbacher:1997}\cite{Bohm:1989}. \section{Matrix model} In the previous section, we treated a single-particle wave function. Now we consider the case of $N^2$ particles, each of mass $m$. Various possible interactions among these $N^2$ particles may exist, but we choose the interactions and potential such that all the degrees of freedom can be cast into an $N\times N$ Hermitian matrix $M_{ij}$ and the Hamiltonian $H$ takes the following form: \begin{eqnarray} H&=&-\frac{\hbar^2}{2m} \Delta_M + \hbox{Tr} V(M) , \nonumber \\ &\Delta_M & =\sum_{1 \leqslant i \leqslant N} \frac {\partial^2}{\partial M_{ii}^2} +\frac12 \sum_{1 \leqslant i <j \leqslant N} \Biggl[ \frac {\partial^2}{\partial Re M_{ij}^2}+\frac {\partial^2}{\partial Im M_{ij}^2}\Biggr] \label{mm} \\ &V(M)&=\frac12 M^2 + \sum_p g_p N^{1-\frac{p}{2}} M^p.\nonumber \end{eqnarray} For the sake of simplicity, $\hbar$ is set to unity here.
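Returning briefly to the transmission coefficient of Eq. (\ref{totaltransmission}), its resonant structure is easy to confirm numerically: at $J=2\pi\hbar(n+\frac12)$ the double barrier is transparent for any barrier thickness $\theta$, while away from resonance the transmission is suppressed like $1/(4\theta^2)^2$. A short sketch (Python, with $\hbar=1$):

```python
import math

def transmission(theta, J, hbar=1.0):
    # Eq. (totaltransmission) for the symmetric double barrier
    a = (4.0 * theta**2 + 1.0 / (4.0 * theta**2))**2 * math.cos(J / (2 * hbar))**2
    b = 4.0 * math.sin(J / (2 * hbar))**2
    return 4.0 / (a + b)

# at resonance, J = 2*pi*(n + 1/2), the barrier is transparent for any theta
for n in range(3):
    for theta in (2.0, 10.0, 1e3):
        assert abs(transmission(theta, 2 * math.pi * (n + 0.5)) - 1.0) < 1e-9

# off resonance (J = 2*pi*n) thick barriers suppress T roughly as 1/(4 theta^2)^2
assert transmission(10.0, 2 * math.pi) < 1e-4
```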
In their seminal paper \cite{Brezin:1977sv}, Brezin, Itzykson, Parisi and Zuber made the ingenious observation that, because of invariance under the $U(N)$ transformation $M \rightarrow UMU^{-1}$, the ground state of the above $N^2$ particles, which is assumed to possess the invariance just mentioned, is governed by the following Schr\"odinger equation for the $N$ eigenvalues of $M$: \begin{equation} \sum_{1 \leqslant i \leqslant N} \Bigl\{ -\frac{\hbar^2}{2m} \frac {\partial^2}{\partial \lambda_{i}^2} + \frac12\lambda_i^2 + \sum_p g_p N^{1-\frac{p}{2}}\lambda_i^p \Bigr\} \phi_N(\{\lambda_i\})=N^2E_{g}\phi_N(\{\lambda_i\}). \label{Schreq} \end{equation} Here, the scale of $E_{g}$ (the total energy of the system) is set in accordance with the total number of the degrees of freedom $N^2$, and the subscript indicates its dependence on the potential variables $\{g_p\}$. Considering the Vandermonde determinant, the above wave function $\phi_N(\{\lambda_i\})$ must be completely antisymmetric among its $N$ variables; hence, Eq. (\ref{Schreq}) represents $N$ free fermions under the following potential: \begin{equation} V(\lambda)=\frac12\lambda^2+\sum_p g_p N^{1-\frac{p}{2}}\lambda^p. \end{equation} Thus, the physics of the sector of interest is governed by $N$ free fermions, each of which is subject to the following Schr\"odinger equation: \begin{equation} \frac{d^2\phi}{dx^2}+\frac{2m}{\hbar^2}\left( N\epsilon -V\left( x\right) \right)\phi=0, \end{equation} where the scale of $\epsilon$ (the energy of each fermion) is again set in accordance with the number of the degrees of freedom, because each fermion carries approximately $N$ times the energy of the original particle. Then, the ground state of the whole system is easy to construct by filling the energy levels from the bottom up to the Fermi energy $\epsilon_F$ (Fig. \ref{fig2}).
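As a rough numerical illustration of how this filling fixes $\epsilon_F$, consider the purely harmonic case $g_p=0$ with $m=\hbar=1$ (an assumption made only for this sketch): the phase-space area below energy $\epsilon$ is $2\pi\epsilon$, so semiclassical counting gives $\epsilon_F=N$ exactly, and a bisection on the numerically computed area reproduces this:

```python
import math

def area(eps):
    # phase-space area below energy eps for V(x) = x^2/2 (m = 1), computed
    # numerically as the integral of 2*sqrt(2*(eps - V)) over the allowed region
    if eps <= 0.0:
        return 0.0
    xmax = math.sqrt(2.0 * eps)
    n = 20000
    h = 2.0 * xmax / n
    s = 0.0
    for i in range(n):
        x = -xmax + (i + 0.5) * h
        s += 2.0 * math.sqrt(max(2.0 * eps - x * x, 0.0)) * h
    return s

def fermi_energy(N, hbar=1.0):
    # solve N = area(eps_F) / (2*pi*hbar) by bisection
    lo, hi = 0.0, 10.0 * N + 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if area(mid) / (2.0 * math.pi * hbar) < N:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# harmonic case: area(eps) = 2*pi*eps, hence eps_F = N (with hbar = 1)
assert abs(fermi_energy(5) - 5.0) < 1e-2
```

For a generic anharmonic potential the same bisection applies; only the numerical area integral changes.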
\begin{figure}[htbp] \begin{center} \includegraphics[width=12cm]{symmetricpotentialcenter.pdf} \caption{Fermions filling the energy levels.} \label{fig2} \end{center} \end{figure} $\epsilon_F$ is determined by the requirement that the total number of fermions is $N$. In the large-$N$ approximation, the following equation holds: \begin{equation} N=\int \frac{d\lambda dp}{2\pi \hbar}\Theta \left( \epsilon_F-\frac{p^2}{2m} -V\left(\lambda\right) \right), \label{NeF} \end{equation} where $\Theta$ denotes the step function. The integral in Eq. (\ref{NeF}) implicitly determines $\epsilon_F$, but a singularity exists when the level of $\epsilon_F$ reaches any local maximum of the potential $V(\lambda)$. This singularity corresponds to the critical point where the present matrix model given in Eq. (\ref{mm}) becomes equivalent to the $D=2$ non-critical string theory in the so-called double-scaling limit \cite{c1doublescaling}. The physics behind this criticality is that beyond that critical point, a fermion with the Fermi energy is no longer confined in the valley formed by the local maxima, and it can cross over one of them. Therefore, the system described by these fermions goes through a phase transition. Details of this critical behavior have been investigated in \cite{Kazakov:1988ch}. \section{New criticality through resonant tunneling} Now we argue that the matrix model presented in Eq. (\ref{mm}) exhibits a different kind of criticality than that explained in the previous section if resonant tunneling is considered. Consider a case with the potential shown in Fig. \ref{fig3}. \begin{figure}[htbp] \begin{center} \includegraphics[width=12cm]{symmetricpotentialfilling.pdf} \caption{Energy levels starting at the left minimum are filled up to the Fermi energy.} \label{fig3} \end{center} \end{figure} The energy levels that correspond to the motion near the origin are the first to be filled, and they are filled up to the Fermi energy $\epsilon_{F}$, which is determined using Eq.
(\ref{NeF}). Because different values of the parameter $g_{p}$ in the potential lead to different values of the Fermi energy $\epsilon_{F}$, one can tune the level of $\epsilon_{F}$ by gradually varying the value $g_{p}$. Figure \ref{fig4} depicts a case in which $\epsilon_{F}$ is zero and the central local minimum is slightly negative. Note that the shape of the potential is completely symmetric under reflection at the central local minimum, and the Fermi energy $\epsilon_{F}$ is less than the local maxima. \begin{figure}[htbp] \begin{center} \includegraphics[width=12cm]{symmetricpotentialtunneling.pdf} \caption{Resonant Tunneling. The region shaded in green is relevant for $J$.} \label{fig4} \end{center} \end{figure} One can calculate a quantity similar to that in Eq. (\ref{Jdef}) in Section \ref{rt} where we discussed resonant tunneling: \begin{equation} J\equiv 2\int^{z_{2}}_{z_{3}}\sqrt{\frac{2m}{\hbar}(-V(x))} dx, \end{equation} where $z_{2}$ and $z_{3}$ are the nearest zero points of the potential to the local minimum. Now suppose $J$ satisfies the following: \begin{equation} J=\pi\hbar,\label{J0} \end{equation} which is nothing but the condition for resonant tunneling given in Eq. (\ref{Jcond}) with $n=0$. Therefore, the analysis described in Section \ref{rt} indicates that a fermion with energy $E=0$ can penetrate and pass through two of the potential barriers into the other potential minimum region with a transmission coefficient of unity. From the viewpoint of the fermions, the Fermi energy gradually increases to zero as the potential parameters change, and if the potential was set to simultaneously satisfy Eq. (\ref{J0}), the potential barrier formed by the twin peaks suddenly disappears (Fig. \ref{fig45}). 
\begin{figure}[htbp] \begin{center} \includegraphics[width=12cm]{symmetricpotentialmerged.pdf} \caption{Potential felt by fermions due to resonant tunneling} \label{fig45} \end{center} \end{figure} Therefore, the situation is similar to the criticality discussed in the previous section in the sense that the fermion with the highest energy crosses the potential barrier and senses the potential beyond the barrier. We claim that this is a novel criticality realized by the resonant tunneling phenomenon. The novelty here is not only the association with resonant tunneling. Let us introduce a more general potential such as the one shown in Figure \ref{fig5}. \begin{figure}[htbp] \begin{center} \includegraphics[width=12cm]{asymmetricpotential.pdf} \caption{More general potential} \label{fig5} \end{center} \end{figure} In this case, the Fermi energy can assume a positive value without hitting the criticality, even if the potential satisfies the condition for resonant tunneling: \begin{equation} J=2\int^{z_{3}}_{z_{2}}\sqrt{\frac{2m}{\hbar}(-V(x))} dx =2\pi\hbar\left(n+\frac12\right), \ \ n \hbox{: integer}. \label{Jcondnew} \end{equation} This is because for a general potential, the transmission coefficient takes the following form instead of that presented in Eq. (\ref{totaltransmission}): \begin{equation} T= \frac{4}{\left(4\theta \theta' +\frac{1}{4\theta\theta'}\right)^2\cos^2 \frac{J}{2\hbar}+\left( \frac{\theta}{\theta'}+\frac{\theta'}{\theta} \right)^2\sin^2\frac{J}{2\hbar}}, \end{equation} where \begin{equation} \theta=e^{\int^{z_1}_{z_2} \kappa dx}, \ \ \kappa(x)\equiv\sqrt{\frac{2m}{\hbar}V\left(x\right)}, \end{equation} and \begin{equation} \theta'=e^{\int^{z_3}_{z_4} \kappa dx}, \ \ \kappa(x)\equiv\sqrt{\frac{2m}{\hbar}V\left(x\right)}, \end{equation} respectively. When the condition for resonant tunneling presented in Eq.
(\ref{Jcondnew}) is satisfied, the transmission coefficient is \begin{equation} T= \frac{4}{\left( \frac{\theta}{\theta'}+\frac{\theta'}{\theta} \right)^2}. \end{equation} The transmission coefficient $T$ would be unity only if $\theta=\theta'$; otherwise, it would exhibit an exponential suppression arising from $\theta$ and $\theta'$. Now suppose that the potential in Fig. \ref{fig5} is tuned so that $\theta=\theta'$. Then, fermions with energy $0<E<\epsilon_F$ could freely travel to the region $x>z_4$. Therefore, it would appear that the energy carried by such fermions escapes from the system instantaneously. One may interpret the escaped energy as latent heat. This observation suggests that the criticality we introduced corresponds to a first-order transition rather than a second-order one. \footnote{We owe this observation to H. Kawai.} \section{Conclusion} In this paper, we demonstrated that Hermitian matrix models with resonant tunneling could exhibit a new criticality. We suggested that this new criticality corresponds to a first-order transition, in contrast to the conventional critical point that corresponds to a second-order transition. While it is well established that the criticality associated with the second-order transition corresponds to the $c=1$ non-critical string, the physical implications of the present criticality remain to be explored. In particular, future studies could examine the detailed behavior of this new criticality. \section*{Acknowledgements} The author is indebted to H. Kawai for illuminating discussions and many insights provided towards the present paper. The author would like to thank S. H. Tye for bringing the subject of resonant tunneling to his attention and giving a clear explanation of it. This work is supported in part by the Basic Science Interdisciplinary Project ``Study on the Genesis of Matter.''
\section{Introduction} This document is a model and instructions for \LaTeX. Please observe the conference page limits. \section{Ease of Use} \subsection{Maintaining the Integrity of the Specifications} The IEEEtran class file is used to format your paper and style the text. All margins, column widths, line spaces, and text fonts are prescribed; please do not alter them. You may note peculiarities. For example, the head margin measures proportionately more than is customary. This measurement and others are deliberate, using specifications that anticipate your paper as one part of the entire proceedings, and not as an independent document. Please do not revise any of the current designations. \section{Prepare Your Paper Before Styling} Before you begin to format your paper, first write and save the content as a separate text file. Complete all content and organizational editing before formatting. Please note sections \ref{AA}--\ref{SCM} below for more information on proofreading, spelling and grammar. Keep your text and graphic files separate until after the text has been formatted and styled. Do not number text heads---{\LaTeX} will do that for you. \subsection{Abbreviations and Acronyms}\label{AA} Define abbreviations and acronyms the first time they are used in the text, even after they have been defined in the abstract. Abbreviations such as IEEE, SI, MKS, CGS, ac, dc, and rms do not have to be defined. Do not use abbreviations in the title or heads unless they are unavoidable. \subsection{Units} \begin{itemize} \item Use either SI (MKS) or CGS as primary units. (SI units are encouraged.) English units may be used as secondary units (in parentheses). An exception would be the use of English units as identifiers in trade, such as ``3.5-inch disk drive''. \item Avoid combining SI and CGS units, such as current in amperes and magnetic field in oersteds. This often leads to confusion because equations do not balance dimensionally. 
If you must use mixed units, clearly state the units for each quantity that you use in an equation. \item Do not mix complete spellings and abbreviations of units: ``Wb/m\textsuperscript{2}'' or ``webers per square meter'', not ``webers/m\textsuperscript{2}''. Spell out units when they appear in text: ``. . . a few henries'', not ``. . . a few H''. \item Use a zero before decimal points: ``0.25'', not ``.25''. Use ``cm\textsuperscript{3}'', not ``cc''. \end{itemize} \subsection{Equations} Number equations consecutively. To make your equations more compact, you may use the solidus (~/~), the exp function, or appropriate exponents. Italicize Roman symbols for quantities and variables, but not Greek symbols. Use a long dash rather than a hyphen for a minus sign. Punctuate equations with commas or periods when they are part of a sentence, as in: \begin{equation} a+b=\gamma\label{eq} \end{equation} Be sure that the symbols in your equation have been defined before or immediately following the equation. Use ``\eqref{eq}'', not ``Eq.~\eqref{eq}'' or ``equation \eqref{eq}'', except at the beginning of a sentence: ``Equation \eqref{eq} is . . .'' \subsection{\LaTeX-Specific Advice} Please use ``soft'' (e.g., \verb|\eqref{Eq}|) cross references instead of ``hard'' references (e.g., \verb|(1)|). That will make it possible to combine sections, add equations, or change the order of figures or citations without having to go through the file line by line. Please don't use the \verb|{eqnarray}| equation environment. Use \verb|{align}| or \verb|{IEEEeqnarray}| instead. The \verb|{eqnarray}| environment leaves unsightly spaces around relation symbols. Please note that the \verb|{subequations}| environment in {\LaTeX} will increment the main equation counter even when there are no equation numbers displayed.
If you forget that, you might write an article in which the equation numbers skip from (17) to (20), causing the copy editors to wonder if you've discovered a new method of counting. {\BibTeX} does not work by magic. It doesn't get the bibliographic data from thin air but from .bib files. If you use {\BibTeX} to produce a bibliography you must send the .bib files. {\LaTeX} can't read your mind. If you assign the same label to a subsubsection and a table, you might find that Table I has been cross referenced as Table IV-B3. {\LaTeX} does not have precognitive abilities. If you put a \verb|\label| command before the command that updates the counter it's supposed to be using, the label will pick up the last counter to be cross referenced instead. In particular, a \verb|\label| command should not go before the caption of a figure or a table. Do not use \verb|\nonumber| inside the \verb|{array}| environment. It will not stop equation numbers inside \verb|{array}| (there won't be any anyway) and it might stop a wanted equation number in the surrounding equation. \subsection{Some Common Mistakes}\label{SCM} \begin{itemize} \item The word ``data'' is plural, not singular. \item The subscript for the permeability of vacuum $\mu_{0}$, and other common scientific constants, is zero with subscript formatting, not a lowercase letter ``o''. \item In American English, commas, semicolons, periods, question and exclamation marks are located within quotation marks only when a complete thought or name is cited, such as a title or full quotation. When quotation marks are used, instead of a bold or italic typeface, to highlight a word or phrase, punctuation should appear outside of the quotation marks. A parenthetical phrase or statement at the end of a sentence is punctuated outside of the closing parenthesis (like this). (A parenthetical sentence is punctuated within the parentheses.) \item A graph within a graph is an ``inset'', not an ``insert''. 
\item The word ``alternatively'' is preferred to the word ``alternately'' (unless you really mean something that alternates). \item Do not use the word ``essentially'' to mean ``approximately'' or ``effectively''. \item In your paper title, if the words ``that uses'' can accurately replace the word ``using'', capitalize the ``u''; if not, keep ``using'' lower-cased. \item Be aware of the different meanings of the homophones ``affect'' and ``effect'', ``complement'' and ``compliment'', ``discreet'' and ``discrete'', ``principal'' and ``principle''. \item Do not confuse ``imply'' and ``infer''. \item The prefix ``non'' is not a word; it should be joined to the word it modifies, usually without a hyphen. \item There is no period after the ``et'' in the Latin abbreviation ``et al.''. \item The abbreviation ``i.e.'' means ``that is'', and the abbreviation ``e.g.'' means ``for example''. \end{itemize} An excellent style manual for science writers is \cite{b7}. \subsection{Authors and Affiliations} \textbf{The class file is designed for, but not limited to, six authors.} A minimum of one author is required for all conference articles. Author names should be listed starting from left to right and then moving down to the next line. This is the author sequence that will be used in future citations and by indexing services. Names should not be listed in columns nor grouped by affiliation. Please keep your affiliations as succinct as possible (for example, do not differentiate among departments of the same organization). \subsection{Identify the Headings} Headings, or heads, are organizational devices that guide the reader through your paper. There are two types: component heads and text heads. Component heads identify the different components of your paper and are not topically subordinate to each other. Examples include Acknowledgments and References and, for these, the correct style to use is ``Heading 5''.
Use ``figure caption'' for your Figure captions, and ``table head'' for your table title. Run-in heads, such as ``Abstract'', will require you to apply a style (in this case, italic) in addition to the style provided by the drop down menu to differentiate the head from the text. Text heads organize the topics on a relational, hierarchical basis. For example, the paper title is the primary text head because all subsequent material relates and elaborates on this one topic. If there are two or more sub-topics, the next level head (uppercase Roman numerals) should be used and, conversely, if there are not at least two sub-topics, then no subheads should be introduced. \subsection{Figures and Tables} \paragraph{Positioning Figures and Tables} Place figures and tables at the top and bottom of columns. Avoid placing them in the middle of columns. Large figures and tables may span across both columns. Figure captions should be below the figures; table heads should appear above the tables. Insert figures and tables after they are cited in the text. Use the abbreviation ``Fig.~\ref{fig}'', even at the beginning of a sentence. \begin{table}[htbp] \caption{Table Type Styles} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Table}&\multicolumn{3}{|c|}{\textbf{Table Column Head}} \\ \cline{2-4} \textbf{Head} & \textbf{\textit{Table column subhead}}& \textbf{\textit{Subhead}}& \textbf{\textit{Subhead}} \\ \hline copy& More table copy$^{\mathrm{a}}$& & \\ \hline \multicolumn{4}{l}{$^{\mathrm{a}}$Sample of a Table footnote.} \end{tabular} \label{tab1} \end{center} \end{table} \begin{figure}[htbp] \centerline{\includegraphics{fig1.png}} \caption{Example of a figure caption.} \label{fig} \end{figure} Figure Labels: Use 8 point Times New Roman for Figure labels. Use words rather than symbols or abbreviations when writing Figure axis labels to avoid confusing the reader. As an example, write the quantity ``Magnetization'', or ``Magnetization, M'', not just ``M''. 
If including units in the label, present them within parentheses. Do not label axes only with units. In the example, write ``Magnetization (A/m)'' or ``Magnetization \{A[m(1)]\}'', not just ``A/m''. Do not label axes with a ratio of quantities and units. For example, write ``Temperature (K)'', not ``Temperature/K''. \section*{Acknowledgment} The preferred spelling of the word ``acknowledgment'' in America is without an ``e'' after the ``g''. Avoid the stilted expression ``one of us (R. B. G.) thanks $\ldots$''. Instead, try ``R. B. G. thanks$\ldots$''. Put sponsor acknowledgments in the unnumbered footnote on the first page. \section*{References} Please number citations consecutively within brackets \cite{b1}. The sentence punctuation follows the bracket \cite{b2}. Refer simply to the reference number, as in \cite{b3}---do not use ``Ref. \cite{b3}'' or ``reference \cite{b3}'' except at the beginning of a sentence: ``Reference \cite{b3} was the first $\ldots$'' Number footnotes separately in superscripts. Place the actual footnote at the bottom of the column in which it was cited. Do not put footnotes in the abstract or reference list. Use letters for table footnotes. Unless there are six authors or more give all authors' names; do not use ``et al.''. Papers that have not been published, even if they have been submitted for publication, should be cited as ``unpublished'' \cite{b4}. Papers that have been accepted for publication should be cited as ``in press'' \cite{b5}. Capitalize only the first word in a paper title, except for proper nouns and element symbols. For papers published in translation journals, please give the English citation first, followed by the original foreign-language citation \cite{b6}. 
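To close, the \LaTeX-specific advice above (soft references, \verb|{align}| instead of \verb|{eqnarray}|) can be condensed into a minimal snippet; the labels and symbols here are purely illustrative:

```latex
% Prefer {align} over {eqnarray}; label every numbered equation
\begin{align}
  a + b &= \gamma \label{eq:sum} \\
  a - b &= \delta \label{eq:diff}
\end{align}
% Soft cross reference: renumbers itself if equations are reordered
As \eqref{eq:sum} shows, the sum equals $\gamma$.
```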
\section{Motivation} Very-high-energy (VHE; from about 20 GeV to 300 TeV) gamma rays provide a critical probe of the Universe's most extreme environments, offering the opportunity to study exotic astrophysics and fundamental physics at high energies and cosmological distances. Gamma rays in this energy range can be indirectly detected on the ground using arrays of imaging atmospheric Cherenkov telescopes (IACTs), which detect the Cherenkov light emitted from air showers produced by VHE gamma rays when they are absorbed by the atmosphere. A wide variety of scientific studies can be performed with VHE gamma rays \cite{Consortium2019}. VHE gamma rays are observed from supernova remnants and pulsar wind nebulae in the Milky Way and supermassive black holes in distant galaxies, providing insight into the nature of these sources, such as how and where in these sources particles are accelerated to relativistic energies. Astrophysicists also search for VHE gamma-ray emission from dark-matter-dominated objects such as dwarf galaxies, looking for gamma rays hypothesized to be produced by dark matter annihilation or decay. In addition, IACTs play a key role in multimessenger astronomy, regularly searching for VHE emission produced by gamma-ray bursts and by the sources of gravitational wave events, and having recently detected TeV gamma-ray emission from a flaring blazar coincident with a highly energetic neutrino detected by the IceCube Neutrino Observatory \cite{Aartsen2018}. \begin{figure*}[!t] \centering \subfloat[Example IACT image]{\includegraphics[width=0.85\columnwidth]{000FlashCam7_hexagonal_DNC.png} \label{fig:flashcam}} \hfil \subfloat[Same image mapped using rebinning]{\includegraphics[width=0.85\columnwidth]{000FlashCam7_rebinning_DNC.png} \label{fig:rebinning}} \caption{Left: An example IACT image from a CTA FlashCam camera simulation, illustrating the hexagonally spaced grid of pixels typical of many IACT cameras. 
Right: The same image mapped to a square matrix of pixels by rebinning, which preserves the image's overall amplitude. Both images are from \cite{Nieto2019b}.} \label{fig:mapping_methods} \end{figure*} Measurements with IACTs enable these scientific studies by extracting information about VHE particles from the air showers they produce in the atmosphere. In a conventional IACT analysis, images from multiple telescopes are parameterized and stereoscopically combined to extract the spatial, temporal, and calorimetric information of the originating VHE particle. \section{Gamma-ray Image Analysis} The sensitivity of IACTs depends strongly on efficiently rejecting the background of much more numerous cosmic-ray showers, which resemble those produced by gamma rays but tend to have a more complex morphology. Using the information contained in the shapes of the shower images is therefore critical to maximizing IACT sensitivity. Supervised learning algorithms, like random forests and boosted decision trees, have been shown to effectively classify IACT events based on event-level parameters constructed using images from multiple telescopes (e.g. \cite{Krause2017}). Deep learning techniques, such as convolutional neural networks (CNNs), may be used to improve on these methods because they do not require the images to be parameterized and may therefore access features of these images that would be washed out by the parameterization \cite{Nieto2017}. A deep learning approach that combines CNNs with a recurrent neural network (RNN) has been shown to improve background rejection performance using data from the H.E.S.S. IACT array \cite{Shilon2019}. In previous work, the input images to such a network have been sorted by total amplitude. 
In this study, we apply a similar model to simulated data from the Cherenkov Telescope Array (CTA) \cite{Acharya2013}, the next-generation observatory for gamma-ray astronomy, to determine the effect of this sorting procedure on classification performance. \section{CTLearn} We implement our neural network model using CTLearn\footnote{\url{https://github.com/ctlearn-project/ctlearn}} \cite{ari_brill_2019_3345947}, an open-source Python package for using deep learning to analyze pixel-wise camera data from arrays of IACTs. CTLearn provides an application-specific framework for configuring and training machine learning models with TensorFlow\footnote{\url{https://www.tensorflow.org}} and applying the trained models to generate predictions on a test set \cite{Nieto2019a}. CTLearn v0.3.0 was used for training the models used in this work. Through the associated DL1-Data-Handler package \cite{bryan_kim_2019_3336561}, CTLearn can load and preprocess IACT data from any major current- or next-generation IACT. In particular, because many IACT cameras have pixels arranged in a hexagonal layout, posing a challenge for convolutional neural networks that conventionally require as input a rectangular matrix of input pixels, DL1-Data-Handler provides a number of methods to map hexagonally spaced pixels to a square grid. In this work, the rebinning method was chosen (Fig. \ref{fig:rebinning}), which is one of several mapping methods that provide comparably good performance \cite{Nieto2019b}. \begin{figure}[htbp] \centerline{\includegraphics[width=0.85\columnwidth]{cnn-rnn-model.pdf}} \caption{Diagram of the CNN-RNN particle classification model implemented in CTLearn, from \cite{Nieto2019a}. The model uses a CNN block (labeled as a deep convolutional network or DCN) to derive a vector representation of each image in an event. 
The vectors are combined using a Long Short Term Memory network (LSTM), a type of recurrent neural network (RNN).} \label{fig:cnn_rnn} \end{figure} \section{CNN-RNN Particle Classification Model} A challenge of using deep learning methods with IACT data is combining images from multiple telescopes providing different views of an air shower event. Each event triggers multiple telescopes, and the number of triggered telescopes may vary from event to event. One approach to deal with this challenge is to break the problem into two stages. First, each image is processed into a vector representation by a CNN, using the same weight parameters for each image. The vectors are then combined by a recurrent neural network (RNN), a type of neural network that takes as input a sequence of vectors, and, by maintaining an internal state, produces an output vector that depends not only on the most recent input but on all preceding inputs in the sequence. This vector is then fed into a set of densely connected layers that produce the final prediction. Connecting these networks allows a single model trained end-to-end to classify events consisting of images from multiple telescopes. For this work, the built-in CNN-RNN model of CTLearn was used, which implements an architecture similar to the CRNN network presented in \cite{Shilon2019}. More details on the model and the default hyperparameter settings that were used can be found in \cite{Nieto2019a}. The RNN in this model is specifically a Long Short-Term Memory (LSTM) network. Recurrent neural networks are capable of processing sequential data in which the ordering of inputs may affect their interpretation. Therefore, having a meaningful ordering of telescope images in a CNN-RNN network may improve performance. In previous work using a CNN-RNN network for classifying Cherenkov air showers as produced by a gamma ray or a cosmic-ray proton, the telescope images were ordered by total image amplitude, or size. 
As size can be considered to be a proxy for proximity to the shower center, sorting on this parameter may provide an ordering given the absence of temporal information \cite{Shilon2019}. To understand the effect of this ordering on performance, we trained two CNN-RNN networks as described above to classify IACT images as produced by a gamma ray or a cosmic-ray proton, changing only the ordering of the input images. As a control, in one network the images were ordered by telescope ID number, an arbitrary but consistent ordering, while in the other the images were ordered by size. The networks were trained using a sample of 250,000 simulated events from 25 FlashCam telescopes \cite{Gadola2015}, part of a proposed CTA array in Paranal, Chile. Ten percent of the events in the sample were reserved as a validation set, which was not used for training. \begin{figure}[htbp] \centerline{\includegraphics[width=0.85\columnwidth]{image3.png}} \caption{Validation accuracy of the CNN-RNN model with images ordered by ID (dark blue) and total brightness (light blue) as a function of number of training steps (batches of 16 events). The models reach respective accuracies of 80.6\% and 80.2\%.} \label{fig:accuracy} \end{figure} \begin{figure}[htbp] \centerline{\includegraphics[width=0.85\columnwidth]{image5.png}} \caption{Validation AUC with images ordered by ID (dark blue) and total brightness (light blue) as a function of number of training steps (batches of 16 events). AUC is the numerically integrated area under the receiver operating characteristic curve, measuring sensitivity and specificity. The models reach respective AUCs of 0.899 and 0.894.} \label{fig:auc} \end{figure} \section{Results and Discussion} The results of this experiment are shown in Fig.~\ref{fig:accuracy} and Fig.~\ref{fig:auc}. The validation metrics of the two models were approximately the same, with those of the control model being slightly higher. 
The control model attained validation accuracy and AUC of 80.6\% and 0.899, while the model with images sorted by size reached 80.2\% and 0.894. We therefore find no evidence that sorting images by size improves gamma-proton classification performance with a CNN-RNN model. This finding leaves open the possibility that a different ordering of telescope images could result in improved performance. In particular, an ordering which provides sufficient information about the telescopes' position on the ground could help a CNN-RNN to perform stereoscopic reconstruction of Cherenkov air showers. While ordering by size as a proxy for distance to the shower center should provide some relative position information, it is possible this information is too incomplete to be useful to the network. In addition to performing background rejection, deep learning algorithms could be used to determine the arrival direction and energy of the particles initiating Cherenkov air showers \cite{Mangano2018}, tasks for which stereoscopic reconstruction is particularly important. Ensuring that telescope position information is effectively provided to CNN-RNN networks may therefore not only improve their performance on background rejection but also on additional tasks critical for IACT image analysis. \bibliographystyle{IEEEtran}
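The image-ordering comparison studied above can be sketched in a few lines (an illustration only: the function name and the list-of-2D-lists data layout are our own choices, not part of the CTLearn API):

```python
def order_event_images(images, tel_ids, by="size"):
    """Order the telescope images of one event before feeding them to a CNN-RNN.

    images : list of 2D lists of pixel amplitudes, one per triggered telescope
    tel_ids: telescope ID numbers, one per image
    by     : "size" sorts by total image amplitude (brightest first);
             "id" uses the arbitrary-but-consistent telescope ID order.
    """
    pairs = list(zip(images, tel_ids))
    if by == "size":
        # Total image amplitude ("size"), a proxy for proximity to the shower core
        pairs.sort(key=lambda p: -sum(sum(row) for row in p[0]))
    elif by == "id":
        # Control ordering: arbitrary but consistent across events
        pairs.sort(key=lambda p: p[1])
    else:
        raise ValueError("by must be 'size' or 'id'")
    return [img for img, _ in pairs]
```

Ordering by size puts the brightest image first, while ordering by telescope ID reproduces the control ordering used in the comparison above.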
\section*{Bigger Picture} The genome contains the instructions that shape the function, structure, and evolution of molecules and organisms. Recent high-throughput techniques allow the generation of a vast amount of genomics data. However, the path from genomics data to tangible therapeutics is filled with obstacles. We observe that genomics data alone are insufficient; they must be studied together with data on compounds, proteins, electronic health records, images, texts, etc. To make sense of these complex data, machine learning techniques are often used to identify patterns and draw insights. In this review, we study an extensive set of genomics applications of machine learning that can enable faster and more efficacious therapeutic development. Challenges remain, including technical issues such as learning across different contexts under low-resource constraints, and practical issues such as mistrust of models, privacy, and fairness.
\section*{Data Science Maturity} DSML 3: Development/Pre-production: Data science output has been rolled out/validated across multiple domains/problems \section*{Keywords} machine learning $\cdot$ therapeutics discovery and development $\cdot$ genomics \section{Introduction} Genomics studies the function, structure, evolution, mapping, and editing of genomes~\citep{hieter1997functional}. The genome contains chapters of instructions for building various types of molecules and organisms. Probing genomes allows us to understand biological phenomena, such as identifying the roles that the genome plays in diseases. A deep understanding of genomics has led to a vast array of successful therapeutics to cure a wide range of diseases, both complex and rare~\citep{wong2004monoamines,chin2011cancer}. It also allows us to prescribe more precise treatments~\citep{hamburg2010path}, or seek more effective therapeutic strategies such as genome editing~\citep{makarova2011evolution}. Recent advances in high-throughput technologies have led to an outpouring of large-scale genomics data~\citep{reuter2015high,heath2021nci}. However, the bottlenecks along the path of transforming genomics data into tangible therapeutics are innumerable.
For instance, diseases are driven by multifaceted mechanisms, so pinpointing the right disease target requires knowledge about the entire suite of biological processes, including gene regulation by non-coding regions~\citep{rinn2012genome}, DNA methylation status~\citep{singal1999dna}, and RNA splicing~\citep{rogers1980mechanism}; personalized treatment requires accurate characterization of disease sub-types and of a compound's sensitivity to various genomics profiles~\citep{hamburg2010path}; gene-editing tools require an understanding of the interplay between guide RNA and the whole genome to avoid off-target effects~\citep{fu2013high}; and monitoring therapeutic efficacy and safety after approval requires the mining of gene-drug-disease relations in the EHR and literature~\citep{corrigan2018real}. We argue that genomics data alone are insufficient to ensure clinical implementation, which instead requires integration of a diverse set of data types, including multi-omics, compounds, proteins, cellular images, electronic health records (EHR), and scientific literature. This heterogeneity and scale of data enable the application of sophisticated computational methods such as machine learning (ML). Over the years, ML has profoundly impacted many application domains, such as computer vision~\citep{krizhevsky2012imagenet}, natural language processing~\citep{devlin2018bert}, and complex systems~\citep{silver2016mastering}. ML has changed computational modeling from expert-curated features to automated feature construction. It can learn useful and novel patterns from data, often not found by experts, to improve prediction performance on various tasks. This ability is much needed in genomics and therapeutics, as our understanding of human biology is vastly incomplete. Uncovering these patterns can also lead to the discovery of novel biological insights.
In addition, therapeutic discovery consists of large-scale, resource-intensive experiments, which limit the scope of experimentation, so many potent candidates are missed. Accurate ML predictions can drastically scale up and facilitate these experiments, identifying or generating novel therapeutic candidates. Interest in ML for genomics through the lens of therapeutic development has also grown for two reasons. First, for pharmaceutical and biomedical researchers, ML models have passed proof-of-concept stages, yielding astounding performance often on previously infeasible tasks~\citep{stokes2020deep,senior2020improved}. Second, for ML scientists, large/complex data and hard/impactful problems present exciting opportunities for innovation. This survey summarizes recent ML applications related to genomics in therapeutic development and describes associated challenges and opportunities. Several reviews of ML for genomics have been published~\citep{leung2015machine,eraslan2019deep,zou2019primer}. Most of these previous works focused on studying genomics for biological applications, whereas we study it in the context of bringing genomics discovery to therapeutic implementations. We identify twenty-two ``ML for therapeutics'' tasks with genomics data, ranging across the entire therapeutic pipeline, which were not covered in previous surveys. Moreover, most of the previous reviews focused on DNA sequences, while we go beyond DNA sequences and study a wide range of interactions among DNA sequences, compounds, proteins, multi-omics, and EHR data.
In this survey, we organize ML applications into four stages of the therapeutic pipeline: (1) target discovery: basic biomedical research to discover novel disease targets to enable therapeutics; (2) therapeutic discovery: large-scale screening designed to identify potent and safe therapeutics; (3) clinical study: evaluating the efficacy and safety of the therapeutics in vitro, in vivo, and through clinical trials; and (4) post-market study: monitoring the safety and efficacy of marketed therapeutics and identifying novel indications. We also formulate these tasks and data modalities in ML terms, which can help ML researchers with limited domain background understand them. In summary, this survey presents a unique perspective on the intersection of machine learning, genomics, and therapeutic development. The survey is organized as follows. In Section~\ref{sec:primer}, we provide a brief primer on genomics-related data. We also review popular machine learning models for each data type. Next, in Sections~\ref{sec:target}-\ref{sec:post-market}, we discuss ML applications in genomics across the therapeutics development pipeline. Each section describes a phase in the therapeutics pipeline and contains several ML applications, models, and formulations. Lastly, in Section~\ref{sec:challenge}, we identify seven open challenges that present numerous opportunities for ML model development and novel applications. \begin{figure}[t] \centering \includegraphics[width=0.95\textwidth]{FIG/fig1.pdf} \caption{\textbf{Organization and coverage of this survey}. Our survey covers a wide range of important ML applications in genomics across the therapeutics pipeline (Sections~\ref{sec:target}-\ref{sec:post-market}). In addition, we provide a primer on biomedical data modalities and machine learning models (Section~\ref{sec:primer}).
Finally, we identify seven challenges filled with opportunities (Section~\ref{sec:challenge}).} \label{fig:summary} \end{figure} \section{A Primer on Genomics Data and Machine Learning Models} \label{sec:primer} With advances in high-throughput technologies and data management systems, we now have vast and heterogeneous datasets in the field of biomedicine. This section introduces the basic genomics-related data types and their machine learning representations and provides a primer on popular machine learning methods applied to these data. \subsection{Genomics-related biomedical data} \label{sec:data} \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{FIG/fig2.pdf} \caption{\textbf{Therapeutics data modalities and their machine learning representation.} Detailed descriptions of each modality can be found in Section~\ref{sec:data}. \textbf{a.} DNA sequences can be represented as a matrix where each position is a one-hot vector corresponding to A, C, G, T. \textbf{b.} Gene expressions are a matrix of real values, where each entry is the expression level of a gene in a context such as a cell. \textbf{c.} Proteins can be represented as amino acid strings, a protein graph, or a contact map where each entry is the connection between two amino acids. \textbf{d.} Compounds can be represented as a molecular graph or a string of chemical tokens, which is a depth-first traversal of the graph. \textbf{e.} Diseases are usually described by textual descriptions and also by symbols in the disease ontology. \textbf{f.} Networks connect various biomedical entities with diverse relations. They can be represented as a heterogeneous graph. \textbf{g.} Spatial data are usually depicted as a 3D array, where two dimensions describe the physical position of the entity and the third dimension corresponds to colors (in cell painting) or genes (in spatial transcriptomics).
\textbf{h.} Texts are typically represented as a one-hot matrix where each token corresponds to its index in a static dictionary. Credits: The protein image is adapted from \cite{gaudelet2020utilising}; the spatial transcriptomics image is adapted from 10x Genomics; the cell painting image is from Charles River Laboratories.} \label{fig:data} \end{figure} \xhdr{DNAs} The human genome can be thought of as the instructions for building functional individuals. DNA sequences encode these instructions. Like a computer, where we build a program based on 0/1 bit, the basic DNA sequence units are called nucleotides (A, C, G, and T). Given a list of nucleotides, a cell can build a diverse range of functional entities (programs). There are approximately 3 billion base pairs for the human genome, and more than 99.9\% are identical between individuals. If a subset of the population has different nucleotides in a genome position than the majority, this position is called a variant. This single nucleotide variant is often called a single nucleotide polymorphism (SNP). While most variants are not harmful (they are said to be functionally neutral), many correspond to the potential driver for phenotypes, including diseases. \textit{Machine learning representations:} A DNA sequence is a list of ACGT tokens of length $N$. It is typically represented in three ways: (1) a string $\{A, C, G, T\}^N$; (2) a two dimensional matrix $\mathbf{W} \in \mathbb{R}^{4 \times N}$, where the $i$-th column $\mathbf{W}_i$ corresponds to the $i$-th nucleotide and is an one-hot encoding vector of length 4, where A, C, T and G are encoded as [1,0,0,0], [0,1,0,0], [0,0,1,0], and [0,0,0,1], respectively; or (3) a vector of $\{0, 1\}^N$, where 0 means it is not a variant, and 1 a variant. Example illustration in Figure \ref{fig:data}a. \xhdr{Gene expression/transcripts} In a cell, the DNA sequence of each gene is transcribed into messenger RNA (mRNA) transcripts. 
While most cells share the same genome, individual genes are expressed at very different levels across cells and tissue types, and under different interventions and environments. These expression levels can be measured by the count of mRNA transcripts. Given a disease, we can compare gene expression in people with the disease to that of healthy cohorts (without the disease of interest) and associate various genes with the underlying biological processes of the disease. With the advance of single-cell RNA sequencing (scRNA-seq) technology, we can now obtain gene expression for the different types of cells that make up a tissue. The availability of transcriptomes of tens of thousands of cells creates new opportunities for understanding interactions among cell types and the impact of heterogeneity. \textit{Machine learning representations:} Gene expressions/transcripts are counts of mRNA. For an scRNA-seq experiment with $M$ cells and $N$ genes, we can obtain a gene expression matrix $\mathbf{W} \in \mathbb{Z}^{M \times N}$, where each entry $\mathbf{W}_{i,j}$ corresponds to the transcript count of gene $j$ in cell $i$. An example illustration is in Figure~\ref{fig:data}b. \xhdr{Proteins} Most of the genes encoded in the DNA provide instructions to build a diverse set of proteins, which perform a vast array of functions. For example, transcription factors are proteins that bind to DNA/RNA sequences and regulate their expression under different conditions. A protein is a macromolecule represented by a sequence drawn from 20 standard amino acids, or residues, where each amino acid is a simple compound. Based on this sequence code, it naturally folds into a 3D structure, which determines its function. As functional units, proteins constitute a large class of therapeutic targets, and many drugs are designed to inhibit or promote proteins in disease pathways. Proteins can also serve as therapeutics themselves, such as antibodies and peptides.
\textit{Machine learning representations:} Proteins have diverse representations. A protein with $N$ amino acids can be represented as: (1) a string in $\{A, R, N, D, ...\}^N$ of amino acid sequence tokens; (2) a contact map matrix $\mathbf{W} \in \mathbb{R}^{N \times N}$, where $\mathbf{W}_{i,j}$ is the physical distance between the $i$-th and $j$-th amino acids; (3) a protein graph $G$ with nodes corresponding to amino acids, where nodes are connected based on rules such as a physical distance threshold or k-nearest neighbors; or (4) a 3D grid, a discretized three-dimensional tensor in which each grid point $(x, y, z)$ records the amino acids occupying that region of space. An example illustration is in Figure~\ref{fig:data}c. \xhdr{Compounds} Compounds are molecules composed of atoms connected by chemical bonds. They can interact with proteins and drive important biological processes, and in their natural form they have a 3D structure. Small-molecule compounds are the major class of therapeutics. \textit{Machine learning representations:} A compound is usually represented as (1) a SMILES string, which is a depth-first traversal order of the molecular graph; or (2) a molecular graph $G$, where each node is an atom and edges are the bonds. An example illustration is in Figure~\ref{fig:data}d. \xhdr{Diseases} A disease is an abnormal condition that affects the function and/or modifies the structure of an organism. It arises from both genotype and environmental factors, with intricate mechanisms driven by biological processes. Diseases are observable and can be described by symptoms. \textit{Machine learning representations:} Diseases are represented by (1) symbols in a disease ontology; or (2) a text description of the specific disease. An example illustration is in Figure~\ref{fig:data}e.
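To make the rule-based protein graph of representation (3) above concrete, the following is a minimal sketch of building a residue graph from a pairwise distance map with a contact threshold. The 8~\AA{} cutoff and the toy distance map are our own illustrative assumptions, a commonly used heuristic rather than a value prescribed by the methods surveyed here.

```python
# Sketch of protein representation (3): a residue graph built from a
# pairwise distance map. The 8-angstrom cutoff is a common heuristic.
def contact_graph(dist, threshold=8.0):
    """Return an adjacency list mapping residue i to residues within threshold."""
    n = len(dist)
    return {
        i: [j for j in range(n) if j != i and dist[i][j] <= threshold]
        for i in range(n)
    }

# Toy symmetric distance map for a hypothetical 3-residue fragment.
dist = [
    [0.0, 3.8, 12.1],
    [3.8, 0.0, 6.5],
    [12.1, 6.5, 0.0],
]
graph = contact_graph(dist)  # residues 0-1 and 1-2 are in contact; 0-2 are not
```

The same adjacency-list structure also serves as the input to the graph neural networks discussed later in this primer.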
\xhdr{Biomedical networks} Biological processes are not driven by individual units but by numerous interactions among various types of entities, such as cell signaling pathways, protein-protein interactions, and gene regulation. These interactions can be characterized by biomedical networks, which provide a systems view of biological phenomena. \textit{Machine learning representations:} Biomedical networks are represented as graphs, where each node is a biomedical entity and each edge corresponds to a relation between entities. An example illustration is in Figure~\ref{fig:data}f. \xhdr{Spatial data} With the advance of microscopes and fluorescent probes, we can visualize cell dynamics through cellular images. Imaging cells under various conditions, such as drug treatment, allows us to identify the effects of those conditions at the cellular level. Furthermore, spatial genomic sequencing techniques now allow us to visualize and understand gene expression for cellular processes in the tissue environment. \textit{Machine learning representations:} A cellular image or spatial transcriptomics measurement can be represented as a matrix of size $M \times N$, where $M$ and $N$ are the width and height of the data (the number of pixels or transcript spots along each dimension) and each entry corresponds to a pixel of the image or, in the case of spatial transcriptomics, a transcript count. Additional channels (each a separate $M \times N$ matrix) encode information such as colors or individual genes for spatial transcriptomics. After aggregation, the spatial data can be represented as a tensor of size $M \times N \times H$, where $H$ is the number of channels. An example illustration is in Figure~\ref{fig:data}g. \xhdr{Texts} The first important example of text encountered in therapeutics development is clinical trial design protocols, where texts describe inclusion and exclusion criteria for trial participation, often as a function of genomic markers.
For example, in a trial to study Gefitinib for EGFR-mutant Non-Small Cell Lung Cancer, one of the trial eligibility criteria would be ``An EGFR sensitizing mutation must be detected in tumor tissue''~\citep{trial}. A second type of clinical text is the clinical notes documented in electronic health records, which contain valuable information for post-market research on treatments. \textit{Machine learning representations:} Clinical texts are similar to texts in common natural language processing. The standard way to represent them is a matrix of size $M \times N$, where $M$ is the vocabulary size and $N$ is the number of tokens in the text. Each column is a one-hot encoding of the corresponding token. An example illustration is in Figure~\ref{fig:data}h. \subsection{Machine Learning Methods for Biomedical Data} \label{sec:methods} Machine learning models learn patterns from data and leverage these patterns to make accurate predictions. Numerous ML models have been proposed to tackle different challenges. In this section, we briefly introduce the main mechanisms of popular ML models used to analyze genomic data. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{FIG/fig3.pdf} \caption{\textbf{Machine learning for genomics workflow.} \textbf{a.} The first step is to curate a machine learning dataset. Raw data are extracted from databases of various sources and processed into data points. Each data point corresponds to an input of one or more biomedical entities and a label from annotation or experimental results. These data points constitute a dataset, which is split into three sets. The training set is for the ML model to learn and identify useful and generalizable patterns. The validation set is for model selection and parameter tuning. The testing set is for the evaluation of the final model. The data split can be constructed to reflect real-world challenges. \textbf{b.} Various ML models can be trained on the training set and tuned based on a quantified metric on the validation set, such as a loss $\mathcal{L}$ that measures how well the model predicts the output given the input. We then select the model with the lowest validation loss. \textbf{c.} The selected model can then predict on the test set, where various evaluation metrics measure how well the model performs on new, unseen data points. Models can also be probed with explainability methods to identify biological insights captured by the model. Experimental validation is also common to ensure the model can approximate wet-lab experimental results. Finally, the model can be deployed to make predictions on new data without labels. The prediction becomes a proxy for the label in downstream tasks of interest. } \label{fig:ml_workflow} \end{figure} \xhdr{Preliminary} A typical ML setup for genomics is as follows: given a set of data points, each consisting of input features and a ground-truth biological label, a machine learning model aims to learn a mapping from input to label based on the observed data points, also called training data. This setting of predicting by leveraging known supervised labels is called supervised learning. The size of the training data is called the sample size. ML models are data-hungry and usually need a large sample size to perform well. The input features can be DNA sequences, compound graphs, or clinical texts, depending on the task at hand. The ground-truth label is usually obtained via biological experiments, and it defines the goal for an ML model to achieve. Thus, the quality of the ground truth directly affects ML model performance, highlighting the necessity of label curation. There are various forms of ground-truth labels. If the labels are continuous (e.g., binding scores), the learning problem is a {\it regression} problem.
If the labels are discrete variables (e.g., whether an interaction occurs), the problem is a {\it classification} problem. Models focusing on predicting the labels of the data are called {\it discriminative models}. Besides making predictions, ML models can also generate new data points by modeling the statistical distribution of data samples; models following this procedure are called {\it generative models}. When labels are not available, an ML model can still identify the underlying patterns within the unlabeled data points. This problem setting is called {\it unsupervised learning}, where models discover patterns or clusters (e.g., cell types) by modeling the relations among data points. {\it Self-supervised learning} uses supervised learning methods to handle unlabeled data: it creatively produces labels from the unlabeled data (e.g., masking out a motif and using the surrounding context to predict it)~\citep{devlin2018bert,hu2019strategies}. In many biological settings, ground-truth labels are scarce, and few-shot learning can be considered. {\it Few-shot learning} assumes only a few labeled data points but many unlabeled ones. Another strategy, {\it meta-learning}, aims to learn from a set of related tasks in order to learn quickly and accurately on an unseen task. A model that integrates multiple data modalities (e.g., DNA sequence plus compound structure) performs {\it multimodal learning}, and a model that predicts multiple labels (e.g., multiple target endpoints) performs {\it multi-task learning}. \begin{figure}[t] \centering \includegraphics[width = 0.85\textwidth]{FIG/fig4.pdf} \caption{\textbf{Machine learning model illustrations.} Details about each model can be found in Section~\ref{sec:methods}. \textbf{a.} Classic machine learning models featurize raw data and apply various models (mostly linear) to classify (e.g., binary output) or regress (e.g., real-valued output). \textbf{b.} Deep neural networks map input features to embeddings through a stack of nonlinear weight-multiplication layers. \textbf{c.} Convolutional neural networks apply many local filters to extract local patterns and aggregate local signals through pooling. \textbf{d.} Recurrent neural networks generate an embedding for each token in the sequence based on the previous tokens. \textbf{e.} Transformers apply a stack of self-attention layers that assign a weight to each pair of input tokens. \textbf{f.} Graph neural networks aggregate information from the local neighborhood to update each node embedding. \textbf{g.} Autoencoders reconstruct the input from an encoded compact latent space. \textbf{h.} Generative models generate novel biomedical entities with more desirable properties. } \label{fig:models} \end{figure} \xhdr{Classic Machine Learning Models} Traditional ML usually requires transforming the input into tabular real-valued data, where each data point corresponds to a feature vector. In our context, these are predefined features such as SNP vectors, polygenic risk scores, and chemical fingerprints. These tabular data can then be fed into a wide range of supervised models, such as linear/logistic regression, decision trees, random forests, support vector machines, and naive Bayes~\citep{mitchell1997machine}. They work well when the features are well defined. A multilayer perceptron (MLP)~\citep{rosenblatt1961principles} consists of at least three layers of neurons, where each layer's output is passed through a nonlinear activation function to capture complex patterns. When the number of layers is large, the model is called a deep neural network (DNN). \textit{Suitable biomedical data:} any real-valued feature vectors built upon biomedical entities, such as SNP profiles and chemical fingerprints.
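The classic recipe above, predefined feature vectors fed into a (mostly linear) supervised model, can be sketched as follows. The tiny binary ``fingerprint'' dataset and hyperparameters are purely illustrative assumptions, not data from any study cited here.

```python
import math

# Minimal sketch of a classic ML model: logistic regression trained by
# stochastic gradient descent on toy binary "fingerprint" feature vectors.
def train_logistic(X, y, lr=0.5, epochs=200):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Sigmoid of the linear score gives the predicted probability.
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            g = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))

# Toy dataset: by construction, the first feature bit determines the label.
X = [[1, 0, 1], [1, 1, 0], [0, 1, 1], [0, 0, 1]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
```

In practice one would reach for an off-the-shelf implementation; the point of the sketch is only that the model is a weighted sum of predefined features passed through a simple link function.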
\xhdr{Convolutional Neural Network (CNN)} CNNs are a class of DNNs widely applied to image classification, natural language processing, and signal processing tasks such as speech recognition~\citep{lecun1995convolutional}. A CNN model has a series of convolution filters, which allow it to identify local patterns in the data (e.g., edges and shapes in images). Such networks can automatically extract hierarchical patterns in data, and the weight of each filter reveals patterns (such as conserved motifs). \textit{Suitable biomedical data:} short DNA sequences, compound SMILES strings, gene expression profiles, cellular images. \xhdr{Recurrent Neural Network (RNN)} An RNN is designed to model sequential data, such as time series, event sequences, and natural language text~\citep{de2015survey}. The RNN model is applied sequentially along a sequence: the input at each step includes the current observation and the previous hidden state. RNNs naturally model variable-length sequences. Two widely used variants of RNNs are long short-term memory (LSTM)~\citep{hochreiter1997long} and gated recurrent units (GRU)~\citep{cho2014properties}. \textit{Suitable biomedical data:} DNA sequences, protein sequences, texts. \xhdr{Transformer} Transformers~\citep{vaswani2017attention} are a recent class of neural networks that leverage self-attention: assigning an interaction score to every pair of input features (e.g., a pair of DNA nucleotides). By stacking these self-attention units, the model can capture more expressive and complicated interactions. Transformers have shown superior performance on sequence data, such as natural language text, and have also been successfully adapted for state-of-the-art performance on proteins~\citep{Rivese2016239118} and compounds~\citep{huangmoltrans}. \textit{Suitable biomedical data:} DNA sequences, protein sequences, texts, images.
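Among the models above, the CNN's filter mechanism is easy to see in a toy example: a single convolution filter applied to one-hot DNA acts like a motif detector scanned along the sequence. The sequence, the TAC motif filter, and the match-count scoring are our own illustrative assumptions (a trained CNN would learn real-valued filter weights).

```python
# Toy sketch of why CNN filters suit sequence data: a 4 x K filter slid over
# one-hot DNA scores each position, like a position weight matrix scan.
NUCLEOTIDES = "ACGT"

def one_hot(seq):
    """4 x N one-hot matrix with rows ordered A, C, G, T."""
    return [[1 if nt == row_nt else 0 for nt in seq] for row_nt in NUCLEOTIDES]

def conv1d_scan(onehot, filt):
    """Slide a 4 x K filter along the sequence; return per-position scores."""
    n, k = len(onehot[0]), len(filt[0])
    return [
        sum(filt[r][c] * onehot[r][pos + c] for r in range(4) for c in range(k))
        for pos in range(n - k + 1)
    ]

# A hypothetical filter that "prefers" the motif TAC (weight 1 on T, A, C).
motif_filter = one_hot("TAC")
scores = conv1d_scan(one_hot("GGTACG"), motif_filter)
best = scores.index(max(scores))  # the motif TAC starts at position 2
```

A real CNN stacks many such filters with learned weights and pooling; inspecting the learned filters is what allows motif visualization in the binding prediction tasks discussed later.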
\xhdr{Graph Neural Networks (GNN)} Graphs are universal representations of complex relations in many real-world objects. In biomedicine, graphs can represent knowledge graphs, molecules, protein-protein interaction networks, and medical ontologies. Unlike sequences and images, however, graphs do not follow a rigid, grid-like structure. GNNs are a class of models that convert graph structures into embedding vectors (i.e., node or graph representation vectors)~\citep{kipf2016semi}. In particular, GNNs generalize the convolution operation to graphs by iteratively passing and aggregating messages from neighboring nodes. The resulting embedding vectors capture both the node attributes and the network structure. \textit{Suitable biomedical data:} biomedical networks, compound/protein graphs, similarity networks. \xhdr{Autoencoders (AE)} Autoencoders are an unsupervised deep learning method. An autoencoder maps the input data into a latent embedding (encoder) and then reconstructs the input from the latent embedding (decoder)~\citep{kramer1991nonlinear}. The objective is to reconstruct the input from a low-dimensional latent space, forcing the latent representation to focus on essential properties of the data. Both encoder and decoder are neural networks. An AE can be considered a nonlinear analog of principal component analysis (PCA). The learned latent representation captures patterns in the input data and can thus be used for unsupervised tasks such as clustering. Among its variants, denoising autoencoders (DAEs) take partially corrupted inputs and are trained to recover the original undistorted inputs~\citep{vincent2010stacked}. Variational autoencoders (VAEs) model the latent space probabilistically; as the resulting probabilities are complex and usually intractable, they adopt variational inference to approximate them~\citep{kingma2013auto}. \textit{Suitable biomedical data:} unlabeled data.
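The message-passing idea behind GNNs can be sketched in a few lines. This is our own toy illustration, not a specific architecture from the literature: one round of aggregation that replaces each node's features with the unweighted mean over itself and its neighbors (a trained GNN would interleave learned weight matrices and nonlinearities).

```python
# Toy sketch of GNN message passing: one round of neighborhood aggregation,
# here a plain mean over each node and its neighbors, without learned weights.
def message_passing_round(adj, features):
    """Update each node embedding with the mean of itself and its neighbors."""
    updated = {}
    for node, neighbors in adj.items():
        group = [node] + neighbors
        dim = len(features[node])
        updated[node] = [
            sum(features[m][d] for m in group) / len(group) for d in range(dim)
        ]
    return updated

# Tiny hypothetical interaction graph with edges 0-1 and 1-2.
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
new_feats = message_passing_round(adj, feats)
# After one round, node 0 blends its own features with those of node 1.
```

Stacking several such rounds lets information propagate across multi-hop neighborhoods, which is how the final embeddings come to encode network structure.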
\xhdr{Generative Models} In contrast to making predictions, generative models aim to learn a statistical distribution that characterizes the underlying dataset (e.g., a set of DNA sequences for a disease) and its generation process~\citep{wittrock1974learning}. The learned distribution can support various downstream tasks; for example, one can sample from it to intelligently generate optimized data points, such as novel images, compounds, or RNA sequences. One popular model is the generative adversarial network (GAN)~\cite{goodfellow2014generative}. It consists of two sub-models: a {\it generator} that captures the data distribution of a training dataset in a latent representation, and a {\it discriminator} that determines whether a sample is real or generated. These two sub-models are trained iteratively such that the resulting generator produces realistic samples that can fool the discriminator. \textit{Suitable biomedical data:} data where new variants can have more desirable properties (e.g., molecule generation for drug discovery)~\citep{fu2020core,jin2018junction}. Depending on the data modality, different encoders can be chosen for the generative model.
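The fit-then-sample loop of a generative model can be illustrated far more simply than a GAN. The sketch below, our own deliberately minimal assumption, fits an independent per-position nucleotide distribution to a handful of toy sequences and samples novel sequences from it; real generative models for biology learn much richer, non-independent distributions.

```python
import random

# Minimal generative-model sketch: fit per-position nucleotide frequencies
# from example DNA sequences, then sample new sequences from the learned
# distribution. Positions are (unrealistically) treated as independent.
def fit_position_model(seqs):
    """Return, per position, a dict of nucleotide -> empirical probability."""
    length = len(seqs[0])
    model = []
    for pos in range(length):
        counts = {}
        for s in seqs:
            counts[s[pos]] = counts.get(s[pos], 0) + 1
        model.append({nt: c / len(seqs) for nt, c in counts.items()})
    return model

def sample(model, rng):
    """Draw one sequence, sampling each position from its fitted distribution."""
    return "".join(
        rng.choices(list(dist), weights=list(dist.values()))[0] for dist in model
    )

training = ["ACGT", "ACGA", "ACTT"]  # toy training set
model = fit_position_model(training)
rng = random.Random(0)
new_seq = sample(model, rng)  # a novel 4-mer; always starts with "AC" here
```

Even this trivial model exhibits the two ingredients named above: a learned distribution over the data, and a sampling procedure that produces new, unseen examples from it.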
\begin{table}[t] \centering \caption{High-quality machine learning dataset references and pointers for genomics therapeutics tasks.} \adjustbox{max width=\textwidth}{ \begin{tabular}{l|l|l|l} \toprule \textbf{Pipeline} & \textbf{Task} & \textbf{Data Reference} & \textbf{Data Link} \\ \midrule \multirow{10}{*}{Target Discovery (Sec.~\ref{sec:target})} & DNA/RNA-protein binding & \cite{zeng2016convolutional} & \url{http://cnn.csail.mit.edu/} \\ & Methylation state & \cite{levy2019pymethylprocess} & \url{https://bit.ly/3rVWgR9}\\ & RNA splicing & \cite{harrow2012gencode} & \url{https://www.gencodegenes.org/}\\ & Spatial gene expression & \cite{weinstein2013cancer} & \url{https://bit.ly/3fOLgTi}\\ & Cell composition analysis & \cite{cobos2020benchmarking}& \url{https://go.nature.com/3mxCZEv}\\ & Gene network construction & \cite{shrivastava2020grnular} & \url{https://bit.ly/3mBMB1f} \\ & Variant calling & \cite{chen2019systematic} & \url{https://bit.ly/39RJcG6} \\ & Variant prioritization & \cite{landrum2014clinvar} & \url{https://www.ncbi.nlm.nih.gov/clinvar/} \\ & Gene-disease association & \cite{pinero2016disgenet} & \url{https://www.disgenet.org/}\\ & Pathway analysis & \cite{fabregat2018reactome} & \url{https://reactome.org/}\\ \midrule \multirow{5}{*}{Therapeutics Discovery (Sec.~\ref{sec:discovery})} & Drug response & \cite{yang2012genomics}& \url{https://www.cancerrxgene.org/}\\ & Drug combination & \cite{liu2020drugcombdb} & \url{http://drugcombdb.denglab.org/}\\ & CRISPR on-target& \cite{leenay2019large} & \url{https://bit.ly/3rXlKxi} \\ & CRISPR off-target& \cite{stortz2021crisprsql} & \url{http://www.crisprsql.com/}\\ & Virus vector design& \cite{bryant2021deep}& \url{https://bit.ly/31RRKIP} \\ \midrule \multirow{4}{*}{Clinical Study (Sec.~\ref{sec:clinical})} & Cross-species translation & \cite{poussin2014species}& \url{https://bit.ly/3mykFLC} \\ & Patient stratification & \cite{curtis2012genomic} & \url{https://bit.ly/3cWTW8d} \\ & Patient-trial matching & \cite{zhang2020deepenroll} & \url{https://bit.ly/3msp0A0}\\ & Mendelian randomization & \cite{hemani2017automating} & \url{https://www.mrbase.org/}\\ \midrule \multirow{2}{*}{Post-Market Study (Sec.~\ref{sec:post-market})} & Clinical texts mining & Proprietary & N/A\\ & Biomedical literature mining & \cite{pyysalo2007bioinfer} & \url{https://bit.ly/3cUtpYZ}\\ \bottomrule \end{tabular} } \label{tab:database} \end{table} \section{Machine Learning for Genomics in Target Discovery} \label{sec:target} A therapeutic target is a molecule (e.g., a protein) that plays a role in the biological process of a disease. The molecule can be targeted by a drug to produce a therapeutic effect such as inhibition, thereby blocking the disease process. Much of target discovery relies on fundamental biological research to depict a full picture of human biology; based on this knowledge, we identify target biomarkers. In this section, we review ML for genomics tasks in target discovery. In Section~\ref{sec:human_bio}, we review six tasks in which ML facilitates the understanding of human biology, and in Section~\ref{sec:biomarker}, we describe four tasks in which ML helps identify druggable biomarkers more accurately and more quickly. \subsection{Facilitating Understanding of Human Biology} \label{sec:human_bio} \begin{figure}[t] \centering \includegraphics[width = 0.8\textwidth]{FIG/fig5.pdf} \caption{\textbf{Task illustrations for the theme ``facilitating understanding of human biology''}. \textbf{a.} A model predicts whether a DNA/RNA sequence can bind to a protein. After training, one can identify binding sites based on feature importance (Section~\ref{sec: dna-protein}). \textbf{b.} A model predicts a missing DNA methylation state based on its neighboring states and the DNA sequence (Section~\ref{sec:methy}). \textbf{c.} A model predicts the splicing level given the RNA sequence and the context (Section~\ref{sec:splice}).
\textbf{d.} A model predicts spatial transcriptomics from a tissue image (Section~\ref{sec:spatial}). \textbf{e.} A model predicts the cell type composition from gene expression (Section~\ref{sec:composition}). \textbf{f.} A model constructs a gene regulatory network from gene expressions (Section~\ref{sec:network_construction}). Credits: Figure c is adapted from \cite{xiong2015human}, and the spatial transcriptomics image in Figure d is from \cite{he2020integrating}.} \label{fig:human_biology} \end{figure} Oftentimes, the first step in developing any therapeutic agent is to generate a disease hypothesis and understand the disease mechanism. This requires some understanding of basic human biology, since diseases are complicated and driven by many factors. Machine learning applied to genomics can facilitate basic biomedical research and help understand disease mechanisms. A wide range of relevant tasks have been tackled by machine learning, from predicting splicing patterns~\citep{jha2017integrative,xiong2015human} and DNA methylation status~\citep{angermueller2017deepcpg} to decoding the regulatory roles of genes~\citep{liu2016pedla,deepsea}. The majority of previous reviews have focused on this theme only. While there are numerous tasks under this category, we describe six important and popular tasks here. \subsubsection{DNA-protein and RNA-protein binding prediction} \label{sec: dna-protein} DNA-binding proteins bind to specific DNA strands (binding sites/motifs) to influence the transcription rate to RNA, chromatin accessibility, and so on. These motifs regulate gene expression and, if mutated, can potentially contribute to diseases. Similarly, RNA-binding proteins bind to RNA strands to influence RNA processing, such as splicing and folding. Thus, it is important to identify the DNA and RNA motifs for these binding proteins.
Traditional approaches are based on position weight matrices (PWMs), but they require existing knowledge about the motif length and typically ignore interactions among the binding site loci. Machine learning models trained directly on sequences to predict binding scores circumvent these challenges. \citep{alipanahi2015predicting} trains a convolutional neural network on large-scale DNA/RNA sequences of varying lengths to predict binding scores. CNNs are a natural fit for this task because their filters operate by a mechanism similar to PWMs, which means that binding site motifs can be visualized through the learned filter weights. While motifs are useful, they have lower predictive power than evolutionary features~\citep{kircher2014general} for identifying chromatin protein and histone mark binding. \citep{deepsea} shows that integrating another CNN model over additional information from epigenomic profiles better predicts these marks. Extending CNN-based models, a large body of work has been proposed to predict DNA- and RNA-protein binding~\citep{kelley2016basset,zhang2018high,zeng2016convolutional,cao2019simple}. \textit{Machine learning formulation:} Given a set of DNA/RNA sequences, predict their binding scores. After training, use feature importance attribution methods to identify the motifs. A task illustration is in Figure~\ref{fig:human_biology}a. \subsubsection{Methylation state prediction} \label{sec:methy} DNA methylation adds methyl groups to individual A or C bases in the DNA, modifying gene activity without changing the sequence. It has been shown to be a common mediator of biological processes such as cancer progression and cell differentiation~\citep{robertson2005dna}. Thus, it is important to know the methylation status of DNA sequences in various cells.
However, since single-cell methylation profiling has low coverage, most methylation states at specific DNA positions are missing and require accurate imputation. Classical methods can only predict population-level rather than cell-level status, as cell-level prediction requires annotations that are unavailable~\citep{zhang2015predicting,whitaker2015predicting}. Machine learning models can tackle this problem. Given a set of cells with their available sequenced methylation states at each DNA position and the DNA sequence, \citep{angermueller2017deepcpg} accurately infers the unmeasured methylation states at the single-cell level. More specifically, the imputation uses a bidirectional recurrent neural network on the cells' neighboring available methylation states and a CNN on the DNA sequence. The combined embedding captures interactions between DNA sequence and methylation status both across and within cells. Alternative architectures have also been proposed, such as Bayesian clustering~\citep{kapourani2019melissa} and a variational autoencoder~\citep{levy2020methylnet}. Notably, the approach can also be extended to RNA methylation state prediction: \citep{zou2019gene2vec} applies a CNN to the neighboring methylation states and a word2vec model to the RNA subsequence. \textit{Machine learning formulation:} For a DNA/RNA position with missing methylation status, given its available neighboring methylation states and the DNA/RNA sequence, predict the methylation status at the position of interest. A task illustration is in Figure~\ref{fig:human_biology}b. \subsubsection{RNA splicing prediction} \label{sec:splice} RNA splicing is a mechanism that assembles the coding regions and removes the non-coding ones so that a transcript can be translated into protein. A single gene can serve various functions by being spliced in different ways under different conditions.
\cite{lopez2005splicing} estimates that as many as 60\% of pathogenic variants responsible for genetic diseases may influence splicing. \cite{gelfman2017annotating} used ML to derive a score, TraP, which identifies around 2\% of synonymous variants and 0.6\% of intronic variants as likely pathogenic due to splicing defects. Thus, it is important to identify the genetic variants that alter splicing. \cite{xiong2015human} models this problem as predicting the splicing level of an exon, measured by the transcript counts of this exon, given its neighboring RNA sequence and the cell type information. It uses Bayesian neural network ensembles on top of curated RNA features and has demonstrated its accuracy by identifying known mutations and discovering new ones. Notably, this model is trained on large-scale data across diverse disease areas and tissue types; the resulting model can thus predict, without experimental data, the effect of a new unseen mutation contained within hundreds of nucleotides on the splicing of an intron. In addition to predicting the splicing level given a triplet of exons under various conditions, recent models have been developed to annotate the nucleotide branchpoints of RNA splicing. \cite{paggi2018sequence} feeds an RNA sequence into a recurrent neural network, which predicts for each nucleotide the likelihood of being a branchpoint. \cite{jagadeesh2019s} further improves the performance by integrating features from the splicing literature, generating a highly accurate splicing-pathogenicity score. \textit{Machine learning formulation:} Given an RNA sequence and, if available, its cell type, predict for each nucleotide the probability of being a splice breakpoint and the splicing level. A task illustration is in Figure~\ref{fig:human_biology}c. \subsubsection{Spatial gene expression inference} \label{sec:spatial} Gene expression varies across the spatial organization of tissue.
This heterogeneity contains important insights about the underlying biological processes. Regular sequencing, whether of single cells or bulk tissue, does not capture this information. Recent advances in spatial transcriptomics (ST) characterize gene expression profiles in their spatial tissue context~\citep{staahl2016visualization}. However, there are still challenges in integrating the sequencing output with the tissue context provided by histopathology images to better visualize and understand patterns of gene expression within a tissue section. Machine learning models that directly predict gene expression from the histopathology image can thus be a useful tool. \cite{he2020integrating} develop a deep CNN that predicts gene expression from histopathology images of patients with breast cancer at a resolution of 100 $\mu$m. They also show the model can generalize to other breast cancer datasets without re-training. Building upon the inferred spatial gene expression levels, many downstream tasks are enabled. For example, \cite{levy2020spatial} construct a pipeline that characterizes tumor heterogeneity on top of the CNN gene expression inference step. \textit{Machine learning formulation: } Given the histopathology image of the tissue, predict the gene expression for every gene at each spatial transcriptomics spot. Task illustration is in Figure~\ref{fig:human_biology}d. \subsubsection{Cell composition analysis} \label{sec:composition} Different cell types can drive changes in gene expression that are unrelated to the interventions. Analyzing the average gene expression for a batch of mixed cells with distinct cell types could lead to bias and false results~\citep{egeblad2010tumors}. Thus, it is important to deconvolve the effects of cell-type composition from the real signals in tissue-based RNA-seq data. ML models can help estimate the cell type proportions and the cell-type-specific gene expression.
The rationale is to obtain parameters of gene expression (a signature matrix) that characterize each cell type through single-cell profiles. The signature matrix should contain genes that are stably expressed across conditions. These parameters are then integrated with the RNA-seq data to infer cell composition for a set of query gene expression profiles. Various methods, including linear regression~\citep{avila2018computational} and support vector machines~\citep{newman2015robust}, are used to predict a cell composition vector that, when combined with the signature matrix, approximates the gene expression. In these cases, the signature matrix is predefined, which may not be optimal. \cite{menden2020deep} apply DNNs to predict the cell composition profile directly from the gene expression, where the hidden neurons can be considered as the learned signature matrix. Cell deconvolution is also crucial for spatial transcriptomics, where each spot can contain 2 to 20 cells from a mixture of dozens of possible cell types. \cite{andersson2020single} model various cell type-specific parameters using a customized probabilistic model. \cite{su2020dstg} use a graph convolutional network to leverage information from similar spots in the spatial transcriptomics data. However, this problem is constrained by the limited availability of gold-standard cell composition annotations. \textit{Machine learning formulation: } Given the gene expressions of a set of cells (in bulk RNA-seq or a spot in spatial transcriptomics), infer the proportion of each cell type in this set. Task illustration is in Figure~\ref{fig:human_biology}e. \subsubsection{Gene network construction} \label{sec:network_construction} The expression level of a gene is regulated via transcription factors (TFs) produced by other genes. Aggregating these TF-gene relations results in the gene regulatory network. Accurate characterization of this network is crucial because it describes how a cell functions.
However, it is difficult to quantify gene networks on a large scale through experiments alone. Computational approaches have been proposed to construct gene networks from gene-expression data. The majority of them learn a mapping from the expression of TFs to the expression of a gene. If the mapping is successful, then it is likely that these TFs regulate this gene. Various mapping methods have been proposed, such as linear regression~\citep{haury2012tigress}, random forests~\citep{huynh2010inferring}, and gradient boosting~\citep{moerman2019grnboost2}. \cite{shrivastava2020grnular} propose a deep neural network version of the mapping through a specialized unrolled algorithm to control the sparsity of the learned network. They also leverage supervision obtained through synthetic data simulators to further improve robustness. Despite the promise, this problem remains unsolved due to the sparsity, heterogeneity, and noise of the gene expression data, particularly data from single-cell RNA sequencing. \textit{Machine learning formulation: } Given a set of gene expression profiles of a gene set, identify the gene regulatory network by predicting all pairs of interacting genes. Task illustration is in Figure~\ref{fig:human_biology}f. \subsection{Identifying Druggable Biomarkers} \label{sec:biomarker} \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{FIG/fig6.pdf} \caption{\textbf{Task illustrations for the theme "identifying druggable biomarkers".} \textbf{a.} A model predicts the zygosity given a read pileup image (Section~\ref{sec:calling}). \textbf{b.} A model predicts whether a patient has the disease given the genomic sequence. After training, feature importance attribution methods are used to assign an importance score to each variant, which is then ranked and prioritized (Section~\ref{sec:variant_prior}). \textbf{c.} A graph encoder obtains embeddings for each disease and gene node, and they are fed into a predictor to predict their association (Section~\ref{sec:gda}).
\textbf{d.} A model identifies a set of gene pathways from the gene expression profiles and the known gene pathways (Section~\ref{sec:pathway}).} \label{fig:biomarker} \end{figure} Diseases are driven by complicated biological processes, where each step may be associated with a biomarker. By identifying these biomarkers, we can design therapeutics to break the disease pathway and cure the disease. Machine learning can help identify these biomarkers by mining through large-scale biomedical data to predict genotype-phenotype associations accurately. Probing the trained models can uncover potential biomarkers and identify patterns related to the disease mechanisms. Next, we present several important tasks related to biomarker identification. \subsubsection{Variant calling} \label{sec:calling} Variant calling is the very first step before relating genotypes to diseases. It is used to specify which genetic variants are present in each individual’s genome from sequencing data. The majority of variants are biallelic, meaning that each locus has only one possible alternative form of nucleotide compared to the reference, while a small fraction are multiallelic, meaning that each locus can have more than one alternate form. As each locus has two copies, one from the mother and one from the father, the variant is measured by the total set of nucleotides (e.g., for a biallelic variant, suppose B is the reference nucleotide and b is the alternative; three genotypes are possible: homozygous (BB), heterozygous (Bb), and homozygous alternate (bb)). Raw sequencing outputs are usually billions of short reads, and these reads are aligned to a reference genome. In other words, for each locus, we have a set of short reads that contain this locus. Since sequencing techniques have errors, the challenge is to predict the variant status of each locus accurately from the set of reads. Manual processing of such a large number of reads to identify each variant is infeasible.
Thus, efficient computational approaches are needed for this task. A statistical framework, the Genome Analysis Toolkit (GATK)~\citep{depristo2011framework}, combines logistic regression, hidden Markov models, and Gaussian mixture models and is commonly used for variant calling. Deep learning methods have shown improved performance. For example, while previous works operate on sequencing statistics, DeepVariant~\citep{poplin2018universal} treats the sequencing alignments as an image and applies CNNs. It has been shown to have superior performance to previous modeling efforts and also works for multiallelic variant calling. In addition to predicting zygosity, \cite{luo2019multi} use multi-task CNNs to predict the variant type, alternative allele, and indel length. Many other deep learning based methods have been proposed to tackle more specific challenges, such as long sequencing reads, using LSTMs~\citep{luo2020exploring}. Benchmarking efforts have also been conducted~\citep{zook2019open}. Note that although most methods achieve greater than 99\% accuracy, thousands of variants are still called incorrectly since the genome sequence is extremely long. Also, variability persists across different sequencing technologies. Another challenge is the phasing problem, which is to estimate whether two mutations in a gene are on the same chromosome (haplotype) or opposite ones~\citep{delaneau2013improved}. Thus, there is still room for further improvement. \textit{Machine learning formulation: } Given the aligned sequencing data for each locus ((1) a read pileup image, which is a matrix of dimensions $M \times N$, with $M$ the number of reads and $N$ the length of the reads; or (2) the raw reads, which are a set of sequence strings), classify the multi-class variant status. Task illustration is in Figure~\ref{fig:biomarker}a.
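The formulation above can be grounded with a deliberately naive baseline that simply thresholds the alternate-allele fraction in the pileup; the function name and thresholds below are illustrative assumptions, not taken from GATK or DeepVariant.

```python
def naive_zygosity_call(pileup_bases, ref_base, alt_base, het_band=(0.2, 0.8)):
    """Toy caller for one biallelic locus: classify BB / Bb / bb from the
    fraction of reads supporting the alternate allele."""
    informative = [b for b in pileup_bases if b in (ref_base, alt_base)]
    if not informative:
        raise ValueError("no informative reads at this locus")
    alt_frac = sum(b == alt_base for b in informative) / len(informative)
    lo, hi = het_band
    if alt_frac < lo:
        return "BB"  # homozygous reference
    if alt_frac > hi:
        return "bb"  # homozygous alternate
    return "Bb"      # heterozygous

# 12 reads cover a locus with reference base 'A' and alternate base 'G'.
print(naive_zygosity_call(list("AAAAAGGGGGAG"), "A", "G"))  # -> Bb
```

Real callers replace the fixed thresholds with models of sequencing error and read quality; DeepVariant instead renders the pileup as an image and classifies it with a CNN.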
\subsubsection{Variant pathogenicity prioritization/phenotype prediction} \label{sec:variant_prior} There is an extensive number of genomic variants in the human genome, at least one million per person. While many influence complex traits and are relatively harmless, some are associated with diseases. Complex diseases are associated with multiple variants in both coding and non-coding regions of the genome. Thus, prioritization of pathogenic variants from the entire variant set can potentially lead to disease targets. There are mainly two computational approaches. The first one is to predict the pathogenicity given a set of features for a single variant. These features are usually curated from biochemical knowledge, such as amino acid identities. \cite{kircher2014general} build on these features using a linear support vector machine, and \cite{quang2015dann} use deep neural networks to classify whether a variant is pathogenic. The DNN shows improved performance on classification metrics. After training, the model can generate a ranked list of variants based on their predicted pathogenicity likelihood, where the top ones are prioritized. Note that this line of work considers each variant as an input data point and assumes some known knowledge of the pathogenicity of the variants, which is not the case in many scenarios, especially for new diseases. Another line of work is to use each genome profile as a data point and use a computational model to predict disease risks from this profile. If the model is accurate, one can obtain variants contributing to the prediction of the disease phenotype. Predicting directly from the whole-genome sequence is challenging for two reasons. First, as the whole genome is high-dimensional while the cohort size for each disease is relatively limited, this presents the "curse of dimensionality" challenge in machine learning.
Second, most SNPs in the input genome are irrelevant to the disease, presenting difficulty in correctly identifying these signals from the noise. \cite{kooperberg2010risk} use a sparse regression model to predict the risk of Crohn's disease for patients using genomics data in the coding region. \cite{pare2017machine} use gradient boosted regression to approximate polygenic risk scores for complex traits such as diabetes, height, and BMI. \cite{isgut2021highly} use logistic regression on polygenic risk scores to improve myocardial infarction risk prediction. \cite{zhou2018deep} apply DNNs to the epigenomic features of both the coding and non-coding regions to predict gene expression for more than 200 tissue and cell types and later identify disease-causing SNPs. Building upon DeepSEA~\citep{deepsea}, \cite{zhou2019whole} apply a CNN to epigenomic profiles, which capture modifications to the DNA such as DNA methylation or chromatin accessibility, to predict autism and identify experimentally validated non-coding variant mutations. \textit{Machine learning formulation: } Given features about a variant, predict its corresponding disease risk and then rank all variants based on the disease risk. Alternatively, given the DNA sequence or other related genomics features, predict the likelihood of disease risk for this sequence and retrieve the variants in the sequence that contribute most to the risk prediction. Task illustration is in Figure~\ref{fig:biomarker}b. \subsubsection{Rare disease detection} \label{sec:rare} In the US, a rare disease is defined as one that affects fewer than 200,000 people, with other countries similarly defining a rare disease based on low prevalence. There are around 7,000 rare diseases, and they collectively affect 350 million people worldwide~\citep{vickers2013challenges}.
Due to limited financial incentives, unknown disease mechanisms, and potential difficulties in recruiting sufficient patients for clinical trials, more than 90\% of rare diseases lack effective treatments. Also, initial misdiagnosis is common: on average, it takes more than seven years and eight physicians for a patient to be correctly diagnosed. Importantly, it is likely that targets identified for rare diseases may also be useful for therapeutic intervention in similar, more common diseases. ML models are good at identifying patterns from complex patient data. Rare disease detection can be formulated as a classification task, similar to phenotype prediction. It aims to identify whether the patient has a rare disease from the patient's genomic sequence and information such as EHR data. If sufficient data from patients with a rare disease and suitable controls exist, many ML models can be applied to detect rare diseases. For example, based on the motivation that many rare diseases have missing heritability, which could be harbored in regulatory regions, \cite{yin2019using} propose a two-step CNN approach where one CNN first predicts the promoter regions that are likely associated with Amyotrophic Lateral Sclerosis, and another CNN detects whether the patient has the rare disease based on genotypes in the selected genomic regions. However, rare diseases pose special challenges to ML compared to classical phenotype prediction because these diseases have an extremely low prevalence in the data, while the majority of data points belong to the control set. This data imbalance makes it difficult for ML models to pick up signals and hence prevents them from making accurate predictions. Thus, special model designs are required. \cite{cui2020conan} use a generative adversarial network (GAN) model to generate synthetic but realistic rare disease patient embeddings to alleviate the class imbalance problem and show a significant performance increase in rare disease detection.
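The GAN approach attacks the imbalance by synthesizing realistic minority-class embeddings. A far simpler (and weaker) baseline for the same problem is random oversampling of the rare-disease class, sketched below; the function name and data are illustrative assumptions, not taken from the cited work.

```python
import random

def oversample_minority(X, y, minority_label, seed=0):
    """Duplicate rare-class samples at random until both classes are the
    same size, so a downstream classifier sees a balanced training set."""
    rng = random.Random(seed)
    minority = [(x, t) for x, t in zip(X, y) if t == minority_label]
    majority = [(x, t) for x, t in zip(X, y) if t != minority_label]
    extra = [rng.choice(minority)
             for _ in range(len(majority) - len(minority))]
    data = majority + minority + extra
    rng.shuffle(data)
    X_bal, y_bal = map(list, zip(*data))
    return X_bal, y_bal

# 95 controls vs. 5 rare-disease patients (features elided as indices).
X, y = list(range(100)), [0] * 95 + [1] * 5
X_bal, y_bal = oversample_minority(X, y, minority_label=1)
print(sum(y_bal), len(y_bal))  # -> 95 190
```

Oversampling only rebalances the labels; GAN-based augmentation additionally adds variation to the synthetic minority samples, which can help the model generalize beyond the few observed patients.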
\cite{taroni2019multiplier} use a transfer learning framework to adapt from large-scale genomic data covering a diverse set of diseases to a smaller set of rare disease genomic data. Specifically, they leverage a biological principle by constructing latent variables shared across a wide range of diseases. These variables correspond to genetic pathways. As these variables are fundamental biological units, they can be naturally adopted even for smaller datasets such as rare disease cohorts. \textit{Machine learning formulation: } Given the gene expression data and other auxiliary data of a patient, predict whether this patient has a rare disease. Also, identify genetic variants for this rare disease. Task illustration is in Figure~\ref{fig:biomarker}b, which is the same as phenotype prediction. \subsubsection{Gene-disease association prediction} \label{sec:gda} Although numerous genes are now mapped to diseases, human knowledge of gene-disease association mapping is vastly incomplete. At the same time, we know many genes are similar to each other, as is also the case for diseases. By leveraging these similarities, we can impute unknown associations from known ones using the many similarity rules that govern gene-disease networks. One notable rule is the "guilt by association" principle~\citep{wolfe2005systematic}. For example, disease $X$ and gene $a$ are more likely to be associated if we know that gene $b$, associated with disease $X$, has a similar functional role to gene $a$. In contrast to variant prioritization, which focuses on prediction for one specific disease, gene-disease association prediction aims to predict any disease-gene pair. Many graph-theoretic approaches such as diffusion~\citep{kohler2008walking} have been applied to gene-disease association prediction. However, they require strong assumptions about the data. Learnable methods have also been heavily investigated.
This problem has also been formulated as a recommendation system problem, where the system recommends items (genes) to users (diseases). \cite{huang2020skipgnn} use a molecular network-motivated graph neural network and formulate association prediction as a link prediction problem. Studies have shown that integrating similarities across multiple data types can help gene-disease prediction~\citep{tranchevent2016candidate}. Thus, a multi-modal data fusion scheme is also desirable. Notably, \cite{luo2019enhancing} fuse information from protein-protein interactions and the gene ontology through a multimodal deep belief network. As some diseases are not well annotated compared to others, predicting molecularly uncharacterized diseases (those with no known associated genes or biological functions), such as rare diseases, is also important. \cite{caceres2019disease} use phenotype data to transfer knowledge from other phenotypically similar diseases using a network diffusion method, where the phenotypic similarity is defined by the distance on the disease ontology trees. \textit{Machine learning formulation: } Given the known gene-disease association network and auxiliary information, predict the association likelihood for every unknown gene-disease pair. Task illustration is in Figure~\ref{fig:biomarker}c. \subsubsection{Pathway analysis and prediction} \label{sec:pathway} Many diseases are driven by a set of genes forming disease pathways. Pathway analysis identifies these gene sets through transcriptomics data and leads toward a more complete understanding of disease mechanisms. Many statistical approaches have been proposed. For example, Gene Set Enrichment Analysis~\citep{subramanian2005gene} leverages existing known pathways and calculates statistics on omics data to see if any pathway is activated. However, it treats each pathway as an unordered set, without modeling the relations among the genes.
Other topology-based pathway analyses~\citep{tarca2009novel} that take into account the gene relational graph structure have also been proposed. Many pathway analyses suffer from noise and provide unstable pathway activation and inhibition patterns across samples and experiments. \cite{ozerov2016silico} introduce a clustered gene importance factor to reduce noise and improve robustness. Although current pathway analysis still relies heavily on network-based methods~\citep{reyna2020pathway}, an emerging trend for understanding potential disease mechanisms is to probe explainable machine learning models that predict genotype-to-disease associations. Many efforts have been made to simulate cell signaling pathways and the corresponding hierarchical biological processes \textit{in silico}. \cite{karr2012whole} devise the first whole-cell approach to predict cell growth from genotype using a set of differential equations. Recently, a machine learning model called a visible neural network~\citep{ma2018using} simulates the hierarchical biological processes (gene ontology) in a eukaryotic cell as a feedforward neural network where each neuron corresponds to a biological subsystem. This model is trained end-to-end from genotype to cell fitness phenotype with good accuracy. A post-hoc interpretability method that assigns scores to each subsystem generates a likely mechanism for the fitness of a cell after training. This method has recently been extended to train on genomics data related to the prostate cancer phenotype, in order to generate disease pathways~\citep{elmarakeby2020biologically}. \textit{Machine learning formulation: } Given the gene expression data for a phenotype and known gene relations, identify a set of genes corresponding to disease pathways. Task illustration is in Figure~\ref{fig:biomarker}d.
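As a concrete instance of the set-based statistical analyses discussed in this subsection, the sketch below implements a simple over-representation test via the hypergeometric distribution. The gene universe and pathway contents are toy assumptions, and this is not the weighted running-sum statistic used by Gene Set Enrichment Analysis.

```python
from math import comb

def overrep_pvalue(k, pathway_size, n_hits, n_genes):
    """P(X >= k) for a hypergeometric X: the chance of drawing at least k
    pathway genes in a hit list of n_hits genes out of n_genes in total."""
    upper = min(pathway_size, n_hits)
    return sum(comb(pathway_size, i) * comb(n_genes - pathway_size, n_hits - i)
               for i in range(k, upper + 1)) / comb(n_genes, n_hits)

def enriched_pathways(hits, pathways, n_genes, alpha=0.05):
    """Return pathways whose overlap with the hit set is unlikely by chance."""
    out = {}
    for name, members in pathways.items():
        k = len(hits & members)
        p = overrep_pvalue(k, len(members), len(hits), n_genes)
        if p < alpha:
            out[name] = p
    return out

# Toy universe of 20 genes; the hit list shares 4 of P1's 5 members
# but only 1 of P2's, so only P1 should come out enriched.
hits = {"g1", "g2", "g3", "g4", "g5"}
pathways = {"P1": {"g1", "g2", "g3", "g4", "g19"},
            "P2": {"g5", "g10", "g11", "g12", "g13"}}
print(enriched_pathways(hits, pathways, n_genes=20))
```

In practice a multiple-testing correction (e.g., Benjamini-Hochberg) would be applied across the tested pathways before reporting enrichment.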
\section{Machine Learning for Genomics in Therapeutics Discovery} \label{sec:discovery} After a drug target is identified, a campaign is initiated to design potent therapeutic agents that modulate the target and block the disease pathway. These therapeutics can be a small molecule, an antibody, a gene therapy, and so on. The discovery process consists of numerous phases and subtasks to ensure the efficacy and safety of the therapeutics. Genomics data also play a role in this process. In this section, we review ML for genomics in therapeutics discovery under two main themes. Section~\ref{sec:personalized} investigates small-molecule drug efficacy in different cellular genomic contexts. Section~\ref{sec:gene_therapy} reviews how ML can enable the design of various gene therapies. \subsection{Improving Context-specific Drug Response}\label{sec:personalized} \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{FIG/fig7.pdf} \caption{\textbf{Task illustrations for the theme "improving context-specific drug response"}. \textbf{a.} A drug encoder and a cell line encoder produce embeddings for the drug and the cell line, respectively, which are then fed into a predictor to estimate the drug response (Section~\ref{sec:drug_response}). \textbf{b.} Drug encoders first map two drugs into embeddings, and a cell line encoder maps a cell line into an embedding. Then, the three embeddings are fed into a predictor for drug synergy scores (Section~\ref{sec:drug_combo}). } \label{fig:personalized} \end{figure} Personalized medicine aims to develop treatment strategies based on a patient's genetic profile. This contrasts with the traditional "one-size-fits-all" approach, which assigns the same treatments to patients with the same diseases.
Personalized approaches have been one of the most sought-after endeavors in the field due to their numerous advantages, such as improving outcomes and reducing side effects~\citep{hamburg2010path}, especially in oncology, where several biomarkers could lead to drastically different treatment plans~\citep{chin2011cancer}. Despite the promise, understanding the relations among treatments, diseases, high-dimensional genomics profiles, and the various outcomes requires large-scale experiments of combinatorial complexity~\citep{menden2019community}. Machine learning provides valuable tools to facilitate this process. \subsubsection{Drug response prediction} \label{sec:drug_response} It is known that the same small-molecule drug could have various response levels given different genomic profiles. For example, an anti-cancer drug can elicit different responses in different tumors. Thus, it is crucial to generate an accurate response profile given drug-genomics profile pairs. However, experimentally testing each combination of available drugs and cell-line genomics profiles is prohibitively expensive. A machine learning model can be used to predict a drug's response in a diverse set of cell lines \textit{in silico}. An accurate machine learning model can greatly narrow down the drug screening space and reduce the burden on experimental costs and resources. Various models have been proposed to improve the accuracy, such as matrix factorization~\citep{ammad2016drug}, VAEs~\citep{rampavsek2019dr}, ensemble learning~\citep{tan2019drug}, similarity network models~\citep{zhang2015predicting2}, and feature selection~\citep{ali2019machine}. While promising, one challenge is that current public databases cover a limited number of drugs and genomics profiles, especially for some tissues or drug classes. It is unclear whether the models can generalize to new contexts such as novel cell types and structurally diverse drugs with limited samples.
To tackle this challenge, \cite{ma2021few} apply model-agnostic meta-learning~\citep{finn2017model} to learn from screening data of a set of tissues and generalize to new contexts such as new tissue types and preclinical studies in mice. In addition to accurate prediction, it is also important to allow an understanding of drug response mechanisms. Applying visible neural networks~\citep{ma2018using} in the drug response prediction context, \cite{kuenzi2020predicting} generate potential mechanisms and validate them through experiments using CRISPR, in-vitro screening, and patient-derived tissue cultures. \textit{Machine learning formulation: } Given a pair of a drug compound's molecular structure and the gene expression profile of a cell line, predict the drug response in this context. Task illustration is in Figure~\ref{fig:personalized}a. \subsubsection{Drug combination therapy prediction} \label{sec:drug_combo} Drug combination therapies, also called cocktails, can expand the use of existing drugs, improve outcomes, and reduce side effects. For example, drug cocktails can modulate multiple targets to provide a novel mechanism of action in cancer treatments. Also, by reducing the dosage of each drug, it may be possible to reduce adverse effects. However, screening the entire space of possible drug combinations and various cell lines is not feasible experimentally. Machine learning models that can predict the synergistic response given a drug pair and the genomic profile of a cell line can prove valuable. Classical machine learning methods such as naive Bayes~\citep{li2015large} and random forests~\citep{wildenhain2015prediction} have shown initial success on independent external data. Deep learning methods such as deep neural networks~\citep{preuer2018deepsynergy} and deep belief networks~\citep{chen2018predict} have shown improved performance.
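A minimal end-to-end sketch of this synergy-prediction formulation follows. Everything in it (the feature dimensions, the fingerprint-like drug vectors, and the ridge-regression predictor standing in for the deep models above) is a synthetic illustration rather than any cited method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: binary fingerprint-like drug vectors and a
# cell-line expression vector for each training example.
n, d_drug, d_cell = 200, 32, 64
drug_a = rng.integers(0, 2, (n, d_drug)).astype(float)
drug_b = rng.integers(0, 2, (n, d_drug)).astype(float)
cell = rng.normal(size=(n, d_cell))

# Symmetrize so the pair (A, B) and (B, A) share one representation.
X = np.hstack([drug_a + drug_b, np.abs(drug_a - drug_b), cell])
w_true = rng.normal(size=X.shape[1])
synergy = X @ w_true + 0.1 * rng.normal(size=n)  # synthetic scores

# Ridge regression in closed form as the predictor.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ synergy)
pred = X @ w
print(f"train correlation: {np.corrcoef(pred, synergy)[0, 1]:.3f}")
```

The symmetric combination of the two drug vectors guarantees order-invariance of the pair, a property that deep synergy models must otherwise learn from data or enforce architecturally.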
Integration of multi-omics data on cell lines, such as miRNA expression and proteomic features, has further improved the performance~\citep{xia2018predicting}. Similar to drug response prediction, one important challenge is transferring across tissue types and drug classes. \cite{kim2021anticancer} conduct transfer learning to adapt models trained on data-rich tissues such as brain and breast tissues to understudied tissues such as bone and prostate tissues. \textit{Machine learning formulation: } Given a combination of drug compound structures and a cell line's genomics profile, predict the combination response. Task illustration is in Figure~\ref{fig:personalized}b. \subsection{Improving Efficacy and Delivery of Gene Therapy} \label{sec:gene_therapy} \begin{figure}[t] \centering \includegraphics[width = 0.85\textwidth]{FIG/fig8.pdf} \caption{\textbf{Task illustrations for the theme "Improving Efficacy and Delivery of Gene Therapy".} \textbf{a.} A model predicts various gene editing outcomes given the gRNA sequence and the target DNA features (Section~\ref{sec:on_target}). \textbf{b.} First, a model searches for sequences similar to the target DNA sequence in the candidate genome and generates a list of potential off-target DNA sequences. Next, an on-target model predicts whether the gRNA sequence can affect these potential DNA sequences. The ones that have high on-target effects are considered potential off-targets (Section~\ref{sec:off_target}). \textbf{c.} An optimal model (oracle function) is first obtained by training on a gold-label database. Next, a generative model generates de novo virus vectors that are potent in the oracle fitness landscape (Section~\ref{sec:virus}). } \label{fig:gene} \end{figure} Gene therapy is an emerging therapeutic class that delivers nucleic acid instructions into patient cells to prevent or cure disease.
These instructions include (1) replacing disease-causing genes with healthy ones, (2) turning off genes that cause diseases, and (3) inserting genes to produce disease-fighting proteins. Special vehicles called vectors are used to deliver these instructions (cargos) into the cells and induce sufficient therapeutic effects. Many choices exist, such as naked DNA, viruses, and nanoparticles. Virus vectors have become popular due to their natural ability to directly enter cells and replicate their genetic material. Despite the promise, numerous challenges still exist in reaching the expected effect, such as the host immune response, viral vector toxicity, and off-target effects. In recent years, machine learning tools have been shown to help tackle many of these challenges. \subsubsection{CRISPR on-target outcome prediction} \label{sec:on_target} CRISPR-Cas9 is a biotechnology that can edit genes at a precise location. It allows the correction of genetic defects to treat disease and provides a tool with which to alter the genome and to study gene function. CRISPR-Cas9 is a system with two important players. The Cas9 protein is an enzyme that can cut through DNA, while the CRISPR sequence guides the cut location. The guide RNA sequence (gRNA) in the CRISPR sequence determines the specificity for the target DNA sequence. While existing CRISPR systems mostly make edits via small deletions, repair is also an active research area, in which, after cutting, a DNA template is provided to fill in the missing part of the gene. In theory, CRISPR can correctly edit the target DNA sequence and even restore a normal copy, but in reality, the outcome varies significantly given different gRNAs~\citep{cong2013multiplex}. It has been shown that the outcome is determined by factors such as the gRNA secondary structure and chromatin accessibility~\citep{jensen2017chromatin}. Important outcome measures include the insertion/deletion length, indel diversity, and the fraction of insertions/frameshifts.
Thus, it is crucial to design a gRNA sequence such that the CRISPR-Cas system can achieve its effect on the designated target (also called on-target). Machine learning methods that can accurately predict the on-target outcome given the gRNA would facilitate the gRNA design process. Many classic machine learning methods have been investigated to predict various repair outcomes given the gRNA sequence, such as linear models~\citep{labuhn2018refined,moreno2015crisprscan}, support vector machines~\citep{chari2015unraveling}, and random forests~\citep{wilson2018high}. However, they do not capture the high-order nonlinearity of gRNA features. Deep learning models that apply CNNs to automatically learn gRNA features show further improved performance~\citep{chuai2018deepcrispr,kim2018deep}. Numerous challenges still exist. For example, machine learning models are data-hungry, and only limited data from CRISPR knockout experiments across diverse cell and tissue types exist, affecting the models' generalizability. In particular, improving generalizability to novel target classes and generating predictive mechanisms are still open questions. \textit{Machine learning formulation: } With a fixed target, given the gRNA sequence and other auxiliary information such as target gene expression and epigenetic profile, predict its on-target repair outcome. Task illustration is in Figure~\ref{fig:gene}a. \subsubsection{CRISPR off-target prediction} \label{sec:off_target} As CRISPR can cut any region that matches the gRNA, it can potentially cut through similar off-target regions, leading to significant adverse effects. This is a major hurdle for the clinical implementation of CRISPR techniques~\citep{zhang2015off}. Similar to on-target prediction, off-target prediction asks whether a gRNA could cause off-target effects. In contrast to on-target prediction, where we have a fixed given DNA region, off-target prediction requires identifying potential off-target regions from the entire genome.
Thus, the first step is to search and narrow down a set of potential hits using alignment algorithms and distance measures~\citep{heigwer2014crisp,bae2014cas}. Next, given the set of targets and the gRNA, a model needs to score each putative target-gRNA pair. The model also needs to aggregate these scores, since one gRNA usually has multiple putative off-targets. Various heuristic aggregation methods have been proposed and implemented~\citep{hsu2013dna,haeussler2016evaluation,cradick2014cosmid}. Machine learning methods improve performance further. \cite{listgarten2018prediction} uses a two-layer boosted regression tree where the first layer scores each gRNA-target pair and the second layer aggregates the scores. \cite{lin2018off} apply a CNN on a fused DNA-gRNA pair representation and achieve improved performance. There is still substantial room for improvement. For example, as data in richer contexts such as different cell, tissue, and organism types become available, more sophisticated models that generalize well across all contexts could become possible. \textit{Machine learning formulation: } Given the gRNA sequence and the off-target DNA sequence, predict the off-target effect. Task illustration is in Figure~\ref{fig:gene}b. \subsubsection{Virus vector design} \label{sec:virus} To deliver gene therapy instructions into cells and induce therapeutic effects, virus vectors are used as vehicles. The design of the virus vector is thus crucial. The recent development of Adeno-Associated Virus (AAV) capsid vectors has led to a surge in gene therapy due to its favorable tropism, immunogenicity, and manufacturability properties~\citep{daya2008gene}. However, there are still unsolved challenges, mainly regarding the undesirable properties of natural AAV forms. For example, up to 50--70\% of humans are immune to the natural AAV vector, meaning the human immune system would destroy it without delivering it to the targeted cells~\citep{chirmule1999immune}.
This means that those patients are not able to receive gene therapies. Thus, designing functional variants of AAV capsids that can escape the immune system is crucial. Similarly, it would be ideal to design AAV variants that have higher efficiency and selectivity for the tissue target of interest. The standard method to generate new AAV variants is through ``directed evolution'' with limited diversity, where most variants remain similar to natural AAV. This process is very time- and resource-intensive, while the resulting yields are also low ($<$1\%). Recently, \cite{bryant2021deep} developed a machine learning-based framework to generate AAV variants that can escape the immune system with a $>$50\% yield rate. They first train an ensemble neural network that aggregates DNN, CNN, and RNN models on a customized data collection to assign accurate viability scores to AAVs from diverse sources. Then, they sample iteratively on the predicted viability landscape to obtain a set of highly viable AAVs. Many opportunities remain open for machine-aided AAV design~\citep{kelsic2019challenges}. For example, this framework can easily be extended to targets other than immune-system viability, such as tissue selectivity, if a high-capacity machine learning property predictor can be constructed. \textit{Machine learning formulation: } Given a set of virus sequences and their labels for a property X, obtain an accurate predictor oracle and conduct various generative modeling to generate de novo virus variants that score highly on X with high diversity. Task illustration is in Figure~\ref{fig:gene}c. \section{Machine Learning for Genomics in Clinical Studies} \label{sec:clinical} After a therapeutic is shown to have efficacy in the wet lab, it is further evaluated in animals and then in humans in full-scale clinical trials. ML can facilitate this process using genomics data. We review the following three themes.
Section~\ref{sec:translation} studies the long-standing difficulty of translating results from animals to humans and shows that ML can enable better translation through better characterization of the molecular differences. Section~\ref{sec:cohort} reviews ML techniques to curate a better patient cohort to which the therapeutic can be applied, as the cohort can greatly affect the clinical trial outcome. Section~\ref{sec:causal} surveys alternative ML techniques, collectively called causal inference, to augment clinical trials in cases where traditional trials are unethical or difficult to conduct. \subsection{Translating Preclinical Animal Models to Humans} \label{sec:translation} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{FIG/fig9.pdf} \caption{\textbf{Task illustration for the theme "translating preclinical animal models to humans".} A model first obtains translatable features between mouse and human by comparing their genotypes. Next, a predictor model is trained to predict phenotype given mouse genotype. Given the translatable features, augment the predictor and make predictions on human genotypes (Section~\ref{sec:geno-pheno}).} \label{fig:translation} \end{figure} Before therapeutics move into trials on humans, they are validated through extensive animal model experiments (preclinical studies). However, despite successful preclinical studies, more than 85\% of early trials for novel drugs fail to translate to humans~\citep{mak2014lost}. One of the main factors in this failure is the gap between animal and human biology and physiology: animal models do not fully mimic the human disease condition. However, by comparing large-scale omics data between animals and humans, we can identify translatable features and use machine learning to align animal and human models. \subsubsection{Animal-to-human translation} \label{sec:geno-pheno} One of the central questions of animal-to-human translation is the following.
If a study establishes relations between phenotypes and genotypes based on interventions in animals, do these relations persist in humans? Conventional computational methods construct cross-species pairs (CSPs) and compare each pair's molecular profiles to find differential expression~\citep{naqvi2019conservation}. Despite identifying several differential features associated with disease, these methods often do not translate accurately to humans. This is where machine learning can help, given its strength in prediction. Formulated in ML terms, the genotype-phenotype relations can be captured by a computational model that builds upon an animal's molecular profile (such as using gene expression data to predict disease phenotypes). We can then evaluate the trained model on human molecular profiles (the test set) and see whether it accurately predicts human phenotypes. A large ML challenge called SBV-IMPROVER was conducted to predict protein phosphorylation in human cells from rat cells using genomics and transcriptomics data under 52 stimulation conditions~\citep{rhrissorrakrai2015understanding}. A wide range of ML approaches, such as deep neural networks, trees, and support vector machines, were applied and shown to have promising extrapolation performance to humans. However, these works directly adopt ML models trained on mice and test them on humans, while human data often follow a different distribution from mouse data. This poses a challenge, since ML models often suffer from out-of-distribution generalizability issues. Recent works explicitly model this out-of-distribution property by identifying and leveraging translatable features between animals and humans. \cite{brubaker2019computational} propose a semi-supervised technique that integrates unsupervised modeling of human disease-context datasets into the supervised component that trains on mouse data.
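The train-on-animal, test-on-human protocol above can be sketched as follows: a simple nearest-centroid classifier is fit on made-up mouse expression vectors and then evaluated on (distribution-shifted) human profiles. Both the data and the classifier choice are illustrative stand-ins, not taken from any cited study.

```python
# Sketch of the train-on-animal, test-on-human protocol: fit a
# nearest-centroid phenotype classifier on mouse gene-expression
# vectors, then apply it to human profiles drawn from a shifted
# distribution. All numbers are made up for illustration.
def fit_centroids(profiles, labels):
    """Per-phenotype mean expression vector."""
    centroids = {}
    for lab in set(labels):
        members = [p for p, l in zip(profiles, labels) if l == lab]
        centroids[lab] = tuple(sum(xs) / len(xs) for xs in zip(*members))
    return centroids

def predict(profile, centroids):
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(centroids, key=lambda lab: dist2(profile, centroids[lab]))

# mouse training data: (gene1, gene2) expression -> phenotype
mouse_x = [(2.0, 0.1), (1.8, 0.2), (0.1, 2.1), (0.2, 1.9)]
mouse_y = ["disease", "disease", "healthy", "healthy"]
centroids = fit_centroids(mouse_x, mouse_y)

# human test profiles: shifted distribution, same underlying pattern
human_x = [(2.4, 0.5), (0.4, 2.5)]
human_pred = [predict(p, centroids) for p in human_x]
```

When the shift is larger than in this toy example, such direct transfer fails, which is precisely what the domain adaptation approaches above try to address.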
In addition, works that directly train on CSPs have also been proposed. For example, \cite{normand2018found} aim to identify translatable genes. For every gene, they compute the disease effect size for humans and rats in each CSP and apply linear models to fit them. After fitting, they use the mean of the linear model as the predicted human effect size for this gene. They show gene selection improved by up to 50\%. Computational network models leverage existing biological knowledge about system-level signaling pathways and mechanistic models, and have been shown to identify transferable biomarkers and predictable pathways~\citep{yao2018integrative,blais2017reconciled}. It is worth noting that the animal-to-human translation problem resembles domain adaptation problems in computer vision and natural language processing, where the goal is likewise to bridge the gap between a source domain and a target domain~\citep{wang2018deep}. Opportunities remain open to apply advanced domain adaptation techniques to this problem. Despite the improved prediction performance, data availability is still a hurdle to applying ML here, since new data are required for every animal model and disease indication. \textit{Machine learning formulations:} Given genotype-phenotype data of animals and only the genotype data of humans, train a model to fit phenotype from genotype and transfer this model to humans. Task illustration is in Figure~\ref{fig:translation}. \subsection{Curating High-quality Cohorts} \label{sec:cohort} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{FIG/fig10.pdf} \caption{\textbf{Task illustrations for the theme "curating high-quality cohort".} \textbf{a.} Given patients' gene expressions and EHRs, a model clusters them into subgroups (Section~\ref{sec:stratify}). \textbf{b.} A patient model obtains patient embedding from his/her gene expression and EHR. A trial model obtains trial embedding based on trial criteria.
A predictor predicts if this patient is fit for enrollment in the given trial (Section~\ref{sec:match}). } \label{fig:cohort} \end{figure} To study the efficacy of therapeutics in the intended or target patient groups, a clinical trial requires a precise and accurate patient population in each arm~\citep{trusheim2007stratified}. However, due to the heterogeneity of patients, it may be difficult to recruit and enroll appropriate patients. ML can help characterize important factors for the primary endpoints and quickly identify them in patients by predicting patient molecular profiles. \subsubsection{Patient stratification/disease sub-typing}\label{sec:stratify} Patient stratification in clinical trials is designed to create more homogeneous subgroups with respect to risk of outcome or other important variables that might impact the validity of the comparison between treatment arms. Some therapeutics may be highly effective in one patient subgroup and have a weak or even no effect in other subgroups. In the absence of appropriate stratification in heterogeneous patient populations, the average treatment effect across all patients will obscure potentially strong effects in a subpopulation. Conventional stratification methods rely on manually defined rules over a few available features, such as clinical genomics biomarkers, but this can ignore signals arising from rich patient data. Machine learning can potentially identify important criteria for stratification based on heterogeneous data sources such as genomics profiles, patient demographics, and medical history. Various unsupervised models applied to gene expression data have been proposed to group each sample into distinct categories and designate each category as a sub-type. These methods include clustering~\citep{shen2013sparse, witten2010framework}, gene network stratification~\citep{hofree2013network}, and matrix factorization~\citep{gao2005improving}.
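As a minimal illustration of the clustering family above, the sketch below runs Lloyd's k-means on made-up two-gene expression vectors to partition a toy cohort into two putative subtypes. Initial centers are passed in explicitly for determinism; practical implementations would use an initialization such as k-means++.

```python
# Minimal k-means sketch for expression-based patient stratification.
# Each patient is a made-up (gene A, gene B) expression vector; the
# cohort is partitioned into k putative subtypes.
def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, centers, iters=20):
    """Lloyd's algorithm with explicit initial centers."""
    k = len(centers)
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        for i, p in enumerate(points):
            assign[i] = min(range(k), key=lambda c: dist2(p, centers[c]))
        # update step: recompute each center as the mean of its members
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centers[c] = tuple(sum(xs) / len(xs) for xs in zip(*members))
    return assign

patients = [(5.0, 0.1), (4.8, 0.3), (5.2, 0.2),   # high gene A
            (0.2, 4.9), (0.1, 5.1), (0.3, 5.0)]   # high gene B
subtypes = kmeans(patients, centers=[(5.0, 0.1), (0.2, 4.9)])
```

Real patient representations are high-dimensional and multi-modal, which is exactly why the fusion methods discussed next are needed before any such clustering step.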
\cite{chen2020deep} also propose a DNN-based clustering method that includes a supervised constraint on gold-standard sub-type knowledge. As the data are high-dimensional and heterogeneous, the challenge is to fuse diverse data sources to obtain a comprehensive patient representation. \cite{wang2014similarity} aggregate mRNA expression, DNA methylation, and microRNA data through similarity network fusion for cancer subtyping. Similarly, \cite{jurmeister2019machine} leverage DNA methylation profiles to subtype lung cancers using DNNs, and \cite{li2015identification} apply topological data analysis to a patient-patient similarity network constructed from each patient's genotype and EHR data to identify type 2 diabetes subgroups. Despite their accuracy, these methods suffer from a lack of interpretability, which is especially important in patient stratification. A black-box stratification model based on complex patient data does not provide a rationale and is often not trustworthy enough for practitioners to adopt. Decision trees are a classical interpretable ML model; building on them, \cite{valdes2016mediboost} apply a boosted decision tree method that achieves higher accuracy than a standard decision tree while still providing clues about how the model arrives at its prediction/stratification. \textit{Machine learning formulation: } Given the gene expression and other auxiliary information for a set of patients, produce criteria for patient stratification. Task illustration is in Figure~\ref{fig:cohort}a. \subsubsection{Matching patients for genome-driven trials}\label{sec:match} Clinical trials suffer from difficulties in recruiting a sufficient number of patients. \cite{mendelsohn2010national} report that 40\% of trials fail to complete accrual in the National Clinical Trial Network, and \cite{murthy2004participation} show that less than 2\% of adults with cancer enroll in any clinical trial.
Many factors can prevent successful enrollment, such as limited awareness of available trials and ineffective methods for identifying eligible patients in the traditional manual matching system~\citep{lee2019conceptual}. These problems can be tackled by automated patient-trial matching, which leverages heterogeneous patient data such as genomics profiles and trial eligibility criteria. Conventional patient-trial matching methods rely on rule-based annotations. For example, \cite{tao2019real} conduct a real-world outcome analysis using an automatic patient-trial matching alert system based on patients' genomic biomarkers and show improved results compared to manual matching. However, such systems are based on heuristic matching rules, which often omit the useful information in rich patient data. \cite{bustos2018learning} use DNNs to generate eligibility criteria, but perform no matching. Recently, advanced machine learning methods have been proposed to leverage patients' EHR data to match the eligibility criteria of a trial. \cite{zhang2020deepenroll} apply a pre-trained Bidirectional Encoder Representations from Transformers (BERT) model to encode trial protocols into sentence embeddings and use a hierarchical embedding model to represent patient longitudinal EHRs. Building upon this work, \cite{gao2020compose} propose a multi-granularity memory network to encode structured patient medical codes and use a convolutional highway network to encode trial eligibility criteria. They show significant improvement over previous conventional rule-based methods. However, genomics information has not yet been included. Methods that fuse genomic and EHR data to represent patients could further improve matching efficiency in genome-driven trials. \textit{Machine learning formulation: } Given a pair of patient data (genomics, EHR, etc.) and trial eligibility criteria (text description), predict the matching likelihood. Task illustration is in Figure~\ref{fig:cohort}b.
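A bare-bones sketch of the matching idea: embed the patient record and the trial criteria into a shared vector space and use cosine similarity as the matching score. The bag-of-words embedding and tiny vocabulary below are toy stand-ins for the learned BERT and memory-network encoders used by the systems cited above.

```python
# Toy embedding-based patient-trial matching: patient record and trial
# eligibility criteria are both mapped to bag-of-words vectors over a
# shared (made-up) vocabulary; cosine similarity is the matching score.
import math

def embed(text, vocab):
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

vocab = ["egfr", "mutation", "stage", "iv", "lung", "cancer", "diabetes"]
patient = "stage iv lung cancer with egfr mutation"
trial_a = "inclusion: egfr mutation positive stage iv lung cancer"
trial_b = "inclusion: type 2 diabetes"

score_a = cosine(embed(patient, vocab), embed(trial_a, vocab))
score_b = cosine(embed(patient, vocab), embed(trial_b, vocab))
```

The learned systems above replace the count vectors with contextual embeddings, but the downstream ranking of candidate trials by a similarity score is the same.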
\subsection{Inferring Causal Effects} \label{sec:causal} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{FIG/fig11.pdf} \caption{\textbf{Task illustrations for the theme "inferring causal effects".} Left panel: Mendelian randomization uses a gene biomarker (e.g., CHRNA5) as an instrumental variable to measure the effect of the exposure on the outcome, as the gene is not affected by confounders; it serves as a proxy for the exposure by directly comparing the effect of the gene on the outcome. Right panel: patients are first grouped based on the CHRNA5 gene. One group contains variant alleles, and another contains wild-type alleles. Then, the mortality rate can be calculated within each group and compared to assess risks. If the risk is high, we conclude that the exposure causes the outcome (Section~\ref{sec:mendelian}). } \label{fig:causal} \end{figure} Clinical trials study treatment efficacy in humans. Numerous unmeasured confounders can lead to a biased conclusion about the efficacy. To eliminate these confounders, randomization is conducted so that the control and treatment groups have an equal distribution of confounders. This way, the comparative effect is not due to unmeasured confounders. However, this requires that the control group receive an alternative therapy (e.g., placebo or standard of care). In many studies, it is difficult or unethical to devise and assign placebos/treatments. In these cases, observational studies can be used to study the correlations between an exposure (e.g., smoking) and an outcome (e.g., cancer). However, these studies are typically subject to unmeasured confounding, since no randomization is introduced. Recent methods in causal inference provide alternative ways to achieve randomization through genomic information. \subsubsection{Mendelian randomization} \label{sec:mendelian} Mendelian randomization (MR) uses genes as instruments for robust causal inference~\citep{davey2003mendelian}.
The key is that genetic information is mostly not modified by postnatal events and is thus not susceptible to confounders. If a gene is associated with the exposure, and with the outcome only via the exposure (i.e., vertical pleiotropy), we can use the gene as an instrumental variable to simulate randomization. For example, we know that the CHRNA5 gene is associated with smoking levels. We can then use CHRNA5 status to group patients and estimate the comparative effect on an outcome (e.g., mortality). This process has tremendous impact, as it can bypass clinical trials, add support for trials, and serve as validation for drug targets~\citep{emdin2017mendelian,ference2012effect}. Regression analysis is usually conducted to calculate the effects. Despite the promise, challenges remain for more advanced ML and causal inference methods. One challenge is that in some cases the assumption of vertical pleiotropy does not hold; for example, the gene can associate with the outcome through another pathway (i.e., horizontal pleiotropy)~\citep{verbanck2018detection}. This requires customized probabilistic models and larger sample sizes for statistically significant estimation~\citep{cho2020exploiting}. The underlying causal pathways among exposures, genes, and outcomes are often not obvious due to limited knowledge. A large-scale causal pathway map could not only protect MR from horizontal pleiotropy, by revealing when it may be present, but also enable more accurate causal inference with advanced methods through the inclusion of other genes or the selection of alternative genes as instrumental variables. The main challenge in obtaining this putative causal map is that different models can reach contradictory conclusions given the same dataset. \cite{hemani2017automating} apply a mixture-of-experts random forest framework to reduce the false discovery rate on a set of GWAS data, construct a large-scale causal map of the human genome and phenotypes, and show its usefulness in MR.
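The simplest MR estimator, the Wald ratio, divides the gene-outcome association by the gene-exposure association to estimate the causal effect of the exposure on the outcome. A sketch with made-up per-allele association coefficients:

```python
# Wald ratio, the simplest Mendelian randomization estimator:
# causal effect of exposure on outcome = beta(gene -> outcome)
#                                      / beta(gene -> exposure).
# The coefficients below are hypothetical, for illustration only.
def wald_ratio(beta_gene_outcome, beta_gene_exposure):
    return beta_gene_outcome / beta_gene_exposure

# hypothetical per-allele associations for a variant like CHRNA5:
beta_gx = 0.8   # effect of the variant on exposure (e.g., cigarettes/day)
beta_gy = 0.2   # effect of the variant on outcome (e.g., log-odds mortality)
causal_effect = wald_ratio(beta_gy, beta_gx)
```

With multiple independent variants, such per-variant ratios are typically combined (e.g., by inverse-variance weighting), and it is in detecting and correcting for horizontal pleiotropy across variants that the ML methods above come in.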
\textit{Machine learning formulation: } Given observational data on the genomic factor, exposure, outcome, and other auxiliary information, formulate or identify the causal relations among them and compute the effect of the exposure on the outcome. Task illustration is in Figure~\ref{fig:causal}. \section{Machine Learning for Genomics in Post-Market Studies} \label{sec:post-market} After a therapeutic is evaluated in clinical trials and approved for marketing, numerous studies are done to monitor its efficacy and safety in clinical practice. These studies contain important and often unknown information about therapeutics that was not evident prior to regulatory approval. This section reviews how ML can mine a large corpus of texts and identify useful signals for post-market surveillance. \subsection{Mining Real-World Evidence} \label{sec:rwe} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{FIG/fig12.pdf} \caption{\textbf{Task illustrations for the theme "mining real-world evidence".} \textbf{a.} A model predicts genomic biomarker status given a patient's clinical notes (Section~\ref{sec:clinical_text}). \textbf{b.} A model recognizes entities in the literature and extracts relations among these entities (Section~\ref{sec:biomed_literature}). Credits: the text in panel a is from~\cite{huang2020interpretable}; the text in panel b is from~\cite{zhu2018gram}.} \label{fig:rwe} \end{figure} After therapeutics are approved and used to treat patients, voluminous documentation is generated in EHR systems, insurance billing systems, and the scientific literature. These are called real-world data, and analyses of these data are called real-world evidence. They contain important insights about therapeutics, such as patients' drug responses given different patient characteristics. They can also shed light on disease mechanisms of action, novel phenotypes for a target gene, and so on. However, free-form texts are notoriously hard to process.
Natural language processing (NLP) technology can help mine insights from these texts. Next, we describe two specific tasks involving two types of real-world evidence: clinical notes and scientific literature. \subsubsection{Clinical Text Biomarker Mining} \label{sec:clinical_text} The EHR has rich information about the patient, recording a wide range of the patient's vitals and disease course after treatment. This information is critical for post-market research, where actionable hypotheses can be drawn. However, structured EHR data do not cover the entire picture of a patient. The majority of important variables can only be found in the clinical notes~\citep{boag2018s}, such as next-generation sequencing (NGS) status, PDL1 (immunotherapy) status, treatment changes, and so on. These variables can directly support building predictive models for clinical decision-making or increase the power of disease-gene-drug associations to better understand the drug. However, conventional human annotation is costly, time-consuming, and not scalable. Automatic processing of patients' clinical notes using machine learning can facilitate this process. For example, \cite{guan2019natural} use bidirectional LSTMs to extract NGS-related information from a patient's genetics report and classify documents into treatment-change and no-treatment-change groups. However, clinical text is very messy, filled with typos and jargon (e.g., acronyms), so standard NLP techniques often fail; moreover, annotating clinical text often requires clinical expertise. Specialized machine learning models are therefore required, such as transfer learning techniques that learn a sufficient clinical note representation through large-scale self-supervised learning on clinical notes and fine-tune on a task of interest with a small number of annotations~\citep{devlin2018bert,huang2019clinicalbert}.
\cite{huang2020interpretable} apply hierarchical BERT-based models to classify PDL1 and NGS status and use an attention mechanism to indicate which parts of a text provide these variables. \textit{Machine learning formulation: } Given a clinical note document, predict the genomic biomarker variable of interest. Task illustration is in Figure~\ref{fig:rwe}a. \subsubsection{Biomedical Literature Gene Knowledge Mining}\label{sec:biomed_literature} One key question in post-market research is to find evidence about a therapeutic's response to diseases given patient characteristics such as genomic biomarkers. This has several important applications, such as validating therapeutic efficacy, identifying potential off-label genes/diseases for drug repurposing, and detecting therapeutic candidates' adverse events when treating patients with certain genomic biomarkers. Such evidence also serves as important complementary information for target discovery. Summarized information about drug-gene and disease-gene relations is usually reported and published in the scientific literature. Manual annotation is infeasible due to the rapidly growing number of new articles published every day. Conventional methods are rule-based~\citep{tsai2006nerbio} and dictionary-based~\citep{hirschman2002rutabaga}: both rely on hand-crafted rules/features to construct query text templates and search through papers to find sentences that match these templates~\citep{davis2013ctd}. However, these hand-crafted features require extensive domain knowledge and are difficult to keep up-to-date with new literature. The limited flexibility leads to the omission of potentially newly discovered drug-gene/drug-disease pairs. Recent advances in named entity recognition and relation detection through deep learning can automatically learn an optimal set of features from a large corpus without human engineering and have shown strong performance~\citep{nasar2018information}.
This can be formulated as a model that recognizes drug, gene, and disease terms and detects drug-gene or drug-disease relation types given a set of documents. Numerous machine learning methods have been developed for biomedical named entity recognition and relation extraction. For example, \cite{limsopatham2016learning} use a bidirectional LSTM with character-level embeddings to predict the named entity label for each word. \cite{zhu2018gram} use an n-gram-based CNN to capture the local context around each word for improved prediction. On relation extraction, in addition to CNN~\citep{zhao2016drug} and RNN~\citep{zhang2018drug} architectures, \cite{zhang2018hybrid} propose a hybrid model that integrates a CNN on the syntax dependency tree and an RNN on the sentence encodings for improved biomedical relation prediction. \cite{zhang2018graph} apply a graph convolutional neural network to the syntax dependency tree of a sentence and show improved relation extraction. ML models require large amounts of label annotations as training data, which can be difficult to obtain. Distant supervision borrows information from a large-scale knowledge base to automatically create labels, so it does not require labeled corpora and reduces manual annotation effort. \cite{lamurias2017extracting} apply a distant-supervision-based pipeline that predicts microRNA-gene relations. Recently, BioBERT extended BERT~\citep{devlin2018bert} by pre-training on a large-scale corpus of biomedical scientific literature and fine-tuning on numerous downstream tasks, and it has shown strong performance on benchmark tasks such as biomedical named entity recognition and relation extraction. \textit{Machine learning formulation: } Given a document from the literature, extract the drug-gene and drug-disease terms and predict the interaction types from the text. Task illustration is in Figure~\ref{fig:rwe}b.
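The distant supervision idea mentioned above can be sketched as follows: sentences in which a (drug, gene) pair from a knowledge base co-occurs are automatically labeled with the knowledge-base relation, requiring no manual annotation. The knowledge base and sentences below are invented for illustration.

```python
# Toy distant supervision for drug-gene relation extraction: any
# sentence mentioning a (drug, gene) pair present in the knowledge base
# is auto-labeled with the KB relation. KB and sentences are made up.
knowledge_base = {("gefitinib", "egfr"): "inhibits"}

sentences = [
    "gefitinib strongly inhibits egfr signaling in lung cancer cells",
    "aspirin reduces fever in most patients",
]

def distant_label(sentence, kb):
    """Return (drug, gene, relation) if a KB pair co-occurs, else None."""
    words = set(sentence.lower().split())
    for (drug, gene), relation in kb.items():
        if drug in words and gene in words:
            return (drug, gene, relation)
    return None

labels = [distant_label(s, knowledge_base) for s in sentences]
```

The auto-labeled sentences then serve as (noisy) training data for the neural extractors discussed above, trading annotation cost for label noise.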
\section{Discussion: Open Challenges and Opportunities} \label{sec:challenge} This survey provides an overview of research at the intersection of machine learning, genomics, and therapeutics development. It is our view that machine learning has the potential to revolutionize the use of genomics in therapeutics development, as we have presented a diverse set of such applications in Sections~\ref{sec:target}-\ref{sec:post-market}. However, numerous challenges remain. Here, we discuss these challenges and the associated opportunities. \xhdr{Distribution shifts} ML models work well when the training and deployment data follow the same distribution. However, in real-world usage of ML for genomics and therapeutics, many problems exhibit distribution shifts, where the deployment environment and the data generated from it differ from the training stage. For example, a model may be trained on available batches of gene expression data from brain tissue and then be required to make predictions for a new experiment on bone tissue. Another example is training on animal model transcriptomics and predicting the phenotype of human models. Thus, a model must generalize out-of-distribution. Distribution shifts have been a longstanding challenge in ML, and a large body of work on model robustness and domain adaptation could be applied to genomics to improve generalizability~\citep{moreno2012unifying}. For instance, \cite{brbic2020mars} utilize a meta-learning technique to generalize to novel single-cell experiments. \xhdr{Learning from small datasets} Biological data are generated through expensive experiments, which means that many tasks have only a minimal number of labeled data points. For example, there are usually only a few drug response data points for new therapeutics. However, standard ML models, especially DL models, are data-hungry. Thus, making an ML model learn from only a few examples is crucial.
Transfer learning can learn from a large body of existing labeled data points and transfer to a downstream task with limited data points~\citep{devlin2018bert}. However, it usually still requires a reasonable number of training data points. Given only a few data points, few-shot learning methods such as model-agnostic meta-learning (MAML)~\citep{finn2017model} and prototypical networks~\citep{snell2017prototypical}, which learn from other related tasks using only a few examples, have shown strong promise. Recently, \cite{ma2021few} successfully applied MAML to improve few-shot drug response prediction. \xhdr{Representation capacity} The key to successful ML models is an effective representation of the genome and other related biomedical entities that matches biological motivation. For example, the currently dominant ML model for DNA sequences is the CNN. However, most successful usage applies only to short DNA sequences generated from predefined preprocessing steps, instead of a large fraction of the whole-genome sequence, which could allow a model to tap into crucial information about long-range gene regulatory dependencies~\citep{alipanahi2015predicting,deepsea}. RNNs and transformers are likewise only able to take in medium-length inputs, in contrast to the more than $10^6$ SNPs per genome. This also means that the number of input features can be orders of magnitude larger than the number of data points, a well-known ML challenge called the curse of dimensionality. Furthermore, general ML models are often developed for image and text data without any biological motivation. Thus, to model the human genome and the complicated regulation among genes, a domain-motivated model that captures interactions among extremely long-range, high-dimensional features is needed. Initial attempts at domain-motivated representation learning have been made.
For instance, \cite{romero2016diet} propose a parameter prediction network that reduces the number of free parameters of a DNN to alleviate the curse of dimensionality and show improved patient stratification given $10^6$ SNPs. \cite{ma2018using} modify the neural network structure to simulate hierarchical biological processes and explain pathways for phenotype prediction. \xhdr{Model trustworthiness} For an ML model to be used by domain scientists, its predictions have to be trustworthy. This operates on two levels. First, in addition to being accurate, a prediction needs to come with justification in terms of biomedical knowledge ({\it explanation}). Current ML models, however, focus on improving prediction accuracy. Toward the goal of explanation, ML models need to encode biomedical knowledge, which can potentially be achieved by integrating biological knowledge graphs~\citep{himmelstein2017systematic} and applying graph explainability methods~\citep{ying2019gnnexplainer}. The second level concerns the quality of the model prediction. Since ML models are not error-free, it is important to alert the users or abstain from making predictions when the model is not confident. Uncertainty quantification or model abstention around the model prediction can alleviate this problem. Recently, \cite{hie2020leveraging} used Gaussian processes to generate uncertainty scores for compound bioactivity, protein fluorescence, and single-cell transcriptomic imputation, which have been shown to guide the experimental validation loop. Integrating explanations into human workflows and promoting human trust in AI also require special attention, as recent work shows that directly providing AI explanations to humans can confuse the human observer and degrade performance~\citep{bansal2020does}. \xhdr{Fairness} ML models can manifest biases present in the training data. It has been shown that ML models do not work equally well on all subpopulations.
These algorithmic biases could have significant social and ethical consequences. For example, \cite{martin2018hidden} found that $79\%$ of genomic data are from patients of European descent, even though they comprise only $16\%$ of the world’s population. Due to differences in allele frequencies and effect sizes across populations, ML models that perform well on the discovery population generally have much lower accuracy and are worse predictors in other populations. Since most discovery to date is performed with European-ancestry cohorts, predictive models may exacerbate health disparities since they will not be available for or have lower utility in African and Hispanic ancestry populations. Similarly, most studies focus on common diseases, whereas experimental data on rare diseases are often limited. These imbalances against minorities require specialized ML techniques. The fairness in ML is defined to make the prediction independent of protected variables such as race, gender, and sexual orientation~\citep{barocas2017fairness}. Recent works have been proposed to ensure this criterion in the clinical ML domain~\citep{pierson2021algorithmic}. However, fairness research of ML for the genomic domain is still lacking. \xhdr{Data linking and integration} An individual has a diverse set of data modalities, such as genomics, transcriptomics, proteomics, electronic health records, and social-economic data. Current ML approaches focus on developing methods for a single data modality, whereas to fully capture the comprehensive data types around individuals could potentially unlock new biological insights and actionable hypotheses. One of the reasons for the limited integration is the lack of data access that connects these heterogeneous data types. 
As large-scale efforts such as UK Biobank~\citep{canela2018atlas}, which connects in-depth genetic and EHR information about a patient, become available, new ML methods designed to take into account this heterogeneity are needed. Indeed, recent studies have discovered novel insights by applying ML to linked genomics and EHR data~\citep{shen2019brain,willetts2018statistical,bellot2018can}. Handling missing data across modalities is a common challenge in this setting. \xhdr{Genomics data privacy} Abundant genomics data and annotations are generated every day. Aggregation of these data and annotations can tremendously benefit ML models. However, these are usually considered private assets for individuals and contain sensitive private information and thus are not shareable directly. Techniques to anonymize, de-identify these data using differential privacy can potentially enable genomics data sharing~\citep{azencott2018machine}. In addition, recent advances in federated learning techniques allow machine learning model training on aggregated data without sharing data~\citep{yang2019federated}. \section{Conclusion} \label{sec:conclusion} We have conducted a comprehensive review of the literature on machine learning applications for genomics in therapeutics development. We systematically identify diverse ML applications in genomics and provide pointers to the latest methods and resources. For ML researchers, we show that most of these applications have problems that remain unsolved and thus provide numerous exciting technical challenges for ML method innovations. We also provide concise ML problem formulation to help ML researchers to approach these tasks. For biomedical researchers, we pinpoint a large set of diverse use cases of ML applications, where they can extend and expand novel use cases. We also introduced the popular ML models and their corresponding use cases in genomic data. 
\section{Discussion: Open Challenges and Opportunities} \label{sec:challenge} This survey provides an overview of research at the intersection of machine learning, genomics, and therapeutic development. It is our view that machine learning has the potential to revolutionize the use of genomics in therapeutics development, as evidenced by the diverse set of applications presented in Sections~\ref{sec:target}-\ref{sec:post-market}. However, numerous challenges remain. Here, we discuss these challenges and the associated opportunities. \xhdr{Distribution shifts} ML models work well when the training and deployment data follow the same distribution. However, in real-world genomics and therapeutics applications, many problems experience distribution shifts: the deployment environment, and the data generated from it, differ from those seen at training time. For example, a model may be trained on the available batches of gene expression data from brain tissue and then be required to make predictions for a new experiment on bone tissue. Another example is training on animal-model transcriptomics to predict phenotypes in humans. Thus, a model must generalize to out-of-distribution data.
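To make one classical remedy concrete, the sketch below illustrates importance weighting under covariate shift: training (source) examples are reweighted so that their feature distribution matches the deployment (target) distribution, with densities estimated by simple histograms. This is a generic textbook technique, not the method of any work cited here, and all names and numbers are toy values.

```python
# Minimal sketch of importance weighting for covariate shift: reweight
# training (source) examples so their feature distribution matches the
# deployment (target) distribution.  Here the "feature" is a single
# expression value and densities are estimated with simple histograms.

def histogram_density(values, edges):
    """Return per-bin probabilities for `values` under the given bin edges."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1]:
                counts[i] += 1
                break
    total = sum(counts)
    return [c / total for c in counts]

def importance_weights(source, target, edges):
    """w(x) = p_target(bin of x) / p_source(bin of x) for each source point."""
    p_src = histogram_density(source, edges)
    p_tgt = histogram_density(target, edges)
    weights = []
    for v in source:
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1]:
                # Small epsilon guards against empty source bins.
                weights.append(p_tgt[i] / max(p_src[i], 1e-9))
                break
    return weights

# Source data (e.g. brain tissue) is skewed low, target (e.g. bone) skewed high.
source = [0.1, 0.2, 0.3, 0.4, 1.2]
target = [1.1, 1.3, 1.4, 0.2]
edges = [0.0, 0.5, 1.0, 1.5]
weights = importance_weights(source, target, edges)
```

Training a downstream model with these weights emphasizes source examples that resemble the target domain; in practice, density ratios are usually estimated with classifiers rather than histograms.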
Distribution shifts have been a longstanding challenge in ML, and a large body of work on model robustness and domain adaptation could be applied to genomics to improve generalizability~\citep{moreno2012unifying}. For instance, \cite{brbic2020mars} utilize meta-learning to generalize to novel single-cell experiments. \xhdr{Learning from small datasets} Biological data are generated through expensive experiments, so many tasks have only a minimal number of labeled data points. For example, there are usually only a few drug response data points for a new therapeutic. However, standard ML models, especially DL models, are data-hungry. Thus, enabling an ML model to learn from only a few examples is crucial. Transfer learning can learn from a large body of existing labeled data points and transfer this knowledge to a downstream task with limited data~\citep{devlin2018bert}. However, it usually still requires a reasonable number of training data points. Given only a few data points, few-shot learning methods such as model-agnostic meta-learning (MAML)~\citep{finn2017model} and prototypical networks~\citep{snell2017prototypical}, which learn from other related tasks using a few examples, have shown strong promise. Recently, \cite{ma2021few} successfully applied MAML to improve few-shot drug response prediction. \xhdr{Representation capacity} The success of ML models depends on effective representations of the genome and other related biomedical entities that match biological motivation. For example, the currently dominant ML models for DNA sequences are CNNs. However, most successful applications handle only short DNA sequences produced by predefined preprocessing steps, rather than a large fraction of the whole-genome sequence, which would allow a model to tap into crucial information about long-range gene regulatory dependencies~\citep{alipanahi2015predicting,deepsea}.
RNNs and transformers are likewise only able to take in medium-length inputs, in contrast to the more than $10^6$ SNPs per genome. This also means that the number of input features can be orders of magnitude larger than the number of data points, a well-known ML challenge called the curse of dimensionality. Furthermore, general-purpose ML models are often developed for image and text data without any biological motivation. Thus, to model the human genome and the complicated regulation among genes, a domain-motivated model that captures interactions among extremely long-range, high-dimensional features is needed. Initial attempts at domain-motivated representation learning have been made. For instance, \cite{romero2016diet} propose a parameter prediction network that reduces the number of free parameters of a DNN to alleviate the curse of dimensionality and show improved patient stratification given $10^6$ SNPs. \cite{ma2018using} modify the neural network structure to simulate hierarchical biological processes and explain pathways for phenotype prediction. \xhdr{Model trustworthiness} For an ML model to be used by domain scientists, its predictions have to be trustworthy. Trustworthiness operates on two levels. First, in addition to being accurate, the model also needs to justify its predictions in terms of biomedical knowledge ({\it explanation}). However, current ML models focus on improving prediction accuracy. Towards the goal of explanation, ML models need to encode biomedical knowledge. This can potentially be achieved by integrating biological knowledge graphs~\citep{himmelstein2017systematic} and applying graph explainability methods~\citep{ying2019gnnexplainer}. The second level concerns the quality of the model prediction itself. Since ML models are not error-free, it is important to alert users or abstain from making predictions when the model is not confident.
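As a minimal illustration of abstention (sometimes called selective prediction), the sketch below returns a label only when the model's top class probability clears a threshold and otherwise defers to a human expert. The labels, probabilities, and threshold are illustrative assumptions, far simpler than the Gaussian-process uncertainties discussed next.

```python
# Sketch of model abstention / selective prediction: emit a label only
# when the top class probability clears a confidence threshold, otherwise
# defer the case to a human expert.  Probabilities below are toy values
# standing in for any probabilistic classifier's output.

def predict_or_abstain(class_probs, threshold=0.8):
    """class_probs: dict mapping label -> probability (summing to 1)."""
    label = max(class_probs, key=class_probs.get)
    if class_probs[label] >= threshold:
        return label
    return None  # abstain: not confident enough for an automated call

confident = {"pathogenic": 0.93, "benign": 0.07}
uncertain = {"pathogenic": 0.55, "benign": 0.45}
decision = predict_or_abstain(confident)   # "pathogenic"
deferred = predict_or_abstain(uncertain)   # None -> route to expert review
```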
Uncertainty quantification or model abstention around the model prediction can alleviate this problem. Recently, \cite{hie2020leveraging} used Gaussian processes to generate uncertainty scores for compound bioactivity, protein fluorescence, and single-cell transcriptomic imputation, and showed that these scores can guide the experimentation-and-validation loop. Integrating explanations into human workflows and promoting human trust in AI also require special attention, as recent work shows that directly providing AI explanations to humans can confuse the human observer and degrade performance~\citep{bansal2020does}. \xhdr{Fairness} ML models can manifest the biases present in their training data, and it has been shown that they do not work equally well on all subpopulations. These algorithmic biases could have significant social and ethical consequences. For example, \cite{martin2018hidden} found that $79\%$ of genomic data are from patients of European descent, even though they comprise only $16\%$ of the world’s population. Due to differences in allele frequencies and effect sizes across populations, ML models that perform well on the discovery population generally have much lower accuracy and are worse predictors in other populations. Since most discovery to date is performed with European-ancestry cohorts, predictive models may exacerbate health disparities, as they will not be available for, or will have lower utility in, African- and Hispanic-ancestry populations. Similarly, most studies focus on common diseases, whereas experimental data on rare diseases are often limited. These imbalances against minorities require specialized ML techniques. Fairness in ML is commonly defined as making predictions independent of protected variables such as race, gender, and sexual orientation~\citep{barocas2017fairness}. Recent works aim to ensure this criterion in the clinical ML domain~\citep{pierson2021algorithmic}.
However, fairness research in ML for the genomic domain is still lacking. \xhdr{Data linking and integration} An individual is described by a diverse set of data modalities, such as genomics, transcriptomics, proteomics, electronic health records (EHR), and socioeconomic data. Current ML approaches focus on developing methods for a single data modality, whereas fully capturing the comprehensive set of data types around an individual could unlock new biological insights and actionable hypotheses. One reason for the limited integration is the lack of data access that connects these heterogeneous data types. As large-scale efforts such as UK Biobank~\citep{canela2018atlas}, which connects in-depth genetic and EHR information about each patient, become available, new ML methods designed to take this heterogeneity into account are needed. Indeed, recent studies have discovered novel insights by applying ML to linked genomics and EHR data~\citep{shen2019brain,willetts2018statistical,bellot2018can}. Handling missing data across modalities is a common challenge in this setting. \xhdr{Genomics data privacy} Abundant genomics data and annotations are generated every day, and aggregating them can tremendously benefit ML models. However, these data are usually considered private assets of individuals, contain sensitive information, and thus are not directly shareable. Techniques that anonymize and de-identify these data using differential privacy can potentially enable genomics data sharing~\citep{azencott2018machine}. In addition, recent advances in federated learning allow model training over aggregated data without sharing the raw data~\citep{yang2019federated}. \section{Conclusion} \label{sec:conclusion} We have conducted a comprehensive review of the literature on machine learning applications for genomics in therapeutics development.
We systematically identify diverse ML applications in genomics and provide pointers to the latest methods and resources. For ML researchers, we show that most of these applications contain unsolved problems and thus offer numerous exciting technical challenges for ML method innovation. We also provide concise ML problem formulations to help ML researchers approach these tasks. For biomedical researchers, we pinpoint a large and diverse set of ML use cases, which they can extend toward novel applications. We also introduce popular ML models and their corresponding use cases for genomic data. In conclusion, this survey provides an in-depth research summary of the intersection of machine learning, genomics, and therapeutic development. We hope it leads to a deeper understanding of this interdisciplinary domain and broadens the collaboration between these two communities. As it is commonly believed that the future of medicine is personalized, approaching therapeutic tasks with machine learning methods on genomic data is key to achieving breakthroughs in drug discovery and development. We hope this survey helps to bridge the gap between the genomics and machine learning domains. \section{Introduction} Genomics studies the function, structure, evolution, mapping, and editing of genomes~\citep{hieter1997functional}. The genome contains chapters of instructions for building various types of molecules and organisms. Probing genomes allows us to understand biological phenomena, such as identifying the roles that the genome plays in disease. A deep understanding of genomics has led to a vast array of successful therapeutics for a wide range of diseases, both complex and rare~\citep{wong2004monoamines,chin2011cancer}. It also allows us to prescribe more precise treatments~\citep{hamburg2010path} and to pursue more effective therapeutic strategies such as genome editing~\citep{makarova2011evolution}.
Recent advances in high-throughput technologies have led to an outpouring of large-scale genomics data~\citep{reuter2015high,heath2021nci}. However, the bottlenecks along the path of transforming genomics data into tangible therapeutics are innumerable. For instance, diseases are driven by multifaceted mechanisms, so pinpointing the right disease target requires knowledge about the entire suite of biological processes, including gene regulation by non-coding regions~\citep{rinn2012genome}, DNA methylation status~\citep{singal1999dna}, and RNA splicing~\citep{rogers1980mechanism}; personalized treatment requires accurate characterization of disease subtypes and of a compound's sensitivity to various genomic profiles~\citep{hamburg2010path}; gene-editing tools require an understanding of the interplay between guide RNA and the whole genome to avoid off-target effects~\citep{fu2013high}; and monitoring therapeutic efficacy and safety after approval requires mining gene-drug-disease relations in EHRs and the literature~\citep{corrigan2018real}. We argue that genomics data alone are insufficient to ensure clinical implementation; rather, integration of a diverse set of data types is required, spanning multi-omics, compounds, proteins, cellular images, electronic health records (EHR), and the scientific literature. The heterogeneity and scale of these data enable the application of sophisticated computational methods such as machine learning (ML). Over the years, ML has profoundly impacted many application domains, such as computer vision~\citep{krizhevsky2012imagenet}, natural language processing~\citep{devlin2018bert}, and complex systems~\citep{silver2016mastering}. ML has shifted computational modeling from expert-curated features to automated feature construction. It can learn useful and novel patterns from data, often not found by experts, to improve prediction performance on various tasks.
This ability is much needed in genomics and therapeutics, as our understanding of human biology is vastly incomplete. Uncovering these patterns can also lead to the discovery of novel biological insights. Moreover, therapeutic discovery consists of large-scale, resource-intensive experiments, which limit the scope of experimentation, so many potent candidates are missed. Accurate ML predictions can drastically scale up and facilitate these experiments, catching or generating novel therapeutic candidates. Interest in ML for genomics through the lens of therapeutic development has also grown for two reasons. First, for pharmaceutical and biomedical researchers, ML models have passed proof-of-concept stages, yielding astounding performance, often on previously infeasible tasks~\citep{stokes2020deep,senior2020improved}. Second, for ML scientists, large, complex data and hard, impactful problems present exciting opportunities for innovation. This survey summarizes recent ML applications related to genomics in therapeutic development and describes associated challenges and opportunities. Several reviews of ML for genomics have been published~\citep{leung2015machine,eraslan2019deep,zou2019primer}. Most of these previous works focused on studying genomics for biological applications, whereas we study them in the context of bringing genomics discoveries to therapeutic implementation. We identify twenty-two “ML for therapeutics” tasks with genomics data, ranging across the entire therapeutic pipeline, which were not covered in previous surveys. Moreover, most of the previous reviews focused on DNA sequences, while we go beyond DNA sequences and study a wide range of interactions among DNA sequences, compounds, proteins, multi-omics, and EHR data.
In this survey, we organize ML applications into four therapeutic pipelines: (1) target discovery: basic biomedical research to discover novel disease targets to enable therapeutics; (2) therapeutic discovery: large-scale screening designed to identify potent and safe therapeutics; (3) clinical study: evaluating the efficacy and safety of therapeutics in vitro, in vivo, and through clinical trials; and (4) post-market study: monitoring the safety and efficacy of marketed therapeutics and identifying novel indications. We also formulate these tasks and data modalities in ML language, which can help ML researchers with limited domain background understand them. In summary, this survey presents a unique perspective on the intersection of machine learning, genomics, and therapeutic development. The survey is organized as follows. In Section~\ref{sec:primer}, we provide a brief primer on genomics-related data. We also review popular machine learning models for each data type. Next, in Sections~\ref{sec:target}-\ref{sec:post-market}, we discuss ML applications in genomics across the therapeutics development pipeline. Each section describes a phase of the therapeutics pipeline and covers several ML applications along with the associated ML models and formulations. Lastly, in Section~\ref{sec:challenge}, we identify seven open challenges that present numerous opportunities for ML model development as well as novel applications. \begin{figure}[t] \centering \includegraphics[width=0.95\textwidth]{FIG/fig1.pdf} \caption{\textbf{Organization and coverage of this survey}. Our survey covers a wide range of important ML applications in genomics across the therapeutics pipelines (Sections~\ref{sec:target}-\ref{sec:post-market}). In addition, we provide a primer on biomedical data modalities and machine learning models (Section~\ref{sec:primer}).
Finally, we identify seven challenges filled with opportunities (Section~\ref{sec:challenge}).} \label{fig:summary} \end{figure} \section{Machine Learning for Genomics in Target Discovery} \label{sec:target} A therapeutic target is a molecule (e.g., a protein) that plays a role in the biological process of a disease. The molecule can be targeted by a drug to produce a therapeutic effect, such as inhibition, thereby blocking the disease process. Much of target discovery relies on fundamental biological research that depicts a full picture of human biology; based on this knowledge, target biomarkers are identified. In this section, we review machine learning tasks for genomics in target discovery. In Section~\ref{sec:human_bio}, we review six tasks that use ML to facilitate the understanding of human biology, and in Section~\ref{sec:biomarker}, we describe four tasks that use ML to help identify druggable biomarkers more accurately and more quickly. \subsection{Facilitating Understanding of Human Biology} \label{sec:human_bio} \begin{figure}[t] \centering \includegraphics[width = 0.8\textwidth]{FIG/fig5.pdf} \caption{\textbf{Task illustrations for the theme "facilitating understanding of human biology"}. \textbf{a.} A model predicts whether a DNA/RNA sequence can bind to a protein. After training, one can identify binding sites based on feature importance (Section~\ref{sec: dna-protein}). \textbf{b.} A model predicts a missing DNA methylation state based on its neighboring states and the DNA sequence (Section~\ref{sec:methy}). \textbf{c.} A model predicts the splicing level given the RNA sequence and the context (Section~\ref{sec:splice}). \textbf{d.} A model predicts spatial transcriptomics from a tissue image (Section~\ref{sec:spatial}). \textbf{e.} A model predicts the cell type composition from the gene expression (Section~\ref{sec:composition}). \textbf{f.} A model constructs a gene regulatory network from gene expressions (Section~\ref{sec:network_construction}).
Credits: Figure c is adapted from \cite{xiong2015human}, and the spatial transcriptomics image in Figure d is from \cite{he2020integrating}.} \label{fig:human_biology} \end{figure} Oftentimes, the first step in developing any therapeutic agent is to generate a disease hypothesis and understand the disease mechanisms. This requires some understanding of basic human biology, since diseases are complicated and driven by many factors. Machine learning applied to genomics can facilitate basic biomedical research and help understand disease mechanisms. A wide range of relevant tasks have been tackled by machine learning, from predicting splicing patterns~\citep{jha2017integrative,xiong2015human} and DNA methylation status~\citep{angermueller2017deepcpg} to decoding the regulatory roles of genes~\citep{liu2016pedla,deepsea}. The majority of previous reviews have focused on this theme only. While there are numerous tasks under this category, we describe just six important and popular ones here. \subsubsection{DNA-protein and RNA-protein binding prediction} \label{sec: dna-protein} DNA-binding proteins bind to specific DNA regions (binding sites/motifs) to influence the rate of transcription to RNA, chromatin accessibility, and so on. These motifs regulate gene expression and, if mutated, can potentially contribute to disease. Similarly, RNA-binding proteins bind to RNA strands to influence RNA processing, such as splicing and folding. Thus, it is important to identify the DNA and RNA motifs of these binding proteins. Traditional approaches are based on position weight matrices (PWMs), but they require existing knowledge of the motif length and typically ignore interactions among the binding site loci. Machine learning models trained directly on sequences to predict binding scores circumvent these challenges. \cite{alipanahi2015predicting} use a convolutional neural network trained on large-scale DNA/RNA sequences of varying lengths to predict binding scores.
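To make this sequence-to-score setup concrete, the hedged sketch below one-hot encodes a DNA sequence and slides a single convolutional filter across it, max-pooling the activations into a binding score. The hand-set ``TATA'' filter is an illustrative stand-in for a learned filter, not part of the cited model.

```python
# Sketch: score a DNA sequence by sliding one convolutional filter over a
# one-hot encoding and max-pooling the activations.  The hand-set filter
# below detects the toy motif "TATA"; it stands in for a learned filter.

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a list of 4-dim indicator vectors."""
    return [[1.0 if base == b else 0.0 for b in BASES] for base in seq]

# A PWM-like filter: rows are motif positions, columns are A, C, G, T.
TATA_FILTER = [
    [0.0, 0.0, 0.0, 1.0],  # T
    [1.0, 0.0, 0.0, 0.0],  # A
    [0.0, 0.0, 0.0, 1.0],  # T
    [1.0, 0.0, 0.0, 0.0],  # A
]

def binding_score(seq, filt=TATA_FILTER):
    """Max-pooled 1D convolution of the filter over the one-hot sequence."""
    x = one_hot(seq)
    k = len(filt)
    activations = [
        sum(filt[j][b] * x[i + j][b] for j in range(k) for b in range(4))
        for i in range(len(x) - k + 1)
    ]
    return max(activations)
```

A trained model learns many such filters jointly, together with a downstream classifier, rather than a single hand-set motif.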
CNNs are a great match for this task because their filters operate by a mechanism similar to PWMs, which means binding-site motifs can be visualized through the learned filter weights. While motifs are useful, they have lower predictive power than evolutionary features~\citep{kircher2014general} for identifying chromatin protein and histone mark binding. \cite{deepsea} show that integrating another CNN on additional information from the epigenomic profile better predicts these marks. Extending CNN-based models, a large body of work has been proposed to predict DNA- and RNA-protein binding~\citep{kelley2016basset,zhang2018high,zeng2016convolutional,cao2019simple}. \textit{Machine learning formulation: } Given a set of DNA/RNA sequences, predict their binding scores. After training, use feature importance attribution methods to identify the motifs. Task illustration is in Figure~\ref{fig:human_biology}a. \subsubsection{Methylation state prediction} \label{sec:methy} DNA methylation adds methyl groups to individual A or C bases in the DNA to modify gene activity without changing the sequence. It has been shown to be a common mediator of biological processes such as cancer progression and cell differentiation~\citep{robertson2005dna}. Thus, it is important to know the methylation status of DNA sequences in various cells. However, since single-cell methylation assays have low coverage, most methylation statuses at specific DNA positions are missing and require accurate imputation. Classical methods can only predict population-level rather than cell-level status, as cell-level prediction requires annotations that are unavailable~\citep{zhang2015predicting,whitaker2015predicting}. Machine learning models can tackle this problem.
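For intuition about the task shape, the sketch below implements a deliberately naive imputation baseline, a majority vote over the nearest measured states in the same cell, which learned models improve upon by also exploiting the DNA sequence and other cells. The state encoding (1 methylated, 0 unmethylated, None unmeasured) is an assumption for illustration.

```python
# Deliberately naive methylation imputation baseline (illustration only):
# predict a missing CpG state as the majority vote of the nearest measured
# states in the same cell.  States: 1 = methylated, 0 = unmethylated,
# None = unmeasured.

def impute_position(states, pos, window=2):
    """Majority vote over roughly `window` measured neighbors per side."""
    votes = []
    # Walk outwards from `pos`, collecting measured neighbors first.
    for offset in range(1, len(states)):
        for i in (pos - offset, pos + offset):
            if 0 <= i < len(states) and states[i] is not None:
                votes.append(states[i])
        if len(votes) >= 2 * window:
            break
    if not votes:
        return None  # nothing measured nearby: cannot impute
    # Ties resolve toward "methylated" (1); real models would instead
    # output a calibrated probability.
    return 1 if sum(votes) >= len(votes) / 2 else 0
```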
Given a set of cells with the available sequenced methylation status at each DNA position and the DNA sequence, \cite{angermueller2017deepcpg} accurately infer the unmeasured methylation statuses at the single-cell level. More specifically, the imputation uses a bidirectional recurrent neural network over the cells' neighboring available methylation states and a CNN over the DNA sequence. The combined embedding accounts for interactions between the DNA sequence and methylation status, both across and within cells. Alternative architecture choices have also been proposed, such as Bayesian clustering~\citep{kapourani2019melissa} and a variational auto-encoder~\citep{levy2020methylnet}. Notably, the approach can also be extended to RNA methylation state prediction: \cite{zou2019gene2vec} apply a CNN to the neighboring methylation statuses and a word2vec model to the RNA subsequence. \textit{Machine learning formulation: } For a DNA/RNA position with missing methylation status, given its available neighboring methylation states and the DNA/RNA sequence, predict the methylation status at the position of interest. Task illustration in Figure~\ref{fig:human_biology}b. \subsubsection{RNA splicing prediction} \label{sec:splice} RNA splicing is a mechanism that assembles the coding regions, and removes the non-coding ones, before translation into proteins. A single gene can perform various functions because it can be spliced in different ways under different conditions. \cite{lopez2005splicing} estimate that as many as 60\% of pathogenic variants responsible for genetic diseases may influence splicing. \cite{gelfman2017annotating} used ML to derive a score, TraP, which identifies around 2\% of synonymous variants and 0.6\% of intronic variants as likely pathogenic due to splicing defects. Thus, it is important to be able to identify the genetic variants that alter splicing.
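As rule-based background, introns canonically begin with the donor dinucleotide GT and end with the acceptor dinucleotide AG (the ``GT-AG rule'', written in the DNA alphabet). The sketch below flags these candidate sites; learned models replace this hard rule with per-nucleotide probabilities, since most GT/AG occurrences in a genome are not real splice sites.

```python
# Rule-based splice-site baseline: the classical GT-AG rule says introns
# typically start with the donor dinucleotide GT and end with the
# acceptor dinucleotide AG (DNA alphabet).  ML models replace this hard
# rule with a learned per-nucleotide probability.

def candidate_splice_sites(dna):
    """Return (position, kind) for each GT donor / AG acceptor dinucleotide."""
    sites = []
    for i in range(len(dna) - 1):
        pair = dna[i:i + 2]
        if pair == "GT":
            sites.append((i, "donor"))
        elif pair == "AG":
            sites.append((i, "acceptor"))
    return sites
```

On real genomes this rule fires far too often, which is exactly why learned scorers are needed.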
\cite{xiong2015human} model this problem as predicting the splicing level of an exon, measured by the exon's transcript counts, given its neighboring RNA sequence and the cell type information. The model uses Bayesian neural network ensembles on top of curated RNA features and has demonstrated its accuracy by identifying known mutations and discovering new ones. Notably, it is trained on large-scale data across diverse disease areas and tissue types; the resulting model can thus predict the effect of a new, unseen mutation within hundreds of nucleotides on the splicing of an intron, without experimental data. In addition to predicting the splicing level of an exon triplet under various conditions, recent models have been developed to annotate the nucleotide branchpoints of RNA splicing. \cite{paggi2018sequence} feed an RNA sequence into a recurrent neural network, which predicts, for each nucleotide, the likelihood of being a branchpoint. \cite{jagadeesh2019s} further improve performance by integrating features from the splicing literature and generate a highly accurate splicing-pathogenicity score. \textit{Machine learning formulation: } Given an RNA sequence and, if available, its cell type, predict for each nucleotide the probability of being a splice breakpoint, as well as the splicing level. Task illustration is in Figure~\ref{fig:human_biology}c. \subsubsection{Spatial gene expression inference} \label{sec:spatial} Gene expression varies across the spatial organization of tissue. This heterogeneity contains important biological insights. Regular sequencing, whether of single cells or bulk tissue, does not capture this information. Recent advances in spatial transcriptomics (ST) characterize gene expression profiles in their spatial tissue context~\citep{staahl2016visualization}.
However, there are still challenges in integrating the sequencing output with the tissue context provided by histopathology images to better visualize and understand patterns of gene expression within a tissue section. Machine learning models that directly predict gene expression from the histopathology image can thus be a useful tool. \cite{he2020integrating} develop a deep CNN that predicts gene expression from histopathology images of breast cancer patients at a resolution of 100 $\mu$m. They also show that the model generalizes to other breast cancer datasets without re-training. Many downstream tasks build upon the inferred spatial gene expression levels. For example, \cite{levy2020spatial} construct a pipeline that characterizes tumor heterogeneity on top of the CNN gene expression inference step. \textit{Machine learning formulation: } Given the histopathology image of the tissue, predict the expression of every gene at each spatial transcriptomics spot. Task illustration is in Figure~\ref{fig:human_biology}d. \subsubsection{Cell composition analysis} \label{sec:composition} Differences in cell type composition can drive changes in gene expression that are unrelated to the interventions under study. Analyzing the average gene expression of a batch of mixed cells with distinct cell types can therefore lead to bias and false results~\citep{egeblad2010tumors}. Thus, it is important to deconvolve the cell-type composition from the real signals in tissue-based RNA-seq data. ML models can help estimate the cell type proportions and the per-type gene expression. The rationale is to obtain, from single-cell profiles, gene expression parameters (a signature matrix) that characterize each cell type. The signature matrix should contain genes that are stably expressed across conditions. These parameters are then combined with the RNA-seq data to infer the cell composition of a set of query gene expression profiles.
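In its simplest form, this reference-based deconvolution is a least-squares problem: find proportions $w$ such that the signature matrix times $w$ approximates the bulk expression vector. The sketch below solves the two-cell-type case in closed form via the normal equations; the signature values are toy numbers, and real methods additionally constrain the proportions to be non-negative and sum to one.

```python
# Sketch of reference-based deconvolution as least squares: given a fixed
# signature matrix S (genes x cell types) and a bulk expression vector b,
# find proportions w with S w ~= b.  Two cell types admit a closed-form
# solution via the 2x2 normal equations.  Signature values are toy numbers.

def deconvolve_two_types(S, b):
    """Least-squares proportions for a genes-by-2 signature matrix."""
    # Normal equations (S^T S) w = S^T b, solved with the explicit 2x2 inverse.
    a11 = sum(row[0] * row[0] for row in S)
    a12 = sum(row[0] * row[1] for row in S)
    a22 = sum(row[1] * row[1] for row in S)
    c1 = sum(row[0] * y for row, y in zip(S, b))
    c2 = sum(row[1] * y for row, y in zip(S, b))
    det = a11 * a22 - a12 * a12
    return ((a22 * c1 - a12 * c2) / det, (a11 * c2 - a12 * c1) / det)

# Marker genes: gene 0 is high in type A, gene 1 is high in type B.
S = [[10.0, 1.0],
     [1.0, 8.0],
     [5.0, 5.0]]
true_w = (0.3, 0.7)                      # ground-truth mixture, for the demo
bulk = [sum(s * w for s, w in zip(row, true_w)) for row in S]
```

On this noise-free toy mixture the proportions are recovered exactly; with noisy bulk data, the constrained and regularized methods discussed next become important.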
Various methods, including linear regression~\citep{avila2018computational} and support vector machines~\citep{newman2015robust}, are used to predict a cell composition vector that, when combined with the signature matrix, approximates the gene expression. In these cases, the signature matrix is predefined, which may not be optimal. \cite{menden2020deep} applies DNNs to predict the cell composition profile directly from the gene expression, where the hidden neurons can be considered as the learned signature matrix. Cell deconvolution is also crucial for spatial transcriptomics, where each spot could contain 2 to 20 cells from a mixture of dozens of possible cell types. \cite{andersson2020single} models various cell type-specific parameters using a customized probabilistic model. \cite{su2020dstg} uses a graph convolutional network to leverage information from similar spots in the spatial transcriptomics data. However, this problem is constrained by the limited availability of gold-standard cell composition annotations. \textit{Machine learning formulation: } Given the gene expressions of a set of cells (in bulk RNA-seq or a spot in spatial transcriptomics), infer proportion estimates of each cell type for this set. Task illustration is in Figure~\ref{fig:human_biology}e. \subsubsection{Gene network construction} \label{sec:network_construction} The expression level of a gene is regulated via transcription factors (TFs) produced by other genes. Aggregating these TF-gene relations results in the gene regulatory network. Accurate characterization of this network is crucial because it describes how a cell functions. However, it is difficult to quantify gene networks on a large scale through experiments alone. Computational approaches have been proposed to construct gene networks from gene-expression data. The majority of them learn a mapping from the expressions of TFs to the expression of a target gene. If a TF contributes strongly to this mapping, it is likely that this TF regulates this gene.
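This mapping-and-importance recipe can be illustrated with a linear stand-in for the tree-based feature importances used by the methods cited below; the simulated two-TF network and the use of absolute regression coefficients as edge scores are assumptions for illustration.

```python
import numpy as np

def score_tf_edges(expr, tf_idx):
    """Score candidate TF -> target-gene edges from an expression matrix.

    expr: (samples x genes) expression matrix; tf_idx: column indices of TFs.
    For each target gene, regress its expression on the standardized TF
    expressions and use |coefficient| as the edge score.
    """
    X = expr[:, tf_idx]
    X = (X - X.mean(0)) / X.std(0)
    n_genes = expr.shape[1]
    scores = np.zeros((len(tf_idx), n_genes))
    for g in range(n_genes):
        y = expr[:, g]
        y = (y - y.mean()) / (y.std() + 1e-12)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        scores[:, g] = np.abs(coef)
    for i, t in enumerate(tf_idx):
        scores[i, t] = 0.0  # ignore self-edges
    return scores

# Toy network: gene 2 is driven by TF 0, gene 3 by TF 1.
rng = np.random.default_rng(0)
tfs = rng.normal(size=(200, 2))
targets = np.stack([2.0 * tfs[:, 0], -1.5 * tfs[:, 1]], axis=1)
expr = np.hstack([tfs, targets + 0.1 * rng.normal(size=targets.shape)])
edge_scores = score_tf_edges(expr, tf_idx=[0, 1])
```

Thresholding or ranking `edge_scores` yields the predicted edge list; random forests or gradient boosting replace the linear regression in practice.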
Various mapping methods have been proposed, such as linear regression~\citep{haury2012tigress}, random forests~\citep{huynh2010inferring}, and gradient boosting~\citep{moerman2019grnboost2}. \cite{shrivastava2020grnular} proposes a deep neural network version of the mapping through a specialized unrolled algorithm to control the sparsity of the learned network. They also leverage supervision obtained through synthetic data simulators to further improve robustness. Despite the promise, this problem remains unsolved due to the sparsity, heterogeneity, and noise of the gene expression data, particularly data from single-cell RNA sequencing. \textit{Machine learning formulation: } Given a set of gene expression profiles of a gene set, identify the gene regulatory network by predicting all pairs of interacting genes. Task illustration is in Figure~\ref{fig:human_biology}f. \subsection{Identifying Druggable Biomarkers} \label{sec:biomarker} \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{FIG/fig6.pdf} \caption{\textbf{Task illustrations for the theme "identifying druggable biomarkers".} \textbf{a.} A model predicts the zygosity given a read pileup image (Section~\ref{sec:calling}). \textbf{b.} A model predicts whether this patient has the disease given the genomic sequence. After training, feature importance attribution methods are used to assign importance to each variant, which is then ranked and prioritized (Section~\ref{sec:variant_prior}). \textbf{c.} A graph encoder obtains embeddings for each disease and gene node, and they are fed into a predictor to predict their association (Section~\ref{sec:gda}). \textbf{d.} A model identifies a set of gene pathways from the gene expression profiles and the known gene pathways (Section~\ref{sec:pathway}).} \label{fig:biomarker} \end{figure} Diseases are driven by complicated biological processes, where each step may be associated with a biomarker.
By identifying these biomarkers, we can design therapeutics to break the disease pathway and cure the disease. Machine learning can help identify these biomarkers by mining through large-scale biomedical data to predict genotype-phenotype associations accurately. Probing the trained models can uncover potential biomarkers and identify patterns related to the disease mechanisms. Next, we will present several important tasks related to biomarker identification. \subsubsection{Variant calling} \label{sec:calling} Variant calling is the very first step before relating genotypes to diseases. It is used to specify what genetic variants are present in each individual's genome from sequencing. The majority of variants are biallelic, meaning that each locus has only one possible alternative form of nucleotide compared to the reference, while a small fraction are multiallelic, meaning that each locus can have more than one alternate form. As each locus has two copies, one from the mother and one from the father, the variant is measured by the total set of nucleotides (e.g., for a biallelic variant, suppose B is the reference nucleotide and b is the alternative; three genotypes are possible: homozygous (BB), heterozygous (Bb), and homozygous alternate (bb)). Raw sequencing outputs are usually billions of short reads, and these reads are aligned to a reference genome. In other words, for each locus, we have a set of short reads that contain this locus. Since sequencing techniques have errors, the challenge is to predict the variant status of this locus accurately from the set of reads. Manual processing of such a large number of reads to identify each variant is infeasible. Thus, efficient computational approaches are needed for this task. A statistical framework called the Genome Analysis Toolkit (GATK)~\citep{depristo2011framework} combines logistic regression, hidden Markov models, and Gaussian mixture models, and is commonly used for variant calling.
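The genotype classification problem just described can be illustrated with a toy likelihood model: treat each aligned read at a locus as a noisy observation of one of the two allele copies, and pick the genotype under which the observed ref/alt counts are most probable. The fixed error rate and the maximum-likelihood rule are simplifying assumptions; frameworks like GATK use far richer statistical machinery.

```python
from math import log

def call_genotype(n_ref, n_alt, err=0.01):
    """Maximum-likelihood biallelic genotype call from ref/alt read counts.

    Each read carries the alt allele with probability err for BB (sequencing
    error only), 0.5 for Bb (either copy is sampled), and 1 - err for bb.
    """
    alt_prob = {"BB": err, "Bb": 0.5, "bb": 1.0 - err}
    loglik = {g: n_alt * log(p) + n_ref * log(1.0 - p)
              for g, p in alt_prob.items()}
    return max(loglik, key=loglik.get)

# A roughly balanced pileup of ref and alt reads points to a heterozygous site.
genotype = call_genotype(14, 16)  # "Bb"
```

The CNN approaches below replace these hand-specified likelihoods with features learned directly from pileup images.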
Deep learning methods have shown improved performance. For example, while previous works operate on sequencing statistics, DeepVariant~\citep{poplin2018universal} treats the sequencing alignments as an image and applies CNNs. It has been shown to have superior performance to previous modeling efforts and also works for multiallelic variant calling. In addition to predicting zygosity, \cite{luo2019multi} use multi-task CNNs to predict the variant type, alternative allele, and indel length. Many other deep learning based methods have been proposed to tackle more specific challenges, such as long sequencing length using LSTMs~\citep{luo2020exploring}. Benchmarking efforts have also been conducted~\citep{zook2019open}. Note that despite most methods achieving greater than 99\% accuracy, thousands of variants are still called incorrectly since the genome sequence is extremely long. Also, variability persists across different sequencing technologies. Another challenge is the phasing problem, which is to estimate whether two mutations in a gene are on the same chromosome (haplotype) or opposite ones~\citep{delaneau2013improved}. Thus, there is still room for further improvement. \textit{Machine learning formulation: } Given the aligned sequencing data ((1) a read pileup image, which is a matrix of dimension $M \times N$, with $M$ the number of reads and $N$ the length of reads; or (2) the raw reads, which are a set of sequence strings) for each locus, classify the multi-class variant status. Task illustration is in Figure~\ref{fig:biomarker}a. \subsubsection{Variant pathogenicity prioritization/phenotype prediction} \label{sec:variant_prior} There are an extensive number of genomic variants in the human genome, at least one million per person. While many influence complex traits and are relatively harmless, some are associated with diseases. Complex diseases are associated with multiple variants in both the coding and non-coding regions of the genome.
Thus, prioritization of pathogenic variants from the entire variant set can potentially lead to disease targets. There are mainly two computational approaches. The first one is to predict the pathogenicity given a set of features for a single variant. These features are usually curated from biochemical knowledge, such as amino acid identities. \cite{kircher2014general} build on these features using a linear support vector machine and \cite{quang2015dann} use deep neural networks to classify if a variant is pathogenic. The DNN shows improved performance on classification metrics. After training, the model can generate a ranked list of variants based on their predicted pathogenicity likelihood, where the top ones are prioritized. Note that this line of work considers each variant as an input data point and assumes some known knowledge of the pathogenicity of the variants, which is not the case in many scenarios, especially for new diseases. Another line of work is to use each genome profile as a data point and use a computational model to predict disease risks from this profile. If the model is accurate, one can obtain variants contributing to the prediction of the disease phenotype. Predicting directly from the whole-genome sequence is challenging for two reasons. First, as the whole genome is high-dimensional while the cohort size for each disease is relatively limited, this presents the "curse of dimensionality" challenge in machine learning. Second, most SNPs in the input genome are irrelevant to the disease, presenting difficulty in correctly separating these signals from the noise. \cite{kooperberg2010risk} uses a sparse regression model to predict the risk of Crohn's disease for patients using genomics data in the coding region. \cite{pare2017machine} uses gradient boosted regression to approximate polygenic risk scores for complex traits such as diabetes, height, and BMI.
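In their classic additive form, the polygenic risk scores mentioned above are just a weighted sum of allele dosages. A minimal sketch follows; the effect sizes and the logistic calibration are made-up numbers for illustration, not estimates from any real GWAS.

```python
import numpy as np

def polygenic_risk_score(dosages, effect_sizes):
    """Additive PRS: weighted sum of alt-allele dosages (0/1/2) per variant,
    with weights typically taken from GWAS effect-size estimates."""
    return dosages @ effect_sizes

# Toy cohort: 3 individuals x 4 variants (hypothetical effect sizes).
dosages = np.array([[0, 1, 2, 0],
                    [2, 2, 1, 1],
                    [0, 0, 0, 1]])
beta = np.array([0.2, 0.5, 0.1, -0.3])
scores = polygenic_risk_score(dosages, beta)    # [0.7, 1.2, -0.3]
risk = 1.0 / (1.0 + np.exp(-(scores - 0.5)))    # toy logistic calibration
```

Methods such as gradient boosting replace the fixed linear weighting with a learned, potentially non-additive function of the dosages.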
\cite{isgut2021highly} uses logistic regression on polygenic risk scores to improve myocardial infarction risk prediction. \cite{zhou2018deep} applies DNNs on the epigenomic features of both the coding and non-coding regions to predict gene expression for more than 200 tissue and cell types and later identify disease-causing SNPs. Building upon DeepSEA~\citep{deepsea}, \cite{zhou2019whole} apply CNNs on epigenomic profiles, which capture modifications and states of DNA and chromatin such as DNA methylation and chromatin accessibility, to predict autism and identify experimentally validated non-coding variant mutations. \textit{Machine learning formulation: } Given features about a variant, predict its corresponding disease risk and then rank all variants based on the disease risk. Alternatively, given the DNA sequence or other related genomics features, predict the likelihood of disease risk for this sequence and retrieve the variants in the sequence that contribute highly to the risk prediction. Task illustration is in Figure~\ref{fig:biomarker}b. \subsubsection{Rare disease detection} \label{sec:rare} In the US, a rare disease is defined as one that affects fewer than 200,000 people, with other countries similarly defining a rare disease based on low prevalence. There are around 7,000 rare diseases, and they collectively affect 350 million people worldwide~\citep{vickers2013challenges}. Due to limited financial incentives, unknown disease mechanisms, and potential difficulties in recruiting sufficient patients for clinical trials, more than 90\% of rare diseases lack effective treatments. Also, initial misdiagnosis is common. On average, it takes more than seven years and eight physicians for a patient to be correctly diagnosed. Importantly, targets identified for rare diseases may also be useful for therapeutic intervention in similar, more common diseases. ML models are good at identifying patterns from complex patient data.
Rare disease detection can be formulated as a classification task, similar to phenotype prediction. It aims to identify whether the patient has a rare disease from the patient's genomic sequence and information such as EHR. If sufficient data from patients with a rare disease and suitable controls exist, many ML models can be applied to detect rare diseases. For example, based on the motivation that many rare diseases have missing heritability, which could be harbored in regulatory regions, \cite{yin2019using} propose a two-step CNN approach where one CNN first predicts the promoter regions that are likely associated with Amyotrophic Lateral Sclerosis. Another CNN then detects if the patient has the rare disease based on genotypes in the selected genomic regions. However, rare diseases pose special challenges to ML compared to classical phenotype prediction because these diseases have an extremely low prevalence in the data, while the majority of data points belong to the control set. This data imbalance makes it difficult for ML models to pick up signals and hence prevents them from making accurate predictions. Thus, special model designs are required. \cite{cui2020conan} uses a generative adversarial network (GAN) model to generate synthetic but realistic rare disease patient embeddings to alleviate the class imbalance problem and shows a significant performance increase in rare disease detection. \cite{taroni2019multiplier} use a transfer learning framework to adapt from large-scale genomic data with a diverse set of diseases to a smaller set of rare disease genomic data. Specifically, they leverage biological principles by constructing latent variables shared across a wide range of diseases. These variables correspond to genetic pathways. As these variables are fundamental biology units, they can be naturally adopted even for smaller datasets such as rare disease cohorts.
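One simple remedy for the class imbalance discussed above is inverse-frequency class weighting, sketched here with a small logistic-regression trainer. The single-feature simulated cohort and the hyperparameters are assumptions for illustration; GAN-based augmentation and transfer learning, as in the works cited above, are more powerful alternatives.

```python
import numpy as np

def weighted_logistic_fit(X, y, n_iter=2000, lr=0.1):
    """Logistic regression with inverse-frequency class weights, so the rare
    positive (disease) class contributes as much to the loss as the controls."""
    w, b = np.zeros(X.shape[1]), 0.0
    n, n_pos = len(y), y.sum()
    weights = np.where(y == 1, n / (2.0 * n_pos), n / (2.0 * (n - n_pos)))
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        resid = weights * (p - y)
        w -= lr * (X.T @ resid) / n
        b -= lr * resid.sum() / n
    return w, b

# 95 controls vs. 5 cases, separated along a single informative feature.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (95, 1)), rng.normal(1, 0.5, (5, 1))])
y = np.array([0] * 95 + [1] * 5)
w, b = weighted_logistic_fit(X, y)
```

Without the reweighting, the 19:1 imbalance would pull the decision boundary toward the control class; with it, the boundary sits near the midpoint of the two groups.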
\textit{Machine learning formulation: } Given the gene expression data and other auxiliary data of a patient, predict whether this patient has a rare disease. Also, identify genetic variants for this rare disease. Task illustration is in Figure~\ref{fig:biomarker}b, which is the same as phenotype prediction. \subsubsection{Gene-disease association prediction} \label{sec:gda} Although numerous genes are now mapped to diseases, human knowledge about gene-disease association mapping is vastly incomplete. At the same time, we know many genes are similar to each other, as is also the case for diseases. To leverage these similarities, we can impute unknown associations from known ones by many similarity rules that govern the gene-disease networks. One notable rule is the "guilt by association" principle~\citep{wolfe2005systematic}. For example, disease $X$ and gene $a$ are more likely to be associated if we know gene $b$, associated with disease $X$, has a similar functional role as gene $a$. In contrast to variant prioritization, which focuses on the prediction of one specific disease, gene-disease association prediction aims to predict any disease-gene pair. Many graph-theoretic approaches such as diffusion~\citep{kohler2008walking} have been applied to gene-disease association prediction. However, they require strong assumptions about the data. Learnable methods have also been heavily investigated. This problem has also been formulated as a recommendation system problem, where the model recommends items (genes) to users (diseases). \cite{huang2020skipgnn} use a molecular network-motivated graph neural network and formulate association prediction as a link prediction problem. Studies have shown that integrating similarity across multiple data types can help gene-disease prediction~\citep{tranchevent2016candidate}. Thus, a multi-modal data fusion scheme is also ideal.
Notably, \cite{luo2019enhancing} fuse information from protein-protein interactions and gene ontology through a multimodal deep belief network. As some diseases are not as well annotated as others, predicting associations for molecularly uncharacterized diseases (with no known biological function or genes), such as rare diseases, is also important. \cite{caceres2019disease} use phenotype data to transfer knowledge from other phenotypically similar diseases using a network diffusion method, where the phenotypic similarity is defined by the distance on the disease ontology trees. \textit{Machine learning formulation: } Given the known gene-disease association network and auxiliary information, predict the association likelihood for every unknown gene-disease pair. Task illustration is in Figure~\ref{fig:biomarker}c. \subsubsection{Pathway analysis and prediction} \label{sec:pathway} Many diseases are driven by a set of genes forming disease pathways. Pathway analysis identifies these gene sets through transcriptomics data and leads toward a more complete understanding of disease mechanisms. Many statistical approaches have been proposed. For example, Gene Set Enrichment Analysis~\citep{subramanian2005gene} leverages existing known pathways and calculates statistics on omics data to see if any pathway is activated. However, it treats each pathway as an unordered set, with no relations among the genes modeled. Other topology-based pathway analyses~\citep{tarca2009novel} that take into account the gene relational graph structure have also been proposed. Many pathway analyses suffer from noise and provide unstable pathway activation and inhibition patterns across samples and experiments. \cite{ozerov2016silico} introduces a clustered gene importance factor to reduce noise and improve robustness.
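The over-representation statistic behind many such tools is a hypergeometric tail probability: how surprising is the observed overlap between a list of hit genes and a pathway, given the genome size? A minimal version, with toy counts chosen purely for illustration (this is the simple set-based test, not the full GSEA ranking statistic):

```python
from math import comb

def enrichment_pvalue(n_genome, n_pathway, n_hits, n_overlap):
    """One-sided hypergeometric test: P(overlap >= n_overlap) when n_hits
    genes are drawn at random from a genome containing an n_pathway-gene set."""
    total = comb(n_genome, n_hits)
    p = 0.0
    for k in range(n_overlap, min(n_pathway, n_hits) + 1):
        p += comb(n_pathway, k) * comb(n_genome - n_pathway, n_hits - k) / total
    return p

# 10 of 100 differentially expressed genes fall in a 50-gene pathway
# out of a 20,000-gene genome: far more overlap than chance predicts.
p_enriched = enrichment_pvalue(20000, 50, 100, 10)
```

A tiny p-value flags the pathway as activated; topology-based methods additionally weight the overlap by where the hits sit in the pathway graph.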
Although current pathway analysis still heavily relies on network-based methods~\citep{reyna2020pathway}, an emerging trend for understanding potential disease mechanisms is to probe explainable machine learning models that predict genotype-to-disease associations. Many efforts have been made to simulate cell signaling pathways and the corresponding hierarchical biological processes \textit{in silico}. \cite{karr2012whole} devises the first whole-cell approach to predict cell growth from genotype using a set of differential equations. Recently, a machine learning model called the visible neural network~\citep{ma2018using} simulates the hierarchical biological processes (gene ontology) in a eukaryotic cell as a feedforward neural network where each neuron corresponds to a biological subsystem. This model is trained end-to-end from genotype to cell fitness phenotype with good accuracy. After training, a post-hoc interpretability method that assigns scores to each subsystem generates a likely mechanism for the fitness of a cell. This method has recently been extended to train on genomics data related to the prostate cancer phenotype, in order to generate disease pathways~\citep{elmarakeby2020biologically}. \textit{Machine learning formulation: } Given the gene expression data for a phenotype and known gene relations, identify a set of genes corresponding to disease pathways. Task illustration is in Figure~\ref{fig:biomarker}d. \section{Machine Learning for Genomics in Therapeutics Discovery} \label{sec:discovery} After a drug target is identified, a campaign is initiated to design potent therapeutic agents that modulate the target and block the disease pathway. These therapeutics can be a small molecule, an antibody, a gene therapy, and so on. The discovery process consists of numerous phases and subtasks to ensure the efficacy and safety of the therapeutics. Genomics data also play a role in this process.
In this section, we review ML for genomics in therapeutics discovery under two main themes. Section~\ref{sec:personalized} investigates the relation of small-molecule drug efficacy given different cellular genomic contexts. Section~\ref{sec:gene_therapy} reviews how ML can enable the design of various gene therapies. \subsection{Improving Context-specific Drug Response}\label{sec:personalized} \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{FIG/fig7.pdf} \caption{\textbf{Task illustrations for the theme "improving context-specific drug response"}. \textbf{a.} A drug encoder and a cell line encoder produce embeddings for the drug and cell line, respectively, which are then fed into a predictor to estimate drug response (Section~\ref{sec:drug_response}). \textbf{b.} Drug encoders first map two drugs into embeddings, and a cell line encoder maps a cell line into an embedding. Then, the three embeddings are fed into a predictor for drug synergy scores (Section~\ref{sec:drug_combo}). } \label{fig:personalized} \end{figure} Personalized medicine aims to develop treatment strategies based on a patient's genetic profile. This contrasts with the traditional "one-size-fits-all" approach, which assigns the same treatments to patients with the same diseases. Personalized approaches have been one of the most sought-after endeavors in the field due to their numerous advantages, such as improving outcomes and reducing side effects~\citep{hamburg2010path}, especially in oncology, where several biomarkers could lead to drastically different treatment plans~\citep{chin2011cancer}. However, understanding the relations among treatments, diseases, high-dimensional genomics profiles, and the various outcomes would require large-scale experiments of combinatorial complexity~\citep{menden2019community}. Machine learning provides valuable tools to facilitate this process.
\subsubsection{Drug response prediction} \label{sec:drug_response} It is known that the same small-molecule drug can have various response levels given different genomic profiles. For example, an anti-cancer drug can elicit different responses in different tumors. Thus, it is crucial to generate an accurate response profile given drug-genomics profile pairs. However, experimentally testing each combination of available drugs and cell-line genomics profiles is prohibitively expensive. A machine learning model can be used to predict a drug's response in a diverse set of cell lines \textit{in silico}. An accurate machine learning model can greatly narrow down the drug screening space and reduce the burden on experimental costs and resources. Various models have been proposed to improve the accuracy, such as matrix factorization~\citep{ammad2016drug}, VAEs~\citep{rampavsek2019dr}, ensemble learning~\citep{tan2019drug}, similarity network models~\citep{zhang2015predicting2}, and feature selection~\citep{ali2019machine}. While promising, one challenge is that the current public databases have a limited number of drugs and genomics profiles tested, especially for some tissues or drug classes. It is unclear if the models can generalize to new contexts such as novel cell types and structurally diverse drugs with limited samples. To tackle this challenge, \cite{ma2021few} apply model-agnostic meta-learning~\citep{finn2017model} to learn from screening data of a set of tissues to generalize to new contexts such as new tissue types and preclinical studies in mice. In addition to accurate prediction, it is also important to allow an understanding of drug response mechanisms. Applying visible neural networks~\citep{ma2018using} in the drug response prediction context, \cite{kuenzi2020predicting} generates potential mechanisms and validates them through experiments using CRISPR, in-vitro screening, and patient-derived tissue cultures.
\textit{Machine learning formulation: } Given a pair of a drug compound's molecular structure and the gene expression profile of the cell line, predict the drug response in this context. Task illustration is in Figure~\ref{fig:personalized}a. \subsubsection{Drug combination therapy prediction} \label{sec:drug_combo} Drug combination therapies, also called cocktails, can expand the use of existing drugs, improve outcomes, and reduce side effects. For example, drug cocktails can modulate multiple targets to provide a novel mechanism of action in cancer treatments. Also, by reducing the dosage of each drug, it may be possible to reduce adverse effects. However, screening the entire space of possible drug combinations and various cell lines is not feasible experimentally. Machine learning models that can predict synergistic responses given the drug pair and the genomic profile of a cell line can prove valuable. Classical machine learning methods such as naive Bayes~\citep{li2015large} and random forests~\citep{wildenhain2015prediction} have shown initial success on independent external data. Deep learning methods such as deep neural networks~\citep{preuer2018deepsynergy} and deep belief networks~\citep{chen2018predict} have shown improved performance. Integration with multi-omics data on cell lines, such as miRNA expression and proteomic features, has further improved performance~\citep{xia2018predicting}. Similar to drug response prediction, one important challenge is to transfer across tissue types and drug classes. \cite{kim2021anticancer} conducts transfer learning to adapt models trained on data-rich tissues such as brain and breast tissues to understudied tissues such as bone and prostate tissues. \textit{Machine learning formulation: } Given a combination of drug compound structures and a cell line's genomics profile, predict the combination response. Task illustration is in Figure~\ref{fig:personalized}b.
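Both formulations above fit the same template: featurize the drug(s) and the cell line, then regress the response on their combination. A linear sketch of the drug response case, where random features and a closed-form ridge objective stand in for the learned encoders and deep predictors discussed above:

```python
import numpy as np

def fit_response_model(drug_feats, cell_feats, responses, alpha=1e-6):
    """Ridge regression on concatenated [drug, cell line] feature vectors,
    a linear stand-in for the encoder + predictor architecture."""
    X = np.hstack([drug_feats, cell_feats])
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ responses)

def predict_response(w, drug_feat, cell_feat):
    return float(np.concatenate([drug_feat, cell_feat]) @ w)

# Synthetic screen: 50 drug/cell-line pairs with a linear ground truth.
rng = np.random.default_rng(1)
drugs, cells = rng.normal(size=(50, 8)), rng.normal(size=(50, 8))
w_true = rng.normal(size=16)
y = np.hstack([drugs, cells]) @ w_true
w = fit_response_model(drugs, cells, y)
pred = predict_response(w, drugs[0], cells[0])
```

The synergy task follows the same pattern with two drug feature vectors concatenated before the cell-line features.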
\subsection{Improving Efficacy and Delivery of Gene Therapy} \label{sec:gene_therapy} \begin{figure}[t] \centering \includegraphics[width = 0.85\textwidth]{FIG/fig8.pdf} \caption{\textbf{Task illustrations for the theme "Improving Efficacy and Delivery of Gene Therapy".} \textbf{a.} A model predicts various gene editing outcomes given the gRNA sequence and the target DNA features (Section~\ref{sec:on_target}). \textbf{b.} First, a model searches the candidate genome for sequences similar to the target DNA sequence and generates a list of potential off-target DNA sequences. Next, an on-target model predicts if the gRNA sequence can affect these potential DNA sequences. The ones that have high on-target effects are considered potential off-targets (Section~\ref{sec:off_target}). \textbf{c.} An optimal model (oracle function) is first obtained by training on a gold-label database. Next, a generative model generates de novo virus vectors that are potent in the oracle fitness landscape (Section~\ref{sec:virus}). } \label{fig:gene} \end{figure} Gene therapy is an emerging therapeutics class that delivers nucleic acid instructions into patient cells to prevent or cure disease. These instructions include (1) replacing disease-causing genes with healthy ones, (2) turning off genes that cause diseases, and (3) inserting genes to produce disease-fighting proteins. Special vehicles called vectors are used to deliver these instructions (cargos) into the cells and induce sufficient therapeutic effects. Many vector choices exist, such as naked DNA, viruses, and nanoparticles. Virus vectors have become popular due to their natural ability to directly enter cells and replicate their genetic material. Despite the promise, numerous challenges still exist in reaching the expected effect, such as the host immune response, viral vector toxicity, and off-target effects. In recent years, machine learning tools have been shown to help tackle many of these challenges.
\subsubsection{CRISPR on-target outcome prediction} \label{sec:on_target} CRISPR-Cas9 is a biotechnology that can edit genes at a precise location. It allows the correction of genetic defects to treat disease and provides a tool with which to alter the genome and study gene function. CRISPR-Cas9 is a system with two important players. The Cas9 protein is an enzyme that can cut through DNA, and the CRISPR sequence guides the cut location. Within the CRISPR sequence, the guide RNA sequence (gRNA) determines the specificity for the target DNA sequence. While existing CRISPR systems mostly make edits via small deletions, repair-based editing, in which a DNA template is provided after cutting to fill in the missing part of the gene, is also an area of active research. In theory, CRISPR can correctly edit the target DNA sequence and even restore a normal copy, but in reality, the outcome varies significantly given different gRNAs~\citep{cong2013multiplex}. It has been shown that the outcome is decided by factors such as gRNA secondary structure and chromatin accessibility~\citep{jensen2017chromatin}. Outcomes of interest include insertion/deletion length, indel diversity, and the fraction of insertions/frameshifts. Thus, it is crucial to design a gRNA sequence such that the CRISPR-Cas system can achieve its effect on the designated target (also called on-target). Machine learning methods that can accurately predict the on-target outcome given the gRNA would facilitate the gRNA design process. Many classic machine learning methods have been investigated to predict various repair outcomes given the gRNA sequence, such as linear models~\citep{labuhn2018refined,moreno2015crisprscan}, support vector machines~\citep{chari2015unraveling}, and random forests~\citep{wilson2018high}. However, they do not capture the high-order nonlinearity of gRNA features. Deep learning models that apply CNNs to automatically learn gRNA features show further improved performance~\citep{chuai2018deepcrispr,kim2018deep}.
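The input representation shared by these sequence models is a one-hot encoding of the gRNA. A sketch with a position-specific linear scorer on top; the weight matrix here is a hypothetical stand-in for learned CNN filters, not weights from any published model.

```python
import numpy as np

BASES = "ACGT"

def one_hot(grna):
    """Encode a gRNA as a 4 x L binary matrix (rows = A, C, G, T),
    the standard input to CNN-based on-target outcome models."""
    m = np.zeros((4, len(grna)))
    for j, base in enumerate(grna):
        m[BASES.index(base), j] = 1.0
    return m

def efficiency_score(grna, weights):
    """Position-specific linear score over the one-hot encoding."""
    return float(np.sum(one_hot(grna) * weights))

# Hypothetical learned preference: a G at the first position helps.
w = np.zeros((4, 4))
w[BASES.index("G"), 0] = 1.0
```

CNNs generalize this by learning many such position-weight filters jointly with nonlinear combinations.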
Numerous challenges still exist. For example, machine learning models are data-hungry, and only limited data from CRISPR knockout experiments across diverse cell and tissue types exist, affecting the models' generalizability. In particular, improving generalizability to novel target classes and generating prediction mechanisms remain open questions. \textit{Machine learning formulation: } With a fixed target, given the gRNA sequence and other auxiliary information such as target gene expression and epigenetic profile, predict its on-target repair outcome. Task illustration is in Figure~\ref{fig:gene}a. \subsubsection{CRISPR off-target prediction} \label{sec:off_target} As CRISPR can cut any region that matches the gRNA, it can potentially cut through similar off-target regions, leading to significant adverse effects. This is a major hurdle for clinical implementations of CRISPR techniques~\citep{zhang2015off}. Similar to on-target prediction, off-target prediction is to predict if the gRNA could cause off-target effects. In contrast to on-target prediction, where we have a fixed given DNA region, off-target prediction requires identifying potential off-target regions from the entire genome. Thus, the first step is to search and narrow down a set of potential hits using alignment algorithms and distance measures~\citep{heigwer2014crisp,bae2014cas}. Next, given the set of targets and the gRNA, a model needs to score each putative target-gRNA pair. The model also needs to aggregate these scores since one gRNA usually has multiple putative off-targets. Various heuristic aggregation methods have been proposed and implemented~\citep{hsu2013dna,haeussler2016evaluation,cradick2014cosmid}. Machine learning methods improve performance further. \cite{listgarten2018prediction} uses a two-layer boosted regression tree where the first layer scores each gRNA-target pair and the second layer aggregates the scores.
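The score-then-aggregate structure can be sketched as follows; the mismatch-decay site score and the at-least-one-cleavage aggregation rule are toy assumptions, and learned models replace both layers with fitted functions.

```python
def site_score(grna, site, decay=0.5):
    """Toy per-site cleavage score: each gRNA-site mismatch multiplies the
    score by a fixed decay (real models learn position-dependent penalties)."""
    mismatches = sum(a != b for a, b in zip(grna, site))
    return decay ** mismatches

def off_target_risk(grna, candidate_sites):
    """Aggregate per-site scores into one gRNA-level risk, here read as the
    probability of cleaving at least one candidate off-target site."""
    no_cleavage = 1.0
    for site in candidate_sites:
        no_cleavage *= 1.0 - site_score(grna, site)
    return 1.0 - no_cleavage

risk = off_target_risk("ACGT", ["ACGA", "TCGT"])  # two 1-mismatch sites
```

Ranking gRNAs by this aggregated risk lets a designer discard guides with many close genomic matches before on-target optimization.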
\cite{lin2018off} apply a CNN to a fused DNA-gRNA pair representation and achieve improved performance. There is still large room for improvement. For example, as data from richer contexts such as different cell, tissue, and organism types become available, more sophisticated models that generalize well across all contexts could be possible. \textit{Machine learning formulation: } Given the gRNA sequence and the off-target DNA sequence, predict its off-target effect. Task illustration is in Figure~\ref{fig:gene}b. \subsubsection{Virus vector design} \label{sec:virus} To deliver gene therapy instructions into cells and induce therapeutic effects, virus vectors are used as vehicles. The design of the virus vector is thus crucial. The recent development of Adeno-Associated Virus (AAV) capsid vectors has led to a surge in gene therapy due to its favorable tropism, immunogenicity, and manufacturability properties~\citep{daya2008gene}. However, there are still unsolved challenges, mainly regarding the undesirable properties of natural AAV forms. For example, up to 50-70\% of humans are immune to the natural AAV vector, which means the human immune system would destroy it without delivering it to the targeted cells~\citep{chirmule1999immune}. This means that those patients are not able to receive gene therapies. Thus, designing functional variants of AAV capsids that can escape the immune system is crucial. Similarly, it would be ideal to design AAV variants that have higher efficiency and selectivity for the tissue target of interest. The standard method to generate new AAV variants is "directed evolution", which produces variants with limited diversity, most still similar to the natural AAV. It is also very time- and resource-intensive, while the resulting yields are low (<1\%). Recently, \cite{bryant2021deep} developed a machine learning-based framework to generate AAV variants that can escape the immune system with a >50\% yield rate.
They first train an ensemble neural network that aggregates a DNN, a CNN, and an RNN on a customized data collection to assign accurate viability scores to AAVs from diverse sources. Then, they sample iteratively on the predictor's viability landscape to obtain a set of highly viable AAVs. Many opportunities remain open for machine-aided AAV design~\citep{kelsic2019challenges}. For example, this framework can be extended beyond immune-system viability to other targets, such as tissue selectivity, if a high-capacity machine learning property predictor can be constructed. \textit{Machine learning formulation: } Given a set of virus sequences and their labels for a property X, obtain an accurate predictor oracle and apply generative modeling to design de novo virus variants with a high score in X and high diversity. Task illustration is in Figure~\ref{fig:gene}c. \section{Machine Learning for Genomics in Clinical Studies} \label{sec:clinical} After a therapeutic is shown to have efficacy in the wet lab, it is further evaluated in animals and then in humans in full-scale clinical trials. ML can facilitate this process using genomics data. We review the following three themes. Section~\ref{sec:translation} studies the long-standing difficulty of translating results from animals to humans and shows that ML can enable better translation by characterizing the molecular differences. Section~\ref{sec:cohort} reviews ML techniques to curate a better patient cohort to which the therapeutic can be applied, as cohort quality can greatly affect the clinical trial outcome. Section~\ref{sec:causal} surveys causal inference, an alternative set of ML techniques that can augment clinical trials in cases where traditional trials are unethical or difficult to conduct.
\subsection{Translating Preclinical Animal Models to Humans} \label{sec:translation} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{FIG/fig9.pdf} \caption{\textbf{Task illustration for the theme "translating preclinical animal models to humans".} A model first obtains translatable features between mouse and human by comparing their genotypes. Next, a predictor model is trained to predict phenotype given mouse genotype. Given the translatable features, the predictor is adapted to make predictions on human genotypes (Section~\ref{sec:geno-pheno}).} \label{fig:translation} \end{figure} Before therapeutics move into trials on humans, they are validated through extensive animal model experiments (preclinical studies). However, despite successful preclinical studies, more than 85\% of early trials for novel drugs fail to translate to humans~\citep{mak2014lost}. One of the main factors in this failure is the gap between animal and human biology and physiology: animal models often do not faithfully mimic the human disease condition. However, by comparing large-scale omics data between animals and humans, we can identify translatable features and use machine learning to align animal and human models. \subsubsection{Animal-to-human translation} \label{sec:geno-pheno} One of the central questions of animal-to-human translation is the following: if a study establishes relations between phenotypes and genotypes based on interventions in animals, do these relations persist in humans? Conventional computational methods construct cross-species pairs (CSPs) and compare each pair's molecular profiles to find differential expression~\citep{naqvi2019conservation}. Despite identifying several differential features associated with disease, these methods often do not translate accurately to humans. This is where machine learning can help, given its strength in predictive modeling.
To formulate this in ML terms, the genotype-phenotype relations can be captured by a computational model built on an animal's molecular profile (such as using gene expression data to predict disease phenotypes). We can then evaluate the trained model on human molecular profiles (the test set) and see whether it accurately predicts human phenotypes. A large ML challenge called SBV-IMPROVER was conducted to predict protein phosphorylation in human cells from rat cells using genomics and transcriptomics data under 52 stimulation conditions~\citep{rhrissorrakrai2015understanding}. A wide range of ML approaches such as deep neural networks, trees, and support vector machines were applied and shown to have promising extrapolation performance to humans. However, these works directly adopt ML models trained on mice and test them on humans, while human data often have a different distribution from mouse data. This poses a challenge, since ML models often suffer from poor out-of-distribution generalizability. Recent works explicitly model this out-of-distribution property by identifying and leveraging translatable features between animals and humans. \cite{brubaker2019computational} propose a semi-supervised technique that integrates unsupervised modeling of human disease-context datasets into a supervised component trained on mouse data. In addition, works that directly train on CSPs have been proposed. For example, \cite{normand2018found} aim to identify translatable genes. For every gene, they compute the disease effect size for humans and rats in each CSP and apply linear models to fit them. After fitting, they use the mean of the linear model as the predicted human effect size for this gene. They show gene selection improved by up to 50\%.
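The train-on-mouse, test-on-human setup and its distribution-shift pitfall can be sketched on synthetic data. The sketch below uses a nearest-centroid classifier and a crude per-feature mean shift as a stand-in for the domain-adaptation methods cited; all data and the shift are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic expression data: 2 phenotype classes, 5 genes.
# Human profiles follow the same class structure but with a global shift.
mouse_X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
mouse_y = np.array([0] * 50 + [1] * 50)
human_X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(2, 1, (20, 5))]) + 3.0
human_y = np.array([0] * 20 + [1] * 20)

def nearest_centroid_fit(X, y):
    """Class centroids of the training (mouse) expression profiles."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    labels = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[d.argmin(axis=0)]

centroids = nearest_centroid_fit(mouse_X, mouse_y)

# Naive transfer: apply the mouse model directly to shifted human data.
acc_naive = (nearest_centroid_predict(centroids, human_X) == human_y).mean()

# Crude alignment: remove the difference of per-gene means before predicting.
aligned = human_X - (human_X.mean(axis=0) - mouse_X.mean(axis=0))
acc_aligned = (nearest_centroid_predict(centroids, aligned) == human_y).mean()
```

Even this trivial alignment recovers most of the lost accuracy here; the cited semi-supervised and CSP-based methods address the same shift with far richer models.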
Computational network models leverage existing biological knowledge about system-level signaling pathways and mechanistic models and have been shown to identify transferable biomarkers and predictable pathways~\citep{yao2018integrative,blais2017reconciled}. It is worth noting that the animal-to-human translation problem is similar to domain adaptation problems in computer vision and natural language processing, which also strive to bridge the gap between a source domain and a target domain~\citep{wang2018deep}. Opportunities to apply advanced domain adaptation techniques to this problem remain open. Despite the improved prediction performance, data availability is still a hurdle to applying ML to this problem, since it requires new data for every animal model and disease indication. \textit{Machine learning formulation:} Given genotype-phenotype data of animals and only the genotype data of humans, train a model to fit phenotype from genotype and transfer this model to humans. Task illustration is in Figure~\ref{fig:translation}. \subsection{Curating High-quality Cohorts} \label{sec:cohort} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{FIG/fig10.pdf} \caption{\textbf{Task illustrations for the theme "curating high-quality cohort".} \textbf{a.} Given patients' gene expressions and EHRs, a model clusters them into subgroups (Section~\ref{sec:stratify}). \textbf{b.} A patient model obtains a patient embedding from the patient's gene expression and EHR. A trial model obtains a trial embedding based on the trial criteria. A predictor predicts whether the patient is eligible for enrollment in the given trial (Section~\ref{sec:match}). } \label{fig:cohort} \end{figure} To study the efficacy of therapeutics in the intended or target patient groups, a clinical trial requires a precise and accurate patient population in each arm~\citep{trusheim2007stratified}.
However, due to the heterogeneity of patients, it may be difficult to recruit and enroll appropriate patients. ML can help characterize factors important for the primary endpoints and quickly identify suitable patients from their molecular profiles. \subsubsection{Patient stratification/disease sub-typing}\label{sec:stratify} Patient stratification in clinical trials is designed to create more homogeneous subgroups with respect to risk of outcome or other important variables that might impact the validity of the comparison between treatment arms. Some therapeutics may be highly effective in one patient subgroup but have a weak or no effect in other subgroups. In the absence of appropriate stratification in heterogeneous patient populations, the average treatment effect across all patients will obscure potentially strong effects in a subpopulation. Conventional stratification methods rely on manual rules over a few available features such as clinical genomic biomarkers, but this can ignore signals arising from rich patient data. Machine learning can potentially identify important stratification criteria from heterogeneous data sources such as genomics profiles, patient demographics, and medical history. Various unsupervised models applied to gene expression data have been proposed to group samples into distinct categories and treat each category as a subtype. These methods include clustering~\citep{shen2013sparse, witten2010framework}, gene network stratification~\citep{hofree2013network}, and matrix factorization~\citep{gao2005improving}. In addition, \cite{chen2020deep} propose a DNN-based clustering method that incorporates a supervised constraint from gold-standard subtype knowledge. As the data are high-dimensional and heterogeneous, the challenge is to fuse diverse data sources to obtain a comprehensive patient representation.
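The clustering-based stratification described above can be sketched with a minimal hand-rolled k-means on synthetic expression data (two planted subgroups; real pipelines use richer models and real multi-omics inputs):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means for stratifying samples by expression profile."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each sample to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned samples.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
# Two synthetic patient subgroups with distinct expression of 10 genes.
X = np.vstack([rng.normal(0, 0.5, (30, 10)), rng.normal(3, 0.5, (30, 10))])
labels, _ = kmeans(X, k=2)
```

With well-separated subgroups the two clusters recover the planted subtypes; the hard part in practice is exactly what the text notes, namely fusing heterogeneous data into the matrix `X`.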
\cite{wang2014similarity} aggregate mRNA expression, DNA methylation, and microRNA data through similarity network fusion for cancer subtyping. Similarly, \cite{jurmeister2019machine} leverage DNA methylation profiles to subtype lung cancers using a DNN, and \cite{li2015identification} apply topological data analysis to the patient-patient similarity network constructed from each patient's genotype and EHR data to identify type 2 diabetes subgroups. Despite their accuracy, these methods lack interpretability, which is especially important in patient stratification: a black-box stratification model based on complex patient data does not provide a rationale and is often not trusted enough for practitioners to adopt. Decision trees are a classical family of interpretable ML models. For example, \cite{valdes2016mediboost} apply a boosted decision tree method that is more accurate than a standard decision tree while still providing clues about how the model arrives at its prediction/stratification. \textit{Machine learning formulation: } Given the gene expression and other auxiliary information for a set of patients, produce criteria for patient stratification. Task illustration is in Figure~\ref{fig:cohort}a. \subsubsection{Matching patients for genome-driven trials}\label{sec:match} Clinical trials suffer from difficulties in recruiting a sufficient number of patients. \cite{mendelsohn2010national} report that 40\% of trials fail to complete accrual in the National Clinical Trial Network, and \cite{murthy2004participation} show that less than 2\% of adults with cancer enroll in any clinical trial. Many factors can prevent successful enrollment, such as limited awareness of available trials and ineffective methods to identify eligible patients in the traditional manual matching system~\citep{lee2019conceptual}.
These problems can be tackled by automated patient-trial matching, which leverages heterogeneous patient data such as genomics profiles and trial eligibility criteria. Conventional patient-trial matching methods rely on rule-based annotations. For example, \cite{tao2019real} conduct a real-world outcome analysis using an automatic patient-trial matching alert system based on patients' genomic biomarkers and show improved results compared to manual matching. However, such heuristic matching rules often omit useful information in rich patient data. \cite{bustos2018learning} use a DNN to generate eligibility criteria, but do not perform matching. Recently, advanced machine learning methods have been proposed to match patients' EHR data against the eligibility criteria of a trial. \cite{zhang2020deepenroll} apply a pre-trained Bidirectional Encoder Representations from Transformers (BERT) model to encode trial protocols into sentence embeddings and use a hierarchical embedding model to represent patients' longitudinal EHRs. Building upon this work, \cite{gao2020compose} propose a multi-granularity memory network to encode structured patient medical codes and use a convolutional highway network to encode trial eligibility criteria. They show significant improvement over previous rule-based methods. However, genomic information has not yet been incorporated; methods that fuse genomic and EHR data to represent patients could further improve matching efficiency in genome-driven trials. \textit{Machine learning formulation: } Given a pair of patient data (genomics, EHR, etc.) and trial eligibility criteria (text description), predict the matching likelihood. Task illustration is in Figure~\ref{fig:cohort}b.
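To make the matching formulation concrete, here is a deliberately simple stand-in for the learned embedding models above: a bag-of-words cosine similarity between a patient summary and trial criteria. The example texts are invented; the cited systems learn dense embeddings with BERT-style encoders rather than counting tokens.

```python
import math
from collections import Counter

def cosine_sim(a_tokens, b_tokens):
    """Cosine similarity between two bag-of-words token lists."""
    a, b = Counter(a_tokens), Counter(b_tokens)
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

patient = "stage iv nsclc egfr mutation detected in tumor tissue".split()
trial = "egfr sensitizing mutation must be detected in tumor tissue".split()
other_trial = "braf v600e mutant melanoma previously untreated".split()

match_score = cosine_sim(patient, trial)       # high token overlap
mismatch_score = cosine_sim(patient, other_trial)  # no overlap
```

A real system would rank all open trials for a patient by such a score; learned embeddings matter precisely because eligibility criteria rarely share surface tokens with clinical notes.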
\subsection{Inferring Causal Effects} \label{sec:causal} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{FIG/fig11.pdf} \caption{\textbf{Task illustrations for the theme "inferring causal effects".} Left panel: Mendelian randomization uses a gene biomarker (e.g., CHRNA5) as an instrumental variable to measure the effect of the exposure on the outcome: the gene is not affected by confounders and serves as a proxy for the exposure, so the effect of the gene on the outcome can be compared directly. Right panel: patients are first grouped based on the CHRNA5 gene. One group carries variant alleles, and the other carries wild-type alleles. The mortality rate can then be calculated within each group and compared to assess risk. If the risk is elevated, we conclude that the exposure causes the outcome (Section~\ref{sec:mendelian}). } \label{fig:causal} \end{figure} Clinical trials study treatment efficacy in humans. Numerous unmeasured confounders can lead to a biased conclusion about the efficacy. To eliminate these confounders, randomization is conducted so that the control and treatment groups have an equal distribution of confounders; this way, the comparative effect is not due to unmeasured confounders. However, this requires that the control group receive an alternative therapy (e.g., placebo or standard of care). In many studies, it is difficult or unethical to devise and assign placebos/treatments. In these cases, observational studies can be used to study the correlations between an exposure (e.g., smoking) and an outcome (e.g., cancer). However, such studies are typically subject to unmeasured confounding since no randomization is introduced. Recent methods in causal inference provide alternative ways to achieve randomization through genomic information. \subsubsection{Mendelian randomization} \label{sec:mendelian} Mendelian randomization (MR) uses genes as instrumental variables for robust causal inference~\citep{davey2003mendelian}.
The key is that genetic information is mostly not modified by postnatal events and is thus not susceptible to confounders. If a gene is associated with the exposure, and with the outcome only via the exposure (i.e., vertical pleiotropy), we can use the gene as an instrumental variable to simulate randomization. For example, we know that CHRNA5 genes are associated with smoking levels. We can therefore use CHRNA5 status to group patients and estimate the comparative effect on an outcome (e.g., mortality). This process can have a tremendous impact, as it can bypass clinical trials, add support for trials, and serve as validation for drug targets~\citep{emdin2017mendelian,ference2012effect}. Regression analysis is usually conducted to calculate the effects. Despite the promise, challenges remain for more advanced ML and causal inference methods. One challenge is that in some cases the assumption of vertical pleiotropy does not hold; for example, the gene can be associated with the outcome through another pathway (i.e., horizontal pleiotropy)~\citep{verbanck2018detection}. This requires customized probabilistic models and larger sample sizes for statistically significant estimation~\citep{cho2020exploiting}. The underlying causal pathways among exposures, genes, and outcomes are often not obvious due to limited knowledge. A large-scale causal map could not only help protect MR against horizontal pleiotropy (by revealing when it may occur) but also allow more accurate causal inference with advanced methods through the inclusion of other genes or the selection of alternative genes as instrumental variables. The main challenge in obtaining this putative causal map is that different models can reach contradictory conclusions on the same dataset. \cite{hemani2017automating} apply a mixture-of-experts random forest framework to reduce the false discovery rate on a set of GWAS data, construct a large-scale causal map of the human genome and phenotypes, and show its usefulness in MR.
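The instrumental-variable logic can be illustrated with the classic Wald ratio estimator on synthetic data: the causal effect is the gene-outcome association divided by the gene-exposure association. All coefficients and variable names below are invented for the sketch; real MR analyses use measured genotypes and more robust estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

g = rng.binomial(2, 0.3, n)          # genotype, e.g., CHRNA5 allele count
u = rng.normal(0, 1, n)              # unmeasured confounder
exposure = 0.5 * g + u + rng.normal(0, 1, n)  # e.g., smoking level
true_effect = 0.8
outcome = true_effect * exposure + u + rng.normal(0, 1, n)  # e.g., risk score

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    xc = x - x.mean()
    return (xc * (y - y.mean())).sum() / (xc * xc).sum()

naive = slope(exposure, outcome)               # biased by the confounder u
wald = slope(g, outcome) / slope(g, exposure)  # Wald ratio: g as instrument
```

Because the genotype `g` is independent of `u`, the Wald ratio recovers the true effect while the naive regression of outcome on exposure is inflated by the confounder.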
\textit{Machine learning formulation: } Given observational data of the genomic factor, exposure, outcome, and other auxiliary information, formulate or identify the causal relations among them and compute the effect of the exposure on the outcome. Task illustration is in Figure~\ref{fig:causal}. \section{Machine Learning for Genomics in Post-Market Studies} \label{sec:post-market} After a therapeutic is evaluated in clinical trials and approved for marketing, numerous studies are conducted to monitor its efficacy and safety in clinical practice. These studies reveal important information about therapeutics that was not evident prior to regulatory approval. This section reviews how ML can mine a large corpus of texts and identify useful signals for post-market surveillance. \subsection{Mining Real-World Evidence} \label{sec:rwe} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{FIG/fig12.pdf} \caption{\textbf{Task illustrations for the theme "mining real-world evidence".} \textbf{a.} A model predicts genomic biomarker status given a patient's clinical notes (Section~\ref{sec:clinical_text}). \textbf{b.} A model recognizes entities in the literature and extracts relations among these entities (Section~\ref{sec:biomed_literature}). Credits: the text in panel a is from~\cite{huang2020interpretable}; the text in panel b is from~\cite{zhu2018gram}.} \label{fig:rwe} \end{figure} After therapeutics are approved and used to treat patients, voluminous documentation is generated in EHR systems, insurance billing systems, and the scientific literature. These are called real-world data, and analyses of these data produce real-world evidence. They contain important insights about therapeutics, such as patients' drug responses given different patient characteristics, and can also shed light on disease mechanisms, novel phenotypes for a target gene, and so on. However, free-form texts are notoriously hard to process.
Natural language processing (NLP) technology can help mine insights from these texts. Next, we describe two specific tasks involving two types of real-world evidence, namely clinical notes and scientific literature. \subsubsection{Clinical Text Biomarker Mining} \label{sec:clinical_text} EHRs contain rich information about patients, recording a wide range of vitals and the disease course after treatments. This information is critical for post-market research, where actionable hypotheses can be drawn. However, structured EHR data do not cover the entire picture of a patient. Many important variables can only be found in the clinical notes~\citep{boag2018s}, such as next-generation sequencing (NGS) status, PDL1 (immunotherapy) status, treatment changes, and so on. These variables can directly facilitate predictive model building to support clinical decision-making or increase the power of disease-gene-drug associations to better understand the drug. However, manual annotation is costly, time-consuming, and not scalable. Automatic processing of patients' clinical notes using machine learning can facilitate this process. For example, \cite{guan2019natural} use bidirectional LSTMs to extract NGS-related information from patients' genetic reports and classify documents into treatment-change and no-treatment-change groups. However, clinical text is very messy, filled with typos and jargon (e.g., acronyms), so standard NLP techniques often fail, and annotation usually requires clinical expertise. Specialized machine learning models are required, such as transfer learning techniques that learn a sufficient clinical note representation through large-scale self-supervised learning on clinical notes and fine-tune on a task of interest with a small number of annotations~\citep{devlin2018bert,huang2019clinicalbert}.
\cite{huang2020interpretable} apply hierarchical BERT-based models to classify PDL1 and NGS status and use an attention mechanism to indicate which parts of the text support these variables. \textit{Machine learning formulation: } Given a clinical note document, predict the genomic biomarker variable of interest. Task illustration is in Figure~\ref{fig:rwe}a. \subsubsection{Biomedical Literature Gene Knowledge Mining}\label{sec:biomed_literature} One key question in post-market research is to find evidence about a therapeutic's response in diseases given patient characteristics such as genomic biomarkers. This has several important applications, such as validation of therapeutic efficacy, identification of potential off-label genes/diseases for drug repurposing, and detection of adverse events of therapeutic candidates in patients with particular genomic biomarkers. Such evidence also serves as important complementary information for target discovery. Summarized information about drug-gene and disease-gene relations is usually reported and published in the scientific literature. Manual annotation is infeasible due to the sheer volume of new articles published every day. Conventional methods are rule-based~\citep{tsai2006nerbio} and dictionary-based~\citep{hirschman2002rutabaga}. Both rely on hand-crafted rules/features to construct query text templates and search through papers to find sentences that match these templates~\citep{davis2013ctd}. However, these hand-crafted features require extensive domain knowledge and are difficult to keep up to date with new literature; their limited flexibility leads to the omission of potential newly discovered drug-gene/drug-disease pairs. Recent advances in named entity recognition and relation detection through deep learning can automatically learn an optimal set of features from a large corpus without human engineering and have shown strong performance~\citep{nasar2018information}.
This can be formulated as a model that recognizes drug, gene, and disease terms and detects drug-gene or drug-disease relation types given a set of documents. Numerous machine learning methods have been developed for biomedical named entity recognition and relation extraction. For example, \cite{limsopatham2016learning} use a bidirectional LSTM with character-level embeddings to predict the named entity label for each word. \cite{zhu2018gram} use an n-gram-based CNN to capture the local context around each word for improved prediction. On relation extraction, in addition to CNN~\citep{zhao2016drug} and RNN~\citep{zhang2018drug} architectures, \cite{zhang2018hybrid} propose a hybrid model that integrates a CNN over the syntactic dependency tree with an RNN over the sentence encodings for improved biomedical relation prediction. \cite{zhang2018graph} apply a graph convolutional neural network to the syntactic dependency tree of a sentence and show improved relation extraction. ML models require large amounts of labeled data for training, which can be difficult to obtain. Distant supervision borrows information from a large-scale knowledge base to automatically create labels, so it does not require labeled corpora and reduces manual annotation effort. \cite{lamurias2017extracting} apply a distant-supervision-based pipeline that predicts microRNA-gene relations. Recently, BioBERT extended BERT~\citep{devlin2018bert} by pre-training on a large-scale corpus of biomedical scientific literature and fine-tuning on numerous downstream tasks, showing strong performance on benchmarks such as biomedical named entity recognition and relation extraction. \textit{Machine learning formulation: } Given a document from the literature, extract the drug-gene and drug-disease terms and predict the interaction types from the text. Task illustration is in Figure~\ref{fig:rwe}b.
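The template-matching flavor of the conventional rule-based extractors mentioned above can be sketched with a single hypothetical regex pattern (the pattern, example sentences, and relation vocabulary are invented; real systems maintain large dictionaries of such templates, and the deep learning methods replace them entirely):

```python
import re

# Hypothetical template in the spirit of classical rule-based extraction:
# "<Drug> inhibits/activates/targets <GENE>".
PATTERN = re.compile(
    r"(?P<drug>[A-Z][a-z]+)\s+(?P<rel>inhibits|activates|targets)\s+(?P<gene>[A-Z0-9]{2,})"
)

def extract_relations(text):
    """Return (drug, relation, gene) triples matching the template."""
    return [(m["drug"], m["rel"], m["gene"]) for m in PATTERN.finditer(text)]

sentences = (
    "Gefitinib inhibits EGFR in non-small cell lung cancer. "
    "Vemurafenib targets BRAF in melanoma."
)
relations = extract_relations(sentences)
```

The brittleness is immediate: any paraphrase ("is an inhibitor of"), typo, or lowercase gene symbol escapes the template, which is exactly the limitation that motivates the learned NER and relation extraction models surveyed here.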
\section*{Bigger Picture} The genome contains the instructions that build the structure and function of molecules and organisms and guide their evolution. Recent high-throughput techniques allow the generation of vast amounts of genomics data. However, the path from genomics data to tangible therapeutics is filled with obstacles. We observe that genomics data alone are insufficient; they must be studied together with data on compounds, proteins, electronic health records, images, texts, etc. To make sense of these complex data, machine learning techniques are often utilized to identify patterns and draw insights from data. In this review, we study an extensive set of genomics applications of machine learning that can enable faster and more efficacious therapeutic development. Challenges remain, including technical issues such as learning under different contexts with low-resource constraints, and practical issues such as mistrust of models, privacy, and fairness. \section*{Summary} Thanks to the increasing availability of genomics and other biomedical data, many machine learning approaches have been proposed for a wide range of therapeutic discovery and development tasks. In this survey, we review the literature on machine learning applications for genomics through the lens of therapeutic development. We investigate the interplay among genomics, compounds, proteins, electronic health records (EHR), cellular images, and clinical texts. We identify twenty-two applications of machine learning in genomics across the entire therapeutics pipeline, from discovering novel targets, personalized medicine, and developing gene-editing tools, all the way to clinical trials and post-market studies. We also pinpoint seven important challenges in this field with opportunities for expansion and impact. This survey overviews recent research at the intersection of machine learning, genomics, and therapeutic development.
\section*{Data Science Maturity} DSML 3: Development/Pre-production: Data science output has been rolled out/validated across multiple domains/problems \section*{Keywords} machine learning $\cdot$ therapeutics discovery and development $\cdot$ genomics \section{A Primer on Genomics Data and Machine Learning Models} \label{sec:primer} With advances in high-throughput technologies and data management systems, we now have vast and heterogeneous datasets in the field of biomedicine. This section introduces the basic genomics-related data types and their machine learning representations and provides a primer on popular machine learning methods applied to these data. \subsection{Genomics-related biomedical data} \label{sec:data} \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{FIG/fig2.pdf} \caption{\textbf{Therapeutics data modalities and their machine learning representation.} Detailed descriptions of each modality can be found in Section~\ref{sec:data}. \textbf{a.} DNA sequences can be represented as a matrix where each position is a one-hot vector corresponding to A, C, G, T. \textbf{b.} Gene expressions form a matrix of real values, where each entry is the expression level of a gene in a context such as a cell. \textbf{c.} Proteins can be represented as amino acid strings, a protein graph, and a contact map where each entry is the connection between two amino acids. \textbf{d.} Compounds can be represented as a molecular graph or a string of chemical tokens, which is a depth-first traversal of the graph. \textbf{e.} Diseases are usually described by textual descriptions and by symbols in a disease ontology. \textbf{f.} Networks connect various biomedical entities with diverse relations. They can be represented as a heterogeneous graph.
\textbf{g.} Spatial data are usually depicted as a 3D array, where two dimensions describe the physical position of the entity and the third dimension corresponds to colors (in cell painting) or genes (in spatial transcriptomics). \textbf{h.} Texts are typically represented as a one-hot matrix where each token corresponds to its index in a static dictionary. Credits: The protein image is adapted from \cite{gaudelet2020utilising}; the spatial transcriptomics image is adapted from 10x Genomics; the cell painting image is from Charles River Laboratories.} \label{fig:data} \end{figure} \xhdr{DNAs} The human genome can be thought of as the instructions for building functional individuals. DNA sequences encode these instructions. Just as a computer program is built from 0/1 bits, the basic DNA sequence units are nucleotides (A, C, G, and T). Given a list of nucleotides, a cell can build a diverse range of functional entities (programs). The human genome has approximately 3 billion base pairs, and more than 99.9\% are identical between individuals. If a subset of the population has a different nucleotide at a genome position than the majority, this position is called a variant. Such a single-nucleotide variant is often called a single nucleotide polymorphism (SNP). While most variants are not harmful (they are said to be functionally neutral), many are potential drivers of phenotypes, including diseases. \textit{Machine learning representations:} A DNA sequence is a list of ACGT tokens of length $N$.
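One common encoding turns such a sequence into a $4 \times N$ one-hot matrix; a minimal numpy sketch (the row order A, C, G, T is a convention, not a requirement):

```python
import numpy as np

NUCLEOTIDES = "ACGT"

def one_hot_dna(seq):
    """Encode a DNA string as a 4 x N one-hot matrix (rows: A, C, G, T)."""
    idx = {nt: i for i, nt in enumerate(NUCLEOTIDES)}
    W = np.zeros((4, len(seq)), dtype=np.int8)
    for j, nt in enumerate(seq):
        W[idx[nt], j] = 1  # one 1 per column, marking the nucleotide
    return W

W = one_hot_dna("ACGTA")
```

Each column sums to one, so the matrix can be fed directly to convolutional models that scan along the sequence axis.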
It is typically represented in three ways: (1) a string $\{A, C, G, T\}^N$; (2) a two-dimensional matrix $\mathbf{W} \in \mathbb{R}^{4 \times N}$, where the $i$-th column $\mathbf{W}_i$ corresponds to the $i$-th nucleotide and is a one-hot encoding vector of length 4, where A, C, G, and T are encoded as [1,0,0,0], [0,1,0,0], [0,0,1,0], and [0,0,0,1], respectively; or (3) a vector of $\{0, 1\}^N$, where 0 means the position is not a variant and 1 means it is a variant. Example illustration in Figure \ref{fig:data}a. \xhdr{Gene expression/transcripts} In a cell, the DNA sequence of each gene is transcribed into messenger RNA (mRNA) transcripts. While most cells share the same genome, individual genes are expressed at very different levels across cells and tissue types and under different interventions and environments. These expression levels can be measured by the count of mRNA transcripts. Given a disease, we can compare the gene expression in people with the disease with that of healthy cohorts (without the disease of interest) and associate various genes with the underlying biological processes in this disease. With the advance of single-cell RNA sequencing (scRNA-seq) technology, we can now obtain gene expression for the different types of cells that make up a tissue. The availability of transcriptomes of tens of thousands of cells creates new opportunities for understanding interactions among cell types and the impact of heterogeneity. \textit{Machine learning representations:} Gene expressions/transcripts are counts of mRNA. For a scRNA-seq experiment with $M$ cells and $N$ genes, we can obtain a gene expression matrix $\mathbf{W} \in \mathbb{Z}^{M \times N}$, where each entry $\mathbf{W}_{i,j}$ corresponds to the transcript count of gene $j$ in cell $i$. Example illustration in Figure \ref{fig:data}b. \xhdr{Proteins} Most of the genes encoded in the DNA provide instructions to build a diverse set of proteins, which perform a vast array of functions.
For example, transcription factors are proteins that bind to DNA/RNA sequences and regulate their expression under different conditions. A protein is a macromolecule represented by a sequence drawn from the 20 standard amino acids (residues), where each amino acid is itself a small compound. Based on this sequence code, it naturally folds into a 3D structure, which determines its function. As the functional units, proteins present a large class of therapeutic targets. Many drugs are designed to inhibit/promote proteins in disease pathways. Proteins can also be used as therapeutics, such as antibodies and peptides. \textit{Machine learning representations:} Proteins have diverse forms. A protein with $N$ amino acids can be represented in the following formats: (1) a string $\{A, R, N, D, ...\}^N$ of amino acid sequence tokens; (2) a contact map matrix $\mathbf{W} \in \mathbb{R}^{N \times N}$ where $\mathbf{W}_{i,j}$ is the physical distance between the $i$-th and $j$-th amino acids; (3) a protein graph $G$ with nodes corresponding to amino acids, where nodes are connected based on rules such as a physical distance threshold or k-nearest neighbors; (4) a protein 3D grid, i.e., a discretized three-dimensional tensor where each grid point $(x, y, z)$ records the amino acid occupying that position in space. Example illustration in Figure \ref{fig:data}c. \xhdr{Compounds} Compounds are molecules composed of atoms connected by chemical bonds. They can interact with proteins and drive important biological processes. In their natural form, compounds have a 3D structure. Small-molecule compounds are the major class of therapeutics. \textit{Machine learning representations:} A compound is usually represented as (1) a SMILES string, which is a depth-first traversal of the molecular graph; or (2) a molecular graph $G$ where each node is an atom and edges are the bonds. Example illustration is in Figure \ref{fig:data}d.
\xhdr{Diseases} A disease is an abnormal condition that affects the function and/or modifies the structure of an organism. It is derived from both genotypes and environmental factors, with intricate mechanisms driven by biological processes. Diseases are observable and can be described by certain symptoms. \textit{Machine learning representations:} Diseases are represented by (1) symbols such as disease ontology terms; (2) a text description of the specific disease. Example illustration is in Figure \ref{fig:data}e. \xhdr{Biomedical networks} Biological processes are not driven by individual units but consist of numerous interactions among various types of entities, such as cell signaling pathways, protein-protein interactions, and gene regulation. These interactions can be characterized by biomedical networks, which provide a systems view toward biological phenomena. \textit{Machine learning representations:} Biomedical networks are represented as graphs, where each node is a biomedical entity and an edge corresponds to a relation between them. Example illustration is in Figure \ref{fig:data}f. \xhdr{Spatial data} With the advance of microscopes and fluorescent probes, we can visualize cell dynamics through cellular images. Imaging cells under various conditions, such as drug treatment, allows us to identify the effects of those conditions at a cellular level. Furthermore, spatial genomic sequencing techniques now allow us to visualize and understand gene expression for cellular processes in the tissue environment. \textit{Machine learning representations:} A cellular image or spatial transcriptomics slice can be represented as a matrix of size $M \times N$, where $M$ and $N$ are the width and height of the data (the number of pixels/spots along each dimension), and each entry corresponds to a pixel of the image or a transcript count in the case of spatial transcriptomics.
Additional channels (each a separate matrix of size $M \times N$) encode information such as colors or different genes for spatial transcriptomics. After aggregation, the spatial data can be represented as a tensor of size $M \times N \times H$, where $H$ is the number of channels. Example illustration is in Figure \ref{fig:data}g. \xhdr{Texts} The first important example of text encountered in therapeutics development is clinical trial design protocols, where texts describe inclusion and exclusion criteria for trial participation, often as a function of genome markers. For example, in a trial studying Gefitinib for EGFR-mutant Non-Small Cell Lung Cancer, one of the trial eligibility criteria would be ``An EGFR sensitizing mutation must be detected in tumor tissue''~\citep{trial}. A second type of clinical text is clinical notes documented in electronic health records, which contain valuable information for post-market research on treatments. \textit{Machine learning representations:} Clinical texts are similar to texts in common natural language processing. The standard way to represent them is a matrix of size $M \times N$, where $M$ is the vocabulary size and $N$ is the number of tokens in the text. Each column is a one-hot encoding of the corresponding token. Example illustration is in Figure \ref{fig:data}h. \subsection{Machine Learning Methods for Biomedical Data} \label{sec:methods} Machine learning models learn patterns from data and leverage these patterns to make accurate predictions. Numerous ML models have been proposed to tackle different challenges. In this section, we briefly introduce the main mechanisms of popular ML models used to analyze genomic data. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{FIG/fig3.pdf} \caption{\textbf{Machine learning for genomics workflow.} \textbf{a.} The first step is to curate a machine learning dataset.
Raw data are extracted from databases of various sources and processed into data points. Each data point corresponds to an input of a series of biomedical entities and a label from annotation or experimental results. These data points constitute a dataset, and they are split into three sets. The training set is for the ML model to learn and identify useful and generalizable patterns. The validation set is for model selection and parameter tuning. The testing set is for the evaluation of the final model. The data split can be constructed to reflect real-world challenges. \textbf{b.} Various ML models can be trained using the training set and tuned based on a quantified metric on the validation set, such as a loss $\mathcal{L}$ that measures how well the model predicts the output given the input. Lastly, we select the optimal model, i.e., the one with the lowest loss. \textbf{c.} The optimal model can then predict on the test set, where various evaluation metrics are used to measure how well the model performs on new, unseen data points. Models can also be probed with explainability methods to identify biological insights captured by the model. Experimental validation is also common to ensure the model can approximate wet-lab experimental results. Finally, the model can be deployed to make predictions on new data without labels. The prediction becomes a proxy for the label in downstream tasks of interest. } \label{fig:ml_workflow} \end{figure} \xhdr{Preliminary} A typical ML setup for genomics is as follows: given a set of data points, each consisting of input features and a ground-truth biological label, a machine learning model aims to learn a mapping from inputs to labels based on the observed data points, which are also called training data. This setting of predicting by leveraging known supervised labels is called supervised learning. The size of the training data is called the sample size.
ML models are data-hungry and usually need a large sample size to perform well. The input features can be DNA sequences, compound graphs, or clinical texts, depending on the task at hand. The ground-truth label is usually obtained via biological experiments. The ground truth also sets the goal for an ML model to achieve. Thus, the quality of the ground truth directly affects ML model performance, highlighting the necessity of label curation. There are various forms of ground-truth labels. If the labels are continuous (e.g., binding scores), the learning problem is a {\it regression} problem. If the labels are discrete variables (e.g., the occurrence of an interaction), the problem is a {\it classification} problem. Models focusing on predicting the labels of the data are called {\it discriminative models}. Besides making predictions, ML models can also generate new data points by modeling the statistical distribution of data samples. Models following this procedure are called {\it generative models}. When labels are not available, an ML model can still identify the underlying patterns within the unlabeled data points. This problem setting is called {\it unsupervised learning}, where models discover patterns or clusters (e.g., cell types) by modeling the relations among data points. {\it Self-supervised learning} uses supervised learning methods on unlabeled data. It creatively produces labels from the unlabeled data (e.g., masking out a motif and using the surrounding context to predict the motif)~\citep{devlin2018bert,hu2019strategies}. In many biological settings, ground-truth labels are scarce, in which case few-shot learning can be considered. {\it Few-shot learning} assumes only a few labeled data points but many unlabeled data points. Another strategy is called {\it meta-learning}, which aims to learn from a set of related tasks to form the ability to learn quickly and accurately on an unseen task.
If a model integrates multiple data modalities (e.g., DNA sequence plus compound structure), it is called {\it multimodal learning}. When a model predicts multiple labels (e.g., multiple target endpoints), it is called {\it multi-task learning}. \begin{figure}[t] \centering \includegraphics[width = 0.85\textwidth]{FIG/fig4.pdf} \caption{\textbf{Machine learning models illustrations.} Details about each model can be found in Section~\ref{sec:methods}. \textbf{a.} Classic machine learning models featurize raw data and apply various models (mostly linear) to classify (e.g., binary output) or regress (e.g., real value output). \textbf{b.} Deep Neural Networks map input features to embeddings through a stack of non-linear weight multiplication layers. \textbf{c.} Convolutional Neural Networks apply many local filters to extract local patterns and aggregate local signals through pooling. \textbf{d.} Recurrent Neural Networks generate embeddings for each token in the sequence based on the previous tokens. \textbf{e.} Transformers apply a stack of self-attention layers that assign a weight for each pair of input tokens. \textbf{f.} Graph Neural Networks aggregate information from the local neighborhood to update the node embedding. \textbf{g.} Autoencoders reconstruct the input from an encoded compact latent space. \textbf{h.} Generative models generate novel biomedical entities with more desirable properties. } \label{fig:models} \end{figure} \xhdr{Classic Machine Learning Models} Traditional ML usually requires a transformation of input to tabular real-valued data, where each data point corresponds to a feature vector. In our context, these are predefined features such as the SNP vector, polygenic risk scores, and chemical fingerprints. These tabular data can then be fed into a wide range of supervised models, such as linear/logistic regression, decision trees, random forest, support vector machine, and naive Bayes~\citep{mitchell1997machine}. 
They work well when the features are well defined. A multilayer perceptron (MLP)~\citep{rosenblatt1961principles} consists of at least three layers of neurons, where the output of each layer is passed through a nonlinear activation function, allowing the network to capture nonlinear patterns. When the number of layers is large, the model is called a deep neural network (DNN). \textit{Suitable biomedical data:} any real-valued feature vectors built upon biomedical entities, such as SNP profiles and chemical fingerprints. \xhdr{Convolutional Neural Network (CNN)} CNNs represent a class of DNNs widely applied to image classification, natural language processing, and signal processing tasks such as speech recognition~\citep{lecun1995convolutional}. A CNN model has a series of convolution filters, which allow it to identify local patterns in the data (e.g., edges and shapes in images). Such networks can automatically extract hierarchical patterns in data. The weights of each filter reveal patterns (such as conserved motifs). \textit{Suitable biomedical data:} short DNA sequences, compound SMILES strings, gene expression profiles, cellular images. \xhdr{Recurrent Neural Network (RNN)} An RNN is designed to model sequential data, such as time series, event sequences, and natural language text~\citep{de2015survey}. The RNN model is applied sequentially along a sequence; the input at each step includes the current observation and the previous hidden state. RNNs naturally model variable-length sequences. There are two widely used variants of RNNs: long short-term memory (LSTM)~\citep{hochreiter1997long} and gated recurrent units (GRU)~\citep{cho2014properties}. \textit{Suitable biomedical data:} DNA sequences, protein sequences, texts. \xhdr{Transformer} Transformers~\citep{vaswani2017attention} are a recent class of neural networks that leverage self-attention: assigning a score of interaction to every pair of input features (e.g., a pair of DNA nucleotides).
By stacking these self-attention units, the model can capture more expressive and complicated interactions. Transformers have shown superior performance on sequence data, such as natural language text. They have also been successfully adapted to achieve state-of-the-art performance on proteins~\citep{Rivese2016239118} and compounds~\citep{huangmoltrans}. \textit{Suitable biomedical data:} DNA sequences, protein sequences, texts, images. \xhdr{Graph Neural Networks (GNN)} Graphs are universal representations of complex relations in many real-world objects. In biomedicine, graphs can represent knowledge graphs, molecules, protein-protein interaction networks, and medical ontologies. However, graphs do not follow the rigid grid structure of sequences and images. GNNs are a class of models that convert graph structures into embedding vectors (i.e., node representation or graph representation vectors)~\citep{kipf2016semi}. In particular, GNNs generalize the concept of convolution operations to graphs by iteratively passing and aggregating messages from neighboring nodes. The resulting embedding vectors capture both the node attributes and the network structure. \textit{Suitable biomedical data:} biomedical networks, compound/protein graphs, similarity networks. \xhdr{Autoencoders (AE)} Autoencoders are an unsupervised method in deep learning. Autoencoders map the input data into a latent embedding (encoder) and then reconstruct the input from the latent embedding (decoder)~\citep{kramer1991nonlinear}. Their objective is to reconstruct the input from a low-dimensional latent space, thus forcing the latent representation to focus on essential properties of the data. Both encoders and decoders are neural networks. An AE can be considered a nonlinear analog of principal component analysis (PCA). The resulting latent representation captures patterns in the input data and can thus be used for unsupervised learning tasks such as clustering.
Among its variants, denoising autoencoders (DAEs) take partially corrupted inputs and are trained to recover the original undistorted inputs~\citep{vincent2010stacked}. Variational autoencoders (VAEs) model the latent space with probabilistic models. As these probabilities are complex and usually intractable, VAEs adopt a variational inference technique to approximate the underlying probabilistic models~\citep{kingma2013auto}. \textit{Suitable biomedical data:} unlabeled data. \xhdr{Generative Models} In contrast to making a prediction, generative models aim to learn a statistical distribution that characterizes the underlying dataset (e.g., a set of DNA sequences for a disease) and its generation process~\citep{wittrock1974learning}. Based on the learned distribution, various kinds of downstream tasks can be supported. For example, from this distribution, one can intelligently generate optimized data points. These optimized samples can be novel images, compounds, or RNA sequences. One popular model is the generative adversarial network (GAN)~\cite{goodfellow2014generative}. It consists of two sub-models: a {\it generator} that captures the data distribution of a training dataset in a latent representation and a {\it discriminator} that determines whether a sample is real or generated. These two sub-models are trained iteratively such that the resulting generator produces realistic samples that can fool the discriminator. \textit{Suitable biomedical data:} data where new variants can have more desirable properties (e.g., molecule generation for drug discovery)~\citep{fu2020core,jin2018junction}. Depending on the data modality, different encoders can be chosen for the generative models.
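To make the message-passing idea behind GNNs concrete, the following minimal sketch performs one round of mean aggregation on a toy three-node graph; the node features, edges, and averaging rule are illustrative assumptions rather than any specific published architecture:

```python
# Minimal sketch of one GNN message-passing step (mean aggregation) on a toy
# graph. Features, edges, and the update rule are illustrative assumptions.

# Toy graph: 3 nodes with 2-dimensional feature vectors
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
edges = [(0, 1), (1, 2)]

# Build undirected neighbor lists
neighbors = {n: [] for n in features}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

def aggregate(node):
    """Update a node embedding by averaging its own and its neighbors' features."""
    msgs = [features[m] for m in neighbors[node]] + [features[node]]
    return [sum(x) / len(msgs) for x in zip(*msgs)]

updated = {n: aggregate(n) for n in features}
print(updated[1])  # node 1 averages its own features with those of nodes 0 and 2
```

Stacking several such rounds, with learned weight matrices between them, yields the embedding vectors described above.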
\begin{table}[t] \centering \caption{High quality machine learning datasets references and pointers for genomics therapeutics tasks.} \adjustbox{max width=\textwidth}{ \begin{tabular}{l|l|l|l} \toprule \textbf{Pipeline} & \textbf{Task} & \textbf{Data Reference} & \textbf{Data Link} \\ \midrule \multirow{10}{*}{Target Discovery (Sec.\ref{sec:target})} & DNA/RNA-protein binding & \cite{zeng2016convolutional} & \url{http://cnn.csail.mit.edu/} \\ & Methylation state & \cite{levy2019pymethylprocess} & \url{https://bit.ly/3rVWgR9}\\ & RNA splicing & \cite{harrow2012gencode} & \url{https://www.gencodegenes.org/}\\ & Spatial gene expression & \cite{weinstein2013cancer} & \url{https://bit.ly/3fOLgTi}\\ & Cell composition analysis & \cite{cobos2020benchmarking}& \url{https://go.nature.com/3mxCZEv}\\ & Gene network construction & \cite{shrivastava2020grnular} & \url{https://bit.ly/3mBMB1f} \\ & Variant calling & \cite{chen2019systematic} & \url{https://bit.ly/39RJcG6} \\ & Variant prioritization & \cite{landrum2014clinvar} & \url{https://www.ncbi.nlm.nih.gov/clinvar/} \\ & Gene-disease association & \cite{pinero2016disgenet} & \url{https://www.disgenet.org/}\\ & Pathway analysis & \cite{fabregat2018reactome} & \url{https://reactome.org/}\\ \midrule \multirow{5}{*}{Therapeutics Discovery (Sec.\ref{sec:discovery})} & Drug response & \cite{yang2012genomics}& \url{https://www.cancerrxgene.org/}\\ & Drug combination & \cite{liu2020drugcombdb} & \url{http://drugcombdb.denglab.org/}\\ & CRISPR on-target& \cite{leenay2019large} & \url{https://bit.ly/3rXlKxi} \\ & CRISPR off-target& \cite{stortz2021crisprsql} & \url{http://www.crisprsql.com/}\\ & Virus vector design& \cite{bryant2021deep}& \url{https://bit.ly/31RRKIP} \\ \midrule \multirow{4}{*}{Clinical Study (Sec.\ref{sec:clinical})} & Cross-species translation & \cite{poussin2014species}& \url{https://bit.ly/3mykFLC} \\ & Patient stratification & \cite{curtis2012genomic} & \url{https://bit.ly/3cWTW8d} \\ & Patient-trial matching & 
\cite{zhang2020deepenroll} & \url{https://bit.ly/3msp0A0}\\ & Mendelian randomization & \cite{hemani2017automating} & \url{https://www.mrbase.org/}\\ \midrule \multirow{2}{*}{Post-Market Study (Sec.\ref{sec:post-market})} & Clinical texts mining & Proprietary & N/A\\ & Biomedical literature mining & \cite{pyysalo2007bioinfer} & \url{https://bit.ly/3cUtpYZ}\\ \bottomrule \end{tabular} } \label{tab:database} \end{table}
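The dataset curation and splitting step of the workflow (Figure \ref{fig:ml_workflow}a) can be sketched as follows; the 80/10/10 random split and the toy data points are illustrative assumptions:

```python
# Minimal sketch of dataset splitting: shuffle data points, then partition
# into train/validation/test sets. The 80/10/10 ratio and toy (input, label)
# pairs are illustrative assumptions.
import random

data_points = [(f"entity_{i}", i % 2) for i in range(100)]  # (input, label) pairs

rng = random.Random(0)   # fixed seed for reproducibility
rng.shuffle(data_points)

n = len(data_points)
train = data_points[: int(0.8 * n)]
valid = data_points[int(0.8 * n): int(0.9 * n)]
test  = data_points[int(0.9 * n):]

print(len(train), len(valid), len(test))  # 80 10 10
```

In practice the split is often constructed non-randomly (e.g., by scaffold or by cohort) to reflect the real-world challenges mentioned above.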
\section{Introduction} The Hardy inequality, introduced in \cite{Hardy20}, is one of the well known mathematical manifestations of the Uncertainty Principle in Quantum Mechanics. It affirms that \begin{equation}\label{eq:Hardy-classical} \int_{\mathbb{R}^d} |\nabla\psi(x)|^2\,dx \geq \frac{(d-2)^2}{4} \int_{\mathbb{R}^d}\frac{|\psi(x)|^2}{|x|^2}\, dx, \end{equation} for any $\psi\in H^1(\mathbb{R}^d)$\footnote{$H^1(\mathbb{R}^d)$ denotes the Sobolev space of $L^2(\mathbb{R}^d)$ functions with first weak derivatives in $L^2(\mathbb{R}^d).$}, with $d\geq3.$ The low dimensions $d=1,2$ are not included in \eqref{eq:Hardy-classical}, since the weight $|x|^{-2}$ is not locally integrable. On the other hand, if $d=1$, the inequality \eqref{eq:Hardy-classical} holds for any function $\psi$ in the smaller domain $H^1(\mathbb{R}\setminus\{0\}):=\overline{C^\infty_0(\mathbb{R}\setminus\{0\})}^{\|\cdot\|_{H^1(\mathbb{R})}}\subsetneq H^1(\mathbb{R})$. The dimension $d=2$ is critical for the validity of \eqref{eq:Hardy-classical}, which cannot hold in this case with a non-zero constant on the right-hand side. \medskip \noindent An analogous and more singular example is given by the Rellich inequality, introduced in \cite{Rellich53} (see also~\cite{RB1969}), which states that \begin{equation}\label{eq:Rellich-classical} \int_{\mathbb{R}^d} |\Delta \psi(x)|^2\, dx \geq \frac{d^2 (d-4)^2}{16} \int_{\mathbb{R}^d} \frac{|\psi(x)|^2}{|x|^4}\,dx, \end{equation} for any $\psi\in H^2(\mathbb{R}^d)$, with $d\geq5$, or $\psi\in H^2(\mathbb{R}^d\setminus\{0\})$, with $d=1,3$. The dimensions $d=2,4$ play for \eqref{eq:Rellich-classical} the same role of criticality as $d=2$ for the Hardy inequality. Both the constants on the right-hand sides of \eqref{eq:Hardy-classical}, \eqref{eq:Rellich-classical} are sharp, and not attained on any function in the corresponding domains. 
\medskip \noindent The inequalities \eqref{eq:Hardy-classical}, \eqref{eq:Rellich-classical} are fundamental tools in order to describe scaling-critical perturbations of the free Hamiltonians $-\Delta$ and $\Delta^2$, respectively, by the standard perturbation theory for quadratic forms. In addition, they naturally come into play in a multitude of areas of Mathematics and Physics (elliptic PDEs with singular potentials, stability of quantum systems, etc.). Due to their applications, these inequalities have both been objects of intense study (see~\cite{CK2016,BLS2004,FKLV,FS2008,KPP2018,Ozawa} and~\cite{DH1998,Bennett1989,Yafaev1999}, respectively, and references therein, to cite a necessarily incomplete bibliographical list). \medskip \noindent In this paper we are interested in the Hardy-Rellich inequality, which is in between \eqref{eq:Hardy-classical} and \eqref{eq:Rellich-classical}: \begin{equation}\label{eq:HardyRellich-classical} \int_{\mathbb{R}^d} |\Delta \psi(x)|^2\, dx \geq C(d)\int_{\mathbb{R}^d}\frac{|\nabla \psi(x)|^2}{|x|^2}\, dx, \end{equation} for any $\psi\in H^2(\mathbb{R}^d)$ with $d\geq3,$ or $\psi\in H^2(\mathbb{R}\setminus\{0\})$ in the case $d=1$, where the constant $C(d)$ is given by \begin{equation}\label{eq:cididdi} C(d) = \begin{cases} \frac 14 & \text{if }d=1 \\ \frac{25}{36} & \text{if }d=3 \\ 3 & \text{if }d=4 \\ \frac{d^2}{4} & \text{if }d\geq5. \end{cases} \end{equation} The dimension $d=2$ is critical for the validity of \eqref{eq:HardyRellich-classical}, in the same way as for the previous inequalities. \noindent Similarly to \eqref{eq:Hardy-classical} and \eqref{eq:Rellich-classical}, inequality \eqref{eq:HardyRellich-classical} is useful to show the boundedness from below of either the biharmonic operator with second order perturbations (in the form sense) or the Laplacian with first order perturbations, via the Kato-Rellich Theorem (in the operator sense).
Moreover, due to the trivial identity $\int_{\mathbb{R}^d}|\widehat{\psi}(\xi)|^2|\xi|^\beta\, d\xi=(2\pi)^{-\beta}\int_{\mathbb{R}^d}|(-\Delta)^{\beta/4}\psi(x)|^2\, dx,$ \eqref{eq:HardyRellich-classical} can be recast in the framework of Pitt's inequalities with gradient terms (see~\cite[Theorem 4]{Beckner2008}) which, in their classical formulation, are weighted inequalities involving a function and its Fourier transform, and are therefore intimately connected to quantitative forms of the uncertainty principle. Finally, \eqref{eq:HardyRellich-classical} serves as a tool to obtain improvements over more standard Rellich-type inequalities on bounded domains (see \cite{TZ2007}). \medskip \noindent Surprisingly, despite being intimately linked to~\eqref{eq:Hardy-classical} and~\eqref{eq:Rellich-classical}, inequality~\eqref{eq:HardyRellich-classical} appeared in the literature much later than the former two. In 2007 Tertikas and Zographopoulos~\cite{TZ2007} proved~\eqref{eq:HardyRellich-classical} for $d\geq 5$. The lower dimensional cases $d=3,4$ were covered later and independently by Beckner in~\cite{Beckner2008} and by Ghoussoub and Moradifam in~\cite{GM2011}. Furthermore, both these works recovered the higher dimensional case $d\geq 5$ already proved in~\cite{TZ2007}. The method used in~\cite{GM2011} is reminiscent of the one used in~\cite{TZ2007} for $d\geq 5$ and is based on a spherical harmonics decomposition; however, the proof requires distinguishing between the lower and the higher dimensional settings. A compact and unified proof of~\eqref{eq:HardyRellich-classical} in \emph{any} dimension $d\geq 3,$ with optimal constants $C(3)=25/36,$ $C(4)=3$ and $C(d)=d^2/4$ if $d\geq 5,$ was recently obtained by Cazacu in~\cite{Cazacu2019}.
He showed that the same technique applied in~\cite{TZ2007} to prove~\eqref{eq:HardyRellich-classical} for $d\geq 5$ could be extended (introducing an additional optimizing parameter) to cover any dimension $d\geq 3.$ In addition, the author showed the non-attainability of the best constant $C(d)$ for any $d\geq 3$, and he also provided minimizing sequences for $C(d)$ in the lower dimensions $d=3,4$ (minimizing sequences in $d\geq 5$ were already constructed in~\cite{TZ2007}). Improvements of these inequalities on bounded domains can be found in~\cite{GM2011,Lam2018,NLN2019}. Hardy-Rellich inequalities valid on Riemannian manifolds are investigated in~\cite{KO2009,Nguyen2020}. Further generalizations can be found in~\cite{GL2018,Costa2009}. To the best of our knowledge, the case $d=1$ has not been written down anywhere; in any case, it is an immediate consequence of the classical 1D Hardy inequality. More precisely,~\eqref{eq:HardyRellich-classical} holds true in $d=1$ with $C(1)=1/4.$ \medskip \noindent The best constants of the above inequalities need to be understood as ground energy levels of suitable Hamiltonians. It is convenient to get a deeper insight into \eqref{eq:Hardy-classical} first, introducing spherical coordinates in $\mathbb{R}^d$, $d\geq2$, to write the free Hamiltonian as $$ -\Delta = -\frac{\partial^2}{\partial r^2}-\frac{d-1}{r}\frac{\partial}{\partial r}-\frac1{r^2}\Delta_{\mathbb S^{d-1}}, $$ where $-\Delta_{\mathbb S^{d-1}}$ is the Laplace-Beltrami operator on the unit sphere. The spectrum of $-\Delta_{\mathbb S^{d-1}}$ is purely discrete, and it is given by the sequence $\sigma(-\Delta_{\mathbb S^{d-1}})=\{k(k+d-2)\}_{k =0,1,\dots}$. Then, if we rewrite \eqref{eq:Hardy-classical} using the language of quadratic forms, i.e.
\begin{equation}\label{eq:hardy2quadratic} -\Delta \geq \frac{(d-2)^2}{4|x|^2}, \end{equation} the fact that the lowest eigenvalue of $-\Delta_{\mathbb S^{d-1}}$ is 0 shows that the contribution to \eqref{eq:hardy2quadratic} entirely comes from the positive radial operator $L_r=-\frac{\partial^2}{\partial r^2}-\frac{d-1}{r}\frac{\partial}{\partial r}$. Therefore the following two facts are evident: \begin{itemize} \item[(i)] if one restricts to $L^2$--functions which are orthogonal to the eigenspace associated to the first eigenvalue of $-\Delta_{\mathbb S^{d-1}}$, then there is an improvement of the best constant in \eqref{eq:hardy2quadratic}; \item[(ii)] any angular perturbation to the operator $-\Delta_{\mathbb S^{d-1}}$ which increases the bottom of the spectrum gives a consequent improvement to the best constant in \eqref{eq:hardy2quadratic}. \end{itemize} A trivial example concerning fact (ii) above is obtained by fixing $a>0$ and considering the scaling invariant operator $$ -\Delta + \frac{a}{|x|^2}= -\frac{\partial^2}{\partial r^2}-\frac{d-1}{r}\frac{\partial}{\partial r} + \frac1{r^2}\left(-\Delta_{\mathbb S^{d-1}}+a\right). $$ Since $\sigma(-\Delta_{\mathbb S^{d-1}}+a)=\{k(k+d-2)+a\}_{k =0,1,\dots}$, we have the obvious inequality \begin{equation}\label{eq:hardy3quadratic} -\Delta+ \frac{a}{|x|^2} \geq \left(\frac{(d-2)^2}{4}+a\right)\frac1{|x|^2}. \end{equation} A completely analogous more general result can be easily obtained if $a$ is replaced by a $0$--degree homogeneous function $a(\theta):\mathbb S^{d-1}\to\mathbb{R}$, assuming that $\inf_{\mathbb S^{d-1}}a(\theta)=: a>0$. \medskip \noindent A more geometric improvement occurs in presence of an external magnetic field. A magnetic Schr\"odinger Hamiltonian is an operator of the form $-\Delta_A = (-i\nabla + A)^2$, where $A:\mathbb{R}^d\to\mathbb{R}^d$, $d\geq2$. The diamagnetic inequality $$ \left|(-i\nabla+A)\psi(x)\right|\geq|\nabla|\psi|(x)|, \qquad \text{for a.e. 
}x\in\mathbb{R}^d $$ valid for $A\in L^2_{\text{loc}}$ (see e.g. \cite{Li_Lo}), together with \eqref{eq:hardy2quadratic}, immediately shows that \begin{equation}\label{eq:hardy4quadratic} -\Delta_A \geq \frac{(d-2)^2}{4|x|^2}, \end{equation} for any vector potential $A\in L^2_{\text{loc}}(\mathbb{R}^d)$. In order to understand the role of $A$ in \eqref{eq:hardy4quadratic}, it is again convenient to describe a scaling invariant example. Let $A\in L^2_{\text{loc}}(\mathbb{R}^d)$ be of the form \begin{equation}\label{eq:magnex} A(x)=|x|^{-1}\mathbf{A}(\theta), \qquad \theta:=\frac{x}{|x|} \end{equation} for some $0$--degree homogeneous vector field $\mathbf{A}:\mathbb S^{d-1}\to\mathbb S^{d-1}$. In addition, assume that $A$ is in the {\it transversal gauge} (or Cronstr\"om, or Poincar\'e gauge, see \cite{I}), namely $x\cdot A(x)\equiv0$ for almost every $x\in\mathbb{R}^d$. Then the operator $-\Delta_A$ in spherical coordinates reads as $$ -\Delta_A =-\frac{\partial^2}{\partial r^2}-\frac{d-1}{r}\frac{\partial}{\partial r}-\frac1{r^2}\Delta_{\mathbf A,\mathbb S^{d-1}}, $$ where $-\Delta_{\mathbf A,\mathbb S^{d-1}}=(-i\nabla_{\mathbb S^{d-1}}+\mathbf{A})^2$. As in the previous examples, the main contribution to the improvement in \eqref{eq:hardy4quadratic} comes from the fact that the bottom of the spectrum of $-\Delta_{\mathbf A,\mathbb S^{d-1}}$ is always non-negative, due to the spherical version of the diamagnetic inequality (see e.g. \cite{FFT2011}). Therefore it is natural to look for explicit examples of potentials such that $\min\sigma(-\Delta_{\mathbf A,\mathbb S^{d-1}})=a>0$, with a consequent quantitative improvement in \eqref{eq:hardy4quadratic}. The first example in this direction, to our knowledge, is due to Laptev and Weidl \cite{LW1999}.
They proved in the two-dimensional case $d=2$ that \begin{equation}\label{eq:LW} \int_{\mathbb{R}^2} |\nabla_{\!A} \psi(x)|^2\, dx\geq \dist\{\widetilde \Psi,\mathbb{Z}\}^2 \int_{\mathbb{R}^2} \frac{|\psi(x)|^2}{|x|^2}\, dx, \end{equation} for any $\psi\in H^1_A:=\left\{f\in L^2(\mathbb{R}^2):\int_{\mathbb{R}^2}|\nabla_A f|^2<\infty\right\}$, where $A$ is the {\it Aharonov-Bohm} vector potential \begin{equation}\label{eq:AB} A(x,y)=\widetilde{\Psi} \left(\frac{-y}{x^2+y^2}, \frac{x}{x^2+y^2} \right), \qquad \widetilde \Psi\in \mathbb{R}, \end{equation} and we denote by $\nabla_A:=\nabla-iA$ the magnetic gradient. In particular, if $\widetilde\Psi\notin\mathbb{Z}$, then \eqref{eq:LW} gives a non-trivial 2D-Hardy inequality. Notice that the potential $A$ in \eqref{eq:AB} is very singular, since $A\notin L^2_{\text{loc}}(\mathbb{R}^2)$. Some examples in higher dimensions have been recently introduced in \cite{FKLV}. \medskip \noindent As for the Rellich inequality~\eqref{eq:Rellich-classical}, similar arguments lead to the statement of facts (i) and (ii) above. About (i), it is known that there are two cases of special interest. First, when $d=2$ inequality~\eqref{eq:Rellich-classical} still holds but only for functions $\psi\in C^\infty_0(\mathbb{R}^2\setminus\{0\})$ which satisfy the following orthogonality condition \begin{equation}\label{eq:orth-cond} f_1(r):=\int_0^{2\pi} \psi(r,\theta)\overline{Y_1(\theta)}\, d\theta =0, \qquad Y_1(\theta):=e^{i\theta}. \end{equation} Second, when $d=4$, even if one works on the domain $H^2(\mathbb{R}^4\setminus\{0\})$, the inequality~\eqref{eq:Rellich-classical} gives a trivial contribution, as mentioned above. Indeed~\eqref{eq:Rellich-classical} descends from the following estimate (see~\cite[Section 7, pag. 
94]{RB1969}) \begin{equation}\label{eq:Rellich-preliminary} \int_{\mathbb{R}^d} |\Delta \psi(x)|^2\, dx \geq \frac{d^2(d-4)^2}{16}\int_{\mathbb{R}^d} \frac{|\psi(x)|^2}{|x|^4}\, dx + p_0\int_{\mathbb{R}^d} \frac{|\psi(x)|^2}{|x|^4}\, dx, \qquad p_0:=\min_{k\in \mathbb{N}_0}\Big[c_k \Big(\frac{d(d-4)}{2} + c_k\Big)\Big], \end{equation} with $c_k:=k(k+d-2),$ $k\in \mathbb{N}_0$ being the eigenvalues of the Laplace-Beltrami operator $-\Delta_{\mathbb{S}^{d-1}}.$ If $d=4,$ then the first term on the right-hand side of~\eqref{eq:Rellich-preliminary} disappears, whereas $p_0=\min c_k^2=0.$ This gives the claimed trivial Rellich inequality in $d=4.$ If $d=2,$ then $p_0=\min_{k\in \mathbb{N}_0} k^2(k^2-2).$ Notice that $k^2(k^2-2)\geq 0$ if $k\neq 1,$ thus the Rellich inequality~\eqref{eq:Rellich-classical} holds also in $d=2$ with constant $C(2)=1$ as soon as $f_1(r)=0,$ \emph{i.e.} when $\psi$ satisfies~\eqref{eq:orth-cond}. \medskip \noindent Moving to the discussion about fact (ii), in the same spirit as the work by Laptev and Weidl~\cite{LW1999}, Evans and Lewis~\cite{EL2005} showed that, for $d=2,4$, the Rellich inequality \begin{equation}\label{eq:Rellich-magnetic} \int_{\mathbb{R}^d} |\Delta_{A} \psi(x)|^2\, dx \geq \widetilde C(d) \int_{\mathbb{R}^d} \frac{|\psi(x)|^2}{|x|^4}\, dx \end{equation} holds true for any $\psi\in H^2_A:=\left\{f\in L^2(\mathbb{R}^d):\int_{\mathbb{R}^d}|\Delta_A f|^2<\infty\right\}$. Here $A$ is the Aharonov-Bohm potential in \eqref{eq:AB} when $d=2$, or a higher dimensional generalization if $d\geq3$ (see~\eqref{eq:AB-gen} below).
As for the constant $\widetilde C(d)$, we have $\widetilde C(2)=\min_{m\in \mathbb{Z}}((m+\widetilde\Psi)^2 -1)^2$ and $\widetilde C(4)=\min_{m\in \mathbb{Z}'}((m+\widetilde\Psi)^2 -1)^2,$ where $\mathbb{Z}'=\{m\colon (m+\widetilde \Psi)^2\geq 1\}.$ If $\widetilde \Psi\in \mathbb{Z},$ then $\widetilde C(2)=\widetilde C(4)=0.$ Moreover, when $d=2,$ if one assumes the orthogonality condition~\eqref{eq:orth-cond}, then $\widetilde C(2)=1$\footnote{By gauge invariance, in the case $\widetilde{\Psi}\in \mathbb{Z}$, the Hamiltonian $-\Delta_A$ is unitarily equivalent to the free Hamiltonian. Thus the minimum defining $\widetilde C(2)$ vanishes precisely at $m=\pm1.$ Condition~\eqref{eq:orth-cond} ensures that the minimum is taken over $\mathbb{Z}\setminus\{-1,1\}.$ This yields $\widetilde C(2)=1.$}. \medskip \noindent To the best of our knowledge, improvements of the Hardy-Rellich inequality~\eqref{eq:HardyRellich-classical} in the same style as above are still missing in the literature. The purpose of this paper is to fill this gap. Such improvements descend from a more general result, which is the main contribution of this paper. \begin{theorem}[Improved weighted Hardy-Rellich]\label{thm:main-general} In dimension $d\geq2$, let $\Lambda_\omega$ be a non-negative, self-adjoint operator with domain $\Dom(\Lambda_\omega)\subset L^2(\mathbb{S}^{d-1};d\omega)$. Assume that $\Lambda_\omega$ has purely discrete spectrum, consisting of isolated eigenvalues $\lambda_m,$ $m\in \mathcal{I}$ (repeated according to multiplicity), which can accumulate only at infinity, with corresponding normalized eigenfunctions $u_m,$ $m\in \mathcal{I},$ where $\mathcal{I}$ is a countable index set.
Set $L_r:=-\frac{\partial^2}{\partial r^2} - \frac{d-1}{r}\frac{\partial}{\partial r}$, and define the non-negative operator \begin{equation}\label{eq:operator} \mathcal{L}:=L_r + \tfrac{1}{r^2}\Lambda_\omega \end{equation} acting on the set \begin{equation}\label{eq:domain} \Dom(\mathcal{L}):= \{\psi\colon \psi \in C^{\infty}_0(\mathbb{R}^d\setminus\{0\}), \, \psi(r,\cdot)\in \Dom(\Lambda_\omega)\, \text{for } r>0, \, \text{and } \mathcal{L}\psi\in L^2(\mathbb{R}^d) \}. \end{equation} Let $\alpha\in \mathbb{R}.$ Then, for all $\psi\in \Dom(\mathcal{L})$ such that $|\cdot|^{-\alpha/2}\mathcal{L}\psi \in L^2(\mathbb{R}^d)$ we have \begin{equation}\label{eq:main} \int_{\mathbb{R}^d} \frac{|\mathcal{L}\psi(x)|^2}{|x|^\alpha}\, dx \geq C(d,\alpha) \int_{\mathbb{R}^d} \frac{\mathcal{D}\psi(x)}{|x|^{\alpha+2}}\, dx, \end{equation} where $\mathcal{D}$ is the first-order operator defined by $\mathcal{D}\psi:=\big|\frac{\partial \psi}{\partial r}\big|^2 + \frac{1}{r^2}|\Lambda_\omega^{1/2}\psi|^2 \footnote{ $\Lambda_\omega^{1/2}$ is the square root of the non-negative, self-adjoint operator $\Lambda_\omega.$ This operator exists and is unique by the functional calculus (see, for example,~\cite[Prop.5.13]{Schmu}). In particular, $\Lambda_\omega^{1/2}u_m=\sqrt{\lambda_m}u_m,$ where $u_m,$ $m\in \mathcal{I}$ are the eigenfunctions of $\Lambda_\omega$ and $\lambda_m$ the corresponding eigenvalues.}$, and where $C(d, \alpha)$ is given by \begin{equation}\label{eq:C(d,alpha)} C(d,\alpha)= \begin{cases}\displaystyle \min_{m\in \mathcal{I}}\frac{(4\lambda_m + (d+\alpha)(d-\alpha-4))^2}{4(4\lambda_m +(d-\alpha-4)^2)}, \qquad &\text{if }d-\alpha-4\neq0,\vspace{0.2cm}\\ \min \Big((d-2)^2; \min_{\substack{m\in \mathcal{I}\\ \lambda_m\neq 0}}\lambda_m \Big), &\text{if } d-\alpha-4=0.
\end{cases} \end{equation} \end{theorem} \begin{remark}\label{rem:cut} Due to the general statement of Theorem~\ref{thm:main-general}, which aims at covering \emph{any} dimension $d\geq 2$ and \emph{any} power-weight $\alpha\in \mathbb{R},$ we restrict ourselves to functions in $C^\infty_0(\mathbb{R}^d\setminus \{0\})$. Nevertheless, it is clear that, by density arguments, in particular situations depending on the values of $d$ and $\alpha,$ the assumption of excluding the origin can be dropped. For example, if $d\geq 3,$ $\alpha=0$ and $\mathcal{L}=-\Delta,$ then~\eqref{eq:main} holds for any $\psi\in C^\infty_0(\mathbb{R}^d)$ (see~\cite{Cazacu2019}). \end{remark} \begin{remark} We stress that the right-hand side of~\eqref{eq:main} can be written in terms of the \emph{Carré du Champ} associated to $\mathcal{L}.$ \begin{definition}\label{def:carreduchamp} Given a linear operator $\mathcal L$ on $L^2(\mathbb{R}^d;\mathbb{C}),$ the Carré du Champ associated to $\mathcal{L}$ is the sesquilinear form $\Gamma$ on $C^\infty_0(\mathbb{R}^d)\times C^\infty_0(\mathbb{R}^d)$ defined by \begin{equation*} 2\Gamma(\psi, \phi)=\overline{\psi}\mathcal{L}\phi + \overline{\mathcal{L}\psi} \phi - \mathcal{L}(\overline{\psi} \phi). \end{equation*} In particular \begin{equation*} 2\Gamma(\psi):=2\Gamma(\psi,\psi) = 2\Re(\overline{\psi}\mathcal{L}\psi) - \mathcal{L}|\psi|^2. \end{equation*} \end{definition} \noindent Using integration by parts one sees that \begin{equation}\label{eq:cdc-corr} \int_{\mathbb{R}^d} \Gamma(\psi)|x|^\beta\, dx =\int_{\mathbb{R}^d} |\partial_r \psi|^2 |x|^{\beta}\, dx + \int_{\mathbb{R}^d} |\Lambda_\omega^{1/2}\psi|^2 |x|^{\beta-2}\, dx - \frac{1}{2}\int_{\mathbb{R}^d} \frac{|\psi|^2}{|x|^2}\Lambda_\omega |x|^\beta\, dx, \end{equation} in other words, the right-hand side of~\eqref{eq:main} can be written in terms of the Carré du Champ provided that an angular correction is added.
More specifically, we have \begin{equation*} \int_{\mathbb{R}^d} \mathcal{D}(\psi) |x|^\beta\, dx =\int_{\mathbb{R}^d} \Gamma(\psi)|x|^\beta\, dx + \frac{1}{2}\int_{\mathbb{R}^d} \frac{|\psi|^2}{|x|^2}\Lambda_\omega |x|^\beta\, dx. \end{equation*} For the sake of completeness we show~\eqref{eq:cdc-corr}. From Definition~\ref{def:carreduchamp} one has \begin{equation}\label{eq:Gamma-pre} \begin{split} \int_{\mathbb{R}^d} \Gamma(\psi)|x|^\beta\, dx &= \int_{\mathbb{R}^d} \Re\big (\overline{\psi}\mathcal{L}\psi\big ) |x|^{\beta}\, dx -\frac{1}{2} \int_{\mathbb{R}^d} \mathcal{L}(|\psi|^2) |x|^{\beta}\, dx\\ &=\int_{\mathbb{R}^d} \Re\big (\overline{\psi}L_r \psi\big ) |x|^{\beta}\, dx + \int_{\mathbb{R}^d} \Re\big (\overline{\psi}\Lambda_\omega\psi\big ) |x|^{\beta-2}\, dx -\frac{1}{2} \int_{\mathbb{R}^d} |\psi|^2 \mathcal{L} |x|^{\beta}\, dx, \end{split} \end{equation} where in the last identity we have used that $\mathcal{L}$ can be written as $\mathcal{L}=L_r + \frac{1}{r^2}\Lambda_\omega$ and that $\mathcal{L}$ is self-adjoint. Using integration by parts, one easily checks that \begin{equation}\label{eq:int-parts} \langle f, L_r g\rangle_{L^2(\mathbb{R}^d)}=\langle \partial_r f, \partial_r g\rangle_{L^2(\mathbb{R}^d)}, \qquad \text{and} \qquad \langle u, \Lambda_\omega v \rangle_{L^2(\mathbb{S}^{d-1})} =\langle \Lambda_\omega^{1/2} u, \Lambda_\omega^{1/2} v \rangle_{L^2(\mathbb{S}^{d-1})}. \end{equation} Using~\eqref{eq:int-parts} and $L_r |x|^\beta=-\beta(d+\beta -2)|x|^{\beta -2}$ in~\eqref{eq:Gamma-pre} we get \begin{multline*} \int_{\mathbb{R}^d} \Gamma(\psi)|x|^\beta\, dx =\int_{\mathbb{R}^d} |\partial_r \psi|^2 |x|^{\beta}\, dx + \beta \int_{\mathbb{R}^d} \Re(\overline{\psi}\partial_r \psi)|x|^{\beta-1}\, dx\\ + \int_{\mathbb{R}^d} |\Lambda_\omega^{1/2}\psi|^2 |x|^{\beta-2}\, dx - \frac{\beta(d+\beta-2)}{2}\int_{\mathbb{R}^d} |\psi|^2 |x|^{\beta-2} -\frac{1}{2} \int_{\mathbb{R}^d} \frac{|\psi|^2}{|x|^2} \Lambda_\omega |x|^{\beta}\, dx. 
\end{multline*} Integrating by parts with respect to the radial variable $r,$ the second term cancels the penultimate term, and thus~\eqref{eq:cdc-corr} follows. \end{remark} \medskip \noindent Theorem \ref{thm:main-general} is stated in dimension $d\geq2$, in order to describe a non-trivial contribution given by the angular operator $\Lambda_\omega$. Nevertheless, an analogous result holds in $d=1$ as well. More precisely, the following theorem is an immediate consequence of the classical 1D-weighted Hardy inequality (see~\eqref{eq:1d-Hardy} below) applied to $\psi'.$ \begin{theorem}[1D-weighted Hardy-Rellich]\label{thm:1D} Let $d=1.$ Then for all $\psi\in C^\infty_0(\mathbb{R}\setminus \{0\}),$ we have \begin{equation}\label{eq:1dHardy-Rellich} \int_\mathbb{R} \frac{|\psi''(x)|^2}{|x|^\alpha}\, dx\geq \frac{(\alpha+1)^2}{4}\int_\mathbb{R} \frac{|\psi'(x)|^2}{|x|^{\alpha+2}}\, dx. \end{equation} \end{theorem} \begin{remark} Notice that in the weight-free case $\alpha=0,$~\eqref{eq:1dHardy-Rellich} gives the claimed Hardy-Rellich inequality~\eqref{eq:HardyRellich-classical} for $d=1$ with $C(1)=1/4.$ \end{remark} \medskip \noindent We also claim that the constant $C(d,\alpha)$ in~\eqref{eq:C(d,alpha)} is sharp and not attained. To show this in full generality, we construct a minimizing sequence which is suitably supported far away from the origin.
To this aim, given $\epsilon>0$, we introduce a smooth cut-off function $g_\epsilon\in C^\infty_0(\mathbb{R}^+)$ such that \begin{equation}\label{eq:cutoff} g_\epsilon (r)= \begin{cases} 0, \quad &\text{if } 0\leq r\leq \epsilon\; \text{ or } r\geq 1/\epsilon,\\ 1, &\text{if } 2\epsilon\leq r\leq \tfrac{1}{2\epsilon}, \end{cases} \end{equation} $0\leq g_\epsilon\leq 1$ in $0\leq r<\infty$, and \begin{equation*} \begin{cases} |g_\epsilon'(r)|\leq \frac{c}{\epsilon}, \quad \hspace{0.08cm} |g_\epsilon''(r)|\leq \frac{c}{\epsilon^2}, \quad &\text{for } \epsilon\leq r\leq 2\epsilon,\\ |g_\epsilon'(r)|\leq c\epsilon, \quad |g_\epsilon''(r)|\leq c\epsilon^2, \quad &\text{for } \tfrac{1}{2\epsilon}\leq r\leq \tfrac{1}{\epsilon},\\ \end{cases} \end{equation*} for some constant $c>0$. We have the following result. \begin{theorem}[Optimality of \eqref{eq:main}]\label{thm:minimizing} In dimension $d\geq 2$, for any $\epsilon>0$, define \begin{equation}\label{eq:minimizing} \psi_\epsilon(x):= \begin{cases} |x|^{\frac{-(d-4) +\alpha}{2}} g_\epsilon(|x|) u_{m_0}\big(\tfrac{x}{|x|}\big), \qquad &\text{if } d-\alpha-4\neq 0 \text{ \;or\; } C(d,\alpha)=\lambda_{m_0},\\ h_\epsilon(|x|), & \text{if } d-\alpha-4=0 \text{ and } C(d,\alpha)=(d-2)^2, \end{cases} \end{equation} where $m_0\in \mathcal{I}$ is a minimizing index in~\eqref{eq:C(d,alpha)}, $u_{m_0}$ is the eigenfunction corresponding to the eigenvalue $\lambda_{m_0}$ of $\Lambda_{\omega}$ and $h_\epsilon$ is defined such that \begin{equation}\label{eq:h_eps} h_\epsilon'(r)=r^{-1}g_\epsilon(r), \qquad r=|x|. \end{equation} Then $\{\psi_\epsilon\}_{\epsilon>0}\subset \Dom(\mathcal{L})$ is a minimizing sequence for $C(d,\alpha),$ \emph{i.e.} \begin{equation*} \frac{\int_{\mathbb{R}^d} |\mathcal{L}\psi_\epsilon(x)|^2/|x|^\alpha\, dx}{\int_{\mathbb{R}^d} \mathcal{D}(\psi_\epsilon)(x)/|x|^{\alpha+2}\, dx} \searrow C(d,\alpha), \; \text{as } \epsilon \searrow 0.
\end{equation*} Besides, the constant $C(d,\alpha)$ is not attained in $\Dom(\mathcal{L}).$ \end{theorem} \medskip \noindent We present now some interesting particular cases of Theorem~\ref{thm:main-general}, which provide improvements analogous to those mentioned above for the Hardy and Rellich inequalities. \begin{theorem}\label{thm:general-electric} Assume $d\geq 2.$ Let $a\in L^\infty(\mathbb{S}^{d-1}; d\theta)$ be a non-negative real-valued function and consider the non-negative operator $-\Delta_{a(\theta)}:=-\Delta + \frac{a(\theta)}{|x|^2}.$ Then for all $\psi\in C^\infty_0(\mathbb{R}^d\setminus \{0\}),$ \begin{equation}\label{eq:general-electric} \int_{\mathbb{R}^d} \frac{|-\Delta_{a(\theta)}\psi(x)|^2}{|x|^\alpha}\, dx \geq C_a(d,\alpha)\Bigg[ \int_{\mathbb{R}^d} \frac{|\nabla \psi(x)|^2}{|x|^{\alpha + 2}}\, dx + \int_{\mathbb{R}^d} a(\theta) \frac{|\psi(x)|^2}{|x|^{\alpha+ 4}}\, dx\Bigg], \end{equation} where $C_{a}(d,\alpha)$ is given by \begin{equation*} C_{a}(d,\alpha)= \begin{cases}\displaystyle \min_{k\in \mathbb{N}_0}\frac{(4\mu_k + (d+\alpha)(d-\alpha-4))^2}{4(4\mu_k +(d-\alpha-4)^2)}, \qquad &\text{if }d-\alpha-4\neq0,\vspace{0.2cm}\\ \min \big((d-2)^2; \min_{\substack{k\in \mathbb{N}_0\\ \mu_k\neq 0}}\mu_k \big), &\text{if } d-\alpha-4=0. \end{cases} \end{equation*} Here $\mu_k,$ with $k=0,1,\dots$ are the discrete eigenvalues of the angular operator $-\Delta_{\mathbb{S}^{d-1}} + a(\theta).$ Moreover $\mu_0\geq \ess \inf_{\mathbb{S}^{d-1}} a(\theta).$ \end{theorem} \begin{remark} Notice that the right-hand side of~\eqref{eq:general-electric} is exactly the (weighted) quadratic form associated to $-\Delta_{a(\theta)}.$ The same holds for the particular case of Corollary~\ref{cor:a-d2} below. \end{remark} \noindent An interesting corollary of the above result is the following, with $d=2$, $\alpha=0$, and $a(\theta)\equiv a\geq 0$.
\begin{corollary}\label{cor:a-d2} Assume $d=2.$ Let $a\geq 0$ and consider the non-negative operator $-\Delta_a:=-\Delta + \tfrac{a}{|x|^2}.$ Then for all $\psi\in C^\infty_0(\mathbb{R}^2\setminus \{0\}),$ \begin{equation}\label{eq:prot-electr} \int_{\mathbb{R}^2}|-\Delta_a \psi(x)|^2\, dx \geq C_a \Bigg[ \int_{\mathbb{R}^2} \frac{|\nabla \psi(x)|^2}{|x|^{2}}\, dx + a \int_{\mathbb{R}^2} \frac{|\psi(x)|^2}{|x|^{4}}\, dx\Bigg]. \end{equation} The constant $C_a$ in~\eqref{eq:prot-electr} is given by \begin{equation}\label{eq:C_a} C_a=\min_{k\in \mathbb{N}_0} \frac{(k^2+a-1)^2}{k^2+a+1}. \end{equation} \end{corollary} \noindent Notice that if $a>1,$ then $C_a=\tfrac{(a-1)^2}{a+1}>0.$ \medskip \noindent Another consequence of Theorem~\ref{thm:main-general} goes in the direction of the results of Laptev-Weidl~\cite{LW1999} and Evans-Lewis~\cite{EL2005}. In order to state the next result we generalize the definition of Aharonov-Bohm type potentials to any dimension $d\geq 2:$ for $(x_1, x_2, \dots, x_d)\in \mathbb{R}^d \setminus\{x_d=x_{d-1}=0\}$ it is defined as the vector field \begin{equation}\label{eq:AB-gen} A(x_1, x_2, \dots, x_d) =\widetilde \Psi \Bigg(\underbrace{0,0, \dots, 0}_{d-2}, -\frac{x_d}{x_{d-1}^2 + x_{d}^2}, \frac{x_{d-1}}{x_{d-1}^2 + x_{d}^2}\Bigg), \qquad \widetilde{\Psi}\in \mathbb{R}. \end{equation} \begin{theorem}\label{thm:HB-anydimension} Let $d\geq 2$ and let $A$ be the Aharonov-Bohm type vector potential given by~\eqref{eq:AB-gen}. Then for all $\psi\in C^\infty_0(\mathbb{R}^d\setminus \{0\}),$ \begin{equation}\label{eq:HR-anydimension} \int_{\mathbb{R}^d}\frac{|\Delta_A\psi(x)|^2}{|x|^\alpha}\, dx\geq C_{\textup{AB}}(d,\alpha)\int_{\mathbb{R}^d} \frac{|\nabla_{\!A}\psi(x)|^2}{|x|^{\alpha+2}} \, dx.
\end{equation} The constant $C_\textup{AB}(d,\alpha)$ is given by \begin{equation*} C_\textup{AB}(d,\alpha)= \begin{cases}\displaystyle \min_{m\in \mathbb{Z}'} \frac{\left( 4(m+ \widetilde \Psi)(m + \widetilde \Psi + d-2) + (d-4-\alpha)(d+\alpha)\right)^2}{4(4(m+ \widetilde \Psi)(m + \widetilde \Psi + d-2) + (d-4-\alpha)^2) }, \qquad &\text{if }d-\alpha-4\neq 0,\vspace{0.2cm}\\ \min \Big((d-2)^2;\min \{ (m+ \widetilde \Psi)^2(m + \widetilde \Psi + d-2)^2 \mid m\in\mathbb{Z}', m+\widetilde \Psi\neq 0, 2-d \}\Big), &\text{if }d-\alpha-4=0, \end{cases} \end{equation*} where $\mathbb{Z}':=\{m\in \mathbb{Z}\colon m\leq 2-d-\widetilde \Psi\, \text{or } m\geq -\widetilde \Psi\}.$ \end{theorem} \begin{remark}\label{rmk:mgrad-cdc} Notice that in the specific situation of Theorem~\ref{thm:HB-anydimension} (and Corollary~\ref{thm:Hardy-Rellich_magnetic} below), the magnetic gradient appears in the Hardy-Rellich inequality~\eqref{eq:HR-anydimension} in place of the first-order operator $\mathcal{D}$ of the general case. Indeed, one can check that the integral identity $\int_{\mathbb{R}^d}\mathcal{D}(\psi)/|x|^{\alpha+2}=\int_{\mathbb{R}^d}|\nabla_A\psi|^2/|x|^{\alpha +2}$ holds (refer to the proof of Theorem~\ref{thm:HB-anydimension} for more details). \end{remark} \medskip \noindent In the case $d=2$ and $\alpha=0,$ Theorem~\ref{thm:HB-anydimension} reduces to the following result, in the same style as~\eqref{eq:LW} and~\eqref{eq:Rellich-magnetic}. \begin{corollary}\label{thm:Hardy-Rellich_magnetic} Assume $d=2.$ Let $A$ be the Aharonov-Bohm (AB) type vector potential given by~\eqref{eq:AB}. Then for all $\psi\in C^\infty_0(\mathbb{R}^2\setminus \{0\}),$ \begin{equation}\label{eq:Hardy-Rellich_magnetic} \int_{\mathbb{R}^2} |\Delta_A \psi(x)|^2\, dx\geq C_{\textup{AB}}\int_{\mathbb{R}^2} \frac{|\nabla_{\!A}\psi(x)|^2}{|x|^2}\, dx.
\end{equation} The constant $C_{\textup{AB}}$ in~\eqref{eq:Hardy-Rellich_magnetic} is given by \begin{equation*} C_{\textup{AB}}=\min_{m\in \mathbb{Z}} \frac{((m+ \widetilde \Psi)^2-1)^2}{(m+ \widetilde \Psi)^2+1}. \end{equation*} \end{corollary} \noindent Notice that $C_\textup{AB}=0$ if and only if $\widetilde{\Psi}\in \mathbb{Z}$, which fits with the fact that no Hardy-Rellich inequality holds in dimension $d=2$ for the free Hamiltonian. \medskip \noindent If we assume $\widetilde \Psi\in \mathbb{Z}$ then Theorem~\ref{thm:HB-anydimension} covers the weighted Hardy-Rellich inequalities already available for the free Hamiltonian (see~\cite{TZ2007,Beckner2008,GM2011,Cazacu2019,HT2021,HT2021arXiv,Ham}). More precisely, we have the following corollary. \begin{corollary}\label{cor:free} Let $d\geq 2.$ Then for all $\psi\in C^\infty_0(\mathbb{R}^d\setminus \{0\})$ \begin{equation}\label{eq:weighted-Hardy-Rellich} \int_{\mathbb{R}^d}\frac{|\Delta\psi(x)|^2}{|x|^\alpha}\, dx\geq C(d,\alpha)\int_{\mathbb{R}^d} \frac{|\nabla \psi(x)|^2}{|x|^{\alpha+2}} \, dx. \end{equation} The constant $C(d,\alpha)$ is given by \begin{equation}\label{eq:constant-classical} C(d,\alpha)= \begin{cases}\displaystyle \min_{k\in \mathbb{N}_0} \frac{\left(4k(k + d-2) + (d-4-\alpha)(d+\alpha)\right)^2}{4(4k(k + d-2)+(d-4-\alpha)^2)}, \qquad &\text{if }d-\alpha-4\neq0,\\ \min ( (d-2)^2; (d-1)), &\text{if } d-\alpha-4=0. 
\end{cases} \end{equation} \end{corollary} \begin{remark} In the specific case of Corollary~\ref{cor:free} one easily checks that $\int_{\mathbb{R}^d}\mathcal{D}(\psi)/|x|^{\alpha+2}=\int_{\mathbb{R}^d}|\nabla \psi|^2/|x|^{\alpha+2}.$ This can be seen from~\eqref{eq:cdc-corr}: indeed, the Carré du Champ associated to the classical Laplacian is $\Gamma(\psi)=|\nabla \psi|^2$ and moreover, since in this case $\Lambda_\omega=-\Delta_{\mathbb{S}^{d-1}},$ where $\Delta_{\mathbb{S}^{d-1}}$ denotes the Laplace-Beltrami operator, the last term in~\eqref{eq:cdc-corr} vanishes. \end{remark} \begin{remark}\label{rmk:grad} The value of the constant $C(d,\alpha)$ in~\eqref{eq:weighted-Hardy-Rellich} has been extensively investigated in the aforementioned works~\cite{TZ2007,Beckner2008,GM2011,Cazacu2019,HT2021}. There, according to the relation between the relevant parameters, namely the dimension $d,$ the order of the weight-power $\alpha$ and the non-negative integer $k,$ a more explicit description has been provided in different cases. Here we will describe the behavior of the constant only in the weight-free case, namely the original case $\alpha=0,$ and we show that $C(d, 0)$ in~\eqref{eq:weighted-Hardy-Rellich} coincides with the best constant $C(d)$ in~\eqref{eq:HardyRellich-classical}. Nevertheless, we stress that in the case $\alpha\neq 0$ we recover the previously available results in~\cite{GM2011,TZ2007,HT2021}. First of all, if $\alpha=0$ then one immediately has from the second expression in~\eqref{eq:constant-classical} that $C(4,0)=3.$ When $\alpha=0$ and $d\neq 4$ one needs to study the first expression in~\eqref{eq:constant-classical}. Plugging in $\alpha=0,$ the first expression in~\eqref{eq:constant-classical} becomes \begin{equation}\label{eq:constant-classical-zero} C(d,0)=\min_{k\in \mathbb{N}_0} \frac{\left(4 c_k + d(d-4)\right)^2}{4(4c_k + (d-4)^2)}, \qquad c_k=k(k+d-2).
\end{equation} Studying the minimum $x_0$ of the function $\frac{(4x+d(d-4))^2}{4(4x+ (d-4)^2)}$ for $x\geq 0,$ one sees that $c_0\leq x_0\leq c_1,$ equivalently $0\leq x_0\leq d-1$ (see the definition of $c_k$ in~\eqref{eq:constant-classical-zero}). Thus the value of $C(d,0)$ in~\eqref{eq:constant-classical-zero} depends only on $k=0$ and $k=1.$ One can easily check that for $d\geq 5$ the minimum is attained at $k=0,$ yielding $C(d,0)=\frac{d^2}{4}.$ In lower dimensions, namely $d\in\{2,3\},$ it is attained at $k=1.$ This gives $C(2,0)=0,$ $C(3,0)=\tfrac{25}{36}.$ Thus $C(d,0)$ in~\eqref{eq:weighted-Hardy-Rellich} equals $C(d)$ in~\eqref{eq:HardyRellich-classical} as claimed. We stress that in the two-dimensional setting $d=2$ no non-trivial inequality is available, indeed $C(2,0)=0,$ unless one restricts the domain of validity of inequality~\eqref{eq:weighted-Hardy-Rellich} to functions $\psi\in C^\infty_0(\mathbb{R}^2\setminus \{0\})$ which satisfy~\eqref{eq:orth-cond}. In this case the minimum in~\eqref{eq:constant-classical-zero} is taken over $\mathbb{N}_0\setminus \{1\}$ and thus, by the reasoning above, it is attained at $k=0,$ giving $C(2,0)=1.$ This means that in $d=2$ a non-trivial Hardy-Rellich inequality holds true if one restricts to a smaller set of functions. To the best of our knowledge, this simple two-dimensional property has not been observed elsewhere before. \end{remark} \medskip \noindent A further example is given by magnetic monopoles in $\mathbb{R}^3.$ This model has been intensively studied in the last decades (see~\cite{CT2010}). More recently, Frank and Loss~\cite{FL20} considered it as an example of a (non-standard) magnetic field that supports zero modes for the three-dimensional Dirac equation.
For a magnetic monopole at the origin, the vector field $A$ takes the form \begin{equation}\label{eq:A-monopole} A(x,y,z)=g \frac{(-y,x,0)}{r(r+z)}, \qquad r=\sqrt{x^2+y^2+z^2}, \qquad (x,y,z)\in \mathbb{R}^3\setminus\{(0,0,z)\mid z\leq0\} \end{equation} with a parameter $g$ representing the monopole strength. The corresponding magnetic field is given by \begin{equation*} B(x,y,z)=\curl A= g \frac{(x,y,z)}{r^3}. \end{equation*} As a consequence of Theorem~\ref{thm:main-general}, the following result holds. \begin{theorem}\label{thm:monopole} Let $d=3$ and assume $g\geq 1/2$ in~\eqref{eq:A-monopole}. Then for all $\psi \in C^\infty_0(\mathbb{R}^3\setminus \{0\}),$ \begin{equation*} \int_{\mathbb{R}^3} \frac{|\Delta_A \psi(x)|^2}{|x|^\alpha}\, dx \geq C_\textup{mon}(\alpha)\int_{\mathbb{R}^3} \frac{\mathcal{D} (\psi)(x)}{|x|^{\alpha+2}}\, dx. \end{equation*} The constant $C_\textup{mon}(\alpha)$ is given by \begin{equation*} C_\textup{mon}(\alpha)=\min_{\substack{k=2(|g|+l),\\ l\in \mathbb{N}_0}} \frac{(k(k+2)-4g^2-(\alpha+1)(\alpha+ 3))^2}{4(k(k+2)-4g^2 +(\alpha+1)^2)}. \end{equation*} \end{theorem} \begin{remark} We could not find in the literature Hardy and Rellich inequalities involving magnetic monopoles. Nevertheless, the same approach (even simplified) that we use to prove Hardy-Rellich inequalities for this model, namely Theorem~\ref{thm:monopole}, can be adopted to establish improvements of these more classical inequalities. \end{remark} \noindent Using a more direct strategy than the one used to prove Theorem~\ref{thm:main-general}, the following weighted Hardy-type inequalities for the first-order operator $\mathcal{D}$ associated to $\mathcal{L}=L_r + \frac{1}{r^2}\Lambda_\omega$ are easily obtained. \begin{theorem}\label{thm:Hardy-cdc} Assume that the hypotheses of Theorem~\ref{thm:main-general} are satisfied.
Let $\beta \in \mathbb{R}.$ Then for all $\psi \in \Dom(\mathcal{L})$ such that $|\cdot|^{-\beta/2}\mathcal{D}(\psi)^{1/2}\in L^2(\mathbb{R}^d)$ we have \begin{equation}\label{eq:Hardy-cdc} \int_{\mathbb{R}^d} \frac{\mathcal{D}(\psi)(x)}{|x|^{\beta}}\, dx \geq C_{\mathcal{D}}(d,\beta) \int_{\mathbb{R}^d} \frac{|\psi(x)|^2}{|x|^{\beta+2}}\, dx, \end{equation} where $C_\mathcal{D}(d,\beta)$ is given by \begin{equation}\label{eq:CGamma} C_\mathcal{D}(d, \beta)=\min_{m\in \mathcal{I}}\Bigg\{\lambda_m + \Big(\frac{d-\beta-2}{2}\Big)^2 \Bigg\}. \end{equation} \end{theorem} \begin{remark} Notice that~\eqref{eq:Hardy-cdc} has as particular cases the classical weighted Hardy inequalities with optimal constants (just take $\mathcal{L}=-\Delta$ and notice that in this case $\mathcal{D}(\psi)=|\nabla \psi|^2$) and the optimal magnetic Hardy inequalities for Aharonov-Bohm magnetic fields (take $\mathcal{L}:=-\Delta_A,$ with $A$ as in~\eqref{eq:AB-gen}, and use that $\int_{\mathbb{R}^d} \mathcal{D}(\psi)/|x|^\beta=\int_{\mathbb{R}^d} |\nabla_A \psi|^2/|x|^\beta$; see also Remark~\ref{rmk:mgrad-cdc}). \end{remark} \begin{remark} Combining the Hardy inequality~\eqref{eq:Hardy-cdc} in Theorem~\ref{thm:Hardy-cdc} and the Hardy-Rellich inequality~\eqref{eq:main} in Theorem~\ref{thm:main-general} one easily gets the following weighted Rellich inequalities in the spirit of Evans and Lewis~\cite{EL2005}: \begin{equation}\label{eq:Rellich-cdc} \int_{\mathbb{R}^d} \frac{|\mathcal{L}\psi(x)|^2}{|x|^\alpha}\, dx\geq \widetilde{C}(d,\alpha) \int_{\mathbb{R}^d} \frac{|\psi(x)|^2}{|x|^{\alpha + 4}}\, dx, \end{equation} where $\widetilde{C}(d,\alpha)=C(d,\alpha)\, C_\mathcal{D}(d, \alpha+2),$ with $C(d,\alpha)$ as in~\eqref{eq:C(d,alpha)} and $C_\mathcal{D}(d, \alpha+2)$ as in~\eqref{eq:CGamma}.
In general, it is not easy to see whether $\widetilde{C}(d,\alpha)$ equals the optimal constant in~\cite{EL2005}; nevertheless, in the specific case of the Laplacian, namely for $\mathcal{L}=-\Delta$ and $\alpha=0,$ one checks that $\widetilde{C}(d,0)=d^2(d-4)^2/16$ (see Remark~\ref{rmk:grad}). In other words, inequality~\eqref{eq:Rellich-cdc} coincides with the classical Rellich inequality with optimal constant. \end{remark} \medskip \noindent The paper is organized as follows: we give the proof of the main result Theorem~\ref{thm:main-general} and of Theorem~\ref{thm:Hardy-cdc} in the next Section~\ref{sec:main}. The optimality, as stated in Theorem~\ref{thm:minimizing}, is shown in Section~\ref{sec:optimality}. In Section~\ref{sec:consequences} we show how to get Theorem~\ref{thm:general-electric}, Theorem~\ref{thm:HB-anydimension} and Theorem~\ref{thm:monopole} from the general result Theorem~\ref{thm:main-general}. \subsection*{Acknowledgments} The idea of this project came out during the CIRM conference on \emph{``Mathematical aspects of the physics with non-self-adjoint operators: 10 years after"} held in Marseille in February 2021. B.C. and L.C. would like to express their gratitude to the organizers of the conference L. Boulton, D. Krej\v ci\v r\'ik and P. Siegl for the opportunity to take part in this stimulating event. B.C. is a member of GNAMPA (INDAM) and is supported by Fondo Sociale Europeo – Programma Operativo Nazionale Ricerca e Innovazione 2014-2020, progetto PON: progetto AIM1892920-attivit\`a 2, linea 2.1. The research of L.C. is supported by the Deutsche Forschungsgemeinschaft (DFG) through CRC 1173. The authors are very grateful to the anonymous referee for the comments and suggestions on the preliminary version of this paper, which greatly improved the quality of the manuscript. \section{Proof of Theorem~\ref{thm:main-general} and Theorem~\ref{thm:Hardy-cdc}}\label{sec:main} We start with the proof of Theorem~\ref{thm:main-general}.
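\noindent Before entering the proof, we note that the explicit constants obtained above lend themselves to a quick numerical sanity check. The following standalone snippet is an illustration added for the reader's convenience, not part of the analysis; the function names and the truncation bound \texttt{kmax} are our own choices, the truncation being harmless since, as observed in Remark~\ref{rmk:grad}, the minimum over $k$ is attained at small indices. It evaluates~\eqref{eq:constant-classical} and~\eqref{eq:C_a} in exact rational arithmetic (integer $d$ and $\alpha$ assumed) and reproduces $C(2,0)=0,$ $C(3,0)=\tfrac{25}{36},$ $C(4,0)=3,$ $C(d,0)=\tfrac{d^2}{4}$ for $d\geq5,$ and $C_a=\tfrac{(a-1)^2}{a+1}$ for $a>1.$

```python
from fractions import Fraction

def C(d, alpha, kmax=200):
    # Constant C(d, alpha) of the free (Laplacian) case, with angular
    # eigenvalues c_k = k(k + d - 2) of -Delta on the sphere S^{d-1}.
    if d - alpha - 4 != 0:
        return min(
            Fraction((4 * k * (k + d - 2) + (d + alpha) * (d - alpha - 4)) ** 2,
                     4 * (4 * k * (k + d - 2) + (d - alpha - 4) ** 2))
            for k in range(kmax)
        )
    # Degenerate case d - alpha - 4 = 0: minimum of (d-2)^2 and the
    # smallest non-zero angular eigenvalue, which is c_1 = d - 1.
    return min(Fraction((d - 2) ** 2), Fraction(d - 1))

def C_a(a, kmax=200):
    # Constant C_a of the two-dimensional corollary (alpha = 0,
    # constant angular potential a), eigenvalues mu_k = k^2 + a.
    return min(Fraction((k * k + a - 1) ** 2, k * k + a + 1) for k in range(kmax))

if __name__ == "__main__":
    assert C(2, 0) == 0
    assert C(3, 0) == Fraction(25, 36)
    assert C(4, 0) == 3
    assert C(5, 0) == Fraction(25, 4)
    assert C_a(3) == 1  # a = 3 > 1, so (a-1)^2/(a+1) = 1
```

In particular, the check confirms that the minimum switches from $k=1$ to $k=0$ as the dimension increases past $d=4$.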
Inspired by the arguments in \cite{Cazacu2019}, we introduce a suitable orthonormal basis decomposition of the functions in the domain $\Dom(\mathcal{L})$ (see~\eqref{eq:domain}) of the operator $\mathcal{L},$ which is reminiscent of the classical spherical harmonics decomposition used in the case of the Laplacian: since the spectrum of $\Lambda_\omega$ is assumed to be discrete, its normalized eigenvectors $u_m,$ $m\in \mathcal{I}$ (with eigenvalues $\{\lambda_m\}_{m\in \mathcal{I}}$ repeated according to multiplicity) form an orthonormal basis of $L^2(\mathbb{S}^{d-1};d\omega).$ Thus one can expand any $\psi\in \Dom(\mathcal{L})$ as \begin{equation}\label{eq:decomposition} \psi(x)=\psi(r,\omega)=\sum_{m\in \mathcal{I}} f_m(r) u_m(\omega), \end{equation} where the coefficients $f_m\in C^\infty_0(\mathbb{R}^+)$ are computed by projecting $\psi$ onto each basis eigenfunction $u_m,$ $m\in \mathcal{I},$ \emph{i.e.} \begin{equation}\label{eq:coefficients} f_m(r):=\int_{\mathbb{S}^{d-1}}\psi(r,\omega)\overline{u_m(\omega)}\, d\omega. \end{equation} The decomposition in~\eqref{eq:decomposition} reduces matters to a 1D-problem: indeed, the following lemma holds. \begin{lemma}\label{lemma:1d-reduction} Let $\psi\in \Dom(\mathcal{L}).$ Then the following identities hold true \begin{align} \label{eq:second-order} & \begin{multlined} \int_{\mathbb{R}^d} \frac{|\mathcal{L}\psi(x)|^2}{|x|^\alpha}\, dx= \sum_{m\in \mathcal{I}} \left \{\int_0^\infty |f_m''(r)|^2r^{d-\alpha-1}\, dr + [(d-1)(\alpha +1) + 2\lambda_m]\int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3} \,dr \right.\\ \left.+ \lambda_m [(\alpha + 2)(d-\alpha -4) + \lambda_m] \int_0^\infty |f_m(r)|^2 r^{d-\alpha-5} \,dr \right \}, \end{multlined} \\ \label{eq:first-order} &\int_{\mathbb{R}^d} \frac{\mathcal{D}(\psi)(x)}{|x|^{\alpha+2}}\, dx= \sum_{m\in \mathcal{I}} \left \{\int_0^\infty |f_m'(r)|^2r^{d-\alpha-3}\, dr + \lambda_m \int_0^\infty |f_m(r)|^2 r^{d-\alpha-5} \,dr \right \}.
\end{align} Here $\mathcal{D}$ is the first-order operator defined as $\mathcal{D}(\psi)=|\partial_r \psi|^2 + \frac{1}{r^2}|\Lambda_\omega^{1/2}\psi|^2$ and $f_m(r),$ $m\in \mathcal{I}$ are the coefficients introduced in~\eqref{eq:coefficients}. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:1d-reduction}] Even though identity~\eqref{eq:second-order} can already be found in~\cite{EL2005}, for the reader's convenience in the following we prove both~\eqref{eq:second-order} and~\eqref{eq:first-order}. Since $\mathcal{L}=L_r + \frac{1}{r^2}\Lambda_\omega$ one easily has \begin{equation}\label{eq:pre-sec} \int_{\mathbb{R}^d} \frac{|\mathcal{L}\psi(x)|^2}{|x|^\alpha}\, dx =\int_{\mathbb{R}^d} \frac{|L_r\psi(x)|^2}{|x|^\alpha}\, dx + \int_{\mathbb{R}^d} \frac{|\Lambda_\omega\psi(x)|^2}{|x|^{\alpha+4}}\, dx + 2 \Re \int_{\mathbb{R}^d} \frac{L_r \psi(x) \overline{\Lambda_\omega \psi(x)}}{|x|^{\alpha + 2}}\, dx. \end{equation} Let us consider the right-hand side of~\eqref{eq:pre-sec}. From the decomposition~\eqref{eq:decomposition} one has \begin{equation*} L_r\psi(x)= \sum_{m\in \mathcal{I}} L_r f_m(r) u_m(\omega). \end{equation*} Using this fact and Parseval's identity we obtain \begin{equation}\label{eq:preliminary} \int_{\mathbb{R}^d} \frac{|L_r \psi(x)|^2}{|x|^\alpha}\, dx =\int_0^\infty \int_{\mathbb{S}^{d-1}} \Big |\sum_{m\in \mathcal{I}} L_rf_m(r)u_m(\omega)\Big |^2 r^{d-\alpha-1}\, dr\, d\omega =\sum_{m\in \mathcal{I}}\int_0^\infty |L_rf_m(r)|^2 r^{d-\alpha-1}\, dr.
\end{equation} Let us consider $\int_0^\infty |L_rf_m(r)|^2 r^{d-\alpha-1}\, dr.$ Using the explicit form of $L_r,$ namely $L_r=-\partial_{rr} -\tfrac{d-1}{r}\partial_r,$ and integrating by parts, we obtain \begin{equation*} \begin{split} \int_0^\infty &|L_rf_m(r)|^2 r^{d-\alpha-1}\, dr\\ &=\int_0^\infty |f_m''(r)|^2r^{d-\alpha-1}\, dr + (d-1)^2\int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr + 2(d-1)\Re \int_0^\infty f_m''(r) \overline{f_m'(r)} r^{d-\alpha-2}\, dr\\ &=\int_0^\infty |f_m''(r)|^2r^{d-\alpha-1}\, dr + (d-1)^2\int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr - (d-1)(d-\alpha-2)\int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr. \end{split} \end{equation*} Plugging the last identity in~\eqref{eq:preliminary} gives \begin{multline}\label{eq:last-r} \int_{\mathbb{R}^d} \frac{|L_r \psi(x)|^2}{|x|^\alpha}\, dx =\sum_{m\in \mathcal{I}} \left\{ \int_0^\infty |f_m''(r)|^2r^{d-\alpha-1}\, dr + (d-1)^2\int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr \right.\\ \left.- (d-1)(d-\alpha-2)\int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr \right\}. \end{multline} Again using the decomposition~\eqref{eq:decomposition} one has \begin{equation*} \Lambda_\omega \psi(x) =\sum_{m\in \mathcal{I}} f_m(r) \Lambda_\omega u_m(\omega) =\sum_{m\in \mathcal{I}} \lambda_m f_m(r) u_m(\omega), \end{equation*} where in the second equality we have used that $\{u_m\}_{m\in \mathcal{I}}$ are eigenfunctions of the operator $\Lambda_\omega$ with corresponding eigenvalues $\lambda_m,$ $m\in \mathcal{I}.$ Using again Parseval's identity one gets \begin{equation}\label{eq:last-angular} \int_{\mathbb{R}^d} \frac{|\Lambda_\omega \psi(x)|^2}{|x|^{\alpha + 4}}\, dx =\int_0^\infty \int_{\mathbb{S}^{d-1}} \Big|\sum_{m\in \mathcal{I}} \lambda_m f_m(r)u_m(\omega)\Big|^2 r^{d-\alpha-5}\, dr\, d\omega =\sum_{m\in \mathcal{I}} \lambda_m^2 \int_0^\infty |f_m(r)|^2 r^{d-\alpha-5}\, dr.
\end{equation} Similarly as above, it is easy to check that the following identity holds: \begin{equation}\label{eq:last-mixed} 2\Re \int_{\mathbb{R}^d} \frac{L_r \psi(x) \overline{\Lambda_\omega \psi(x)}}{|x|^{\alpha + 2}}\, dx = \sum_{m\in \mathcal{I}}\lambda_m \left\{ 2 \int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr + (\alpha+ 2)(d-\alpha-4) \int_0^\infty |f_m(r)|^2 r^{d-\alpha-5}\, dr \right\}. \end{equation} Plugging~\eqref{eq:last-r},~\eqref{eq:last-angular} and~\eqref{eq:last-mixed} in~\eqref{eq:pre-sec} gives~\eqref{eq:second-order}. Now, from the definition of $\mathcal{D}$ one has \begin{equation} \label{eq:pre-first} \int_{\mathbb{R}^d} \frac{\mathcal{D}(\psi)(x)}{|x|^{\alpha+2}}\, dx= \int_{\mathbb{R}^d} \frac{|\partial_r \psi(x)|^2}{|x|^{\alpha+ 2}}\, dx + \int_{\mathbb{R}^d} \frac{|\Lambda_\omega^{1/2} \psi(x)|^2}{|x|^{\alpha + 4}}\, dx. \end{equation} Similarly as above, one checks that the following identities hold true: \begin{equation}\label{eq:last-first2} \int_{\mathbb{R}^d} \frac{|\partial_r \psi(x)|^2}{|x|^{\alpha+ 2}}\, dx = \sum_{m\in \mathcal{I}} \int_0^\infty |f_m'(r)|^2r^{d-\alpha-3}\, dr, \quad \text{and} \quad \int_{\mathbb{R}^d} \frac{|\Lambda_\omega^{1/2} \psi(x)|^2}{|x|^{\alpha + 4}}\, dx= \sum_{m\in \mathcal{I}} \lambda_m \int_0^\infty |f_m(r)|^2r^{d-\alpha-5}\, dr. \end{equation} Eventually, plugging~\eqref{eq:last-first2} in~\eqref{eq:pre-first} gives~\eqref{eq:first-order} and, thus, the claim. \end{proof} \noindent We are now ready to prove Theorem~\ref{thm:main-general}. \begin{proof}[Proof of Theorem~\ref{thm:main-general}] The proof is based on the strategy introduced by Cazacu in~\cite{Cazacu2019}.
Let us first split~\eqref{eq:second-order} as follows: \begin{equation}\label{eq:I+II} \int_{\mathbb{R}^d} \frac{|\mathcal{L}\psi(x)|^2}{|x|^{\alpha}}\, dx= I+ II, \end{equation} where \begin{multline*} I:= \sum_{\substack{m\in \mathcal{I}\\\lambda_m\neq 0}} \left \{ \int_0^\infty |f_m''(r)|^2 r^{d-\alpha-1}\, dr + [(d-1)(\alpha+1)]\int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr \right\}\\ + \sum_{\substack{m\in \mathcal{I}\\\lambda_m\neq 0}} \lambda_m \left \{ 2\int_0^\infty |f_m'(r)|^2r^{d-\alpha-3}\, dr + [(\alpha+2)(d-\alpha-4) +\lambda_m]\int_0^\infty |f_m(r)|^2\, r^{d-\alpha-5}\, dr \right \} \end{multline*} and \begin{equation*} II:=\sum_{\substack{m\in \mathcal{I}\\\lambda_m=0}} \left\{ \int_0^\infty |f_m''(r)|^2 r^{d-\alpha-1}\, dr + [(d-1)(\alpha +1)]\int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr \right\}. \end{equation*} We estimate $II$ first. Using the 1D weighted Hardy inequality \begin{equation}\label{eq:1d-Hardy} \int_0^\infty |f'(r)|^{2} r^{t+2}\, dr\geq \left(\frac{t+1}{2} \right)^2 \int_0^\infty |f(r)|^2 r^t\, dr, \qquad t\in \mathbb{R}, \end{equation} which is valid for any distribution $f$ on $(0,\infty)$ such that the integral on the left hand side of~\eqref{eq:1d-Hardy} is finite (see \emph{e.g.}~\cite[Prop.2.4]{CP2018}), we have \begin{equation}\label{eq:II} \begin{split} II&\geq \sum_{\substack{m\in \mathcal{I}\\\lambda_m=0}} \frac{(d+\alpha)^2}{4}\int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr\\ &=\sum_{\substack{m\in \mathcal{I}\\\lambda_m=0}} \frac{(d+\alpha)^2}{4} \left[ \int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr + \lambda_m \int_0^\infty |f_m(r)|^2 r^{d-\alpha-5}\, dr \right]. \end{split} \end{equation} Let $\varepsilon \in \mathbb{R}$ be a parameter to be fixed later (in particular, the forthcoming choice of $\varepsilon$ will satisfy $\varepsilon/\lambda_m+2\geq0$).
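As a quick sanity check, not part of the proof, the one-dimensional weighted Hardy inequality~\eqref{eq:1d-Hardy} can be tested numerically on a concrete profile. The sketch below uses the test function $f(r)=e^{-r}$ (an arbitrary choice made for this illustration, rendering both sides finite for $t>-1$), for which both integrals reduce to Gamma functions:

```python
from math import gamma

# Check of the 1D weighted Hardy inequality
#   int_0^inf |f'(r)|^2 r^{t+2} dr >= ((t+1)/2)^2 int_0^inf |f(r)|^2 r^t dr
# for the illustrative profile f(r) = exp(-r) (an assumption of this sketch).
# Both sides reduce to Gamma integrals: int_0^inf e^{-2r} r^s dr = Gamma(s+1)/2^{s+1}.

def lhs(t):
    # f'(r) = -e^{-r}, so |f'(r)|^2 r^{t+2} integrates to Gamma(t+3)/2^{t+3}
    return gamma(t + 3) / 2 ** (t + 3)

def rhs(t):
    # ((t+1)/2)^2 times Gamma(t+1)/2^{t+1}
    return ((t + 1) / 2) ** 2 * gamma(t + 1) / 2 ** (t + 1)

for t in [-0.5, 0.0, 1.0, 2.5, 5.0]:
    assert lhs(t) >= rhs(t), (t, lhs(t), rhs(t))
print("Hardy inequality holds at all sampled weights t")
```

For this profile the ratio of the two sides is $(t+2)/(t+1)>1$, consistent with the fact that the best constant in~\eqref{eq:1d-Hardy} is not attained.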
We split $I$ as $I=I_{1, \varepsilon} + I_{2,\varepsilon},$ where \begin{equation*} I_{1,\varepsilon}:=\sum_{\substack{m\in \mathcal{I}\\\lambda_m\neq 0}} \left\{ \int_0^\infty |f_m''(r)|^2 r^{d-\alpha-1}\, dr + [(d-1)(\alpha + 1) -\varepsilon] \int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr \right\}, \end{equation*} and \begin{equation*} I_{2,\varepsilon}:=\sum_{\substack{m\in \mathcal{I}\\ \lambda_m\neq 0}} \lambda_m \left[ \left(\frac{\varepsilon}{\lambda_m} + 2 \right) \int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr + [(\alpha+2)(d-\alpha-4) + \lambda_m] \int_0^\infty |f_m(r)|^2 r^{d-\alpha-5}\, dr \right]. \footnote{As we will see below, our choice of the parameter $\varepsilon$ will depend on $m,$ therefore the notation $I_{1,\varepsilon}$ and $I_{2,\varepsilon}$ used for these two sums is not entirely correct. A better choice would have been to consider the splitting $I=\sum_{\substack{m\in \mathcal{I}\\ \lambda_m\neq 0}}(I_{1, \varepsilon_m} + I_{2, \varepsilon_m}),$ with $I_{1,\varepsilon_m}$ and $I_{2,\varepsilon_m}$ being the terms inside the sums over $m.$ We nevertheless refrain from this in order not to weigh down the notation.} \end{equation*} By~\eqref{eq:1d-Hardy}, we get \begin{equation}\label{eq:last-first} I_{1,\varepsilon}\geq \sum_{\substack{m\in \mathcal{I}\\ \lambda_m\neq 0}}\left[\frac{(d+\alpha)^2}{4} - \varepsilon \right] \int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr, \end{equation} \begin{equation}\label{eq:last-second} I_{2,\varepsilon}\geq \sum_{\substack{m\in \mathcal{I}\\ \lambda_m\neq 0}} \lambda_m \left[\frac{\varepsilon}{\lambda_m}\frac{(d-\alpha-4)^2}{4} + \frac{(d-\alpha-4)(d+\alpha)}{2} + \lambda_m\right]\int_0^\infty |f_m(r)|^2r^{d-\alpha-5}\, dr.
\end{equation} Let $\varepsilon$ be chosen such that \begin{equation*} \frac{(d+\alpha)^2}{4}-\varepsilon = \frac{\varepsilon}{\lambda_m}\frac{(d-\alpha-4)^2}{4} + \frac{(d-\alpha-4)(d+\alpha)}{2} + \lambda_m, \end{equation*} which yields \begin{equation*} \varepsilon=\frac{\lambda_m[(d+\alpha)(-d+3\alpha+8)-4\lambda_m]}{4\lambda_m +(d-\alpha-4)^2}. \end{equation*} We stress that with this choice of $\varepsilon$ one has $\tfrac{\varepsilon}{\lambda_m} + 2\geq 0$ (this in particular justifies the application of~\eqref{eq:1d-Hardy} to $(\tfrac{\varepsilon}{\lambda_m} + 2)\int_0^\infty |f_m'(r)|^2r^{d-\alpha-3}\, dr$ above). Indeed \begin{equation*} \frac{\varepsilon}{\lambda_m} + 2= 1 + \frac{4(\alpha+ 2)^2}{4\lambda_m + (d-\alpha-4)^2}\geq 0. \end{equation*} In addition, one has \begin{equation}\label{eq:I} I=I_{1,\varepsilon} + I_{2,\varepsilon} \geq\sum_{\substack{m\in \mathcal{I}\\ \lambda_m\neq 0}} \frac{(4\lambda_m + (d+\alpha)(d-\alpha-4))^2}{4(4\lambda_m +(d-\alpha-4)^2)} \left \{ \int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr + \lambda_m \int_0^\infty |f_m(r)|^2r^{d-\alpha-5}\, dr \right \}.
\end{equation} Plugging estimates~\eqref{eq:I} and~\eqref{eq:II} in~\eqref{eq:I+II} we have \begin{equation}\label{eq:final} \begin{split} &\int_{\mathbb{R}^d} \frac{|\mathcal{L}\psi(x)|^2}{|x|^\alpha}\, dx\\ &\geq \min \Bigg ( \tfrac{(d+\alpha)^2}{4}; \min_{\substack{m\in \mathcal{I}\\ \lambda_m\neq 0}} \tfrac{(4\lambda_m + (d+\alpha)(d-\alpha-4))^2}{4(4\lambda_m +(d-\alpha-4)^2)} \Bigg) \sum_{m\in \mathcal{I}} \left \{ \int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr + \lambda_m \int_0^\infty |f_m(r)|^2r^{d-\alpha-5}\, dr \right \} \\ &= \min \Bigg ( \tfrac{(d+\alpha)^2}{4}; \min_{\substack{m\in \mathcal{I}\\ \lambda_m\neq 0}} \tfrac{(4\lambda_m + (d+\alpha)(d-\alpha-4))^2}{4(4\lambda_m +(d-\alpha-4)^2)} \Bigg) \int_{\mathbb{R}^d} \frac{\mathcal{D}(\psi)(x)}{|x|^{\alpha+ 2}}\, dx, \end{split} \end{equation} where in the last identity we have used~\eqref{eq:first-order}. Notice that if $d-\alpha-4\neq 0,$ then we have \begin{equation*} \frac{(4\lambda_m + (d+\alpha)(d-\alpha-4))^2}{4(4\lambda_m +(d-\alpha-4)^2)}=\frac{(d+\alpha)^2}{4}, \qquad \text{if } \lambda_m=0. \end{equation*} This allows us to write the minimum in~\eqref{eq:final} in a more compact form, thus~\eqref{eq:final} can be rewritten as \begin{equation*} \int_{\mathbb{R}^d} \frac{|\mathcal{L}\psi(x)|^2}{|x|^{\alpha}}\, dx \geq \min_{m\in \mathcal{I}} \frac{(4\lambda_m + (d+\alpha)(d-\alpha-4))^2}{4(4\lambda_m +(d-\alpha-4)^2)} \int_{\mathbb{R}^d} \frac{\mathcal{D}(\psi)(x)}{|x|^{\alpha+ 2}}\, dx. \end{equation*} On the other hand, if $d-\alpha-4=0$ the minimum in~\eqref{eq:final} becomes $\min \big((d-2)^2; \min_{\substack{m\in \mathcal{I}\\\lambda_m\neq 0}} \lambda_m\big ).$ This concludes the proof. \end{proof} \medskip \noindent We now pass to the proof of the Hardy-type inequality contained in Theorem~\ref{thm:Hardy-cdc}. 
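As a side remark, the elementary but error-prone algebra behind the choice of $\varepsilon$ in the proof above can be spot-checked numerically. The following sketch, with arbitrarily chosen sample values of $(d,\alpha,\lambda_m)$, verifies the identities relating~\eqref{eq:last-first},~\eqref{eq:last-second} and the constant in~\eqref{eq:I}:

```python
# Numerical spot-check of the algebra behind the choice of epsilon
# (notation as in the text). The sample triples (d, alpha, lambda_m)
# below are arbitrary; this is an illustration, not part of the proof.

def check(d, a, lam):
    D = d - a - 4.0                      # the quantity d - alpha - 4
    Q = 4.0 * lam + D * D
    eps = lam * ((d + a) * (-d + 3.0 * a + 8.0) - 4.0 * lam) / Q
    # the two constants that the choice of epsilon equalises
    c1 = (d + a) ** 2 / 4.0 - eps
    c2 = eps / lam * D * D / 4.0 + D * (d + a) / 2.0 + lam
    # the common value appearing in the final constant of eq. (I)
    C = (4.0 * lam + (d + a) * D) ** 2 / (4.0 * Q)
    assert abs(c1 - c2) < 1e-9, (c1, c2)
    assert abs(c1 - C) < 1e-9, (c1, C)
    # eps/lam + 2 = 1 + 4(alpha+2)^2 / (4 lam + (d-alpha-4)^2) >= 0
    assert abs(eps / lam + 2.0 - (1.0 + 4.0 * (a + 2.0) ** 2 / Q)) < 1e-9
    assert eps / lam + 2.0 >= 0.0

for d, a, lam in [(5.0, 0.0, 2.0), (6.0, 1.0, 3.0), (9.0, 2.0, 0.5), (4.0, -1.0, 7.0)]:
    check(d, a, lam)
print("epsilon algebra verified at all sample points")
```

The check also confirms directly that $\varepsilon/\lambda_m+2\geq 0$, so the application of~\eqref{eq:1d-Hardy} in the proof is legitimate at the sampled values.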
\begin{proof}[Proof of Theorem~\ref{thm:Hardy-cdc}] From~\eqref{eq:first-order} (replacing $\alpha + 2$ with $\beta$) one has \begin{equation*} \int_{\mathbb{R}^d} \frac{\mathcal{D}(\psi)(x)}{|x|^{\beta}}\, dx= \sum_{m\in \mathcal{I}} \left \{\int_0^\infty |f_m'(r)|^2r^{d-\beta-1}\, dr + \lambda_m \int_0^\infty |f_m(r)|^2 r^{d-\beta-3} \,dr \right \}. \end{equation*} Using in the first integral of the right hand side of this identity the 1D-weighted Hardy inequality~\eqref{eq:1d-Hardy}, one gets \begin{equation*} \begin{split} \int_{\mathbb{R}^d} \frac{\mathcal{D}(\psi)(x)}{|x|^{\beta}}\, dx &\geq \min_{m\in \mathcal{I}} \Big \{ \lambda_m + \frac{(d-\beta -2)^2}{4} \Big\} \sum_{m\in \mathcal{I}} \int_0^\infty |f_m(r)|^2 r^{d-\beta-3}\, dr\\ &=\min_{m\in \mathcal{I}} \Big \{ \lambda_m + \frac{(d-\beta -2)^2}{4} \Big\}\int_{\mathbb{R}^d} \frac{|\psi(x)|^2}{|x|^{\beta +2}}\,dx, \end{split} \end{equation*} where in the last identity we just used Parseval's identity as in the proof of Theorem~\ref{thm:main-general}. This concludes the proof. \end{proof} \section{Proof of Theorem~\ref{thm:minimizing}. Optimality of $C(d,\alpha).$ }\label{sec:optimality} Let $\psi_\epsilon$ be the sequence defined in~\eqref{eq:minimizing}. We consider first the case $d-\alpha-4\neq 0$ or $d-\alpha-4=0$ and $C(d,\alpha)=\lambda_{m_0}.$ To shorten the notation we write \begin{equation*} \psi_\epsilon(x)= f_\epsilon(r)u_{m_0}(\omega), \end{equation*} where $f_\epsilon(r)$ represents the radial part of $\psi_\epsilon,$ namely $f_\epsilon(r):=r^{-\frac{(d-\alpha-4)}{2}}g_\epsilon(r)$ with $g_\epsilon$ defined in~\eqref{eq:cutoff}.
As in Lemma~\ref{lemma:1d-reduction} one easily has \begin{equation}\label{eq:lemma-minimizing} \begin{split} & \begin{multlined} \int_{\mathbb{R}^d} \frac{|\mathcal{L}\psi_\epsilon(x)|^2}{|x|^\alpha}\, dx= \int_0^\infty |f_\epsilon''(r)|^2r^{d-\alpha-1}\, dr + [(d-1)(\alpha +1) + 2\lambda_{m_0}]\int_0^\infty |f_\epsilon'(r)|^2 r^{d-\alpha-3} \,dr \\ + \lambda_{m_0} [(\alpha + 2)(d-\alpha -4) + \lambda_{m_0}] \int_0^\infty |f_\epsilon(r)|^2 r^{d-\alpha-5} \,dr, \end{multlined} \\ &\int_{\mathbb{R}^d} \frac{\mathcal{D}(\psi_\epsilon)(x)}{|x|^{\alpha+2}}\, dx= \int_0^\infty |f_\epsilon'(r)|^2r^{d-\alpha-3}\, dr + \lambda_{m_0} \int_0^\infty |f_\epsilon(r)|^2 r^{d-\alpha-5} \,dr. \end{split} \end{equation} Differentiating $f_\epsilon$ with respect to $r$ gives \begin{equation}\label{eq:der1} f_\epsilon'(r)= - \frac{(d-\alpha-4)}{2} r^{-\frac{d-\alpha-2}{2}} g_\epsilon(r) + r^{-\frac{d-\alpha-4}{2}} g_\epsilon'(r), \end{equation} and \begin{equation}\label{eq:der2} f_\epsilon''(r)= \frac{(d-\alpha-4)}{2}\frac{(d-\alpha-2)}{2}r^{-\frac{d-\alpha}{2}} g_\epsilon(r) -2\frac{(d-\alpha-4)}{2}r^{-\frac{d-\alpha-2}{2}}g_\epsilon'(r) +r^{-\frac{d-\alpha-4}{2}}g_\epsilon''(r). 
\end{equation} From the definition of $g_\epsilon$ in~\eqref{eq:cutoff}, the integrals in~\eqref{eq:lemma-minimizing} are supported over the interval $[\epsilon, 1/\epsilon].$ Now we consider separately the contributions of those integrals over the three sub-intervals $[\epsilon,2\epsilon],$ $[2\epsilon, 1/2\epsilon]$ and $[1/2\epsilon,1/\epsilon].$ We will see that the sole $\epsilon$-dependent contribution comes from the integration over $[2\epsilon,1/2\epsilon],$ whereas the integrals over $[\epsilon, 2\epsilon]$ and $[1/2\epsilon,1/\epsilon]$ are $\mathcal{O}(1)$ in the limit as $\epsilon$ goes to $0.$ We start by considering the integrals over $[\epsilon,2\epsilon].$ Using the explicit expressions for $f_\epsilon'$ and $f_\epsilon''$ in~\eqref{eq:der1} and~\eqref{eq:der2} respectively, one has \begin{multline}\label{eq:int1} \int_\epsilon^{2\epsilon} |f_\epsilon''(r)|^2r^{d-\alpha-1}\, dr\\ \leq 3\Bigg\{ \frac{(d-\alpha-4)^2}{4} \frac{(d-\alpha-2)^2}{4}\int_\epsilon^{2\epsilon} r^{-1}g_\epsilon^2(r)\,dr +4\frac{(d-\alpha-4)^2}{4}\int_\epsilon^{2\epsilon} r g_\epsilon'^{\,2}(r)\, dr + \int_\epsilon^{2\epsilon} r^3g_\epsilon''^{\,2}(r)\, dr \Bigg\}. \end{multline} Now, using again the properties of the function $g_\epsilon,$ it is easy to see that \begin{equation}\label{eq:no-dep} \begin{split} &\int_\epsilon^{2\epsilon} r^{-1}g_\epsilon^2(r)\, dr \leq \int_\epsilon^{2\epsilon} r^{-1}\, dr=\ln(2);\\ &\int_\epsilon^{2\epsilon} r g_\epsilon'^{\,2}(r)\, dr \leq 2\epsilon \Big(\frac{c}{\epsilon}\Big)^2\epsilon=2c^2;\\ &\int_\epsilon^{2\epsilon} r^3g_\epsilon''^{\,2}(r)\, dr \leq (2\epsilon)^3\Big(\frac{c}{\epsilon^2}\Big)^2\epsilon=8c^2. \end{split} \end{equation} In particular, the three integrals above do not depend on $\epsilon,$ therefore from~\eqref{eq:int1} we have \begin{equation*} \int_\epsilon^{2\epsilon} |f_\epsilon''(r)|^2r^{d-\alpha-1}\, dr=\mathcal{O}(1).
\end{equation*} Similarly, one has \begin{equation*} \int_\epsilon^{2\epsilon} |f_\epsilon'(r)|^2r^{d-\alpha-3}\, dr \leq 2\Bigg\{ \frac{(d-\alpha-4)^2}{4} \int_\epsilon^{2\epsilon} r^{-1} g_\epsilon^2(r)\, dr +\int_\epsilon^{2\epsilon} r g_\epsilon'^{\,2}(r)\,dr \Bigg\}, \end{equation*} and from~\eqref{eq:no-dep} \begin{equation*} \int_\epsilon^{2\epsilon} |f_\epsilon'(r)|^2r^{d-\alpha-3}\, dr=\mathcal{O}(1). \end{equation*} Analogously, \begin{equation*} \int_\epsilon^{2\epsilon} |f_\epsilon(r)|^2r^{d-\alpha-5}\, dr =\int_\epsilon^{2\epsilon} r^{-1}g_\epsilon^2(r)\, dr=\mathcal{O}(1). \end{equation*} To sum up, one has \begin{equation}\label{eq:O(1)} \begin{split} \int_\epsilon^{2\epsilon} |f_\epsilon''(r)|^2r^{d-\alpha-1}\, dr &=\mathcal{O}(1),\\ \int_\epsilon^{2\epsilon} |f_\epsilon'(r)|^2r^{d-\alpha-3}\, dr &=\mathcal{O}(1),\\ \int_\epsilon^{2\epsilon} |f_\epsilon(r)|^2r^{d-\alpha-5}\, dr &=\mathcal{O}(1). \end{split} \end{equation} On $[2\epsilon, 1/2\epsilon]$ we have $g_\epsilon=1,$ and $f_\epsilon,$ $f_\epsilon'$ and $f_\epsilon''$ take the particularly simple form \begin{equation*} f_\epsilon(r)=r^{-\frac{d-\alpha-4}{2}}; \qquad f_\epsilon'(r)=-\frac{(d-\alpha-4)}{2}r^{-\frac{d-\alpha-2}{2}}; \qquad f_\epsilon''(r)=\frac{(d-\alpha-4)}{2}\frac{(d-\alpha-2)}{2}r^{-\frac{d-\alpha}{2}}. \end{equation*} Now, a direct computation gives \begin{equation}\label{eq:non-trivial} \begin{split} \int_{2\epsilon}^{1/2\epsilon} |f_\epsilon''(r)|^2r^{d-\alpha-1}\, dr &=-\frac{(d-\alpha-4)^2}{4}\frac{(d-\alpha-2)^2}{4} \ln(4\epsilon^2),\\ \int_{2\epsilon}^{1/2\epsilon} |f_\epsilon'(r)|^2r^{d-\alpha-3}\, dr &=-\frac{(d-\alpha-4)^2}{4}\ln(4\epsilon^2),\\ \int_{2\epsilon}^{1/2\epsilon} |f_\epsilon(r)|^2r^{d-\alpha-5}\, dr &=-\ln(4\epsilon^2).
\end{split} \end{equation} In the interval $[1/2\epsilon,1/\epsilon]$ analogous computations as the ones in $[\epsilon, 2\epsilon]$ give \begin{equation}\label{eq:O(1)bis} \begin{split} \int_{1/2\epsilon}^{1/\epsilon} |f_\epsilon''(r)|^2r^{d-\alpha-1}\, dr &=\mathcal{O}(1),\\ \int_{1/2\epsilon}^{1/\epsilon} |f_\epsilon'(r)|^2r^{d-\alpha-3}\, dr &=\mathcal{O}(1),\\ \int_{1/2\epsilon}^{1/\epsilon} |f_\epsilon(r)|^2r^{d-\alpha-5}\, dr &=\mathcal{O}(1). \end{split} \end{equation} Using~\eqref{eq:O(1)},~\eqref{eq:non-trivial} and~\eqref{eq:O(1)bis} in~\eqref{eq:lemma-minimizing} we have \begin{equation*} \begin{split} \frac{\int_{\mathbb{R}^d} |\mathcal{L}\psi_\epsilon(x)|^2/|x|^\alpha\,dx}{\int_{\mathbb{R}^d} \mathcal{D}(\psi_\epsilon)(x)/|x|^{\alpha+2}\, dx} &= \frac{\int_{\mathbb{R}^d\cap \{2\epsilon\leq |x|\leq 1/2\epsilon\}} |\mathcal{L}\psi_\epsilon(x)|^2/|x|^\alpha\, dx + \mathcal{O}(1) }{\int_{\mathbb{R}^d\cap \{2\epsilon\leq |x|\leq 1/2\epsilon\}} \mathcal{D}(\psi_\epsilon)(x)/|x|^{\alpha+2}\, dx + \mathcal{O}(1)}\\ &=\frac{ \big[ (d-\alpha-4)(d+\alpha) +4\lambda_{m_0}\big]^2 +\mathcal{O}(1/\ln(4\epsilon^2)) } { 4[(d-\alpha-4)^2+ 4\lambda_{m_0}] +\mathcal{O}(1/\ln(4\epsilon^2)) }\vspace{0.1cm}\\ &\searrow C(d,\alpha), \qquad \text{as }\epsilon\searrow 0. \end{split} \end{equation*} Now we consider the case $d-\alpha-4=0$ and $C(d,\alpha)=(d-2)^2.$ In this case $\psi_\epsilon(x):=h_\epsilon(r),$ \emph{i.e.} $\psi_\epsilon(x)$ is radial. Since the spherical part is missing, performing analogous computations as in Lemma~\ref{lemma:1d-reduction} one gets \begin{equation*} \begin{split} \int_{\mathbb{R}^d} \frac{|\mathcal{L}\psi_\epsilon(x)|^2}{|x|^\alpha}\, dx &= |\mathbb{S}^{d-1}|\bigg(\int_0^\infty |h_\epsilon''(r)|^2r^3\, dr + [(d-1)(d-3)]\int_0^\infty |h_\epsilon'(r)|^2r\, dr\bigg)\\ \int_{\mathbb{R}^d} \frac{\mathcal{D}(\psi_\epsilon)(x)}{|x|^{\alpha+2}}\, dx& =|\mathbb{S}^{d-1}|\int_0^\infty |h_\epsilon'(r)|^2r\, dr.
\end{split} \end{equation*} From the definition~\eqref{eq:h_eps} of $h_\epsilon$ one has \begin{equation*} h_\epsilon'(r)=r^{-1}g_\epsilon(r), \qquad h_\epsilon''(r)=-r^{-2}g_\epsilon(r) + r^{-1}g_\epsilon'(r). \end{equation*} As above we consider separately the integrals over the sub-intervals $[\epsilon,2\epsilon],$ $[2\epsilon,1/2\epsilon]$ and $[1/2\epsilon,1/\epsilon].$ In $[\epsilon,2\epsilon]$ one has \begin{equation*} \int_\epsilon^{2\epsilon} |h_\epsilon''|^2r^3\,dr\leq 2\Bigg(\int_\epsilon^{2\epsilon}r^{-1}g_\epsilon^2(r)\, dr + \int_\epsilon^{2\epsilon} r g_\epsilon'^{\,2}(r)\, dr\Bigg) \end{equation*} and \begin{equation*} \int_\epsilon^{2\epsilon} |h_\epsilon'(r)|^2r\,dr =\int_\epsilon^{2\epsilon} r^{-1}g_\epsilon^2(r)\, dr. \end{equation*} Using~\eqref{eq:no-dep} one has \begin{equation*} \begin{split} \int_\epsilon^{2\epsilon} |h_\epsilon''|^2r^3\,dr&=\mathcal{O}(1),\\ \int_\epsilon^{2\epsilon} |h_\epsilon'(r)|^2r\,dr&=\mathcal{O}(1). \end{split} \end{equation*} We now consider the integrals over $[2\epsilon, 1/2\epsilon].$ Here $h_\epsilon'(r)=r^{-1}$ and $h_\epsilon''(r)=-r^{-2}.$ Thus \begin{equation*} \int_{2\epsilon}^{1/2\epsilon} |h_\epsilon''|^2r^3\,dr =\int_{2\epsilon}^{1/2\epsilon} |h_\epsilon'|^2r\,dr =-\ln(4\epsilon^2). \end{equation*} Finally, the integrals over $[1/2\epsilon,1/\epsilon]$ can be treated similarly to the ones over $[\epsilon, 2\epsilon].$ This gives \begin{equation*} \begin{split} \int_{1/2\epsilon}^{1/\epsilon} |h_\epsilon''|^2r^3\,dr&=\mathcal{O}(1),\\ \int_{1/2\epsilon}^{1/\epsilon} |h_\epsilon'(r)|^2r\,dr&=\mathcal{O}(1).
\end{split} \end{equation*} These facts together give \begin{equation*} \begin{split} \frac{\int_{\mathbb{R}^d} |\mathcal{L}\psi_\epsilon(x)|^2/|x|^\alpha\,dx}{\int_{\mathbb{R}^d} \mathcal{D}(\psi_\epsilon)(x)/|x|^{\alpha+2}\, dx} &= \frac{\int_{\mathbb{R}^d\cap \{2\epsilon\leq |x|\leq 1/2\epsilon\}} |\mathcal{L}\psi_\epsilon(x)|^2/|x|^\alpha\, dx + \mathcal{O}(1) }{\int_{\mathbb{R}^d\cap \{2\epsilon\leq |x|\leq 1/2\epsilon\}} \mathcal{D}(\psi_\epsilon)(x)/|x|^{\alpha+2}\, dx + \mathcal{O}(1)}\\ &=\frac{ (d-2)^2 +\mathcal{O}(1/\ln(4\epsilon^2)) } { 1 +\mathcal{O}(1/\ln(4\epsilon^2)) }\vspace{0.1cm}\\ &\searrow (d-2)^2, \qquad \text{as }\epsilon\searrow 0. \end{split} \end{equation*} This concludes the proof of the optimality of $C(d,\alpha).$ \medskip \noindent It remains to show that the constant $C(d,\alpha)$ is not attained. This fact is a consequence of the non-attainability of the best constant in the 1D-Hardy inequality~\eqref{eq:1d-Hardy}. Indeed, going back through the proof of Theorem~\ref{thm:main-general}, one realizes that for $C(d,\alpha)$ to be attained, it is necessary to have equality in the estimates where we applied~\eqref{eq:1d-Hardy}. More precisely, we want to have equality in \begin{equation}\label{eq:first-att} \int_0^\infty |f_m''(r)|^2 r^{d-\alpha-1}\, dr \geq \Big( \frac{d-\alpha-2}{2} \Big)^2 \int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr, \end{equation} or, equivalently, in \begin{equation}\label{eq:last-att} \int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr \geq \Big( \frac{d-\alpha-4}{2} \Big)^2 \int_0^\infty |f_m(r)|^2 r^{d-\alpha-5}\, dr. \end{equation} Notice that~\eqref{eq:last-att} is also a consequence of the identity \begin{equation*} \int_0^\infty |f_m'(r)|^2 r^{d-\alpha-3}\, dr - \Big( \frac{d-\alpha-4}{2} \Big)^2 \int_0^\infty |f_m(r)|^2 r^{d-\alpha-5}\, dr =\int_0^\infty \Big|\big(r^\frac{d-\alpha-4}{2}f_m(r)\big)'\Big|^2 r\, dr. 
\end{equation*} In view of the last identity, equality in~\eqref{eq:last-att} is achieved if \begin{equation*} \big(r^\frac{d-\alpha-4}{2}f_m(r)\big)'=0, \end{equation*} which leads to the family of solutions \begin{equation*} f_m(r)=a_m r^{-\frac{d-\alpha-4}{2}} + b_m, \end{equation*} for some real constants $a_m, b_m.$ Thus, the fundamental system of solutions is given by $\{r^{-\frac{d-\alpha-4}{2}},1\}.$ Notice that $f_m(r)=1$ is not possible since constant functions are not admissible for inequality~\eqref{eq:last-att}. Moreover, $f_m(r)=r^{-\frac{d-\alpha-4}{2}}$ is not admissible because none of the terms in~\eqref{eq:first-att} are integrable. Thus, we conclude that $C(d,\alpha)$ is not attained. \qed \section{Proof of the particular cases: Theorem~\ref{thm:general-electric}, Theorem~\ref{thm:HB-anydimension} and Theorem~\ref{thm:monopole}}\label{sec:consequences} In order to prove Theorem~\ref{thm:general-electric}, Theorem~\ref{thm:HB-anydimension} and Theorem~\ref{thm:monopole} one simply has to show that the corresponding operators can be recast into the form of the general operator $\mathcal{L}$ defined in~\eqref{eq:operator}. \begin{proof}[Proof of Theorem~\ref{thm:general-electric}] Consider the operator $-\Delta_{a(\theta)}:=-\Delta + \tfrac{a(\theta)}{|x|^2}.$ Since the function $a=a(\theta)$ depends only on the spherical variable $\theta,$ it is easy to see that $-\Delta_{a(\theta)}$ can be written more conveniently as \begin{equation*} -\Delta_{a(\theta)}=L_r + \frac{1}{r^2} (-\Delta_{\mathbb{S}^{d-1}} + a(\theta)), \qquad L_r=-\frac{\partial^2}{\partial r^2}- \frac{d-1}{r}\frac{\partial}{\partial r}, \end{equation*} thus the operator $\Lambda_\omega$ in~\eqref{eq:operator} is represented by the non-negative, self-adjoint operator $-\Delta_{\mathbb{S}^{d-1}} + a(\theta)$ in $L^2(\mathbb{S}^{d-1};d\theta).$ This operator has been extensively studied (see \emph{e.g.}~\cite{FMT2007,FFT2011, FFFP2}).
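For $d=2$ the operator $-\Delta_{\mathbb{S}^{1}}+a(\theta)=-\partial_\theta^2+a(\theta)$ acts on $2\pi$-periodic functions, and the lower bound $\mu_0\geq \inf a$ for its first eigenvalue follows from the min-max principle. This can be observed numerically with a finite-difference discretisation; the potential below is an arbitrary smooth choice made for this illustration only:

```python
import numpy as np

# Finite-difference check (d = 2) that the lowest eigenvalue mu_0 of
# Lambda_omega = -d^2/dtheta^2 + a(theta) on the circle satisfies
# mu_0 >= min a. The potential a is an illustrative choice.
N = 400
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
h = theta[1] - theta[0]
a = 1.0 + 0.5 * np.cos(theta)          # illustrative potential, min a = 0.5

# periodic second-difference matrix discretising -d^2/dtheta^2
L = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h ** 2
L[0, -1] = L[-1, 0] = -1.0 / h ** 2    # wrap-around entries for periodicity

mu = np.linalg.eigvalsh(L + np.diag(a))  # ascending eigenvalues
print("mu_0 =", mu[0], ">= min a =", a.min())
assert mu[0] >= a.min() - 1e-8
```

The discrete operator is the sum of a positive semidefinite cycle Laplacian and $\mathrm{diag}(a)$, so the bound holds exactly at the discrete level as well; testing with the constant vector also shows $\mu_0\leq \frac{1}{2\pi}\int_0^{2\pi} a$.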
In particular in~\cite[Lemma 2.1]{FMT2007} it has been proved that $\Lambda_\omega=-\Delta_{\mathbb{S}^{d-1}} + a(\theta)$ on $\mathbb{S}^{d-1}$ admits a divergent sequence of eigenvalues $\mu_k,$ $k\in \mathbb{N}_0,$ with finite multiplicity, the first of which satisfies $\mu_0\geq \ess \inf_{\mathbb{S}^{d-1}} a.$ Therefore, the hypotheses of Theorem~\ref{thm:main-general} are satisfied. Thus Theorem~\ref{thm:general-electric} follows from identity~\eqref{eq:cdc-corr} as soon as one checks that the Carré du Champ in this case is given by $\Gamma(\psi)= |\nabla \psi|^2 + \frac{a}{2}|\psi|^2/|x|^2$ and notices that $\Lambda_\omega |x|^\beta=a|x|^\beta.$ \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:HB-anydimension}] As in the previous case we show that the Aharonov-Bohm magnetic Laplacian $-\Delta_{A}$ can be written in the form~\eqref{eq:operator} in any dimension $d\geq 2.$ For $d\geq 2,$ we take the transformation from Cartesian to spherical coordinates, namely from $x=(x_1,x_2, \dots, x_d)\in \mathbb{R}^d$ to $(r, \theta_1, \dots, \theta_{d-1})\in (0,\infty)\times \mathbb{S}^{d-1},$ where $\mathbb{S}^{d-1}$ is the $(d-1)$-dimensional sphere equipped with the Hausdorff measure in $\mathbb{R}^d,$ given by \begin{equation*} \begin{split} &x_1=r\cos\theta_1,\\ &x_j=r\cos \theta_j \prod_{k=1}^{j-1}\sin \theta_k, \qquad j\in \{2,3\dots, d-1\},\\ &x_d=r\prod_{k=1}^{d-1}\sin \theta_k.
\end{split} \end{equation*} The corresponding orthogonal unit vectors are given by \begin{equation*} \begin{split} &e_r:=(\cos \theta_1, \cos \theta_2\sin \theta_1, \dots, \cos \theta_{d-1}\prod_{k=1}^{d-2}\sin \theta_k, \prod_{k=1}^{d-1}\sin \theta_k),\\ &\begin{multlined} e_{\theta_j}:=(\underbrace{0, \dots, 0}_{j-1}, -\sin \theta_j, \cos \theta_{j+1} \cos \theta_j, \cos\theta_{j+2}\cos \theta_j \sin \theta_{j+1}, \dots,\\ \cos\theta_{d-1}\cos \theta_j \prod_{k=1, k\neq j}^{d-2}\sin \theta_k, \cos \theta_j \prod_{k=1, k\neq j}^{d-1}\sin \theta_k), \qquad \qquad j\in \{1, \dots, d-2\}, \end{multlined} \\ &e_{\theta_{d-1}}:=(\underbrace{0, \dots, 0}_{d-2},-\sin \theta_{d-1}, \cos \theta_{d-1}). \end{split} \end{equation*} Without loss of generality we can assume that the function $\Psi=\Psi(\theta_{d-1})$ is constant: indeed, $A$ as defined in~\eqref{eq:AB-gen} is gauge equivalent to the vector potential obtained by replacing $\Psi$ with its average $\widetilde{\Psi}$ (see~\cite[Section 5.4.2]{BEL} for more details). Using spherical coordinates, the Aharonov-Bohm vector potential can then be rewritten as \begin{equation}\label{eq:A-polar-coordinates} A:= \begin{cases} \frac{1}{r}\widetilde{\Psi} e_{\theta_1}, \qquad &\text{if } d=2,\\ \frac{1}{r\prod_{k=1}^{d-2} \sin \theta_k} \widetilde{\Psi} e_{\theta_{d-1}}, \qquad &\text{if } d\geq3, \end{cases} \qquad \widetilde{\Psi}:=\frac{1}{2\pi}\int_0^{2\pi}\Psi(\theta)\, d\theta. \end{equation}
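The gauge reduction from $\Psi(\theta_{d-1})$ to the constant $\widetilde{\Psi}$ rests on the fact that the gauge function $\chi(\theta):=\int_0^{\theta}\big(\Psi(s)-\widetilde{\Psi}\big)\, ds$ is single-valued on the circle precisely because $\widetilde{\Psi}$ is the mean of $\Psi$. A minimal numerical illustration, with an arbitrarily chosen $2\pi$-periodic profile $\Psi$ (an assumption of this sketch):

```python
import math

# The reduction of Psi(theta) to its average tilde{Psi} is a gauge
# transformation with gauge function chi(theta) = int_0^theta (Psi - tilde{Psi}).
# chi is single-valued on the circle iff chi(2*pi) = 0, which holds exactly
# when tilde{Psi} is the mean of Psi. The profile below is illustrative only.

def Psi(t):
    return 0.3 + 0.2 * math.sin(t) + 0.1 * math.cos(3 * t)

n = 20000
h = 2 * math.pi / n
avg = sum(Psi(i * h) for i in range(n)) * h / (2 * math.pi)   # tilde{Psi}
chi_end = sum(Psi(i * h) - avg for i in range(n)) * h          # chi(2*pi)

assert abs(avg - 0.3) < 1e-6       # the oscillating terms average out
assert abs(chi_end) < 1e-9         # chi is single-valued on the circle
print("tilde Psi =", avg)
```

Subtracting any constant other than the mean would leave $\chi(2\pi)\neq 0$, so the corresponding "gauge function" would be multivalued and the reduction would fail.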
Recalling the following expression for the gradient in spherical coordinates \begin{equation*} \nabla=e_r \frac{\partial}{\partial r} + \frac{1}{r}e_{\theta_1} \frac{\partial}{\partial \theta_1} +\sum_{j=2}^{d-1} \frac{1}{r \prod_{k=1}^{j-1}\sin \theta_k} e_{\theta_j} \frac{\partial}{\partial \theta_j}, \end{equation*} one checks easily that the magnetic gradient $\nabla_A:=\nabla -iA$ associated to the Aharonov-Bohm magnetic vector potential~\eqref{eq:A-polar-coordinates} can be written as \begin{equation*} \nabla_A=e_r\frac{\partial}{\partial r} + \frac{1}{r}\nabla_{d,\theta}, \qquad \nabla_{d,\theta}=e_{\theta_1}\frac{\partial}{\partial\theta_1} + \sum_{j=2}^{d-2} \frac{1}{\prod_{k=1}^{j-1}\sin \theta_k} e_{\theta_j} \frac{\partial}{\partial \theta_j} + \frac{1}{\prod_{k=1}^{d-2}\sin \theta_k}e_{\theta_{d-1}} \Big(\frac{\partial}{\partial \theta_{d-1}}-i\widetilde{\Psi}\Big). \end{equation*} The corresponding magnetic Laplacian $-\Delta_A:=-\nabla_{\!A}^2$ has the form \begin{equation*} -\Delta_A:=L_r + \frac{1}{r^2}\Lambda_{d,\theta}, \end{equation*} where \begin{equation*} L_r=-\frac{\partial^2}{\partial r^2}-\frac{d-1}{r} \frac{\partial}{\partial r} \qquad \text{and} \qquad \Lambda_{d,\theta}= - \sum_{j=1}^{d-2} \frac{1}{q_j}\Big[(d-j-1)\cot \theta_j \frac{\partial}{\partial \theta_j} + \frac{\partial^2}{\partial \theta_j^2}\Big] + \frac{1}{q_{d-1}}\left(i \frac{\partial}{\partial \theta_{d-1}} + \widetilde{\Psi} \right)^2, \end{equation*} with \begin{equation*} q_j:= \begin{cases} 1, \qquad & \text{if } j=1,\\ \prod_{k=1}^{j-1}\sin^2 \theta_k, \qquad & \text{if } j\geq 2. \end{cases} \end{equation*} An easy computation also shows that the generalisation of the Laplace-Beltrami operator $\Lambda_{d,\theta}$ can be obtained through the angular part of the magnetic gradient as follows: $\Lambda_{d,\theta}=-\nabla_{d,\theta}\cdot \nabla_{d,\theta}.$ Moreover, the following identity can be obtained by integration by parts \begin{equation}\label{eq:grad-LB} \int_{\mathbb{S}^{d-1}} \overline{\psi}\, \Lambda_{d,\theta}\psi\, d\omega=\int_{\mathbb{S}^{d-1}} |\nabla_{d, \theta}\psi|^2\, d\omega. \end{equation} Clearly, the operator $\Lambda_{d,\theta}$ plays the role of $\Lambda_\omega$ in~\eqref{eq:operator}. Moreover, in~\cite[Theorem 3.2]{Thomas2007} (see also~\cite{EL2005}) it is proved that the non-negative, self-adjoint magnetic Laplace-Beltrami operator $\Lambda_\omega:=\Lambda_{d,\theta}$ has spectrum consisting of eigenvalues \begin{equation*} \lambda_m=(m+\widetilde \Psi)(m+\widetilde \Psi + d-2), \end{equation*} where $m\in \mathbb{Z}':=\{m\in \mathbb{Z}\colon m\leq 2-d-\widetilde \Psi \, \text{or } m\geq -\widetilde \Psi\}.$ Using now Theorem~\ref{thm:main-general}, one gets Theorem~\ref{thm:HB-anydimension} as soon as it is shown that \begin{equation}\label{eq:final-cdc} \int_{\mathbb{R}^d} \frac{\mathcal{D}(\psi)(x)}{|x|^{\alpha+2}}\, dx=\int_{\mathbb{R}^d} \frac{|\nabla_A \psi|^2}{|x|^{\alpha+2}}\, dx.
\end{equation} By definition \begin{equation*} \int_{\mathbb{R}^d} \frac{\mathcal{D}(\psi)(x)}{|x|^{\alpha+2}}\, dx= \int_{\mathbb{R}^d} \frac{|\partial_r \psi(x)|^2}{|x|^{\alpha+ 2}}\, dx + \int_{\mathbb{R}^d} \frac{|\Lambda_{d,\theta}^{1/2} \psi(x)|^2}{|x|^{\alpha + 4}}\, dx, \end{equation*} where $\Lambda_{d,\theta}^{1/2}$ denotes the square root of the non-negative, self-adjoint magnetic Laplace-Beltrami operator $\Lambda_{d,\theta}.$ From identity~\eqref{eq:grad-LB} the following chain of identities holds \begin{equation*} \int_{\mathbb{S}^{d-1}} |\Lambda_{d,\theta}^{1/2}\psi|^2\, d\omega =\int_{\mathbb{S}^{d-1}} \overline{\Lambda_{d,\theta}^{1/2}\psi}\Lambda_{d,\theta}^{1/2}\psi\, d\omega =\int_{\mathbb{S}^{d-1}} \overline{\psi} \Lambda_{d,\theta}\psi \, d\omega =\int_{\mathbb{S}^{d-1}} |\nabla_{d,\theta}\psi|^2\, d\omega, \end{equation*} thus one has \begin{equation*} \int_{\mathbb{R}^d} \frac{\mathcal{D}(\psi)(x)}{|x|^{\alpha+2}}\, dx= \int_{\mathbb{R}^d} \frac{|\partial_r \psi(x)|^2}{|x|^{\alpha+ 2}}\, dx + \int_{\mathbb{R}^d} \frac{|\nabla_{d,\theta} \psi(x)|^2}{|x|^{\alpha + 4}}\, dx. \end{equation*} Using that \begin{equation*} |\nabla_A \psi|^2=|\partial_r \psi|^2 + \frac{1}{r^2}|\nabla_{d,\theta}\psi|^2, \end{equation*} then we have~\eqref{eq:final-cdc} and, in turn, Theorem~\ref{thm:HB-anydimension} is proved. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:monopole}] The Hamiltonian of a monopole of degree $g$ in $\mathbb{R}^3$ has been intensively studied in~\cite{CT2010}. In particular it can be shown that the magnetic Laplacian $-\Delta_A$ associated to the vector potential $A$ defined in~\eqref{eq:A-monopole} can be written as \begin{equation*} -\Delta_A=-\frac{\partial^2}{\partial r^2} - \frac{2}{r} \frac{\partial}{\partial r} + \frac{1}{r^2} K_g, \end{equation*} where $K_g$ is the angular magnetic Schrödinger operator (see~\cite{CT2010}) and plays the role of $\Lambda_\omega$ in~\eqref{eq:operator}.
One can also prove~\cite[Theorem 5.13]{CT2010} that the spectrum of $K_g$ is discrete; more precisely, it consists of the sequence $\lambda_k=\frac{1}{4} k(k+2)-g^2,$ $k=2(|g|+l),$ $l\in \mathbb{N}_0.$ Thus, the hypotheses of Theorem~\ref{thm:main-general} are satisfied and therefore Theorem~\ref{thm:monopole} follows. \end{proof} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\section{Introduction}\label{Sec:Intro} For an affine real smooth algebraic hypersurface $X \subset {\mathbb C}^{n+1}$ with real part ${\mathbb R} X \subset {\mathbb R}^{n+1}$ there is a universal inequality between the total curvatures of the real part ${\mathbb R} X$ and of ${\mathbb C} X$ (see~\cite{Ris03}), similar to ``Smith's Inequality'' between the sums of mod 2 Betti numbers. In this paper we prove an analogous result in the tropical setting; it turns out that in the non-singular tropical case this inequality becomes an equality, whereas in the algebraic world the equality holds only up to an arbitrary positive $\epsilon$ and only for special maximal varieties. For plane algebraic curves the only cases in which the equality holds are the line, the ellipse and the parabola.\\ Let us briefly describe the main results of the paper.\\ If $V$ is a tropical hypersurface defined by a polynomial with coefficients in the field of real Puiseux series, it has a real part ${\mathbb R} V$ (see Subsection~\ref{Subsec:Real-Tropical-Hypersurfaces}). Using the fact that $V$ (resp. ${\mathbb R} V$) is a limit of Amoebas (resp. Real Amoebas), we define the total curvature of $V$ (resp. ${\mathbb R} V$) by means of the total curvature of Amoebas, passing to the limit.\\ For the real part ${\mathbb R} V$, which is a polyhedral manifold, we also define its total curvature geometrically (as a polyhedral hypersurface), and call it the ``polyhedral total curvature''.\\ Non-singular tropical hypersurfaces are those whose dual subdivision is primitive; they are the tropical counterpart of primitive T-hypersurfaces of Viro's patchworking (see~\cite{Vir83} and \cite{Vir84}). The main results we prove about these notions are the following: \begin{enumerate} \item The fact that the notions of total curvature and polyhedral total curvature coincide for real non-singular tropical hypersurfaces (Section~\ref{Sec:Polyhedral-Curvature}).
\item A universal inequality between the total curvatures of $V$ and ${\mathbb R} V$ (Subsection~\ref{Subsec:Real-Tropical-Curvature}) based on a similar inequality between the logarithmic Gaussian curvatures of the complex and real parts of a real algebraic hypersurface (Section~\ref{Sec:Amoebas-Curvature}). \item In the non-singular case, this inequality turns out to be an equality (Subsection~\ref{Subsec:Real-Tropical-Curvature}). \item A ``Gauss--Bonnet style'' formula for the total curvature of a non-singular (complex) tropical hypersurface (Subsection \ref{Subsec:Gauss-Bonnet}). \end{enumerate} We would like to thank the referee for their useful comments. The structure of the paper is the following: \smallskip Section 2 is a preliminary one: it introduces notation and basic properties of tropical and real tropical hypersurfaces. \smallskip Section 3 treats the case of amoebas, using the ``logarithmic curvature'' to define their curvature. \smallskip Section 4 describes how we can define the tropical total curvature by passing to the limit from that of amoebas, both in the real and complex cases, and contains the main results of the paper. \smallskip Section 5 is devoted to defining directly the ``polyhedral curvature'' for a real non-singular tropical hypersurface and to proving that this construction gives the same notion of total curvature as the previous one in the tropical non-singular case. \smallskip Section 6 gives some complements and applications, in particular a Gauss--Bonnet style formula for non-singular tropical hypersurfaces, comparing the Euler characteristic of a generic complex hypersurface in $({\mathbb C}^*)^{n+1}$ with the total curvature of its tropicalisation.
\section{Preliminary}\label{Sec:Prelim} \subsection{Total curvature }\label{Subsec:Total-Curvature} In all the paper, ${\mathbb R}^{n+1}$ is considered with its canonical orientation, $\sigma_n$ will be the volume of the unit sphere $S^n \subset {\mathbb R}^{n+1}$, and we will set $\displaystyle a_n = \pi \frac{\sigma_{2n}}{\sigma_{2n +1}}$.\\ We have then: \[ \begin{gathered} \sigma_{2n} = \frac{2 \times (2 \pi)^n}{1\cdot 3\cdots (2n-1)}\\ \sigma_{2n+1} = \frac{(2\pi)^{n+1}}{2\cdot 4 \cdots 2n}\\ a_n = \frac{2\cdot 4 \cdots 2n}{1\cdot 3 \cdots (2n-1)}. \end{gathered} \] Let ${\mathbb R} W \subset {\mathbb R}^{n+1}$ be a smooth oriented hypersurface, $g: {\mathbb R} W \rightarrow {\mathbb R}{\mathbb P}^n$ the Gauss map $x \mapsto n_x$, where $n_x$ is a non-zero normal vector to ${\mathbb R} W$ at $x$. We define the curvature function $x \mapsto k(x)$ on ${\mathbb R} W$ as the Jacobian of $g$. The curvature of a measurable set $U \subset {\mathbb R} W$ is the integral $\int_{U} \vert k(x) \vert dv$ of the curvature function on $U$. The {\bf total curvature} of $ {\mathbb R} W$ is then by definition: $$ \int_{{\mathbb R} W} \vert k(x) \vert dv $$ where $dv$ is the canonical Euclidean volume form on ${\mathbb R} W \subset {\mathbb R}^{n+1}$. It clearly satisfies the following equality: \begin{equation} \label{totcurv} \int_{{\mathbb R} W} \vert k(x) \vert dv = \int_{{\mathbb R}{\mathbb P}^n} \# g^{-1}(\beta) ds \end{equation} where $ds$ is the canonical volume form on ${\mathbb R}{\mathbb P}^n$ and assuming $g^{-1}(\beta)$ is almost everywhere finite.
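These closed formulas are easy to check numerically. The following small Python sketch is ours, for illustration only (the helper names \texttt{sigma} and \texttt{a} are not notation from the text); it verifies the product formulas against the standard expression $\sigma_m=2\pi^{(m+1)/2}/\Gamma(\tfrac{m+1}{2})$ for the volume of $S^m$.

```python
import math

def sigma(m):
    """Volume of the unit m-sphere S^m in R^(m+1):
    sigma_m = 2 * pi^((m+1)/2) / Gamma((m+1)/2)."""
    return 2 * math.pi ** ((m + 1) / 2) / math.gamma((m + 1) / 2)

def a(n):
    """a_n = pi * sigma_{2n} / sigma_{2n+1}."""
    return math.pi * sigma(2 * n) / sigma(2 * n + 1)

# Check the product formulas of the text for small n.
for n in range(1, 6):
    odd = math.prod(range(1, 2 * n, 2))       # 1 * 3 * ... * (2n-1)
    even = math.prod(range(2, 2 * n + 1, 2))  # 2 * 4 * ... * (2n)
    assert math.isclose(sigma(2 * n), 2 * (2 * math.pi) ** n / odd)
    assert math.isclose(sigma(2 * n + 1), (2 * math.pi) ** (n + 1) / even)
    assert math.isclose(a(n), even / odd)

print(sigma(2), sigma(3), a(1))  # 4*pi, 2*pi^2, 2
```

For instance $\sigma_2=4\pi$, $\sigma_3=2\pi^2$ and $a_1=2$.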
One can think of the total curvature as the volume of Im$(g)$ taken ``with multiplicities'', the multiplicity of a point $x$ being the cardinality of the fiber $g^{-1}(x)$.\\ If now $W \subset {\mathbb C}^{n+1}$ is a smooth analytic complex hypersurface, one may define its curvature as the ``Lipschitz--Killing'' curvature $K(x) dw$, where $dw$ is the canonical volume form on $W \subset {\mathbb C}^{n+1}$ and $K : W \rightarrow {\mathbb R}$ the curvature function.\\ Another way, due to Milnor, to define the function $K$ is the following. Let $\gamma_{{\mathbb C}} : W \rightarrow {\mathbb C} P^n$ be the ``complex Gauss map'' $x \mapsto [N_x W]$, where $N_x W$ is the complex normal vector to $W$ at $x$. One then has: \begin{equation} \label{compcurv} (-1)^n K dw = a_n \gamma_{{\mathbb C}}^{*} (dp) \end{equation} where $dp$ is the volume form on ${\mathbb C} P^n$ (see \cite{Lan79}); note that $(-1)^n K(x)$ is a positive function on $W$.\\ The following inequality is proved in \cite{Ris03} in the algebraic case: \begin{equation} \label{ineqcurv} \frac{\sigma_{2 n}}{\sigma_n} \int_{{\mathbb R} W} \vert k \vert dv \leq \int_W \vert K \vert dw \end{equation} This paper is devoted to defining similar notions and proving similar results in the tropical case. In particular we prove the same type of inequality for the logarithmic Gaussian curvatures, which are the natural curvatures in the tropical setting, and study its sharpness. \subsection{Tropical hypersurfaces }\label{Subsec:Tropical-Hypersurfaces} In order to fix the notation and definitions used in this text, we briefly recall basic notions of tropical geometry. We use the following notation: the scalar product is written $z\cdot v$; for $X=(X_1,\dots,X_{n+1})$ and $\alpha=(\alpha_1,\dotsc,\alpha_{n+1})$ we write $X^\alpha:=\prod X_i^{\alpha_i}$; the set of vertices of a polytope $\triangle$ is denoted $Vert(\triangle)$. We tend to identify a point and its coordinate vector when it makes the notation less cumbersome.
We consider the tropical semi-field ${\mathbb T}=({\mathbb R}\cup\{-\infty\},"+","\cdot")$, where the tropical operations are defined by $u"+"v=max\{u,v\}$ and $u"\cdot"v=u+v$. A tropical polynomial $f\in{\mathbb T}[X_1,\dots,X_{n+1}]$ is of the form $$f(X)="\sum_{\alpha\in\mathcal{E}(f)}u_\alpha X^\alpha"=max_{\alpha\in\mathcal{E}(f)}\{ u_\alpha+X\cdot \alpha\}$$ where $\mathcal{E}(f)$ denotes the set of exponents of $f$. The Newton polytope of $f$ will be denoted by $\triangle_f$. Given a tropical polynomial $f$ and a point $\omega\in{\mathbb R}^{n+1}$, the $\omega$-initial part of $f$ is $$In_\omega f(X):="\sum_{\alpha\in\mathcal{E}(f)\mid f(\omega)="u_{\alpha}\omega^\alpha"}u_{\alpha}X^\alpha".$$ A tropical polynomial $f$ determines naturally a polyhedral subdivision of ${\mathbb R}^{n+1}$ whose cells are formed of the sets of points defining the same initial part of $f$. Dually, the set of cells $$\Gamma_f:=(\triangle_{In_\omega f})_{\omega\in{\mathbb R}^{n+1}}$$ realises a regular subdivision of $\triangle_f$. It is dual to the subdivision of ${\mathbb R}^{n+1}$ determined by $f$ in the following usual sense. If $c$ is a cell of the subdivision of ${\mathbb R}^{n+1}$ determined by $f$ there exists a unique dual cell $\check{c}$ of $\Gamma_f$. It satisfies $dim(c)+dim(\check{c})=n+1$, and $c$ and $\check{c}$ generate orthogonal affine subspaces. We say that a tropical polynomial $f$ is \textbf{generic} if $\Gamma_f$ is simplicial, \textbf{non-singular} if all the maximal dimensional cells of $\Gamma_f$ are primitive simplices, and \textbf{primitive} if $\triangle_f$ is a primitive simplex of dimension $n+1$. The \textbf{corner locus} of a tropical polynomial $f\in{\mathbb T}[X]$ is the set of points $\omega\in{\mathbb R}^{n+1}$ where the value of $f$ is attained by at least two of its monomials; equivalently, it is the set of points contained in cells of dimension at most $n$. In this text, such a set will be called a \textbf{tropical hypersurface}.
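To make the evaluation of a tropical polynomial and the corner-locus condition concrete, here is a minimal Python sketch (our own illustration; the dictionary encoding of $f$ and the function names are assumptions, not notation from the text), applied to the tropical line $f=max\{0,X,Y\}$:

```python
def trop_eval(f, x):
    """f: dict mapping exponent tuples alpha to coefficients u_alpha.
    Tropical evaluation: max over monomials of u_alpha + x . alpha."""
    return max(u + sum(xi * ai for xi, ai in zip(x, a)) for a, u in f.items())

def initial_support(f, x):
    """Exponents of the x-initial part of f: the monomials attaining the max."""
    m = trop_eval(f, x)
    return {a for a, u in f.items()
            if u + sum(xi * ai for xi, ai in zip(x, a)) == m}

def on_hypersurface(f, x):
    """x lies on V(f) iff the max is attained at least twice,
    i.e. the initial part is not a monomial."""
    return len(initial_support(f, x)) >= 2

# Tropical line max(0, X, Y): vertex at the origin, three rays.
line = {(0, 0): 0, (1, 0): 0, (0, 1): 0}
print(on_hypersurface(line, (0, 0)))  # vertex: all three monomials tie
print(on_hypersurface(line, (2, 2)))  # on the diagonal ray X = Y >= 0
print(on_hypersurface(line, (2, 0)))  # the monomial X wins alone
```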
\begin{defi}\label{Def:Tropical-Hypersurface}Let $f\in{\mathbb T}[X]$. The set $$V(f):=\{\omega\in{\mathbb R}^{n+1}\mid In_\omega f\text{ is not a monomial}\}$$ is by definition the tropical hypersurface defined by $f$. \end{defi} We denote by $Vert(V(f))$ the $0$-dimensional cells of $V(f)$ and more generally $Vert(Z)$ will always denote the $0$-dimensional cells of the natural subdivision of a piecewise linear variety $Z$. \subsection{Real convergent Puiseux series and tropicalisation }\label{Subsec:Tropicalisation} A formal series $$\xi(t)=\sum_{r\in R}\beta_rt^r$$ is a \textbf{locally convergent generalised Puiseux series} if $R\subset{\mathbb R}$ is a well-ordered set, $\beta_r\in{\mathbb C}$, and the series is convergent for $t>0$ small enough. Denote by ${\mathbb K}$ the set of all locally convergent generalised Puiseux series. It is an algebraically closed field of characteristic $0$. A series $\xi\in{\mathbb K}$ is said to be \textbf{real} if all its coefficients are real numbers. We denote by ${\mathbb K}_{{\mathbb R}}$ the subfield of ${\mathbb K}$ composed of the real series. Since the coefficients of a polynomial $F\in{\mathbb K}[x_1,\dots,x_{n+1}]$ are locally convergent near $0$, any polynomial over ${\mathbb K}$ (resp. ${\mathbb K}_{\mathbb R}$) can be thought of as a one-parameter family of complex (resp. real) polynomials. For any $t, \; 0<t \ll 1$, the complex (resp. real) polynomial $F_t$ is the polynomial resulting from the evaluation of the coefficients at $t$. By hypothesis, the set of exponents of an element of ${\mathbb K}^*$ has a first element. The map that sends each element of ${\mathbb K}^*$ to the first element of its set of exponents and $0$ to $\infty$ is a non-archimedean valuation. We denote by $val$ the opposite of such a map. In other words, $$val: {\mathbb K}\rightarrow {\mathbb R}\cup\{-\infty\}$$ maps $\sum_{r\in R}\beta_r t^r\neq 0$ to $-min\{r\mid \beta_r\neq 0\}$ and $0$ to $-\infty$.
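For illustration, $val$ can be sketched on finitely supported series; the representation of a series as a dictionary of exponent--coefficient pairs and the helper names are our own simplification, not notation from the text:

```python
def val(xi):
    """val of a finitely supported generalised Puiseux series, encoded as a
    dict r -> beta_r with beta_r != 0: minus the smallest exponent;
    the zero series (empty dict) goes to -infinity."""
    if not xi:
        return float('-inf')
    return -min(xi)

def mul(xi, eta):
    """Product of two such series (no cancellation occurs for the
    positive coefficients used below)."""
    out = {}
    for r, b in xi.items():
        for s, c in eta.items():
            out[r + s] = out.get(r + s, 0) + b * c
    return {r: b for r, b in out.items() if b != 0}

xi = {-1: 1, 3: 2}    # t^{-1} + 2 t^3
eta = {0.5: 4}        # 4 t^{1/2}
assert val(xi) == 1 and val(eta) == -0.5
# val behaves as a valuation: val(xi * eta) = val(xi) + val(eta)
assert val(mul(xi, eta)) == val(xi) + val(eta)
print(val(xi), val(eta), val(mul(xi, eta)))
```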
The map $val$ extends naturally to the map $Val:{\mathbb K}^{n+1}\rightarrow ({\mathbb R}\cup\{-\infty\})^{n+1}$ by applying $val$ coordinate-wise. The image of a variety $V\subset {\mathbb K}^{n+1}$ under the map $Val$ is called the \textbf{non-archimedean amoeba} of $V$. Given a polynomial $F\in{\mathbb K}[x_1,\dots,x_{n+1}]$ one can associate to it a tropical polynomial. This map is called tropicalisation and acts as follows: if $F(x)=\sum_{\alpha\in\mathcal{E}(F)}c_\alpha x^\alpha$, the \textbf{tropicalisation} of $F$ is the tropical polynomial $$Trop(F)(X):="\sum_{\alpha\in\mathcal{E}(F)}val(c_\alpha)X^\alpha".$$ Kapranov's theorem establishes that the non-archimedean amoeba of $V(F)$ and the tropical hypersurface $V(Trop(F))$ coincide. Given a tropical hypersurface $V \subset {\mathbb R}^{n+1}$ and a polynomial $F\in{\mathbb K}[x_1,\dots,x_{n+1}]$, we will say that $F$ {\it realises} $V$ if $V(Trop(F))=V$. \subsection{Real tropical hypersurfaces }\label{Subsec:Real-Tropical-Hypersurfaces} Real tropical hypersurfaces are very closely related to Viro's patchworking (see~\cite{Vir83} and \cite{Vir84}). A description can be found in \cite{Ber2} and one can look at \cite{Mikh04a} pp.~25 and 37, \cite{Vir01}, and the appendix of \cite{Mikh00} in the case of amoebas for further details. We recall some definitions here for the convenience of the reader. Let $F \in {\mathbb K}_{\mathbb R}[x_1, \dotsc, x_{n+1}]$ be a {\bf real} polynomial defined over the field of real Puiseux series. Let $wal: {\mathbb K}^* \to {\mathbb R}\times S^1$ be the map sending a nonzero series $\xi(t)=\sum_{r\in R}\beta_rt^r$ to $(val(\xi(t)),arg(\beta_{-val(\xi(t))}))$ and $Wal: ({\mathbb K}^*)^{n+1} \to {\mathbb R}^{n+1}\times{\left(S^1\right)}^{n+1}$ be the map defined by $wal$ coordinate-wise.
The map $wal$ restricts to $wal_{\mathbb R}: {\mathbb K}_{\mathbb R}^* \to {\mathbb R}\times{\mathbb Z}_2$ which sends $\xi(t)=\sum_{r\in R}\beta_rt^r$ to $(val(\xi(t)),sign(\beta_{-val(\xi(t))}))$ and we denote by $Wal_{\mathbb R}: ({\mathbb K}_{\mathbb R}^*)^{n+1} \to {\mathbb R}^{n+1}\times{\mathbb Z}_2^{n+1}$ the corresponding restriction of $Wal$. For any $z\in{\mathbb Z}_2^{n+1}$ we will call {\bf orthant} of the torus over the field of Puiseux series, and denote by $Q_z^{{\mathbb K}_{\mathbb R}}$, the preimage of ${\mathbb R}^{n+1} \times \{z\}$ under $Wal_{\mathbb R}$. As in the case of $({\mathbb R}^*)^{n+1}$, an orthant is thus a choice of sign for each coordinate. The map $Wal_{\mathbb R}$ allows us to consider the collection of images of the orthants $Q_z^{{\mathbb K}_{\mathbb R}}$. \begin{defi}\label{Def:Real-Tropical-Hypersurface-1} A {\bf real tropical hypersurface} $V^{\mathbb R}(Trop(F))$ is the data of $Wal(V(F))$ for a polynomial $F \in {\mathbb K}_{\mathbb R}[x_1, \dotsc, x_{n+1}]$. The {\bf real part} of the real tropical hypersurface is $Wal_{\mathbb R}(V(F)\cap ({\mathbb K}_{\mathbb R}^*)^{n+1})$. \end{defi} Let us now compare this definition to the following patchworking procedure. Let $f$ be a generic tropical polynomial, and $\vartheta:\mathcal{E}(f)\rightarrow \{1,-1\}$ be a distribution of signs. Let $(e_i)_{1\leq i\leq n+1}$ be the canonical basis of ${\mathbb Z}^{n+1}\subset{\mathbb R}^{n+1}$. For $z=\sum_{i=1}^{n+1}z_i e_i\in{\mathbb Z}^{n+1}$, let $s_z:{\mathbb R}^{n+1}\to{\mathbb R}^{n+1}$ be the symmetry mapping $x=\sum_{i=1}^{n+1}x_i e_i$ to $s_z(x)=\sum_{i=1}^{n+1}(-1)^{z_i}x_i e_i$. The map $s_z$ only depends on the reduction modulo $2$ of the coordinates of $z$, so we will indifferently use the notation $s_z$ for $z \in {\mathbb Z}^{n+1}$ or $z \in {\mathbb Z}_2^{n+1}$. Define the symmetrised distribution of signs $S_z(\vartheta):\mathcal{E}(f)\rightarrow \{1,-1\}$ by $S_z(\vartheta)(v) = (-1)^{z\cdot v}\vartheta(v)$.
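The symmetrisation $S_z$ and the membership test for symmetric copies of cells (not all vertices of the dual cell carrying the same sign) can be sketched as follows. This is our own toy illustration: the dictionary encoding, the function names, and the sample sign distribution on the segment $[(0,0),(1,0)]$ are assumptions, not data from the text.

```python
def S(z, theta):
    """Symmetrised sign distribution: S_z(theta)(v) = (-1)^(z.v) * theta(v).
    z: tuple in Z_2^(n+1); theta: dict exponent tuple -> +1/-1."""
    return {v: ((-1) ** sum(zi * vi for zi, vi in zip(z, v))) * s
            for v, s in theta.items()}

def copy_is_real(z, theta, dual_cell_vertices):
    """s_z(c) belongs to the real part iff the vertices of the dual cell
    check do NOT all carry the same sign under S_z(theta)."""
    signs = {S(z, theta)[v] for v in dual_cell_vertices}
    return len(signs) == 2

# Toy example: an edge dual to the segment [(0,0), (1,0)],
# with theta(0,0) = +1 and theta(1,0) = -1 (an assumed distribution).
theta = {(0, 0): 1, (1, 0): -1}
print(copy_is_real((0, 0), theta, [(0, 0), (1, 0)]))  # signs +,- : copy present
print(copy_is_real((1, 0), theta, [(0, 0), (1, 0)]))  # signs +,+ : copy absent
```

Note that applying `S` twice with the same `z` recovers `theta`, matching the fact that the maps $S_z$ are involutions.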
The maps $S_z$ are involutions on the set $\{\vartheta\}$ of sign distributions on $\mathcal{E}(f)$. They define an action of $({\mathbb Z}_2)^{n+1}$ on sign distributions. We will consider this action via the maps $S_z$ below in Section~\ref{Sec:Polyhedral-Curvature} (for example in Proposition~\ref{Prop:Transitivity}). \begin{defi}\label{Def:Real-Tropical-Hypersurface} Let $f$ be a generic tropical polynomial, and $\vartheta:\mathcal{E}(f)\rightarrow \{1,-1\}$ a distribution of signs. The {\bf patchworked real tropical hypersurface} $V^{\mathbb R}_\vartheta(f)$ is the data of $V(f)$ and $\vartheta$. The {\bf real part } ${\mathbb R} V^{\mathbb R}_\vartheta(f)$ of $V^{\mathbb R}_\vartheta(f)$ is a subset of ${\mathbb R}^{n+1}\times {\mathbb Z}_2^{n+1}$ consisting of relevant symmetric copies of cells of $V(f)$. Namely, for each given $z \in {\mathbb Z}_2^{n+1}$ and $c$ a cell of $V(f)$, $s_z(c) \subset {\mathbb R} V^{\mathbb R}_\vartheta(f)$ if and only if $$S_z(\vartheta)(Vert(\check{c})) = {\mathbb Z}_2.$$ \end{defi} Here and below we identify the elements of ${\mathbb Z}_2$ with $\{1,-1\}$ or $\{+,-\}$ depending on which is more convenient. With this identification, the above equality $S_z(\vartheta)(Vert(\check{c})) = {\mathbb Z}_2$ just means that not all vertices of $\check{c}$ carry the same sign. \begin{rem} The set ${\mathbb R} V^{\mathbb R}_\vartheta(f)\cap \left({\mathbb R}^{n+1}\times \{z\}\right)$ is either empty or a polyhedral hypersurface. \end{rem} \begin{rem} One can identify the real tropical hypersurfaces $V^{\mathbb R}_\vartheta(f)$ and $V^{\mathbb R}_{-\vartheta}(f)$, one being obtained from the other by simultaneously reversing all signs ({\it i.e.}, multiplying $\vartheta(v)$ by $-1$ for all $v$). The real parts ${\mathbb R} V^{\mathbb R}_\vartheta(f)$ and ${\mathbb R} V^{\mathbb R}_{-\vartheta}(f)$ are the same.
\end{rem} If $F \in {\mathbb K}_{\mathbb R}[x_1,\dots,x_{n+1}]$ is a polynomial with real series coefficients, we associate to each monomial the sign of the first term of its coefficient. In particular this defines a natural sign distribution $\vartheta_F:\mathcal{E}(Trop(F))\rightarrow \{1,-1\}$ at the vertices of the subdivision $\Gamma_{Trop(F)}$. The proposition below is a direct consequence of Viro's patchworking. \begin{prop}\label{Prop:Patchworking} Let $F\in {\mathbb K}_{\mathbb R}[x_1,\dots,x_{n+1}]$ be a polynomial such that $Trop(F)(X)="\sum_{\alpha\in\mathcal{E}}u_\alpha X^\alpha"$ is generic. Suppose that for every $\omega \in {\mathbb R}^{n+1}$ and every $\alpha\in\Gamma_{In_\omega Trop(F)}$, the identity $Trop(F)(\omega)="u_{\alpha}\omega^\alpha"$ holds only if $\alpha$ is a vertex of $\Gamma_{Trop(F)}$. Then $Wal_{\mathbb R}(V(F)\cap ({\mathbb K}_{\mathbb R}^*)^{n+1})={\mathbb R} V^{\mathbb R}_{\vartheta_F}(Trop(F))$, {\it i.e.}, the real part of the real tropical hypersurface coincides with the real part of the patchworked tropical hypersurface. \end{prop} \begin{rem} In order to recover a hypersurface in $\left({\mathbb R}^*\right)^{n+1}$ from ${\mathbb R} V^{\mathbb R}_\vartheta(f)$ one uses the map $\mathfrak{exp}:{\mathbb R}^{n+1}\times{\mathbb Z}_2^{n+1}\to\left({\mathbb R}^*\right)^{n+1}$ defined by $\mathfrak{exp}((x,z)):=s_z(exp (x))$ where the exponential is applied component-wise. \end{rem} \begin{exa}\label{Exa:Real-Conic} Let $f_c$ be a second degree tropical polynomial $"a_0 + a_1 X +a_2 Y + a_3 X^2 + a_4 XY + a_5 Y^2"$ such that $V(f_c)$ is the tropical conic represented in Figure~\ref{Fig:Newton}~a) and let $\vartheta$ be the sign distribution shown in Figure~\ref{Fig:Newton}~b).
In Figure~\ref{Fig:Real-Conic}, we have drawn the real part ${\mathbb R} V^{\mathbb R}_\vartheta(f_c)$ of the real tropical conic $V^{\mathbb R}_\vartheta(f_c)$ in each of the four quadrants ${\mathbb R}^2\times\{z\}$ corresponding to the four elements of ${\mathbb Z}_2^2$. The axes are the dashed blue lines. The dotted-dashed black segments are the parts of the symmetric copies $s_z(V(f_c))$ of $V(f_c)$ which do not belong to ${\mathbb R} V^{\mathbb R}_\vartheta(f_c)$. The real part ${\mathbb R} V^{\mathbb R}_\vartheta(f_c)$ is depicted in plain thick red. In each region of ${\mathbb R}^2 \setminus s_z(V(f_c))$ we have indicated the sign of each vertex in the sign distribution $S_z(\vartheta)$. In Figure~\ref{Fig:Exp} we have represented the image of ${\mathbb R} V^{\mathbb R}_\vartheta(f_c)$ under the map $\mathfrak{exp}$ (still in plain thick red). \end{exa} \begin{figure} \begin{tabular}{cc} \includegraphics[width=0.3\textwidth] {coniquep.eps}& \includegraphics[width=0.4\textwidth]{Newton-triang+signsp.eps}\\ a) A tropical conic & b) Its Newton polygon with a distribution of signs \end{tabular} \caption{A tropical conic $V(f_c)$ and the corresponding triangulation of its Newton polygon with a sign distribution $\vartheta$ at its vertices.} \label{Fig:Newton} \end{figure} \begin{figure} \begin{tabular}{cc} \includegraphics[width=0.4\textwidth]{conique+axisQ10sp.eps}& \includegraphics[width=0.4\textwidth]{conique+axisQ00p.eps}\\ a) In ${\mathbb R}^2\times\{(1,0)\}$ & b) In the first quadrant ${\mathbb R}^2\times\{(0,0)\}$\\ \includegraphics[width=0.4\textwidth]{conique+axisQ11sp.eps}& \includegraphics[width=0.4\textwidth]{conique+axisQ01sp.eps}\\ c) In ${\mathbb R}^2\times\{(1,1)\}$ & d) In ${\mathbb R}^2\times\{(0,1)\}$ \end{tabular} \caption{The real part ${\mathbb R} V^{\mathbb R}_\vartheta(f_c)$ of the real tropical conic $V^{\mathbb R}_\vartheta(f_c)$ in the four quadrants (after the relevant reflections).
The signs of the corresponding vertices of the dual triangulation are indicated in each region of the plane.}\label{Fig:Real-Conic} \end{figure} \begin{figure} \includegraphics[width=\textwidth] {exp-con-realp.eps} \caption{The real tropical conic $V^{\mathbb R}_\vartheta(f_c)$ after applying the map $\mathfrak{exp}$. }\label{Fig:Exp} \end{figure} When no confusion is possible we will abuse notation and terminology and write $V^{\mathbb R}_\vartheta(f)$ instead of ${\mathbb R} V^{\mathbb R}_\vartheta(f)$ and ``real tropical hypersurface'' instead of ``real part of the real tropical hypersurface''. For a vertex $\textbf{v}$ of $V(f)$, when $(\textbf{v},z)$ is a vertex of ${\mathbb R} V^{\mathbb R}_\vartheta(f)$, it will be implicitly denoted by $s_z(\textbf{v})$. Notice that if all vertices of $\check{\textbf{v}}$ have the same sign, $(\textbf{v},z)$ is not in ${\mathbb R} V^{\mathbb R}_\vartheta(f)$. \section{Total curvature of real and complex amoebas}\label{Sec:Amoebas-Curvature} Let $F\in {\mathbb K}_{{\mathbb R}}[x_1,\dots,x_{n+1}]$ be a polynomial defining a non-singular hypersurface in ${({\mathbb K}^*)}^{n+1}$. For $t\in{\mathbb R}^*$, $\vert t \vert \ll 1$, $F_t\in{\mathbb R}[x_1,\dots,x_{n+1}]$ defines a non-singular hypersurface $V(F_t)\subset({\mathbb C}^*)^{n+1}$. We call it a real algebraic variety because $F_t$ is defined over ${\mathbb R}$ and denote by ${\mathbb R} V(F_t)\subset ({\mathbb R}^*)^{n+1}$ the real part of $V(F_t)$.\\ We will set $\triangle_F$ for the Newton polytope of $F$ (or $F_t$ for $0 < t \ll 1$). We denote by $\Log_t$ the map from $({\mathbb C}^*)^{n+1}$ to ${\mathbb R}^{n+1}$ that sends $(x_1,\dotsc, x_{n+1})$ to $(\log_t(|x_1|),\dotsc, \log_t(|x_{n+1}|))$ where $\log_t$ is the base $t$ logarithm. \begin{defi}\label{Def:Amoebas} Set $x = (x_1, \dots, x_{n+1})$.
\par 1) For $F \in {\mathbb K}[x], 0<t \ll 1$, $F_t \in {\mathbb C}[x]$, the amoeba of $V(F_t)$ is \[ \mathcal{A}(F_t) = \Log_t(V(F_t)) \subset {{\mathbb R}}^{n+1}. \] \par 2) For $F \in{{\mathbb K}}_{{\mathbb R}}[x], 0<t \ll 1$, $F_t \in {\mathbb R}[x]$, the real amoeba of ${\mathbb R} V(F_t)$ is \[ \mathcal{A}^{\mathbb R}(F_t) = \Log_t({\mathbb R} V(F_t)). \] \end{defi} \begin{rem} In each orthant $Q \subset ({\mathbb R}^*)^{n+1}$, the map $\Log_t \vert_{Q}$ is a diffeomorphism onto ${\mathbb R}^{n+1}$; we may then define the ``Gauss map'' $g_t : \mathcal{A}^{\mathbb R}(F_t) \rightarrow {\mathbb R} {\mathbb P}^n$ by taking the Gauss map for the image of each orthant (so for some points of the amoeba, the ``map'' $g_t$ may be multivalued). \end{rem} We then have the following diagram: \begin{eqnarray} \label{Diag:Log-Gauss} \xymatrix{ {\mathbb R} V(F_t) \ar[rd]^{\gamma_t^{\mathbb R}} \ar[d]^{\Log_t} \\ \mathcal{A}^{\mathbb R}(F_t) \ar[r] _{g_t} & {\mathbb R} {\mathbb P}^n} \end{eqnarray} where $g_t$ is the Gauss map and $\gamma_t^{\mathbb R}=g_t\circ \Log_t$ the logarithmic Gauss map, defined as: $$\gamma_t^{\mathbb R} (x)=[x_i\frac{\partial F_t}{\partial x_i}].$$ \begin{rem}\label{Def:Amoebas-Real-Curvature} According to diagram (\ref{Diag:Log-Gauss}), the total curvature of the amoeba $\mathcal{A}^{\mathbb R}(F_t)$ is $$\int_{\mathcal{A}^{\mathbb R}(F_t)} \vert k \vert dv = \int_{{\mathbb R}{\mathbb P}^n} \# {\left(\gamma_t^{\mathbb R}\right)^{-1}}(\beta) ds $$ \end{rem} In the complex case, contrary to the real case, the amoeba $\mathcal{A}(F_t)$ is not in general an immersed manifold; therefore there is no natural definition of a Gauss map $\mathcal{A}(F_t) \rightarrow {\mathbb C} {\mathbb P}^n$.
However, the logarithmic Gauss map $\gamma_t$: $ V(F_t) \rightarrow {\mathbb C} {\mathbb P}^n$ is meaningful, and we have the following diagram: \[ \xymatrix{ V(F_t) \ar[rd]^{\gamma_t} \ar[d]^{\Log_t} \\ \mathcal{A}(F_t) & {\mathbb C} {\mathbb P}^n} \] It is then natural to give the following definition (see (\ref{compcurv})): \begin{defi}\label{Def:Amoebas-Complex-Curvature}The total curvature of the amoeba $\mathcal{A}(F_t)$ is defined by $$\int_{\mathcal{A}(F_t)} K : = (-1)^n a_n \int_{{\mathbb C} {\mathbb P}^n} \# \gamma_t^{-1}(\beta) dp.$$ \end{defi} \begin{rem}\label{Rem:Multiplicative-Translation-Invariance} We consider subvarieties of the torus $({\mathbb C}^*)^{n+1}$, therefore for any $\omega\in{\mathbb Z}^{n+1}$, $F_t$ and $x^\omega F_t$ define the same variety. Moreover, the logarithmic Gauss map $$\fonction{\gamma}{V(F_t)}{{\mathbb C}{\mathbb P}^n}{x}{[x_i\frac{\partial F_t}{\partial x_i}]}$$ is also invariant when multiplying $F_t$ by $x^\omega$ (since the tangent space to $V(F_t)$ at a given point is unaffected by this action). \end{rem} These definitions lead us to consider the systems \[ (G'_\beta) \begin{cases} F_t(x) = 0 \\ \displaystyle [x_1\frac{\partial F_t}{\partial x_1}: \dotso : x_{n+1}\frac{\partial F_t}{\partial x_{n+1}}] = [\beta_1:\dotso :\beta_{n+1}] \end{cases} \] where $[\beta]=[\beta_1:\beta_2:\dotso :\beta_{n+1}]$ is an element of ${\mathbb C} {\mathbb P}^{n}$. Since we only consider solutions in the torus, the system $(G'_\beta)$ is equivalent to \[ (G_\beta) \begin{cases} F_t= 0 \\ \displaystyle x_1\frac{\partial F_t}{\partial x_1}=\beta_1y \\ \displaystyle \vdots \\ \displaystyle x_{n+1}\frac{\partial F_t}{\partial x_{n+1}}=\beta_{n+1}y \end{cases} \] where we introduce a new variable $y\in{\mathbb C}^*$.
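For a line $F_t = a + b x + c y$ (a primitive Newton triangle, so Bernstein's count gives $2!\,vol(\triangle)=1$ torus solution for generic $\beta$), the system $(G_\beta)$ can be solved by hand; the following sketch, with assumed sample values and our own function name, checks this (the auxiliary variable is written \texttt{s} here to avoid clashing with the coordinate $y$):

```python
def gauss_fiber_line(a, b, c, beta):
    """Solve (G_beta) for F(x, y) = a + b*x + c*y:
        F = 0,  x dF/dx = b*x = beta1 * s,  y dF/dy = c*y = beta2 * s,
    where s plays the role of the extra variable of (G_beta).
    Substituting into F = 0 gives a + (beta1 + beta2) * s = 0, hence a
    unique torus solution -- matching the degree (n+1)! vol(triangle) = 1."""
    b1, b2 = beta
    s = -a / (b1 + b2)           # assumes beta1 + beta2 != 0 (generic beta)
    return [(b1 * s / b, b2 * s / c)]

sols = gauss_fiber_line(1.0, 2.0, 3.0, (1.0, 1.0))
print(sols)
x, y = sols[0]
assert abs(1.0 + 2.0 * x + 3.0 * y) < 1e-12   # the point lies on F = 0
assert len(sols) == 1                          # a single point in the fiber
```

By construction $[b x : c y] = [\beta_1 s : \beta_2 s] = [\beta_1 : \beta_2]$, so the point indeed lies in the fiber of the logarithmic Gauss map over $[\beta]$.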
\begin{defi} We say that a polynomial $F_t$ is B-generic if, for a generic $\beta$, the system $(G_{\beta})$ satisfies the genericity conditions of \cite{Bern75}, {\it i.e.}, the restriction of the system to any proper face of the convex hull of its set of exponents has no solution in the torus. \end{defi} \begin{rem}\label{Rem:Multiplicative-Translation-Invariance-2} Let $(G_{\beta}'')$ be the system \[ \begin{cases} x^\omega F_t = 0 \\ \displaystyle [x_1\frac{\partial x^\omega F_t}{\partial x_1}: x_2\frac{\partial x^\omega F_t}{\partial x_2}: \dotso : x_{n+1}\frac{\partial x^\omega F_t}{\partial x_{n+1}}] = [\beta_1:\beta_2:\dotso :\beta_{n+1}] \end{cases}.\] If the Newton polytopes of $F_t$ and $x^\omega F_t$ are included in $({\mathbb R}_+^*)^{n+1}$ we recover algebraically the statement of Remark~\ref{Rem:Multiplicative-Translation-Invariance}. Indeed, $(G_{\beta}'')$ is then clearly equivalent to: \[ \begin{cases} F_t = 0 \\ \displaystyle [x_1\frac{\partial F_t}{\partial x_1}: x_2\frac{\partial F_t}{\partial x_2}: \dotso : x_{n+1}\frac{\partial F_t}{\partial x_{n+1}}] = [\frac{\beta_1}{x^\omega}:\frac{\beta_2}{x^\omega}:\dotso :\frac{\beta_{n+1}}{x^\omega}] \end{cases} \] since we consider solutions in $({\mathbb C}^*)^{n+1}$. Thus $(G_{\beta}'')$ is equivalent to $(G_{\beta}')$ and for every $[\beta]\in{\mathbb C}{\mathbb P}^{n}$, both corresponding logarithmic Gauss maps have the same fiber. \end{rem} Let $F_t^\delta$ denote the truncation of $F_t$ to a face $\delta$ of the Newton polytope $\triangle_{F_t}$ of $F_t$. \begin{defi}[Viro] Let $F\in{\mathbb C}[x_1,\dotsc,x_{n+1}]$ be a polynomial, $\triangle_{F}$ its Newton polytope and $X_{\triangle_{F}}$ the toric variety associated to $\triangle_{F}$. We say that $F$ is {\bf completely non-degenerate} if for each (not necessarily proper) face $\delta$ of $\triangle_{F}$ the restriction $F^\delta$ defines a non-singular variety in $({\mathbb C}^*)^{n+1}$.
\end{defi} \begin{rem}\label{Rem:Tprimitive-nondegenerate} Let $F\in{\mathbb K}[x_1,\dotsc,x_{n+1}]$ be such that $Trop(F)$ is non-singular. Then $F_t$ is completely non-degenerate for $0 < t \ll 1$. \end{rem} \begin{prop}\label{Prop:Nondegenerate} A polynomial $F_t\in{\mathbb C}[x_1,\dotsc,x_{n+1}]$ is B-generic if and only if it is completely non-degenerate. \end{prop} \begin{proof} The support polytope $\triangle_\beta$ of the system $(G_\beta)$ is the cone with apex $(0,\dotsc,0,1)$ over $\triangle_{F_t}\times \{0\}$. In fact, $\triangle_\beta$ is the Newton polytope of every polynomial of $(G_\beta)$ except $F_t$. The restrictions of $(G_\beta)$ to the faces of $\triangle_\beta$ are either of the form \[ (G^\delta_\beta) \begin{cases} F_t^\delta = 0 \\ \displaystyle x_1\frac{\partial F_t^\delta}{\partial x_1}=\beta_1 y \\ \displaystyle \vdots \\ \displaystyle x_{n+1}\frac{\partial F^\delta _t}{\partial x_{n+1}}= \beta_{n+1} y \end{cases} \] or of the form \[ (G^\delta_0) \begin{cases} F_t^\delta = 0 \\ \displaystyle x_1\frac{\partial F_t^\delta}{\partial x_1}=0 \\ \displaystyle \vdots \\ \displaystyle x_{n+1}\frac{\partial F^\delta _t}{\partial x_{n+1}}= 0 \end{cases} \] for $\delta$ a face of $\triangle_{F_t}$. The system $(G^\delta_\beta)$ is the restriction of $(G_\beta)$ to the cone with apex $(0,\dotsc,0,1)$ over a face $\delta$ of $\triangle_{F_t}$. By definition, $F_t$ is B-generic if and only if for all proper faces $\delta$ of $\triangle_{F_t}$ the systems $(G^\delta_\beta)$ have no solution for a generic $\beta$ and for all (not necessarily proper) faces $\delta$ of $\triangle_{F_t}$ the systems $(G^\delta_0)$ have no solution. Note that $(G^\delta_0)$ has no solution if and only if $V(F^\delta_t)$ is non-singular in $({\mathbb C}^*)^{n+1}$. Assume first that $F_t$ is B-generic. Then, in particular, for all (not necessarily proper) faces $\delta$ of $\triangle_{F_t}$ the systems $(G^\delta_0)$ have no solution.
So, for any face $\delta$, $V(F^\delta_t)$ is non-singular, which is equivalent to $F_t$ being completely non-degenerate. Assume now that $F_t$ is completely non-degenerate. We saw above that this implies that for all (not necessarily proper) faces $\delta$ of $\triangle_{F_t}$ the systems $(G^\delta_0)$ have no solution. We only need to check that for a generic $\beta$, the systems $(G^\delta_\beta)$ have no solution for all proper faces $\delta$ of $\triangle_{F_t}$. Let us now fix a proper face $\delta$ of $\triangle_{F_t}$ and consider the system $(G^\delta_\beta)$. Denote by $\overline\gamma_\delta$ the map $$\overline\gamma_\delta:(\mathbb C^*)^{n+1}\rightarrow\mathbb C^{n+1}$$ $$x\mapsto (x_1\frac{\partial F_t^\delta}{\partial x_1}, \dotsc , x_{n+1}\frac{\partial F_t^\delta}{\partial x_{n+1}}).$$ Note that $0 \in \overline\gamma_\delta(V(F_t^\delta))$ if and only if $V(F_t^\delta)$ is singular. Let us write $F_t(x)= \sum_{\alpha\in\mathcal E(F_t)}a_\alpha x^\alpha$. Since $dim(\delta)<n+1$, $\delta$ is contained in a hyperplane and there exist $\mu\in{\mathbb R}^{n+1}$ and $c\in{\mathbb R}$ such that for any exponent $\alpha$ of $F_t^\delta$, $$\mu\cdot \alpha=c.$$ For any $x\in (\mathbb C^*)^{n+1}$, $$\mu\cdot \overline\gamma_\delta(x)=\sum_{\alpha\in\mathcal E(F_t^\delta)}a_\alpha (\mu\cdot\alpha) x^\alpha$$ $$=\sum_{\alpha\in\mathcal E(F_t^\delta)}a_\alpha c x^\alpha=c F_t^\delta(x).$$ In particular, if $x\in V(F_t^\delta)$, $\mu\cdot \overline\gamma_\delta(x)=0$. Hence $\overline\gamma_\delta(V(F_t^\delta))$ is contained in the linear hyperplane $H^\delta$ orthogonal to $\mu$. Therefore, for any $\delta$ the system $(G^\delta_\beta)$ has no solution in the torus if the line $(\beta_1y,\dotsc,\beta_{n+1}y)$ is not contained in $H^\delta$.
Since there exist finitely many hyperplanes containing all the proper faces of $\triangle_{F_t}$, $$\cup_{\delta\subset\triangle_{F_t}}\overline\gamma_\delta(V(F_t^\delta))$$ is contained in a finite union of linear hyperplanes. Then, for a generic $[\beta]$ none of the systems in $\{(G^\delta_\beta),\delta \mbox{ proper face of } \triangle_{F_t} \}$ has a solution. Thus $F_t$ is B-generic. \end{proof} From now on we consider completely non-degenerate polynomials. \begin{prop} A polynomial $F_t$ is completely non-degenerate if and only if the degree of the map $\gamma_t$ is $$(n+1)! vol(\triangle_{F_t}).$$ \end{prop} \begin{proof} It follows directly from Proposition~\ref{Prop:Nondegenerate} by applying Bernstein's theorem to the systems $(G_\beta)$. \end{proof} As a direct consequence of Proposition~\ref{Prop:Nondegenerate}, we have the following corollary. \begin{cor}\label{Rem:complexamoeba} $F_t$ is completely non-degenerate if and only if $$ \int_{\mathcal{A}(F_t)} K = (-1)^n a_n (n+1)! vol(\triangle_F) vol({\mathbb C} {\mathbb P}^n)$$ which does not depend on $t$. \end{cor} We then have the following inequality, similar to (\ref{ineqcurv}), for the logarithmic curvatures of the real and complex parts of a real algebraic hypersurface: \begin{thm}\label{Prop:Inequality} For any completely non-degenerate $F_t\in{\mathbb R}[x_1,\dotsc,x_{n+1}]$, \begin{equation}\label{Eq:Inequality} \frac{\sigma_{2n}}{\sigma_{n}} \int_{\mathcal{A}^{\mathbb R}(F_t)}\vert k \vert\leq \int_{\mathcal{A}(F_t)}\vert K \vert. \end{equation} \end{thm} \begin{proof} We have seen that $ \int_{\mathcal{A}(F_t)} K = (-1)^n a_n (n+1)! vol(\triangle_F) vol({\mathbb C} {\mathbb P}^n)$. For $x \in {\mathbb R} {\mathbb P}^n$, the cardinality of the real fiber $(\gamma_t^{{\mathbb R}})^{-1}(x)$ is at most the cardinality of the complex one.
So we similarly have that $ \int_{\mathcal{A}^{{\mathbb R}}(F_t)}\vert k \vert \leq deg(\gamma_t) vol({\mathbb R} {\mathbb P}^n)$, with $vol({\mathbb C} {\mathbb P}^n) = \frac{\sigma_{2n +1}}{\sigma_1}$ and $vol({\mathbb R} {\mathbb P}^n) = \frac{\sigma_n}{2}$, from which the inequality follows. \end{proof} \begin{cor} There is equality in the above theorem if and only if the map $\gamma_t$ is totally real, {\it i.e.}, $\gamma_t^{-1}(x) \subset {\mathbb R} V(F_t)$ for $x \in {\mathbb R} {\mathbb P}^n$. \end{cor} \begin{rem} One can prove (see \cite{PasRis10}) that for a non-singular real curve, equality in the above inequality characterises the Harnack curves in the sense of \cite{Mikh00}. \end{rem} \section{Complex and real total curvature of tropical hypersurfaces}\label{Sec:Tropical-Curvature} \subsection{Complex total curvature}\label{Subsec:Complex-Tropical-Curvature} Let $f$ be a tropical polynomial, $F \in {\mathbb K}[x]$ realising $f$ and such that $F_t$ is completely non-degenerate for $0 < t \ll 1$. Since the total curvature of the amoeba $\mathcal{A}(F_t)$ does not depend on $t$ for $0 < t \ll 1$, we define the total curvature of the tropical variety $V(f)$ by passing to the limit, which is trivial here: \begin{defi}\label{Def:Complex-Tropical-Curvature}Let $f$ be a tropical polynomial. We define the complex total curvature of $V(f)$ as \begin{equation}\label{Eq:Complex-Tropical-Curvature}\int _{V(f)} K : = \int_{\mathcal{A}(F_t)} K = (-1)^n a_n (n+1)! vol(\triangle_F) vol({\mathbb C} {\mathbb P}^n).\end{equation} \end{defi} Notice that $(-1)^n K$ is a positive function, so $\vert K \vert = (-1)^n K$; therefore $\int _{V(f)}\vert K \vert=vol({\mathbb C}{\mathbb P}^n)\times (n+1)! vol(\triangle_f)\times a_n.$ \begin{cor}\label{Cor:Hyperplane-Complex-Tropical-Curvature} For any primitive tropical hypersurface $H$, \begin{equation}\label{Eq:Hyperplane-Complex-Tropical-Curvature} \int _{H} K =(-1)^n a_n vol({\mathbb C}{\mathbb P}^n).
\end{equation} \end{cor} \begin{cor}\label{Cor:Complex-Tropical-Curvature} Let $f$ be a tropical polynomial and let $\textbf{v}_1,\dots, \textbf{v}_r$ be the set of vertices of $V(f)$. Then $$\int _{V(f)} K =\sum_{i=1}^r\int_{V(In_{\textbf{v}_i}f)} K. $$ \end{cor} \subsection{Total curvature of real tropical hypersurfaces}\label{Subsec:Real-Tropical-Curvature} \begin{defi}\label{Def:Real-Tropical-Curvature} Let $F \in {\mathbb K}_{{\mathbb R}}[x_1, \dots, x_{n+1}] $ be a real polynomial, $f = Trop (F)$ its tropicalization.\\ The real total curvature of $V^{\mathbb R}(f)$ is defined as $$\int _{V^{\mathbb R}(f)}\vert k \vert:= \limsup_{t \rightarrow 0} \int_{\mathcal{A}^{\mathbb R}(F_t)}\vert k \vert.$$ \end{defi} \begin{rem}\label{Rem:limit} The proof of Proposition~\ref{totcurvR} below implies that if $V(f)$ is non-singular, $\int_{\mathcal{A}^{\mathbb R}(F_t)}\vert k \vert$ has a limit when $t \rightarrow 0$. \end{rem} Recall the following diagram: \begin{equation} \label{diag1} \xymatrix{ {\mathbb R} V(F_t) \ar[rd]^{\gamma_t^{\mathbb R}} \ar[d]^{\Log_t} \\ \mathcal{A}^{\mathbb R}(F_t) \ar[r] _{g_t} & {\mathbb R} {\mathbb P}^n} \end{equation} where $g_t$ is the Gauss map and $\gamma_t^{\mathbb R}=g_t\circ \Log_t$ the logarithmic Gauss map, defined as: $$\gamma_t^{\mathbb R} (x)=[x_i\frac{\partial F_t}{\partial x_i}].$$ It follows immediately from Theorem~\ref{Prop:Inequality} and Definitions~\ref{Def:Complex-Tropical-Curvature} and~\ref{Def:Real-Tropical-Curvature} that real and complex tropical curvatures satisfy an inequality similar to Inequality~(\ref{ineqcurv}). \begin{thm}\label{Thm:Tropical-Inequality}Let $F \in {\mathbb K}_{{\mathbb R}}[x_1, \dots, x_{n+1}] $ be a real polynomial and $f= Trop (F)$ be its tropicalisation. 
We have \begin{equation}\label{Eq:Tropical-Inequality} \frac{\sigma_{2n}}{\sigma_n}\int_{V^{\mathbb R}(f)}\vert k\vert\leq\int_{V(f)}\vert K\vert \end{equation} \end{thm} \qed We now establish one of the main results of this article; namely that Inequality~(\ref{Eq:Tropical-Inequality}) of Theorem~\ref{Thm:Tropical-Inequality} is an equality for real non-singular tropical hypersurfaces (Theorem~\ref{Thm:Tropical-equality}). Let us first look at the case of a primitive hypersurface. \begin{prop}\label{Prop:Realcurv} Let $f$ be a primitive tropical polynomial, tropicalisation of $F \in {\mathbb K}_{{\mathbb R}}[x_1, \dots, x_{n+1}] $. Then, \begin{equation}\label{Eq:Realcurv} \int _{V^{\mathbb R}(f)}\vert k \vert=vol({\mathbb R}{\mathbb P}^n)=\frac{\sigma_n}{2}. \end{equation} \end{prop} \begin{proof} We have $\int_{{\mathcal A}^{{\mathbb R}}(F_t) }\vert k \vert = $vol$(Im(g_t))$ by definition. But the map $\gamma_t$ is generically of degree one by Bernstein's theorem, and for a generic $\beta \in {\mathbb R} {\mathbb P}^n$ (namely in the complement of the set of normal directions to non-compact cells of the tropical variety $V(f)$), we have $\#(\gamma_t^{{\mathbb R}})^{-1}(\beta) \leq \#(\gamma_t)^{-1}(\beta) = 1$ and $\#(\gamma_t^{{\mathbb R}})^{-1}(\beta) \equiv \#(\gamma_t)^{-1}(\beta) \bmod 2$; therefore $\#(\gamma_t^{{\mathbb R}})^{-1}(\beta) = 1$ and we have that vol($Im(g_t)) = $vol$(Im(\gamma_t^{{\mathbb R}}))=$ vol$({\mathbb R} {\mathbb P}^n) = \frac{\sigma_n}{2}$, and (\ref{Eq:Realcurv}) follows by passing to the limit. \end{proof} Let now $F(x) = \sum a_i(t) x^{\alpha_i} \in {\mathbb K}_{{\mathbb R}}[x_1, \dots, x_{n+1}]$ be a polynomial such that $f = Trop F$ is non-singular, $(\textbf{v}_i)_{(1 \leq i \leq r)}$ the vertices of $V(f)$, $\triangle_F =\cup \triangle_i$ the subdivision of $\triangle_F$ in simplices dual to $V(f)$.
Then for each vertex $\textbf{v}_i$, we set $f^{\textbf{v}_i} =Trop F^{\textbf{v}_i}$, with $F^{\textbf{v}_i} = \sum_{\alpha_j \in \triangle_i} a_j x^{\alpha_j}$. Notice that $In_{\textbf{v}_i} f = f^{\textbf{v}_i}$. The main step in the proof of Theorem~\ref{Thm:Tropical-equality} is the following: \\ \begin{prop} \label{totcurvR} Let $F \in {\mathbb K}_{{\mathbb R}}[x_1, \dots, x_{n+1}]$ be such that $f = $Trop$ F$ is non-singular. Then \[ \int _{V^{{\mathbb R}} (f)}\vert k \vert=\sum_{i=1}^r\int_{V^{{\mathbb R}}(f^{\textbf{v}_i})}\vert k \vert = r \; vol({\mathbb R} {\mathbb P}^n).\] \end{prop} Before proving the proposition, we need a lemma of ``localisation'' at a vertex.\\ \begin{lemma} Let $v \in V(f)$ be a vertex, $v = (\lambda_1, \dots, \lambda_{n+1}) \in {\mathbb R}^{n+1}$. Let $\beta_0 \in {\mathbb R} {\mathbb P}^n $ be a generic element (see the proof of Proposition \ref{Prop:Realcurv}) and $\eta > 0$ be given. Then there exist $\epsilon >0$ and $t_0 > 0$ such that for any $\beta$ with $d(\beta,\beta_0) < \epsilon$ and any $t$ with $0 < t < t_0$, we have $ B(v, \eta) \cap g_t^{-1}(\beta) \not = \emptyset$. Here $d$ is the distance induced on ${\mathbb R} {\mathbb P}^n$ by the distance on the unit sphere in ${\mathbb R}^{n+1}$. \end{lemma} \begin{proof} Up to multiplying each $x_i$ by $t^{-\lambda_i}$ (which has the effect of translating the vertex $v$ to $0$) and multiplying $F_t$ by the relevant power of $t$, one may write: $$F_t(x)=\sum_{\alpha \in Vert(\check{v})} a_i(0) x^\alpha + t^\nu Q_t(x)$$ where $a_i(0)\in {\mathbb R}^*$, $\nu$ is a positive real number and $Q_t\in {\mathbb K}_{\mathbb R}[x_1,\dotsc,x_{n+1}]$ is a polynomial whose coefficients have non-negative valuation. We set $H(x)=\sum_{\alpha \in Vert(\check{v})} a_i(0) x^\alpha$ so that $F_t= H +t^\nu Q_t$.
For $\beta \in {\mathbb R} {\mathbb P}^n$, the points of $g_t^{-1}(\beta)$ are the images under $\Log_t$ of the solutions of the system: \[ (G'_\beta) \begin{cases} F_t = 0 \\ \displaystyle [x_1\frac{\partial F_t}{\partial x_1}: \dotso : x_{n+1}\frac{\partial F_t}{\partial x_{n+1}}] = [\beta] \end{cases}. \] For a generic $\beta_0$, we know by the proof of Proposition~\ref{Prop:Realcurv} that the system: \[ \begin{cases} H = 0 \\ \displaystyle [x_1\frac{\partial H}{\partial x_1}: \dotso : x_{n+1}\frac{\partial H}{\partial x_{n+1}}] = [\beta_0] \end{cases} \] has a non-degenerate solution $x_0$. Then the system: \[ (G^H_\beta) \begin{cases} H = 0 \\ \displaystyle [x_1\frac{\partial H}{\partial x_1}: \dotso : x_{n+1}\frac{\partial H}{\partial x_{n+1}}] = [\beta] \end{cases} \] has a solution $x_H$ such that $\Log(x_H)$ is in the ball $B(\Log(x_0),1)$ for $d(\beta,\beta_0)$ sufficiently small. The system $(G'_\beta)$ is a one-parameter deformation of $(G^H_\beta)$, therefore it has a solution $x_F$ such that $\Log(x_F)\in B(\Log(x_0),2)$ for $t$ small enough. Then $\Log_t(x_F)$ is in the ball $\displaystyle B(\frac{\Log(x_0)}{\log t},\frac{2}{|\log t|})$ which is included, for $t$ small enough, in $B(0, \eta)\, =\, B(v, \eta)$. \end{proof} Let us now prove Proposition~\ref{totcurvR}. Let $\Omega \subset {\mathbb R} {\mathbb P}^n$ be a compact set, $t_0 > 0$ and $\epsilon > 0$ such that: \par 1) vol$({\mathbb R} {\mathbb P}^n \setminus \Omega) < \epsilon$ \par 2) For any direction $\beta \in \Omega$ and any vertex $v$ of $V(f)$, the system $(G'_\beta)$ has a non-degenerate solution.\\ Then, if $r$ is the number of vertices of $V(f)$, we have that for $0 < t < t_0$: \[ \int_{{\mathcal A}^{{\mathbb R}}(F_t)} \vert k \vert \geq r vol ({\mathbb R} {\mathbb P}^n) - r \epsilon \] Passing to the limit as $t_0 \rightarrow 0$ and $\epsilon \rightarrow 0$ gives the result. We now deduce easily the main result of the paper.
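Before stating it, it may help to record the normalisation constants entering all the statements. This is only a sanity check, under the assumption (consistent with the formulas above) that $\sigma_m$ denotes the volume of the unit sphere $S^m$.

```latex
% Normalisation constants (assumption: \sigma_m = vol(S^m), as the
% formulas vol(RP^n) = \sigma_n/2 and vol(CP^n) = \sigma_{2n+1}/\sigma_1
% in the text suggest):
\[
vol({\mathbb R} {\mathbb P}^n) = \frac{\sigma_n}{2},
\qquad
vol({\mathbb C} {\mathbb P}^n) = \frac{\sigma_{2n+1}}{\sigma_1}.
\]
% For n = 1 one has \sigma_1 = 2\pi and \sigma_2 = 4\pi, so
% \sigma_2/\sigma_1 = 2 and the inequality of Theorem~\ref{Prop:Inequality}
% specialises to:
\[
2 \int_{\mathcal{A}^{\mathbb R}(F_t)}\vert k \vert
\;\leq\;
\int_{\mathcal{A}(F_t)}\vert K \vert .
\]
```

In particular, for curves the comparison constant between real and complex total curvature is simply $2$.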
\begin{thm}\label{Thm:Tropical-equality} Let $F \in {\mathbb K}_{{\mathbb R}}[x_1, \dots, x_{n+1}]$ be such that $f = $Trop$ F$ is non-singular. Then \begin{equation}\label{Eq:Tropical-equality} \frac{\sigma_{2n}}{\sigma_n} \int_{V^{\mathbb R}(f)}\vert k\vert = \int_{V(f)}\vert K\vert \end{equation} \end{thm} \begin{proof} By Proposition~\ref{totcurvR} and Corollary~\ref{Cor:Complex-Tropical-Curvature}, it is enough to prove the theorem in the primitive case. We then have to prove that $\frac{\sigma_{2n}}{\sigma_n} vol({\mathbb R} {\mathbb P}^n) = a_n vol({\mathbb C} {\mathbb P}^n)$ with $vol({\mathbb C} {\mathbb P}^n) = \frac{\sigma_{2n + 1}}{\sigma_1}$, which is immediate. \end{proof} \section{Polyhedral total curvature of a real tropical hypersurface.}\label{Sec:Polyhedral-Curvature} \subsection{Definition and elementary properties} Our definition of the curvature in the polyhedral case is similar to Banchoff's in \cite{Ban70} (see also \cite{Ban67} and \cite{Ban83}) but, exactly as in the complex case, we only consider the absolute value of the curvature here. It amounts to the following. The {\bf solid angle} of a cone is the portion of the unit sphere centred at the vertex of the cone that it intersects; its {\bf measure} is the volume of this spherical portion. We might abuse terminology and write ``solid angle'' when we mean its measure. Let $\textbf{v}$ be a vertex of a polyhedral hypersurface $\mathfrak{H}$ (here our real tropical hypersurface ${\mathbb R} V^{\mathbb R}_\vartheta(f)$). For sufficiently small neighbourhoods $U$ of $\textbf{v}$, $U \setminus \mathfrak{H}$ has two connected components. Label one by $+$ and the other by $-$ (for ${\mathbb R} V^{\mathbb R}_\vartheta(f)$ these will be the signs of the corresponding vertices of the dual subdivision). For each maximal dimensional cell of $\mathfrak{H}$ containing $\textbf{v}$, choose a normal vector oriented from $-$ to $+$.
The {\bf curvature cone} $C_\textbf{v}$ at $\textbf{v}$ is the cone generated by these vectors. \begin{defi}\label{Def:Local-Polyhedral-Curvature} The {\bf curvature} $\kappa_{\textbf{v}}$ at $\textbf{v}$ is the measure of the solid angle of the curvature cone $C_\textbf{v}$. \end{defi} \begin{rem}\label{Rem:Curvature-cone} For $V^{\mathbb R}_\vartheta(f)$ the normal vectors above can be chosen to correspond to the vectors $n_i$ supported by the edges of the simplex $\check{\textbf{v}}$ dual to $\textbf{v}$ and oriented from a vertex with $-$ sign to a vertex with $+$ sign. Thus the curvature cone $C_\textbf{v}$ is naturally identified to the cone generated by the $n_i$'s and depends only on the simplex $\check{\textbf{v}}$ dual to $\textbf{v}$ and the sign distribution at the vertices of $\check{\textbf{v}}$. \end{rem} \begin{rem}\label{Rem:Sign-inversion} Changing all the vectors to their opposites clearly leaves $\kappa_{\textbf{v}}$ invariant. \end{rem} \begin{defi}\label{Def:Polyhedral-Curvature} Let $f$ be a generic tropical polynomial and let $\vartheta$ be a distribution of signs in $\mathcal{E}(f)$. The polyhedral total curvature of $V^{\mathbb R}_\vartheta(f)$ is $$\int_{V^{\mathbb R}_\vartheta(f)}\vert k^p\vert:=\sum_{\textbf{v} \in Vert ({\mathbb R} V^{\mathbb R}_\vartheta(f))}\kappa_{\textbf{v}}$$ {\it i.e.}, it is the sum of the curvatures at all vertices of the real part of $V^{\mathbb R}_\vartheta(f)$. \end{defi} It follows from the definition that we have the equality below. \begin{lemma}\label{Prop:By-Def}Let $f$ be a generic tropical polynomial and let $\vartheta$ be a distribution of signs in $\mathcal{E}(f)$.
If $\textbf{v}_1,\dots,\textbf{v}_k$ are the vertices of $V(f)$, then $$\int_{V^{\mathbb R}_\vartheta(f)}\vert k^p\vert=\sum_{i=1}^{k}\int_{V^{\mathbb R}_{\vartheta_i}(In_{\textbf{v}_i}f)}\vert k^p\vert,$$ where $\vartheta_i=\vartheta\mid_{ \mathcal{E}(In_{\textbf{v}_i}f)}.$ \end{lemma} We denote by $C_{s_z(\textbf{v})}$ the curvature cone at the vertex $s_z(\textbf{v})$ of the real tropical hypersurface ${\mathbb R} V^{\mathbb R}_\vartheta(f)$. It is the cone generated by the edge vectors of $\check{\textbf{v}}$ oriented from vertices with minus sign to vertices with plus sign in the sign distribution $S_z(\vartheta)$. \begin{rem}\label{Rem:alternative-Sum} One can also define the curvature $\widetilde{\kappa_\textbf{v}}$ at a vertex $\textbf{v}$ of $V(f)$ to be the sum over all symmetric copies $s_z(\textbf{v})$ of $\textbf{v}$ appearing in ${\mathbb R} V^{\mathbb R}_\vartheta(f)$ of the solid angles of the corresponding curvature cones $C_{s_z(\textbf{v})}$ {\it i.e.}, $\widetilde{\kappa_\textbf{v}}:=\sum_{s_z(\textbf{v}) \in {\mathbb R} V^{\mathbb R}_\vartheta(f)}\kappa(s_z(\textbf{v}))$. Then $\int_{V^{\mathbb R}_\vartheta(f)}\vert k^p\vert=\sum_{\textbf{v} \in Vert(V(f))}\widetilde{\kappa_{\textbf{v}}}$. \end{rem} \subsection{Elementary simplex case} \begin{defi}\label{Def:Elementary} Let $S \subset {\mathbb R}^{n+1}$ be a simplex with integer vertices and $(u_i)_{i\in\{1..n+1\}}$ be the collection of the vectors defined by the edges issuing from one of its vertices. The simplex $S$ is {\bf elementary} if $(\overline{u_i})_{i\in\{1..n+1\}}$ is a basis of $({\mathbb Z}_2)^{n+1}$, where $\overline{u_i}$ is the reduction modulo $2$ of $u_i$. \end{defi} \begin{prop}\label{Prop:Hyperplane-Polyhedral-Curvature} Let $f$ be a tropical polynomial such that $\triangle_f$ is an elementary simplex and that ${\mathcal E}(f)=Vert(\triangle_f)$.
Then, for any distribution of signs $\vartheta$, \begin{equation}\label{Eq:Hyperplane-Polyhedral-Curvature} \int _{V^{\mathbb R}_\vartheta(f)}\vert k^p \vert=\frac{\sigma_n}{2}. \end{equation} \end{prop} In particular, Proposition~\ref{Prop:Hyperplane-Polyhedral-Curvature} holds for primitive real tropical hypersurfaces (those whose Newton polytope is primitive) since a primitive simplex is elementary. The two corollaries below follow from Proposition~\ref{Prop:Hyperplane-Polyhedral-Curvature} and Lemma~\ref{Prop:By-Def}. \begin{cor}\label{Cor:Generic-Tropical-Curvature}Let $f$ be a generic tropical polynomial and let $\textbf{v}_1,\dots,\textbf{v}_l$ be the vertices of $V(f)$. If $\triangle_{In_{\textbf{v}_i}f}$ is an elementary simplex for all $i$ then, for any distribution of signs $\vartheta$, $$\int_{V^{\mathbb R}_\vartheta(f)}\vert k^p\vert=l \frac{\sigma_n}{2}.$$ \end{cor} \begin{cor}\label{Cor:Primitive-Tropical-Curvature}Let $f$ be a non-singular tropical polynomial. Then, for any distribution of signs $\vartheta$, $$\int_{V^{\mathbb R}_\vartheta(f)}\vert k^p\vert=(n+1)!Vol(\triangle_f)\frac{\sigma_n}{2}.$$ \end{cor} From Proposition~\ref{Prop:Hyperplane-Polyhedral-Curvature} and Proposition~\ref{totcurvR} we deduce the equality between the polyhedral curvature and the real total curvature of a non-singular real tropical hypersurface: \begin{prop}\label{Thm:Polhedral=Limit}Let $f$ be a non-singular tropical polynomial. Then, for any distribution of signs $\vartheta$, $$\int_{V^{\mathbb R}_\vartheta(f)}\vert k\vert=\int_{V^{\mathbb R}_\vartheta(f)}\vert k^p\vert.$$ \end{prop} \begin{proof} Both can be expressed as $(n+1)!Vol(\triangle_f)$ times the volume of ${\mathbb R}{\mathbb P}^{n}$ (see Corollary~\ref{Cor:Primitive-Tropical-Curvature} in the polyhedral case). \end{proof} \subsection{Proof of Proposition~\ref{Prop:Hyperplane-Polyhedral-Curvature}} We will use the following proposition, which essentially follows from Itenberg's Proposition 3.1 in \cite{It97}.
Recall that ${\mathbb R} V^{\mathbb R}_\vartheta (f)= {\mathbb R} V^{\mathbb R}_{-\vartheta} (f)$ so for our study we might as well consider sign distributions up to total inversion of signs. We will denote by $\mathcal D_{{\mathcal E}(f)}$ the set of sign distributions on ${\mathcal E}(f)$ up to simultaneous change of all signs. \begin{prop}\label{Prop:Transitivity} Let $S$ be an elementary simplex. The group $({\mathbb Z}_2)^{n+1}$ acts transitively on $\mathcal D_{Vert(S)}$ via the maps $S_z$ (see Subsection~\ref{Subsec:Real-Tropical-Hypersurfaces}). \end{prop} \begin{proof} In the notation of Definition~\ref{Def:Elementary}, the $\overline{u_i}$'s form a basis of $({\mathbb Z}_2)^{n+1}$. Let $(\overline{z_i})_{i=1..n+1}$ be the dual basis. Then for each vertex $v_i$ of $S$, \begin{itemize} \item either $S_{\overline{z_i}}(\vartheta)(v_i)=- \vartheta(v_i)$ and $S_{\overline{z_i}}(\vartheta)(v_j)= \vartheta(v_j)$ for $v_j \neq v_i$ \item or $S_{\overline{z_i}}(\vartheta)(v_i)= \vartheta(v_i)$ and $S_{\overline{z_i}}(\vartheta)(v_j)= -\vartheta(v_j)$ for $v_j \neq v_i$. \end{itemize} In both cases, up to a global inversion of signs, $S_{\overline{z_i}}$ changes the sign at the single vertex $v_i$; since such elementary changes connect any two sign distributions, the action on $\mathcal D_{Vert(S)}$ is transitive. \end{proof} We will prove that the curvature cones defined by a real tropical hypersurface dual to an elementary simplex $S$ give rise to a partition of a half-space, which yields the result. Since we are considering the sign distributions up to total inversion of signs, we can assume that for one vertex $v_0$ of $S$, $S_z(\vartheta)(v_0)=-1$ for all $z \in {{\mathbb Z}_2}^{n+1}$. By Proposition~\ref{Prop:Transitivity} and Remark~\ref{Rem:Curvature-cone}, the cones we need to consider are exactly those corresponding to all distributions of signs $\varphi$ on $Vert(S)$ such that $v_0$ carries a minus sign. Let us denote by $C_\varphi$ the curvature cone corresponding to such a distribution $\varphi$. Let us prove that the curvature cones $C_\varphi$ naturally define a fan which covers a half-space.
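Before the general argument, the covering claim can be sanity-checked numerically in the simplest case $n=1$, where the curvature cones are plane angles generated by the edge vectors oriented from $-$ to $+$ (Remark~\ref{Rem:Curvature-cone}). The following sketch is not part of the proof, and the function names are ours; it enumerates the sign distributions with $v_0$ carrying a minus sign for the Newton triangle of a tropical line and checks that the solid angles sum to $\sigma_1/2 = \pi$.

```python
import itertools
import math

def curvature_angle(verts, signs):
    """Angular measure of the curvature cone for a triangle in Z^2.

    verts: the three integer vertices; signs: a sign (+1/-1) per vertex.
    Following Remark Rem:Curvature-cone, the cone is generated by the
    vectors supported by edges whose endpoints carry different signs,
    oriented from the '-' vertex to the '+' vertex.
    """
    w = []
    for i, j in itertools.combinations(range(len(verts)), 2):
        if signs[i] != signs[j]:
            a, b = (i, j) if signs[i] < 0 else (j, i)
            w.append(tuple(q - p for p, q in zip(verts[a], verts[b])))
    # for a triangle with non-constant signs exactly two such edge
    # vectors occur; the solid angle of the cone they generate is the
    # angle between them
    (x1, y1), (x2, y2) = w
    return math.acos((x1 * x2 + y1 * y2) /
                     (math.hypot(x1, y1) * math.hypot(x2, y2)))

# Newton triangle of the tropical line; v0 = (0,0) is fixed with a
# minus sign, and the empty (all-minus) distribution is skipped.
verts = [(0, 0), (1, 0), (0, 1)]
total = sum(curvature_angle(verts, (-1,) + s)
            for s in itertools.product((-1, 1), repeat=2) if 1 in s)
# total equals pi, i.e. vol(S^1)/2, as the proposition predicts
```

The three angles obtained ($\pi/4$, $\pi/4$, $\pi/2$) are exactly the interior angles of the Newton triangle, in accordance with the figure captions below.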
To each vertex $v$ of $S$ one associates its opposite facet $F_v$, the vectorial hyperplane $H_v$ parallel to $F_v$ and a vector $n_v$ normal to $F_v$ pointing from $F_v$ to the interior of $S$. For a vertex $v$ of $S$, let $H_{v}^-$ and $H_{v}^+$ be the half-spaces defined respectively by $\{x \in {\mathbb R}^{n+1}, n_{v}\cdot x \le 0\}$ and $\{x \in {\mathbb R}^{n+1}, n_{v}\cdot x \ge 0\}$. The key point is the following fact. \begin{lemma}\label{Lem:ConesAndHyperplanes0} For a sign distribution $\varphi$ (with $\varphi (v_0)=-1$), the curvature cone $C_\varphi$ is the intersection of the half-spaces $H_{v}^{\varphi(v)}$ for all $v \in Vert(S)$. Moreover it is enough to intersect only those $H_{v}^{\varphi(v)}$ such that $\varphi(Vert(F_v))={\mathbb Z}_2$. \end{lemma} \begin{proof} We need to prove that the cone $C_\varphi$ is defined by $\{x \in {\mathbb R}^{n+1}, \forall v \in Vert(S),\> \varphi(v)\, n_v\cdot x \ge 0\}$. \vspace*{1ex} Let us denote by $E=\{e_i\}$ the set of edges of $S$ whose vertices have different signs and by $W = \{w_i\}$ the set of vectors such that $w_i$ is supported by $e_i$ and oriented from $-$ to $+$. For any $v\in Vert(S)$, $W \subset H_{v}^{\varphi(v)}$. Indeed, if $e_i$ is not in the face $F_v$ of $S$ opposite to $v$, the vector $w_i$ points towards the (affine) half-space containing $S$ if $\varphi(v)$ is $+$ and towards the other half-space determined by $F_v$ if $\varphi(v)$ is $-$. Thus $C_\varphi \subset \cap_{v\in S} H_{v}^{\varphi(v)}$. \vspace*{1ex} Let us prove that each facet of $C_\varphi$ is parallel to a facet of $S$ and thus included in some $H_v$, which leads to the equality in the above inclusion. \vspace*{1ex} Each facet of $C_\varphi$ is a cone generated by a subset $Y$ of $W$ whose linear span $<Y>$ is of dimension $n$ and such that all vectors of $W\setminus Y$ are on the same side of $<Y>$. Let $Y$ be a subset of $W$, $E_Y$ be the corresponding subset of $E$ and assume that $\dim <Y> =n$ and $<Y>$ is included in no $H_v$.
Let us prove that the cone generated by $Y$ is not a face of $C_\varphi$. Each vertex of $S$ belongs to an edge of $E_Y$ (otherwise $<Y>$ would be parallel to a facet of $S$). Let $Vert(S)^+$ (resp. $Vert(S)^-$) be the set of vertices of $S$ with $+$ (resp. $-$) signs. If either $\# Vert(S)^+$ or $\# Vert(S)^-$ is $1$ then $C_\varphi$ is just the cone with apex a vertex $v$ over its opposite face $F_v$ and its facets are cones on facets of $F_v$. Let us then assume that $\# Vert(S)^+ \ge 2$ and $\# Vert(S)^-\ge 2$. If every pair of vertices in $Vert(S)$ were connected by a chain of edges in $E_Y$, then we would have $\dim <Y> = n+1$, which would contradict the hypothesis. Thus the edges in $E_Y$ split in several connected components $E_Y^i$. Each one contains at least one element of $ Vert(S)^-$ and one element of $Vert(S)^+$. The affine span $\Aff(E_Y^i)$ of $E_Y^i$ is just the affine span of the vertices it contains, thus it is the affine span of the corresponding face of $S$. Each $\Aff(E_Y^i)$ is parallel to $<Y>$ which is of codimension one. Since $E_Y$ covers all vertices of $S$, the $\Aff(E_Y^i)$ cannot all lie in the same affine hyperplane parallel to $<Y>$. Then an affine hyperplane parallel to $<Y>$ separates the affine spans $\Aff(E_Y^1)$ and $\Aff(E_Y^2)$ of two connected components $E_Y^1$ and $E_Y^2$. Let us pick two vertices with minus sign ${v_1}^-$ and ${v_2}^-$ respectively in $E_Y^1$ and $E_Y^2$. Let $w_1$ (resp. $w_2$) be a vector in $W\setminus Y$ having origin $v_1^-$ (resp. $v_2^-$) and extremity a vertex in $E_Y^2$ (resp. in $E_Y^1$). The connected components $E_Y^1$ and $E_Y^2$ being separated by a hyperplane parallel to $<Y>$, the vectors $w_1$ and $w_2$ are not on the same side of $<Y>$ and $Y$ does not generate a facet of $C_\varphi$.
\vspace*{1ex} Thus each facet of $C_\varphi$ is contained in one of the hyperplanes $H_v$ and, since we already have that $C_\varphi\subset \cap_{v\in S} H_{v}^{\varphi(v)}$, it follows that $C_\varphi=\cap_{v\in S} H_{v}^{\varphi(v)}$. \vspace*{1ex} It is enough to intersect only those $H_{v}^{\varphi(v)}$ such that $\varphi(Vert(F_v))={\mathbb Z}_2$. Indeed, when the signs of all the vertices of a facet are the same, the intersection of $C_\varphi$ with $H_{v}$ is the origin. \end{proof} \begin{rem} In the proof of Lemma~\ref{Lem:ConesAndHyperplanes0} one can easily see that, if $\dim <Y> =n$ and $<Y>$ is included in no $H_v$, $E_Y$ has exactly two connected components. Indeed, the affine spans $\Aff(E_Y^i)$ are just the spans of pairwise disjoint faces of $S$. Let $m$ be the number of connected components of $E_Y$. Each $E_Y^i$ contains $\dim \Aff(E_Y^i) +1$ vertices and $\sum_{i=1}^{m} \dim \Aff(E_Y^i)=n$. But the number of vertices in $S$ is $n+2$, thus $\sum_{i=1}^{m} \left(\dim \Aff(E_Y^i) +1\right)=n+2$ and $m \le 2$. \end{rem} \begin{lemma}\label{Lem:ConesAndHyperplanes} Consider the union $A_{S}\> = \> \cup_{v\in Vert(S)} H_v$ of all linear hyperplanes $H_v$ and the collection of curvature cones $\mathcal C = (C_\varphi)_{\{\varphi \vert \varphi(v_0)=-1 \mbox{ and }\varphi(Vert(S))={\mathbb Z}_2 \}}$. The cones in $\mathcal C$ are precisely the maximal dimensional closed cones in $H_{v_0}^-$ defined by $A_{S}$. \end{lemma} \begin{proof} By Lemma~\ref{Lem:ConesAndHyperplanes0}, for a sign distribution $\varphi$ (with $\varphi (v_0)=-1$), the curvature cone $C_\varphi$ is the intersection of the half-spaces $H_{v}^{\varphi(v)}$ for all $v \in Vert(S)$. Any cone $D$ which is the closure of a connected component of the complement of $A_{S}\cap H_{v_0}^-$ in $ H_{v_0}^-$ is of the form $C_\varphi$.
Indeed it is defined by a choice of a side for each hyperplane $H_v$ {\it i.e.}, by the choice of the sign of the scalar product of vectors in the interior of $D$ with $n_v$ for each $v \in Vert(S)$. The sign distribution $\varphi$ is then given by $\varphi(v)=sign(n_v\cdot x)$ for any $x$ in the interior of $D$. Indeed, setting a sign on a vertex of $S$ amounts to choosing on which side of $H_v$ are all the vectors not in $H_v$ generating $C_\varphi$. The sign $-$ corresponds to pointing from $F_v$ to the exterior of $S$ and $+$ from $F_v$ to the interior of $S$. (Of course, cones defined by a choice of side for each of the $n+2$ hyperplanes $H_v$ can sometimes be reduced to the origin; this corresponds exactly to a sign distribution $\varphi$ on $Vert(S)$ which does not surject onto ${\mathbb Z}_2$, {\it i.e.}, to an empty orthant on the real tropical variety side.) \end{proof} Thus the closed cones $C_\varphi$ clearly cover $H_{v_0}^-$. (A vector $x$ in $H_{v_0}^-$ not belonging to one of the $H_v$'s is in $C_\varphi$ if and only if for all $v \in Vert(S)$, $\varphi(v)=sign(n_v\cdot x)$.) Moreover, by Lemma~\ref{Lem:ConesAndHyperplanes} they realise a subdivision of $H_{v_0}^-$. So, since by Proposition~\ref{Prop:Transitivity} we get all possible sign distributions such that $\varphi (v_0)=-1$, we have proved that $\int _{V^{\mathbb R}_\vartheta(f)}\vert k^p \vert=\frac{vol(S^n)}{2}=\frac{\sigma_n}{2}$. \qed \begin{exa}\label{Ex:Total-Curvature-Real-Line} In Figure~\ref{Fig:example} and Figure~\ref{Fig:Line-Total-Curvature} we illustrate Proposition~\ref{Prop:Hyperplane-Polyhedral-Curvature} in the trivial case of a real tropical line. We depict the angles formed by the vectors generating the curvature cones, which in this case are the angles of the Newton triangle.
\end{exa} \begin{figure} [htbp] \begin{center} \resizebox{\textwidth}{!}{\input{courbure-totale-droite-fig4p.pstex_t}} \caption {In the case of a tropical curve ($n=1$), Proposition~\ref{Prop:Hyperplane-Polyhedral-Curvature} follows from the fact that the sum of the interior angles of a triangle is equal to $\pi$.} \label{Fig:example} \end{center} \end{figure} \begin{figure} [htbp] \begin{center} \resizebox{\textwidth}{!}{\input{courbure-totale-droitep.pstex_t}} \caption{The curvature cones for the real line cover $H_{v_0}^-$. The symmetric copies of the non-empty quadrants are shown on the left and the corresponding sign distributions at the vertices of the Newton polygon on the right. The bottom left picture shows how the curvature cones cover a half-plane.} \label{Fig:Line-Total-Curvature} \end{center} \end{figure} \section{Complement}\label{Sec:Complement} \subsection{Tropical lower bound}\label{Subsec:T-Varieties} In a forthcoming paper, the second author studies, using tropical geometry, the limit when $t$ goes to zero of the real total curvature of a family ${\mathbb R} V(F_t)$ of real algebraic hypersurfaces. This study allows one to give a lower bound on this limit, for any polynomial $F\in{\mathbb K}_{\mathbb R}[x]$ realising a generic tropical hypersurface. This bound depends only on the tropicalisation of $F$ and will be called the \textit{tropical bound}.\\ Then, if $k$ is the classical curvature function, $$\lim_{t\rightarrow 0}\int_{{\mathbb R} V(F_t)}\vert k\vert dv$$ will be bounded from above by Risler's complex bound (see Inequality~(\ref{ineqcurv})) and from below by the tropical bound. A polynomial $F\in{\mathbb K}_{\mathbb R}[x]$ is called \textit{maximal} with respect to the real total curvature if Risler's upper bound is sharp.
In other words, a polynomial $F\in{\mathbb K}_{\mathbb R}[x]$ is maximal with respect to the real total curvature if $$\lim_{t\rightarrow 0}\int_{{\mathbb R} V(F_t)}\vert k\vert dv=\frac{\sigma_n}{\sigma_{2n}}\int_{V(F_t)}\vert K(x(t))\vert dv.$$ The idea behind the construction of the tropical bound is to look for the valuation of points in $V(F)\cap ({\mathbb K}_{\mathbb R}^*)^{n+1}$ that \textit{concentrate} the real total curvature: a point $x\in V(F)\cap ({\mathbb K}_{\mathbb R}^*)^{n+1}$ concentrates the real total curvature if for any family of neighbourhoods $\{U_t\}_{0<t\ll 1}$ of the family of points $x(t)$, $$\lim_{t\rightarrow 0}\int_{{\mathbb R} V(F_t)\cap U_t}\vert k\vert dv\geq \frac{\sigma_n}{2}.$$ Via tropical methods, the second author gives a lower bound for the number of points in $V(F)$ that concentrate the real total curvature, and the tropical bound arises as a direct consequence. Using this study, an infinite family of polynomials in ${\mathbb K}_{\mathbb R}[x]$ whose tropical bound is equal to Risler's complex bound is constructed. This is a tropical proof of the following theorem:\\ \begin{thm}\label{Thm:Lucia} For any $d\in{\mathbb N}$ and any $n\in{\mathbb N}$, there exist real polynomials of degree $d$ in ${\mathbb K}[x_1,\dots,x_{n+1}]$ maximal with respect to the real total curvature. \end{thm} One deduces from this theorem a tropical proof of Orevkov's observation (see \cite{orevkov}) about the sharpness (up to any $\epsilon>0$) of Risler's complex bound for affine real algebraic hypersurfaces. In Viro's patchworking language, this result has also been proved in \cite{lopezdemedrano}.
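Schematically, and using only the statements already made above, the two bounds sandwich the limiting real total curvature:

```latex
\[
\mbox{(tropical bound)}
\;\leq\;
\lim_{t\rightarrow 0}\int_{{\mathbb R} V(F_t)}\vert k\vert \, dv
\;\leq\;
\frac{\sigma_n}{\sigma_{2n}}\int_{V(F_t)}\vert K\vert \, dv ,
\]
```

and maximality of $F$ means precisely that the right-hand inequality is an equality.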
\subsection{Gauss-Bonnet}\label{Subsec:Gauss-Bonnet} Let $f$ be a non-singular tropical polynomial with Newton polytope $\triangle_f$, $V(f)$ the tropical variety it defines and $\triangle_f = \cup \triangle_i$ its dual (primitive) triangulation.\\ \begin{prop} Let $V(f)$ be a non-singular tropical hypersurface with Newton polytope $\triangle_f$, and $V \subset ({\mathbb C}^*)^{n+1}$ be a generic complex hypersurface with Newton polytope $\triangle_f$. Then: \begin{equation} \label{G-B} \int_{V(f)} K = a_n \frac{\sigma_{2n+1}}{\sigma_1} \chi (V) \end{equation} where $\chi(V)$ stands for the {\it Euler characteristic} of $V$. \end{prop} {\bf Remarks} \par a) If $M$ is a compact real variety of even dimension $n = 2m$, the classical Gauss-Bonnet formula is \[ \int_{M} k= (-1)^m \frac{\sigma_n}{2} \chi(M). \] \par b) For $n = 1$ (curves), (\ref{G-B}) gives: \[ \int_{V(f)} K = 2\pi \chi(V) = - 4 \pi vol(\triangle_f). \] \par c) For $n = 2$ (surfaces), (\ref{G-B}) gives: \[\int_{V(f)} K = 4 \frac{\pi^2}{ 3} \chi (V) = 8 \pi^2 vol(\triangle_f). \] \begin{proof} (of (\ref{G-B})). \\If $V \subset ({\mathbb C}^*)^{n+1}$ is a generic hypersurface with Newton polytope $\triangle_f$,\break one has $\chi(V)= (-1)^n (n+1)! \, vol(\triangle_f)$ (Khovanskii's formula, cf. \cite{Hov78}). The tropical variety $V(f)$ is by hypothesis dual to a primitive triangulation $\triangle_f = \cup \triangle_i$ with $vol \triangle_i = 1/ (n+1)!$. \\ Let $r$ be the number of $\triangle_i$'s ({\it i.e.}, the number of vertices of $V(f)$); then one has: \[\int_{V(f)} K = r (-1)^n a_n vol( {\mathbb C} {\mathbb P}^n) = r (-1)^n a_n \frac{\sigma_{2n + 1}}{\sigma_1} \] by Definition \ref{Def:Complex-Tropical-Curvature}.\\ This proves (\ref{G-B}), because $vol(\triangle_f) = r \times (1/(n+1)!)$, and then $(-1)^n r = \chi(V)$.
\end{proof}
\section{Introduction} Transition metals with nearly half-filled bands, such as Cr,~\cite{Faw88} Mn,~\cite{Yam70,Yam71,Men57} $\gamma$-Fe,~\cite{Tsu89,Qia01} and their alloys,~\cite{Bac57,Huc70,End71,Yam00,Akb98,Fis99} show complex magnetic structures due to competing magnetic interactions. The determination of their magnetic structures has long been a challenging problem in both theory and experiment in the study of metallic magnetism.~\cite{NATO98} Of these, iron in the fcc phase ($\gamma$-Fe) has received special attention, since it is located at a crossover point from the ferromagnetic to the antiferromagnetic state on the periodic table and has been suggested to show spin density wave (SDW) states. Early experimental data on $\gamma$-Fe precipitates in a Cu matrix~\cite{Abr62} and the extrapolation from those on $\gamma$-FeMn alloys~\cite{End71} suggested that bulk $\gamma$-Fe shows the first-kind antiferromagnetic (AF) structure with wave vector $\hat{\mib{Q}}=(0,0,1)2\pi/a$. Here $a$ denotes the lattice constant. Later, the neutron diffraction measurements~\cite{Tsu89} on cubic $\gamma$-Fe$_{100-x}$Co$_{x}$ ($x<4$) alloy precipitates in Cu showed magnetic satellite peaks at wave vector $\mib{Q}=(0.1,0,1)2\pi/a$. The magnetic structure was suggested to be a helical SDW, but was not determined precisely because of the high symmetry of the crystal structure. On the other hand, thin Fe films epitaxially grown on Cu were reported to show simple ferromagnetism~\cite{Gra80, Pes87} or the coexistence of a low-spin AF state and a high-spin ferromagnetic state~\cite{Mac88,Don94,Fre98}, depending on the film thickness. Recent studies~\cite{Qia01} on Fe films suggested, however, that the fcc phase of Fe is formed only for film thicknesses of 5 to 11 monolayers and that an SDW is realized in these films. Theoretically, the magnetic structure of bulk cubic $\gamma$-Fe was investigated intensively by means of ground-state electronic-structure calculations.
In most of the calculations,~\cite{Mry91, Uhl92, Kor96, Byl98, Byl991, Byl992, Kno00} the 1$\mib{Q}$ helical SDW structure was assumed for the bulk cubic $\gamma$-Fe and the wave vector that minimizes the total energy was determined. The obtained wave vectors, however, are different among different theoretical approaches. The linear muffin-tin orbital (LMTO) calculations by Mryasov \textit{et al.}~\cite{Mry91} and the augmented spherical wave (ASW) calculations by Uhl \textit{et al.}~\cite{Uhl92}, both based on the local density approximation (LDA), yielded ground-state wave vector $\mib{Q}=(0,0,0.6)2\pi/a$ for lattice constant $a=6.8$ a.u. On the other hand, K\"{o}rling and Ergon~\cite{Kor96} performed the LMTO calculations using the generalized gradient approximation (GGA), and found the energy minimum at $\mib{Q}=(0.5,0,1)2\pi/a$. Furthermore, the recent full potential calculations suggest the possibility of other ground-state wave vectors. Bylander and Kleinman~\cite{Byl98,Byl991,Byl992} performed full potential calculations using the ultrasoft pseudopotential and found the energy minimum at $\mib{Q}=(0,0,0.55)2\pi/a$. By means of a modified ASW method, Kn\"{o}pfle \textit{et al.}~\cite{Kno00} found the energy minimum at $\mib{Q}=(0.15,0,1)2\pi/a$ for lattice constant $a \leq 6.75$ a.u. Although the wave vector obtained in their calculations is close to the experimental value, more recent ground-state calculations~\cite{Sjo02} based on the full potential augmented-plane-wave method, which accounts for more complex magnetic structures, show that the 1$\mib{Q}$ helical SDW is not the stable state of $\gamma$-Fe. The possibility of magnetic structures other than the 1$\mib{Q}$ helical SDW has been suggested by Fujii \textit{et al.}~\cite{Fuj91} on the basis of the LMTO calculations and the von Barth-Hedin potential. 
They compared the energies among the first-kind AF structure with $\hat{\mib{Q}}=(1,0,0)2\pi/a$, the commensurate 2$\hat{\mib{Q}}$ structure with $\hat{\mib{Q}}= (1,0,0)2\pi/a$ and $(0,1,0)2\pi/a$, and the 3$\hat{\mib{Q}}$ structure with $\hat{\mib{Q}}=(1,0,0)2\pi/a$, $(0,1,0)2\pi/a$, and $(0,0,1)2\pi/a$. They concluded that the 3$\hat{\mib{Q}}$ structure is the most stable at $a=6.8$ a.u. (experimental lattice constant). Antropov and co-workers~\cite{Ant95,Ant96} compared the energies of various magnetic structures, including the structures obtained from the spin-dynamics calculations with 32 atoms in a unit cell. Using the GGA potential and the $spdf$ basis, they claimed that the 3$\hat{\mib{Q}}$ structure superposed with a helical SDW with $\mib{Q}=(0,0,1/6)2\pi/a$ is the most stable for lattice constant $a=6.6$ a.u., although it is nearly degenerate with a noncollinear eight-atom structure and helical structure. Kakehashi and coworkers~\cite{Kak98,Kak99} have recently developed a molecular-dynamics (MD) approach which automatically determines the magnetic structure in a given unit cell at a finite temperature. Applying the approach to $\gamma$-Fe with 500 atoms in a unit cell ($5\times 5\times 5$ fcc lattice),~\cite{Kak99} they found an incommensurate multiple spin density wave (MSDW) state whose principal terms consist of 3$\mib{Q}$ MSDW with $\mib{Q}=(0.6,0,0)2\pi/a$, $(0,0.6,0) 2\pi/a$, and $(0,0,0.6)2\pi/a$. Subsequently, they performed the ground-state electronic-structure calculations~\cite{Kak02} on the basis of the first-principles tight-binding LMTO method and the GGA potentials. Comparing the energies of various magnetic structures, including the 1$\mib{Q}$ helical SDW and the MSDW found in the MD calculations, they concluded that the MSDW becomes the most stable state for lattice constants $6.8 \leq a \leq 7.0$ a.u. 
More recently, Sj\"{o}stedt and Nordstr\"{o}m~\cite{Sjo02} implemented density-functional calculations based on the alternative linearization of the full-potential augmented-plane-wave method. They compared various competing collinear and noncollinear magnetic structures: ferromagnetic, commensurate 1$\hat{\mib{Q}}$, 2$\hat{\mib{Q}}$, and 3$\hat{\mib{Q}}$ structures, and double-layered antiferromagnetic, as well as incommensurate helical structures. For lattice constant $a=6.82$ a.u., a collinear double-layered AF state was found to be the most stable structure. The results of the ground-state calculations described above indicate that we have not yet reached a solid conclusion as to the ground-state magnetic structure of cubic $\gamma$-Fe. In particular, no agreement between theory and experiment has yet been obtained. The discrepancy between theory and experiment can be ascribed to both sides. On the experimental side, the difficulty originates in the fact that bulk $\gamma$-Fe is stable only at high temperatures above the Curie temperature of $\alpha$-Fe. At low temperatures, $\gamma$-Fe can be stabilized either in the form of small precipitates~\cite{Tsu89} in Cu or in thin Fe films~\cite{Gra80,Pes87,Mac88,Don94,Fre98,Qia01} grown on Cu. Thus, it is a subtle problem whether or not the SDW structure observed at low temperatures is identical with the one that would be realized in bulk cubic $\gamma$-Fe. Furthermore, Tsunoda~\cite{Tsu89} emphasized in his detailed analyses that the 1$\mib{Q}$ helical SDW is only one of the possible structures with the same wave vector, because of the high symmetry of the crystal structure of $\gamma$-Fe. On the theoretical side, the discrepancy should partly be ascribed to the various approximation schemes of the potential used in the density-functional calculations.
As stated above, the equilibrium wave vectors for the 1$\mib{Q}$ helical SDW predicted by the LDA calculations~\cite{Mry91,Uhl92} are different from those predicted by the GGA calculations.~\cite{Kor96} Similarly, the results of the atomic sphere approximation calculations~\cite{Kor96} are not in agreement with those of full-potential calculations.~\cite{Byl98,Byl991,Byl992,Kno00,Sjo02} Through these calculations, it has become clear that there are various local minimum states in $\gamma$-Fe that are close in energy. In this situation, it is worthwhile to approach this problem from a phenomenological, but more general point of view so that one can gain useful information about possible magnetic structures of $\gamma$-Fe from only the symmetry of the system. The purpose of the present paper is to carry out such an analysis by means of a Ginzburg-Landau type of theory and to clarify possible scenarios for the magnetic structure of $\gamma$-Fe from a phenomenological point of view. Since the observed magnetic moment~\cite{Abr62} of $\gamma$-Fe and calculated ones~\cite{Fuj91} near the equilibrium volume are rather small ($\lesssim 1 \mu_{\text{B}}$), we will expand, in \S 2, a phenomenological free energy with respect to magnetic moments, and derive a general expression of the free energy up to the fourth order. In \S\S 3 and 4, we discuss commensurate and incommensurate SDW structures, respectively, and obtain the magnetic phase diagrams in the space of expansion coefficients. The results of a preliminary analysis for this part have been published.~\cite{Uch03,Uch04} In the last section, we discuss possible scenarios for the magnetic structure of $\gamma$-Fe that are consistent with the present theory.
In contrast to the previous Landau-type phenomenological theories applied to the incommensurate 1$\mib{Q}$ SDWs and their harmonics in Cr~\cite{Wal80,Zhu86} and those applied to the commensurate MSDWs in $\gamma$-Mn alloys,~\cite{Jo86} the present phenomenological free energy allows for incommensurate MSDWs with both linear and helical polarizations, in addition to the commensurate MSDWs. In this respect, it should be emphasized that, so far, there has been no discussion of the possibility of helical MSDW states having the wave vectors found in the $\gamma$-FeCo precipitates in Cu, in spite of the fact that such MSDWs are also consistent with the experimental observation~\cite{Tsu89} of cubic $\gamma$-Fe. In the present analysis, we have shown that each 3$\mib{Q}$ state becomes the most stable state among the corresponding 1$\mib{Q}$, 2$\mib{Q}$, and 3$\mib{Q}$ states for both commensurate and incommensurate wave vectors. This relation leads to magnetic phase diagrams that indicate the possibility of various 3$\mib{Q}$ MSDW states in fcc transition metals, which are found to be consistent with the previous results: the commensurate 3$\hat{\mib{Q}}$ state with $\hat{Q}=2\pi/a$ for lattice constants $6.5 \le a \le 6.8$ a.u. in the ground-state calculations,~\cite{Fuj91,Kak02} and the incommensurate linearly polarized 3$\mib{Q}$ state for $a \ge 6.8$ a.u. in the MD calculations.~\cite{Kak02} The phase diagram also suggests the possibility of the incommensurate helically polarized 3$\mib{Q}$ state, which has not yet been investigated in the ground-state calculations. \section{Phenomenological Free Energy} We consider a free energy expansion with respect to the local magnetic moments on the fcc lattice. Because the magnetic system in the absence of external magnetic fields has time-reversal symmetry, the free energy must include only even-order terms with respect to magnetic moments.
The free energy per lattice site expanded up to the fourth order in magnetic moments can then be written as \begin{equation} \begin{split} &f = \frac{1}{N^2} \sum_{l,l^{\prime}}^{N} \sum_{\alpha,\beta}^{x,y,z} a_{\alpha\beta}(l,l^{\prime})m_{l\alpha}m_{l^{\prime}\beta} \\ &+ \frac{1}{N^4}\sum_{l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime}}^{N} \sum_{\alpha,\beta,\gamma,\delta}^{x,y,z} b_{\alpha\beta\gamma\delta} (l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime}) m_{l\alpha}m_{l^{\prime}\beta}m_{l^{\prime\prime}\gamma}m_{l^{\prime\prime\prime}\delta}. \label{GLfree} \end{split} \end{equation} \noindent Here, $N$ is the number of lattice sites and $m_{l\alpha}$ denotes the $\alpha$-component ($\alpha=x,y,z$) of a magnetic moment on the $l$-th site. $a_{\alpha\beta}(l,l^{\prime})$ and $b_{\alpha\beta\gamma\delta}(l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime})$ are the expansion coefficients of the second- and fourth-order terms, respectively. $\sum_{l,l^{\prime}}^{N}$($\sum_{l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime}}^{N}$) denotes the summations with respect to the site indices $l$ and $l^{\prime}$ ($l$,$l^{\prime}$,$l^{\prime\prime}$, and $l^{\prime\prime\prime}$) over all integer values from 1 to $N$. $\sum_{\alpha,\beta}^{x,y,z}$ ($\sum_{\alpha,\beta,\gamma,\delta}^{x,y,z}$) denotes the summations with respect to the component indices $\alpha$ and $\beta$ ($\alpha$, $\beta$, $\gamma$, and $\delta$) over $x$, $y$, and $z$. Note that free energy (\ref{GLfree}) is written in the most general way in which each pair (quartet) of local magnetic moments at different sites is coupled via a coefficient independent of those for other pairs (quartets) of local magnetic moments. Therefore, free energy (\ref{GLfree}) can describe the energy costs when the local magnetic moments change their magnitudes and directions, and hence can describe both the incommensurate and commensurate SDWs in itinerant magnets. 
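Because only even-order terms appear, free energy (\ref{GLfree}) is invariant under the global time-reversal operation $\mib{m}_l \to -\mib{m}_l$ for any choice of expansion coefficients. A minimal numerical sketch of this invariance (the coefficient tensors and lattice size below are purely illustrative, not derived from any microscopic model):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4  # number of lattice sites (illustrative)

# Hypothetical expansion coefficients a_{alpha beta}(l,l') and
# b_{alpha beta gamma delta}(l,l',l'',l''') -- arbitrary random values.
a = rng.normal(size=(N, N, 3, 3))
b = rng.normal(size=(N, N, N, N, 3, 3, 3, 3))

def free_energy(m):
    """Evaluate the quadratic plus quartic free energy for moments m[l, alpha]."""
    f2 = np.einsum('lkab,la,kb->', a, m, m) / N**2
    f4 = np.einsum('lkpqabcd,la,kb,pc,qd->', b, m, m, m, m) / N**4
    return f2 + f4

m = rng.normal(size=(N, 3))
# Time reversal (m -> -m) leaves the even-order free energy unchanged.
assert np.isclose(free_energy(m), free_energy(-m))
```

Any odd-order term would break this equality, which is why such terms are excluded from the expansion.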
The same type of free energy expansion has been applied to the incommensurate SDWs of Cr.~\cite{Wal80,Zhu86} In the present paper, we apply free energy (\ref{GLfree}) to various SDW states that can be realized on the fcc transition metals, specifically, $\gamma$-Fe. Then the free energy must be invariant with respect to the symmetry operations for the magnetic moments on the fcc lattice: rotation $\mib{m}_{\mib{R}_l} \to \mathcal{R}(\mib{m}_{\mib{R}_l})$, inversion $\mib{m}_{\mib{R}_l} \to \mib{m}_{-\mib{R}_l}$, and translation $\mib{m}_{\mib{R}_l} \to \mib{m}_{\mib{R}_l+\mib{r}}$. Here, $\mib{R}_l$ is the position vector of the $l$-th site; $\mib{m}_{\mib{R}_l}\equiv\mib{m}_l$ is the magnetic moment at the $l$-th site. $\mathcal{R}$ denotes either the rotation C$_4$[100] or C$_3$[111], and $\mib{r}$ denotes an arbitrary lattice translation vector of the fcc lattice. The requirement that the free energy be invariant under these operations yields the following free energy (see Appendix A): \begin{multline} f = \frac{1}{N^2}\sum_{l,l^{\prime}}A(l,l^{\prime})\mib{m}_l\cdot\mib{m}_{l^{\prime}} \\ +\frac{1}{N^4}\sum_{l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime}} [B(l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime})\{\mib{m}_l\cdot\mib{m}_{l^{\prime}}\} \{\mib{m}_{l^{\prime\prime}}\cdot\mib{m}_{l^{\prime\prime\prime}}\} \\ + C(l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime})\sum_{(\alpha,\beta)}^{(y,z)(z,x)(x,y)} (m_{l\alpha}m_{l^{\prime}\alpha}m_{l^{\prime\prime}\beta}m_{l^{\prime\prime\prime}\beta} \\ +m_{l\beta}m_{l^{\prime}\beta}m_{l^{\prime\prime}\alpha}m_{l^{\prime\prime\prime}\alpha})]. 
\label{freefcc} \end{multline} \noindent Here, $A(l,l^{\prime})\equiv a_{xx}(l,l^{\prime})$, $B(l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime})\equiv b_{xxxx}(l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime})$, and $C(l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime})\equiv b_{yyzz}(l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime}) +b_{yzyz}(l,l^{\prime\prime},l^{\prime},l^{\prime\prime\prime})+b_{yzzy}(l,l^{\prime\prime},l^{\prime\prime\prime},l^{\prime}) -b_{xxxx}(l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime})$. \noindent The second-order terms with $A(l,l^{\prime})$ and the fourth-order terms with $B(l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime})$ in the free energy are isotropic since they are expressed in terms of the scalar products of magnetic moments. On the other hand, the fourth-order terms with the coefficients $C(l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime})$ are anisotropic. In the present paper, we restrict ourselves to the transition metals where the spin-orbit coupling effects are negligibly small, and thus neglect the anisotropic terms, i.e., we consider the case $C(l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime})=0$ in free energy (\ref{freefcc}). We now define the Fourier representation of the magnetic moment at $\mib{R}_l$ by \begin{equation} \mib{m}_l=\sum_{\mib{q}}^{\textrm{\scriptsize EBZ}} \mib{m}(\mib{q})e^{\text{i}\mib{q}\cdot\mib{R}_l}, \end{equation} \noindent where $\sum_{\mib{q}}^{\textrm{\scriptsize EBZ}}$ denotes a summation with respect to $\mib{q}$ over the extended first Brillouin zone (EBZ) of the fcc lattice, which is defined to include all the zone boundary points. This form, with the use of the EBZ, has the merit that one can treat the commensurate and incommensurate structures on the same footing.
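As a concrete illustration of this representation, a single commensurate component at $\hat{\mib{Q}}=(0,0,1)2\pi/a$ with a real amplitude reproduces the first-kind AF pattern of alternating (001) layers on the fcc lattice. A sketch (lengths in units of the lattice constant $a$; the four sites and the amplitude are illustrative choices):

```python
import numpy as np

a = 1.0
Q = np.array([0.0, 0.0, 1.0]) * 2 * np.pi / a

# A few fcc sites: cube corner and face centers of the conventional cell.
sites = np.array([[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]]) * a

mQ = np.array([0.5, 0.0, 0.0])  # real amplitude, transverse to Q (illustrative)
# m_l = m(Q) e^{iQ.R_l} + m(Q) e^{-iQ.R_l} = 2 m(Q) cos(Q.R_l)
m = 2 * mQ[None, :] * np.cos(sites @ Q)[:, None]

# Moments alternate in sign between the z = 0 and z = a/2 layers:
# the first-kind antiferromagnetic structure.
assert np.allclose(m[:2, 0], [1.0, 1.0])    # z = 0 layer
assert np.allclose(m[2:, 0], [-1.0, -1.0])  # z = a/2 layer
```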
The Fourier representation of the isotropic free energy is then given by \begin{multline} f = \sum_{\mib{q}}^{\textrm{\scriptsize EBZ}}A(\mib{q})|\mib{m}(\mib{q})|^2 +\sum_{\mib{K}}\sum_{\mib{q},\mib{q}^{\prime},\mib{q}^{\prime\prime},\mib{q}^{\prime\prime\prime}}B(\mib{q},\mib{q}^{\prime},\mib{q}^{\prime\prime},\mib{q}^{\prime\prime\prime}) \\ \times \{\mib{m}(\mib{q})\cdot\mib{m}(\mib{q}^{\prime})\}\{\mib{m}(\mib{q}^{\prime\prime})\cdot\mib{m}(\mib{q}^{\prime\prime\prime})\}, \label{fourierf} \end{multline} \noindent where the coefficients $A(\mib{q})$ and $B(\mib{q},\mib{q}^{\prime},\mib{q}^{\prime\prime},\mib{q}^{\prime\prime\prime})$ are defined by \begin{align} A(\mib{q}) &=\frac{1}{N}\sum_n A(n)e^{\text{i}\mib{q}\cdot\mib{R}_n} , \label{Aq} \\ B(\mib{q},\mib{q}^{\prime},\mib{q}^{\prime\prime},\mib{q}^{\prime\prime\prime}) &= \frac{1}{N^3}\sum_{n,n^{\prime},n^{\prime\prime}}B(n,n^{\prime},n^{\prime\prime}) \nonumber \\ \times & e^{\text{i}\mib{q}^{\prime}\cdot\mib{R}_n}e^{\text{i}\mib{q}^{\prime\prime}\cdot\mib{R}_{n^{\prime}}} e^{\text{i}\mib{q}^{\prime\prime\prime}\cdot\mib{R}_{n^{\prime\prime}}}\delta_{\mib{q}+\mib{q}^{\prime}+\mib{q}^{\prime\prime}+\mib{q}^{\prime\prime\prime},\mib{K}}. 
\label{Bqqqq} \end{align} \noindent In defining the coefficients $A(\mib{q})$ and $B(\mib{q},\mib{q}^{\prime},\mib{q}^{\prime\prime},\mib{q}^{\prime\prime\prime})$ in eqs.~(\ref{Aq}) and (\ref{Bqqqq}), we have introduced the relative coordinates $\mib{R}_{n}\equiv\mib{R}_{l^{\prime}}-\mib{R}_l$, $\mib{R}_{n^{\prime}}\equiv\mib{R}_{l^{\prime\prime}}-\mib{R}_l$, and $\mib{R}_{n^{\prime\prime}}\equiv\mib{R}_{l^{\prime\prime\prime}}-\mib{R}_l$, and have used the notations \begin{align} A(n) &\equiv A(l,l^{\prime})=A(\mib{R}_l,\mib{R}_{l^{\prime}})=A(0,\mib{R}_n), \label{Al} \\ B(n,n^{\prime},n^{\prime\prime}) &\equiv B(l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime}) =B(\mib{R}_l,\mib{R}_{l^{\prime}},\mib{R}_{l^{\prime\prime}},\mib{R}_{l^{\prime\prime\prime}}) \nonumber \\ & \text{\hspace{25mm}} =B(0,\mib{R}_n,\mib{R}_{n^{\prime}},\mib{R}_{n^{\prime\prime}}). \label{Bllll} \end{align} \noindent Here, the last equalities in eqs.~(\ref{Al}) and (\ref{Bllll}) result from the translational symmetry of the fcc lattice. Note that the inclusion of the fourth-order terms in free energy (\ref{GLfree}), and hence in eq.~(\ref{fourierf}), is essential for the present analysis since the second-order terms alone do not describe the MSDW states. As with any Landau-type theory, one should keep in mind the range of applicability of free energy expansion (\ref{fourierf}), which is valid for systems with small local magnetic moments. In the present paper, our main concern is the complex magnetic structures of $\gamma$-Fe which appear between the nonmagnetic state and the strong ferromagnetic state with increasing volume. In such a region, the magnitudes of the magnetic moments are relatively small. Even if this were not the case, we can apply the theory at any volume near the transition temperature where the local magnetic moments become small.
For these reasons, we apply free energy expansion (\ref{fourierf}) for the analysis of various SDW states in $\gamma$-Fe, i.e., the commensurate 1$\hat{\mib{Q}}$, 2$\hat{\mib{Q}}$, and 3$\hat{\mib{Q}}$ SDWs, and the incommensurate linearly and helically polarized SDWs with 1$\mib{Q}$, 2$\mib{Q}$, and 3$\mib{Q}$ wave vectors. On the basis of the analysis, one can draw a general and exact conclusion independent of the specific model or the potential in the first-principles calculations. \section{Commensurate SDW Structures} We investigate here the commensurate SDW structures whose magnetic moments are given by \begin{equation} \mib{m}_l=\sum_{n=1}^{3}[\mib{m}(\hat{\mib{Q}}_n)e^{\text{i}\hat{\mib{Q}}_n\cdot\mib{R}_l} +\mib{m}(\hat{\mib{Q}}_n)e^{-\text{i}\hat{\mib{Q}}_n\cdot\mib{R}_l}], \label{comml} \end{equation} \noindent with the set of equivalent wave vectors $\hat{\mib{Q}}_1=(1,0,0)(2\pi/a)$, $\hat{\mib{Q}}_2=(0,1,0)(2\pi/a)$, and $\hat{\mib{Q}}_3=(0,0,1)(2\pi/a)$. \noindent Here, $\mib{m}(\hat{\mib{Q}}_1)$, $\mib{m}(\hat{\mib{Q}}_2)$, and $\mib{m}(\hat{\mib{Q}}_3)$ are real and assumed to be orthogonal to each other: $ \mib{m}(\hat{\mib{Q}}_2)\cdot\mib{m}(\hat{\mib{Q}}_3)=\mib{m}(\hat{\mib{Q}}_3)\cdot\mib{m}(\hat{\mib{Q}}_1)= \mib{m}(\hat{\mib{Q}}_1)\cdot\mib{m}(\hat{\mib{Q}}_2)=0$. \noindent The commensurate MSDW with the form of eq.~(\ref{comml}) has been discussed in the previous ground-state electronic-structure calculations~\cite{Fuj91, Ant95, Kak02, Sjo02}. The free energy is given by \begin{multline} f_{\text{co}} = \sum_{i=1}^{3}[\tilde{A}_Q|\mib{m}(\hat{\mib{Q}}_i)|^2 +(B_{1Q}+\tilde{B}_{2Q})|\mib{m}(\hat{\mib{Q}}_i)|^4] \\ + \sum_{(i,j)}^{(2,3)(3,1)(1,2)}\tilde{B}_{1QQ} |\mib{m}(\hat{\mib{Q}}_i)|^2|\mib{m}(\hat{\mib{Q}}_j)|^2.
\label{freecom2} \end{multline} \noindent The coefficients $\tilde{A}_Q$, $B_{1Q}$, $\tilde{B}_{2Q}$, and $\tilde{B}_{1QQ}$ are expressed in terms of linear combinations of the coefficients \{$A(\mib{q})$\} and \{$B(\mib{q},\mib{q}^{\prime},\mib{q}^{\prime\prime},\mib{q}^{\prime\prime\prime})$\} in eqs.~(\ref{Aq}) and (\ref{Bqqqq}), with $\mib{q}$, $\mib{q}^{\prime}$, $\mib{q}^{\prime\prime}$, and $\mib{q}^{\prime\prime\prime}$ being chosen from $\pm\hat{\mib{Q}}_1$, $\pm\hat{\mib{Q}}_2$, and $\pm\hat{\mib{Q}}_3$. The full expressions of these coefficients are given in Appendix B. We see that free energy (\ref{freecom2}) depends only on the absolute squares of magnetic moments $|\mib{m}(\hat{\mib{Q}}_i)|^2$ ($i=1,2,3$). Thus the SDW structures described by this free energy are degenerate with respect to the directions of polarization. This degeneracy is partially removed when the anisotropic terms are included in the free energy. In the following, we investigate three commensurate magnetic structures, the first-kind AF structure, the 2$\hat{\mib{Q}}$ structure, and the 3$\hat{\mib{Q}}$ structure, on the basis of free energy (\ref{freecom2}). \subsection{First-kind antiferromagnetic structure} The equilibrium magnetic moment for the first-kind AF structure is obtained by minimizing free energy (\ref{freecom2}) with respect to $|\mib{m}(\hat{\mib{Q}}_1)|^2$ and setting $\mib{m}(\hat{\mib{Q}}_2)=\mib{m}(\hat{\mib{Q}}_3)=0$. We have \begin{equation} |\mib{m}(\hat{\mib{Q}}_1)|=\left[-\frac{\tilde{A}_Q} {2(B_{1Q}+\tilde{B}_{2Q})}\right]^{1/2}, \label{afm} \end{equation} \noindent under the condition \begin{equation} -\frac{\tilde{A}_Q}{2(B_{1Q}+\tilde{B}_{2Q})}>0. \label{afpositive} \end{equation} \noindent In order for the solution (\ref{afm}) to be thermodynamically stable, it is necessary that \begin{equation} \left. \frac{\partial^2f}{\partial \{|\mib{m}(\hat{\mib{Q}}_1)|^2\}^2} \right|_{(\ref{afm})} >0.
\label{afhes} \end{equation} \noindent Inequalities (\ref{afpositive}) and (\ref{afhes}) reduce to \begin{align} \tilde{A}_{Q} &< 0, \label{afst1} \\ B_{1Q}+\tilde{B}_{2Q} &> 0. \label{afst2} \end{align} \noindent Inequalities (\ref{afst1}) and (\ref{afst2}) yield the stability conditions for the first-kind AF structure. The equilibrium free energy is obtained by substituting eq.~(\ref{afm}) and $\mib{m}(\hat{\mib{Q}}_2)=\mib{m}(\hat{\mib{Q}}_3)=0$ into eq.~(\ref{freecom2}): \begin{equation} f_{\textrm{\scriptsize AF}}=-\frac{\tilde{A}_{Q}^2} {4(B_{1Q}+\tilde{B}_{2Q})}. \label{affree} \end{equation} \noindent The amplitude $M$ of the magnetic moment per site is given by \begin{equation} \begin{split} M^2 &\equiv \frac{1}{N}\sum_l\mib{m}_l\cdot\mib{m}_l \\ &= \sum_{\mib{q}, \mib{q}^{\prime}}^{\text{EBZ}}\mib{m}(\mib{q})\cdot\mib{m}(\mib{q}^{\prime}) \sum_{\mib{K}}\delta_{\mib{q}+\mib{q}^{\prime}, \mib{K}}. \label{mm} \end{split} \end{equation} \noindent In the commensurate case, it becomes \begin{equation} M^2 = 4(|\mib{m}(\hat{\mib{Q}}_1)|^2+|\mib{m}(\hat{\mib{Q}}_2)|^2+|\mib{m}(\hat{\mib{Q}}_3)|^2). \label{comm} \end{equation} \noindent Substituting eq.~(\ref{afm}) and $\mib{m}(\hat{\mib{Q}}_2)=\mib{m}(\hat{\mib{Q}}_3)=0$ into eq.~(\ref{comm}), we have the amplitude $M_{\text{AF}}$ of the magnetic moment for the first-kind AF structure: \begin{equation} M_{\text{AF}}^2 = -\frac{2\tilde{A}_Q}{B_{1Q}+\tilde{B}_{2Q}}. 
\end{equation} \subsection{2$\hat{\mib{Q}}$ structure} The equilibrium magnetic moments for the 2$\hat{\mib{Q}}$ structure are obtained by minimizing free energy (\ref{freecom2}) with respect to $|\mib{m}(\hat{\mib{Q}}_1)|^2$ and $|\mib{m}(\hat{\mib{Q}}_2)|^2$ and setting $\mib{m}(\hat{\mib{Q}}_3)=0$. We have \begin{align} \tilde{A}_Q &+ 2(B_{1Q}+\tilde{B}_{2Q})|\mib{m}(\hat{\mib{Q}}_1)|^2 +\tilde{B}_{1QQ}|\mib{m}(\hat{\mib{Q}}_2)|^2=0 \label{2qceq1}, \\ \tilde{A}_Q &+ 2(B_{1Q}+\tilde{B}_{2Q})|\mib{m}(\hat{\mib{Q}}_2)|^2 +\tilde{B}_{1QQ}|\mib{m}(\hat{\mib{Q}}_1)|^2=0 \label{2qceq2}. \end{align} \noindent When \begin{equation} D_{2\hat{Q}} \equiv 4(B_{1Q}+\tilde{B}_{2Q})^2-\tilde{B}_{1QQ}^2 \neq 0, \label{2qcdet} \end{equation} \noindent eqs.~(\ref{2qceq1}) and (\ref{2qceq2}) are solved as \begin{equation} |\mib{m}(\hat{\mib{Q}}_1)|=|\mib{m}(\hat{\mib{Q}}_2)|= \left[-\frac{\tilde{A}_Q}{2(B_{1Q}+\tilde{B}_{2Q}) +\tilde{B}_{1QQ}}\right]^{1/2}, \label{2qcm} \end{equation} \noindent under the condition \begin{equation} -\frac{\tilde{A}_Q}{2(B_{1Q}+\tilde{B}_{2Q}) +\tilde{B}_{1QQ}}>0. \label{2qcpositive} \end{equation} \noindent In order for the solution (\ref{2qcm}) to be thermodynamically stable, it is necessary that \begin{multline} \delta^2f=\left. \sum_{i=1}^2\sum_{j=1}^2\frac{\partial^2f} {\partial \{|\mib{m}(\hat{\mib{Q}}_i)|^2\}\partial \{|\mib{m}(\hat{\mib{Q}}_j)|^2\}}\right|_{(\ref{2qcm})} \\ \times \delta|\mib{m}(\hat{\mib{Q}}_i)|^2\delta|\mib{m}(\hat{\mib{Q}}_j)|^2 > 0. \end{multline} \noindent This condition is equivalent to \begin{equation} f_{11}>0, \qquad \left| \begin{array}{cc} f_{11} & f_{12} \\ f_{21} & f_{22} \end{array} \right| >0, \label{2qches} \end{equation} \noindent where $f_{ij}$ is defined by \begin{equation} f_{ij}\equiv\left. \frac{\partial^2f}{\partial \{|\mib{m}(\hat{\mib{Q}}_i)|^2\} \partial \{|\mib{m}(\hat{\mib{Q}}_j)|^2\}}\right|_{(\ref{2qcm})} \qquad (i,j=1,2).
\label{2qcfij} \end{equation} \noindent Using eq.~(\ref{2qcfij}), condition (\ref{2qches}) becomes \begin{align} 2(B_{1Q}+\tilde{B}_{2Q}) &> 0, \label{2qches1} \\ 4(B_{1Q}+\tilde{B}_{2Q})^2-\tilde{B}_{1QQ}^2 &> 0. \label{2qches2} \end{align} \noindent Conditions (\ref{2qcdet}), (\ref{2qcpositive}), (\ref{2qches1}), and (\ref{2qches2}) reduce to \begin{align} \tilde{A}_{Q} &< 0, \label{2qcst1} \\ B_{1Q}+\tilde{B}_{2Q} &> \frac{|\tilde{B}_{1QQ}|}{2}. \label{2qcst2} \end{align} \noindent Inequalities (\ref{2qcst1}) and (\ref{2qcst2}) yield the stability condition for the 2$\hat{\mib{Q}}$ structure. \noindent The equilibrium free energy is obtained by substituting eq.~(\ref{2qcm}) and $\mib{m}(\hat{\mib{Q}}_3)=0$ into eq.~(\ref{freecom2}): \begin{equation} f_{2\hat{Q}}=-\frac{\tilde{A}_{Q}^2} {2(B_{1Q}+\tilde{B}_{2Q})+\tilde{B}_{1QQ}}. \label{2qcfree} \end{equation} \noindent The amplitude $M_{2\hat{Q}}$ of the magnetic moment per site is obtained by substituting eq.~(\ref{2qcm}) into eq.~(\ref{comm}): \begin{equation} M_{2\hat{Q}}^2=-\frac{8\tilde{A}_Q} {2(B_{1Q}+\tilde{B}_{2Q})+\tilde{B}_{1QQ}}. \label{2qcmm} \end{equation} \subsection{3$\hat{\mib{Q}}$ structure} The equilibrium magnetic moments for the 3$\hat{\mib{Q}}$ structure are obtained by minimizing free energy (\ref{freecom2}) with respect to $|\mib{m}(\hat{\mib{Q}}_1)|^2$, $|\mib{m}(\hat{\mib{Q}}_2)|^2$, and $|\mib{m}(\hat{\mib{Q}}_3)|^2$. We obtain \begin{multline} |\mib{m}(\hat{\mib{Q}}_1)|=|\mib{m}(\hat{\mib{Q}}_2)|=|\mib{m}(\hat{\mib{Q}}_3)| \\ =\left[-\frac{\tilde{A}_Q}{2(B_{1Q}+\tilde{B}_{2Q}+\tilde{B}_{1QQ})} \right]^{1/2}, \label{3qcm} \end{multline} \noindent under the condition \begin{equation} -\frac{\tilde{A}_Q}{2(B_{1Q}+\tilde{B}_{2Q}+\tilde{B}_{1QQ})}>0.
\label{3qcpositive} \end{equation} \noindent The thermodynamic stability analysis and inequality (\ref{3qcpositive}) lead to the stability condition for the 3$\hat{\mib{Q}}$ structure: \begin{align} \tilde{A}_Q &< 0, \label{3qcst1} \\ B_{1Q}+\tilde{B}_{2Q} &> \frac{\tilde{B}_{1QQ}}{2} \qquad \;\> \text{for} \quad \tilde{B}_{1QQ}>0, \label{3qcst2} \\ B_{1Q}+\tilde{B}_{2Q} &> -\tilde{B}_{1QQ} \qquad \text{for} \quad \tilde{B}_{1QQ}<0. \label{3qcst3} \end{align} \noindent The equilibrium free energy is obtained by substituting eq.~(\ref{3qcm}) into eq.~(\ref{freecom2}): \begin{equation} f_{3\hat{Q}}=-\frac{3\tilde{A}_Q^2} {4(B_{1Q}+\tilde{B}_{2Q}+\tilde{B}_{1QQ})}. \label{3qcfree} \end{equation} \noindent The amplitude $M_{3\hat{Q}}$ of the magnetic moment for the 3$\hat{\mib{Q}}$ structure is obtained by substituting eq.~(\ref{3qcm}) into eq.~(\ref{comm}): \begin{equation} M_{3\hat{Q}}^2=-\frac{6\tilde{A}_Q}{B_{1Q}+ \tilde{B}_{2Q}+\tilde{B}_{1QQ}}. \label{3qcmm} \end{equation} \subsection{Relative stability among commensurate structures} The relative stability among the first-kind AF, 2$\hat{\mib{Q}}$, and 3$\hat{\mib{Q}}$ structures has been determined by comparing stability conditions (\ref{afst1}), (\ref{afst2}), (\ref{2qcst1}), (\ref{2qcst2}), and (\ref{3qcst1})-(\ref{3qcst3}), and equilibrium free energies (\ref{affree}), (\ref{2qcfree}), and (\ref{3qcfree}). The obtained magnetic phase diagram~\cite{fig1comment} for $\tilde{A}_Q < 0$ is shown in Fig.~\ref{fig1} in the space of expansion coefficients $\tilde{B}_{1QQ}/B_{1Q}$ and $\tilde{B}_{2Q}/B_{1Q}$, where $B_{1Q} > 0$ for $\tilde{B}_{2Q}/B_{1Q} > -1$ and $B_{1Q} < 0$ for $\tilde{B}_{2Q}/B_{1Q} < -1$. \begin{figure} \includegraphics{fig1} \caption{\label{fig1} Magnetic phase diagram for the commensurate structures with $\hat{Q}=2\pi/a$ for $\tilde{A}_Q < 0$.
The first-kind antiferromagnetic (AF), 2$\hat{\mib{Q}}$, and 3$\hat{\mib{Q}}$ phases are shown in the space of expansion coefficients $\tilde{B}_{1QQ}/B_{1Q}$ and $\tilde{B}_{2Q}/B_{1Q}$, where $B_{1Q} > 0$ for $\tilde{B}_{2Q}/B_{1Q} > -1$ and $B_{1Q} < 0$ for $\tilde{B}_{2Q}/B_{1Q} < -1$.} \end{figure} All three commensurate structures appear in the magnetic phase diagram. In the AF phase ($0 < B_{1Q}+\tilde{B}_{2Q} < |\tilde{B}_{1QQ}|/2$), the AF state is the only stable structure. In the 2$\hat{\mib{Q}}$ phase ($0 < -\tilde{B}_{1QQ}/2 < B_{1Q}+\tilde{B}_{2Q} < -\tilde{B}_{1QQ}$), the AF and 2$\hat{\mib{Q}}$ structures are stable. Comparison of their free energies shows that the 2$\hat{\mib{Q}}$ structure is the most stable state in this region. Note that the amplitude of the magnetic moment for the 2$\hat{\mib{Q}}$ structure is larger than that for the AF structure in this region. In the 3$\hat{\mib{Q}}$ phase ($0 < \tilde{B}_{1QQ}/2 < B_{1Q}+\tilde{B}_{2Q}$,\; $0 < -\tilde{B}_{1QQ} < B_{1Q}+\tilde{B}_{2Q}$), all three commensurate structures are stable. Since the equilibrium free energies satisfy the inequality $f_{\text{AF}}>f_{2\hat{Q}}>f_{3\hat{Q}}$, the 3$\hat{\mib{Q}}$ structure is the most stable state in this region. The relation among the amplitudes of the magnetic moments for the three structures, $M_{\text{AF}} < M_{2\hat{Q}} < M_{3\hat{Q}}$, shows that the 3$\hat{\mib{Q}}$ structure is the state with the largest magnetic moment in this region. It is of interest here to compare the present magnetic phase diagram for the commensurate structures with the past results of the ground-state electronic-structure calculations for cubic $\gamma$-Fe.
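The orderings $f_{\text{AF}}>f_{2\hat{Q}}>f_{3\hat{Q}}$ and $M_{\text{AF}}<M_{2\hat{Q}}<M_{3\hat{Q}}$ in the 3$\hat{\mib{Q}}$ phase follow directly from the equilibrium expressions (\ref{affree}), (\ref{2qcfree}), (\ref{3qcfree}) and the corresponding amplitudes. A numerical sketch, with hypothetical coefficient values chosen to lie inside the 3$\hat{\mib{Q}}$ phase (tilde coefficients are written with a trailing t):

```python
# Hypothetical coefficients in the 3Q phase: A_Qt < 0 and
# BQ = B_1Q + tilde B_2Q > tilde B_1QQ / 2 > 0.
A_Qt, BQ, B_1QQt = -1.0, 1.0, 0.5

# Equilibrium free energies of the AF, 2Q, and 3Q structures
f_AF = -A_Qt**2 / (4 * BQ)
f_2Q = -A_Qt**2 / (2 * BQ + B_1QQt)
f_3Q = -3 * A_Qt**2 / (4 * (BQ + B_1QQt))

# Squared moment amplitudes per site
M_AF2 = -2 * A_Qt / BQ
M_2Q2 = -8 * A_Qt / (2 * BQ + B_1QQt)
M_3Q2 = -6 * A_Qt / (BQ + B_1QQt)

assert f_AF > f_2Q > f_3Q       # 3Q structure is the most stable
assert M_AF2 < M_2Q2 < M_3Q2    # and carries the largest moment
```

The same inequalities hold for any coefficients satisfying the 3$\hat{\mib{Q}}$ stability conditions, not only for these illustrative values.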
Those calculations~\cite{Mry91,Uhl92,Kor96,Byl98,Byl991,Byl992,Kno00, Sjo02,Kak99,Kak02} showed that the magnetism of $\gamma$-Fe depends sensitively on the volume; the first-kind AF state appears for lattice constants $a \lesssim 6.5$ a.u., complex magnetic structures for lattice constants 6.5 $\lesssim a \lesssim$ 7.0 a.u., and the ferromagnetic state for 7.0 a.u. $\lesssim a$. Although there is a wide diversity in the results of the predicted magnetic structures for intermediate values of the lattice constant, 6.5 $\lesssim a \lesssim$ 7.0 a.u., it is worth noting that the ground-state electronic-structure calculations by Kakehashi \textit{et al.}~\cite{Kak02} and those by Fujii \textit{et al.}~\cite{Fuj91} predicted the commensurate 3$\hat{\mib{Q}}$ state to appear for lattice constants $a \leq$ 6.8 a.u. Kakehashi \textit{et al.}~\cite{Kak02} performed the first-principles tight-binding LMTO calculations for $\gamma$-Fe using the GGA potential and compared the ground-state energies of various MSDW states. It was predicted that with increasing volume, $\gamma$-Fe undergoes a transition from the first-kind AF state to the commensurate 3$\hat{\mib{Q}}$ state at $a = 6.5$ a.u., with the 3$\hat{\mib{Q}}$ state remaining stable until $a = 6.8$ a.u. This result is consistent with the present magnetic phase diagram of Fig.~\ref{fig1}, which shows the possibility of a transition from the AF to the 3$\hat{\mib{Q}}$ phase across the phase boundary. Fujii and co-workers~\cite{Fuj91} found three possible ground states for $\gamma$-Fe, the first-kind AF, 2$\hat{\mib{Q}}$, and 3$\hat{\mib{Q}}$ structures, using the LMTO method and the von Barth-Hedin potential, and showed numerically that the 3$\hat{\mib{Q}}$ structure is the most stable among the three at $a=6.8$ a.u.
As we mentioned above, the present theory yields the same relative stability, $f_{\text{AF}} > f_{2\hat{Q}} > f_{3\hat{Q}}$, among the AF, 2$\hat{\mib{Q}}$, and 3$\hat{\mib{Q}}$ structures, so that their numerical results are verified. It is of interest to note that the amplitudes of their magnetic moments for the 3$\hat{\mib{Q}}$ and 2$\hat{\mib{Q}}$ structures were found to be the same and larger than that for the AF structure: $M_{3\hat{Q}}=M_{2\hat{Q}}>M_{\text{AF}}$. According to eqs.~(\ref{2qcmm}) and (\ref{3qcmm}), this implies that $B_{1Q}+\tilde{B}_{2Q}=\tilde{B}_{1QQ}/2$; the ground state of $\gamma$-Fe calculated by Fujii \textit{et al.} is located in the vicinity of the AF-3$\hat{\mib{Q}}$ boundary in the 3$\hat{\mib{Q}}$ phase in Fig.~\ref{fig1}. \section{Incommensurate SDW Structures} We consider SDW structures described by three incommensurate wave vectors $\mib{Q}_1$, $\mib{Q}_2$, and $\mib{Q}_3$. These wave vectors are assumed to be mutually equivalent in space and to satisfy the following incommensurate conditions: \begin{align} & \pm 4\mib{Q}_i \neq \mib{K}, \quad \pm 2\mib{Q}_i \neq \mib{K} \quad (i=1,2,3), \label{cond1} \\ & \pm 2(\mib{Q}_i \pm \mib{Q}_j) \neq \mib{K}, \quad \pm (3\mib{Q}_i \pm \mib{Q}_j) \neq \mib{K}, \nonumber \\ &\pm (\mib{Q}_i \pm \mib{Q}_j) \neq \mib{K} \quad ((i,j)=(2,3) (3,1) (1,2)), \label{cond2} \\ & \pm (2\mib{Q}_i \pm \mib{Q}_j \pm \mib{Q}_k) \neq \mib{K} \nonumber \\ & \text{\hspace{2cm}} ((i,j,k)=(1,2,3) (2,3,1) (3,1,2)). \label{cond3} \end{align} \noindent Here, $\mib{K}$ is a reciprocal lattice vector of the fcc lattice.
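Conditions (\ref{cond1})-(\ref{cond3}) can be checked mechanically for a given set of wave vectors, using the fact that the reciprocal lattice vectors of the fcc lattice are $\mib{K}=(2\pi/a)(h,k,l)$ with integers $h$, $k$, $l$ that are all even or all odd. A sketch (components in units of $2\pi/a$; the three cyclically permuted vectors below are an illustrative equivalent set modeled on the experimental satellite wave vector):

```python
import numpy as np

def is_K(v, tol=1e-9):
    """fcc reciprocal-lattice vector (units of 2*pi/a): integer components,
    all even or all odd."""
    n = np.rint(v)
    if not np.all(np.abs(v - n) < tol):
        return False
    par = n.astype(int) % 2
    return par.min() == par.max()

def incommensurate(Q1, Q2, Q3):
    """Check conditions of the form n1*Q1 + n2*Q2 + n3*Q3 != K; the overall
    +/- sign is immaterial since K and -K occur in pairs."""
    tests = []
    for Q in (Q1, Q2, Q3):
        tests += [4 * Q, 2 * Q]
    for Qi, Qj in [(Q2, Q3), (Q3, Q1), (Q1, Q2)]:
        for s in (+1, -1):
            tests += [2 * (Qi + s * Qj), 3 * Qi + s * Qj, Qi + s * Qj]
    for Qi, Qj, Qk in [(Q1, Q2, Q3), (Q2, Q3, Q1), (Q3, Q1, Q2)]:
        for s in (+1, -1):
            for t in (+1, -1):
                tests += [2 * Qi + s * Qj + t * Qk]
    return not any(is_K(v) for v in tests)

# Wave vectors modeled on the gamma-FeCo satellites (units of 2*pi/a)
Q1 = np.array([0.1, 0.0, 1.0])
Q2 = np.array([1.0, 0.1, 0.0])
Q3 = np.array([0.0, 1.0, 0.1])
assert incommensurate(Q1, Q2, Q3)

# By contrast, the commensurate set (1,0,0), (0,1,0), (0,0,1) fails: 2Q is a K.
assert not incommensurate(np.eye(3)[0], np.eye(3)[1], np.eye(3)[2])
```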
Incommensurate conditions (\ref{cond1})-(\ref{cond3}) are satisfied by the wave vectors predicted for the bulk cubic $\gamma$-Fe in the electronic band-structure calculations~\cite{Mry91,Uhl92} and in the molecular-dynamics calculations,~\cite{Kak99,Kak02} and by that found in the neutron diffraction measurements~\cite{Tsu89} of the cubic $\gamma$-Fe$_{100-x}$Co$_x$ ($x<4$) precipitates in Cu. The free energy describing the incommensurate SDWs can be written as \begin{multline} f_{\text{ic}} = \sum_{i=1}^{3}[A_Q|\mib{m}(\mib{Q}_i)|^2 +B_{1Q}|\mib{m}(\mib{Q}_i)|^4 \\ +B_{2Q}\mib{m}^2(\mib{Q}_i)\mib{m}^{*2}(\mib{Q}_i)] \\ + \sum_{(i,j)}^{(2,3)(3,1)(1,2)}[B_{1QQ}|\mib{m}(\mib{Q}_i)|^2|\mib{m}(\mib{Q}_j)|^2 \\ +B_{2QQ}|\mib{m}(\mib{Q}_i)\cdot\mib{m}(\mib{Q}_j)|^2 +B_{3QQ}|\mib{m}(\mib{Q}_i)\cdot\mib{m}^*(\mib{Q}_j)|^2]. \label{icfree} \end{multline} \noindent The coefficients $A_Q$, $B_{1Q}$, $B_{2Q}$, $B_{1QQ}$, $B_{2QQ}$, and $B_{3QQ}$ are expressed in terms of linear combinations of coefficients $A(\mib{q})$ and $B(\mib{q},\mib{q}^{\prime},\mib{q}^{\prime\prime},\mib{q}^{\prime\prime\prime})$ in eqs.~(\ref{Aq}) and (\ref{Bqqqq}), with $\mib{q}$, $\mib{q}^{\prime}$, $\mib{q}^{\prime\prime}$, and $\mib{q}^{\prime\prime\prime}$ being chosen from $\mib{Q}_1$, $\mib{Q}_2$, and $\mib{Q}_3$ satisfying conditions (\ref{cond1})-(\ref{cond3}). The full expressions of these coefficients are given in Appendix B. On the basis of free energy (\ref{icfree}), we investigate the 1$\mib{Q}$, 2$\mib{Q}$, and 3$\mib{Q}$ linearly polarized SDWs, and the 1$\mib{Q}$, 2$\mib{Q}$, and 3$\mib{Q}$ helically polarized SDWs. 
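Note that the $B_{2Q}$ term distinguishes the polarizations: for a linearly polarized wave, $\mib{m}^2(\mib{Q}_i)\mib{m}^{*2}(\mib{Q}_i)=|\mib{m}(\mib{Q}_i)|^4$, whereas it vanishes identically for a helical wave of the standard form $\mib{m}(\mib{Q}_i)\propto \mib{e}_1+\text{i}\mib{e}_2$. A numerical sketch (the amplitude and phase are arbitrary illustrative values):

```python
import numpy as np

e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
amp = 0.7

m_lin = amp * e1 * np.exp(1j * 0.3)        # linear polarization, arbitrary phase
m_hel = amp / np.sqrt(2) * (e1 + 1j * e2)  # helical polarization

def b2q_term(m):
    # m^2(Q) m*^2(Q), with m^2(Q) meaning m(Q).m(Q) (no conjugation)
    return (m @ m) * (np.conj(m) @ np.conj(m))

assert np.isclose(b2q_term(m_lin).real, amp**4)  # |m|^4 for the linear SDW
assert np.isclose(abs(b2q_term(m_hel)), 0.0)     # vanishes for the helical SDW
```

Both polarizations have the same $|\mib{m}(\mib{Q}_i)|^2=\text{amp}^2$, so the $B_{2Q}$ term alone decides their relative energy at fixed amplitude.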
\subsection{Linearly polarized SDWs} We consider first the linearly polarized SDWs whose magnetic moments are described by \begin{equation} \mib{m}_l=\sum_{n=1}^3[\mib{m}(\mib{Q}_n)e^{\text{i}\mib{Q}_n\cdot\mib{R}_l} +\mib{m}^*(\mib{Q}_n)e^{-\text{i}\mib{Q}_n\cdot\mib{R}_l}], \label{lml} \end{equation} \noindent with \begin{equation} \mib{m}(\mib{Q}_n)=(m_x(\mib{Q}_n),m_y(\mib{Q}_n),m_z(\mib{Q}_n))e^{\text{i}\alpha_n} \;(n=1,2,3). \label{lmq} \end{equation} \noindent Here, $m_x(\mib{Q}_n)$, $m_y(\mib{Q}_n)$, and $m_z(\mib{Q}_n)$ ($n=1,2,3$) are assumed to be real. $\alpha_1$, $\alpha_2$, and $\alpha_3$ are phase factors. We consider the case in which $\mib{m}(\mib{Q}_1)$, $\mib{m}(\mib{Q}_2)$, and $\mib{m}(\mib{Q}_3)$ are orthogonal to each other: \begin{equation} \mib{m}(\mib{Q}_2)\cdot\mib{m}(\mib{Q}_3)=\mib{m}(\mib{Q}_3)\cdot\mib{m}(\mib{Q}_1) =\mib{m}(\mib{Q}_1)\cdot\mib{m}(\mib{Q}_2)=0. \label{lortho} \end{equation} The free energy for the linear SDWs is obtained by substituting eqs.~(\ref{lmq}) and (\ref{lortho}) into eq.~(\ref{icfree}): \begin{multline} f_{\text{L}} = \sum_{i=1}^{3}[A_Q|\mib{m}(\mib{Q}_i)|^2+(B_{1Q}+B_{2Q})|\mib{m}(\mib{Q}_i)|^4] \\ + \sum_{(i,j)}^{(2,3)(3,1)(1,2)}B_{1QQ}|\mib{m}(\mib{Q}_i)|^2|\mib{m}(\mib{Q}_j)|^2. \label{lfree} \end{multline} \noindent Note that free energy (\ref{lfree}) depends only on the absolute squares of magnetic moments, $|\mib{m}(\mib{Q}_1)|^2$, $|\mib{m}(\mib{Q}_2)|^2$, and $|\mib{m}(\mib{Q}_3)|^2$. This again implies that the 1$\mib{Q}$, 2$\mib{Q}$, and 3$\mib{Q}$ states are degenerate with respect to the directions of polarization. This degeneracy is partially removed when the anisotropic terms are included in the free energy. We also note that free energy (\ref{lfree}) has the same form as eq.~(\ref{freecom2}) in which $\tilde{A}_Q$, $\tilde{B}_{2Q}$, and $\tilde{B}_{1QQ}$ have been replaced by $A_{Q}$, $B_{2Q}$, and $B_{1QQ}$, respectively. 
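The combination $B_{1Q}+B_{2Q}$ in eq.~(\ref{lfree}) follows from a one-line identity, recorded here for clarity; it is implicit in the substitution of eq.~(\ref{lmq}) into eq.~(\ref{icfree}), with $\mib{v}_n$ used as shorthand for the real vector $(m_x(\mib{Q}_n),m_y(\mib{Q}_n),m_z(\mib{Q}_n))$:

```latex
% With m(Q_n) = v_n e^{i alpha_n} and v_n real, the B_{2Q} term of
% eq. (\ref{icfree}) reduces to the same form as the B_{1Q} term:
\begin{equation*}
\mib{m}^2(\mib{Q}_n)\,\mib{m}^{*2}(\mib{Q}_n)
 =\bigl(\mib{v}_n^{2}e^{2\text{i}\alpha_n}\bigr)
  \bigl(\mib{v}_n^{2}e^{-2\text{i}\alpha_n}\bigr)
 =|\mib{m}(\mib{Q}_n)|^4.
\end{equation*}
```

Hence the $B_{2Q}$ term merges with the $B_{1Q}$ term for any linear polarization, independently of the phase factors $\alpha_n$.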
Therefore, following the same steps as in \S 3, we obtain the equilibrium states of the 1$\mib{Q}$, 2$\mib{Q}$, and 3$\mib{Q}$ linear SDWs as follows. \subsubsection{1$\mib{Q}$ linearly polarized SDW} \noindent Magnetic moment \begin{equation} |\mib{m}(\mib{Q}_1)|=\left[-\frac{A_Q}{2(B_{1Q}+B_{2Q})}\right]^{1/2}. \label{1qm} \end{equation} \noindent Stability condition \begin{align} A_{Q} &< 0, \label{1qst1} \\ B_{1Q}+B_{2Q} &> 0. \label{1qst2} \end{align} \noindent Equilibrium free energy \begin{equation} f_{1Q}=-\frac{A_{Q}^2} {4(B_{1Q}+B_{2Q})}. \label{1qfree} \end{equation} \noindent Amplitude of the magnetic moment \begin{equation} M_{1Q}^2=2|\mib{m}(\mib{Q}_1)|^2=-\frac{A_Q}{B_{1Q}+B_{2Q}}. \end{equation} \subsubsection{2$\mib{Q}$ linearly polarized SDW} \noindent Magnetic moment \begin{equation} |\mib{m}(\mib{Q}_1)|=|\mib{m}(\mib{Q}_2)|= \left[-\frac{A_Q}{2(B_{1Q}+B_{2Q})+B_{1QQ}}\right]^{1/2}. \label{2qm} \end{equation} \noindent Stability condition \begin{align} A_{Q} &< 0, \label{2qst1} \\ B_{1Q}+B_{2Q} &> \frac{|B_{1QQ}|}{2}. \label{2qst2} \end{align} \noindent Equilibrium free energy \begin{equation} f_{2Q}=-\frac{A_{Q}^2}{2(B_{1Q}+B_{2Q})+B_{1QQ}}. \label{2qfree} \end{equation} \noindent Amplitude of the magnetic moment \begin{equation} M_{2Q}^2=-\frac{4A_Q}{2(B_{1Q}+B_{2Q})+B_{1QQ}}. \label{2qmm} \end{equation} \subsubsection{3$\mib{Q}$ linearly polarized SDW} \noindent Magnetic moment \begin{multline} |\mib{m}(\mib{Q}_1)|=|\mib{m}(\mib{Q}_2)|=|\mib{m}(\mib{Q}_3)| \\ =\left[-\frac{A_Q}{2(B_{1Q}+B_{2Q}+B_{1QQ})} \right]^{1/2}. \label{3qm} \end{multline} \noindent Stability condition \begin{align} A_Q &< 0, \label{3qst1} \\ B_{1Q}+B_{2Q} &> \frac{B_{1QQ}}{2} \qquad \>\>\, \text{for} \quad B_{1QQ}>0, \label{3qst2} \\ B_{1Q}+B_{2Q} &> -B_{1QQ} \qquad \text{for} \quad B_{1QQ}<0. \label{3qst3} \end{align} \noindent Equilibrium free energy \begin{equation} f_{3Q}=-\frac{3A_Q^2} {4(B_{1Q}+B_{2Q}+B_{1QQ})}. 
\label{3qfree} \end{equation} \noindent Amplitude of the magnetic moment \begin{equation} M_{3Q}^2=-\frac{3A_Q}{B_{1Q}+B_{2Q}+B_{1QQ}}. \label{3qmm} \end{equation} \subsection{Relative stability among linear SDWs} The relative stability among the incommensurate 1$\mib{Q}$, 2$\mib{Q}$, and 3$\mib{Q}$ linear SDWs has been determined by comparing stability conditions (\ref{1qst1})-(\ref{1qst2}), (\ref{2qst1})-(\ref{2qst2}), and (\ref{3qst1})-(\ref{3qst3}), and equilibrium free energies (\ref{1qfree}), (\ref{2qfree}), and (\ref{3qfree}). The obtained magnetic phase diagram for $A_Q < 0$ is shown in Fig.~2 in the space of expansion coefficients $B_{1QQ}/B_{1Q}$ and $B_{2Q}/B_{1Q}$, where $B_{1Q} > 0$ for $B_{2Q}/B_{1Q} > -1$ and $B_{1Q} < 0$ for $B_{2Q}/B_{1Q} < -1$. \begin{figure} \includegraphics{fig2} \caption{\label{fig2} Magnetic phase diagram for the incommensurate linear SDWs for $A_Q < 0$. The 1$\mib{Q}$, the 2$\mib{Q}$, and the 3$\mib{Q}$ phases are shown in the space of expansion coefficients $B_{1QQ}/B_{1Q}$ and $B_{2Q}/B_{1Q}$, where $B_{1Q} > 0$ for $B_{2Q}/B_{1Q} > -1$ and $B_{1Q} < 0$ for $B_{2Q}/B_{1Q} < -1$.} \end{figure} The relative stability among the linear SDWs has been found to have the same feature as that of the relative stability among the commensurate SDWs. In the 1$\mib{Q}$ phase ($0 < B_{1Q}+B_{2Q}< |B_{1QQ}|/2$), the 1$\mib{Q}$ linear SDW is the only stable structure. In the 2$\mib{Q}$ phase ($0 < -B_{1QQ}/2 < B_{1Q}+B_{2Q} < -B_{1QQ}$), both the 1$\mib{Q}$ and 2$\mib{Q}$ linear SDWs are stable, but the latter has a lower free energy and a larger amplitude of the magnetic moment. In the 3$\mib{Q}$ phase ($0 < B_{1QQ}/2 < B_{1Q}+B_{2Q},\; 0 < -B_{1QQ} < B_{1Q}+B_{2Q}$), all three linear SDWs are stable, and the 3$\mib{Q}$ state has the lowest free energy and the largest amplitude of the magnetic moment. 
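As a numerical sanity check on these orderings (a sketch added here; the coefficient values are illustrative, chosen inside the 3$\mib{Q}$ phase, and are not fitted to $\gamma$-Fe), the equilibrium free energies and moment amplitudes can be evaluated directly:

```python
# Check of the equilibrium free energies (\ref{1qfree})-(\ref{3qfree}) and
# moment amplitudes of the incommensurate linear SDWs.  Illustrative
# parameters deep inside the 3Q phase: B1Q + B2Q > B1QQ/2 > 0.

A_Q = -1.0   # A_Q < 0: ordered region
S = 1.0      # S = B_{1Q} + B_{2Q}
B1QQ = 0.5   # mode-mode coupling

f_1Q = -A_Q**2 / (4.0 * S)
f_2Q = -A_Q**2 / (2.0 * S + B1QQ)
f_3Q = -3.0 * A_Q**2 / (4.0 * (S + B1QQ))

M1sq = -A_Q / S                        # squared amplitudes of the moments
M2sq = -4.0 * A_Q / (2.0 * S + B1QQ)
M3sq = -3.0 * A_Q / (S + B1QQ)

# In the 3Q phase, the 3Q state has the lowest free energy and the
# largest moment, as stated in the text.
assert f_3Q < f_2Q < f_1Q < 0
assert M3sq > M2sq > M1sq > 0
```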
Concerning the magnetism of $\gamma$-Fe, the present magnetic phase diagram for the linear SDWs indicates the possibility of the 3$\mib{Q}$ and 2$\mib{Q}$ MSDWs as well as the 1$\mib{Q}$ SDW. In the ground-state calculations of bulk cubic $\gamma$-Fe, although the magnetic structure for lattice constants 6.5 $\lesssim a \lesssim$ 7.0 a.u. is under debate, the possibility of an incommensurate linear MSDW was suggested by Kakehashi and coworkers.~\cite{Kak99,Kak02} On the basis of the molecular-dynamics (MD) method,~\cite{Kak99} they predicted a new MSDW whose principal terms consist of 3$\mib{Q}$ waves with $\mib{Q}=(0.6,0,0)(2\pi/a)$, $(0,0.6,0)(2\pi/a)$, and $(0,0,0.6)(2\pi/a)$. Subsequently, they performed the ground-state electronic-structure calculations~\cite{Kak02} using the first-principles tight-binding LMTO method and the GGA potentials to compare the ground-state energies of various magnetic structures: the first-kind AF state, the commensurate 3$\hat{\mib{Q}}$ structure, the incommensurate 1$\mib{Q}$ helical SDW, the incommensurate MSDW found in the MD calculations, and the ferromagnetic state. It was concluded that the MSDW becomes the most stable state for lattice constants $6.8 \leq a \leq 7.0$ a.u. In particular, they found that the incommensurate 3$\mib{Q}$ MSDW is stable as compared with the 1$\mib{Q}$ SDW irrespective of the lattice constant and that the amplitude of the magnetic moment for the 3$\mib{Q}$ state is larger than that for the 1$\mib{Q}$ state. These results are consistent with the present result that the 3$\mib{Q}$ MSDW is always stabilized and has a larger amplitude of the magnetic moment as compared with the 1$\mib{Q}$ SDW when the 3$\mib{Q}$ solution exists, resulting in a wide range of the 3$\mib{Q}$ phase in the magnetic phase diagram in Fig.~\ref{fig2}. 
\subsection{Helically polarized SDWs} Next, we consider the helically polarized SDWs whose magnetic moments are described by \begin{multline} \mib{m}_l=\sum_{j=1}^{3}\sqrt{2}|\mib{m}(\mib{Q}_j)| [\mib{e}_k\cos(\mib{Q}_j\cdot\mib{R}_l+\alpha_j) \\ +\mib{e}_m\sin(\mib{Q}_j\cdot\mib{R}_l+\alpha_j)]. \label{lmh0} \end{multline} \noindent Here, $(j,k,m)$ is (1,2,3), (2,3,1), and (3,1,2) when $j=1,2$, and 3, respectively. $\mib{e}_1$, $\mib{e}_2$, and $\mib{e}_3$ form an orthonormal basis set. $\alpha_1$, $\alpha_2$, and $\alpha_3$ are phase factors. Note that the possibility of the helical 3$\mib{Q}$ MSDW given by eq. (\ref{lmh0}) has not been examined in either the experimental analyses or the electronic-structure calculations. Introducing the basis set $(\hat{\mib{e}}_{jk}, \hat{\mib{e}}_{jm})$ obtained by a rotation of $(\mib{e}_k,\mib{e}_m)$ by $\alpha_j$ ($(j,k,m) =(1,2,3) (2,3,1) (3,1,2)$) for each helical component $j$, \begin{align} \hat{\mib{e}}_{jk} &= \mib{e}_k\cos\alpha_j + \mib{e}_m\sin\alpha_j, \label{ejk} \\ \hat{\mib{e}}_{jm} &= -\mib{e}_k\sin\alpha_j + \mib{e}_m\cos\alpha_j, \label{ejm} \end{align} \noindent we have the expression \begin{equation} \mib{m}_l = \sum_{j=1}^{3}[\mib{m}(\mib{Q}_j)e^{\text{i}\mib{Q}_j\cdot\mib{R}_l} +\mib{m}^*(\mib{Q}_j)e^{-\text{i}\mib{Q}_j\cdot\mib{R}_l}]. \label{lmh} \end{equation} \noindent Here, \begin{multline} \mib{m}(\mib{Q}_j) = \frac{|\mib{m}(\mib{Q}_j)|}{\sqrt{2}} (\hat{\mib{e}}_{jk}-\text{i}\hat{\mib{e}}_{jm}) \\ ((j,k,m)=(1,2,3) (2,3,1) (3,1,2)). \label{lmhq} \end{multline} The free energy for the helical SDWs is obtained by substituting eq.~(\ref{lmhq}) into eq.~(\ref{icfree}): \begin{multline} f_{\text{H}} = \sum_{i=1}^{3}[A_Q|\mib{m}(\mib{Q}_i)|^2+B_{1Q}|\mib{m}(\mib{Q}_i)|^4] \\ +\sum_{(i,j)}^{(2,3)(3,1)(1,2)}(B_{1QQ}+B_{2QQ\text{H}})|\mib{m}(\mib{Q}_i)|^2|\mib{m}(\mib{Q}_j)|^2, \label{hfree} \end{multline} \noindent with \begin{equation} B_{2QQ\text{H}} \equiv \frac{B_{2QQ}+B_{3QQ}}{4}. 
\end{equation} \noindent Note that free energy (\ref{hfree}) depends only on the absolute squares of magnetic moments $|\mib{m}(\mib{Q}_1)|^2$, $|\mib{m}(\mib{Q}_2)|^2$, and $|\mib{m}(\mib{Q}_3)|^2$; therefore, the 1$\mib{Q}$, 2$\mib{Q}$, and 3$\mib{Q}$ helical states are degenerate with respect to the directions of polarization. We also note that free energy (\ref{hfree}) is identical to eq.~(\ref{freecom2}) in which $\tilde{A}_Q$, $B_{1Q}+\tilde{B}_{2Q}$, and $\tilde{B}_{1QQ}$ have been replaced by $A_Q$, $B_{1Q}$, and $B_{1QQ}+B_{2QQ\text{H}}$. Thus, following the same steps as in \S 3, we obtain the equilibrium states of 1$\mib{Q}$, 2$\mib{Q}$, and 3$\mib{Q}$ helical SDWs as follows. \subsubsection{1$\mib{Q}$ helically polarized SDW} \noindent Magnetic moment \begin{equation} |\mib{m}(\mib{Q}_1)|=\left[-\frac{A_{Q}}{2B_{1Q}}\right]^{1/2}. \label{1qhm} \end{equation} \noindent Stability condition \begin{align} A_{Q} &< 0, \label{1qhst1} \\ B_{1Q} &> 0. \label{1qhst2} \end{align} \noindent Equilibrium free energy \begin{equation} f_{1Q\text{H}}=-\frac{A_{Q}^2}{4B_{1Q}}. \label{1qhfree} \end{equation} \noindent Amplitude of the magnetic moment \begin{equation} M_{1Q\text{H}}^2=-\frac{A_Q}{B_{1Q}}. \label{1qhmm} \end{equation} \noindent \subsubsection{2$\mib{Q}$ helically polarized SDW} \noindent Magnetic moment \begin{equation} |\mib{m}(\mib{Q}_1)|=|\mib{m}(\mib{Q}_2)| =\left[-\frac{A_{Q}}{2B_{1Q}+B_{1QQ}+B_{2QQ\text{H}}} \right]^{1/2}. \label{2qhm} \end{equation} \noindent Stability condition \begin{align} A_{Q} &< 0, \label{2qhst1} \\ B_{1Q} &> \frac{1}{2}|B_{1QQ}+B_{2QQ\text{H}}|. \label{2qhst2} \end{align} \noindent Equilibrium free energy \begin{equation} f_{2Q\text{H}}=-\frac{A_{Q}^2}{2B_{1Q}+B_{1QQ}+B_{2QQ\text{H}}}. \label{2qhfree} \end{equation} \noindent Amplitude of the magnetic moment \begin{equation} M_{2Q\text{H}}^2 = -\frac{4A_Q}{2B_{1Q}+B_{1QQ}+B_{2QQ\text{H}}}. 
\label{2qhmm} \end{equation} \subsubsection{3$\mib{Q}$ helically polarized SDW} \noindent Magnetic moment \begin{multline} |\mib{m}(\mib{Q}_1)|=|\mib{m}(\mib{Q}_2)|=|\mib{m}(\mib{Q}_3)| \\ =\left[-\frac{A_Q}{2(B_{1Q}+B_{1QQ}+B_{2QQ\text{H}})} \right]^{1/2}. \label{3qhm} \end{multline} \noindent Stability condition \begin{align} &A_Q < 0, \label{3qhst1} \\ &B_{1Q} > \frac{B_{1QQ}+B_{2QQ\text{H}}}{2} \qquad \quad \>\, \text{for} \quad B_{1QQ}+B_{2QQ\text{H}}>0, \label{3qhst2} \\ &B_{1Q} > -(B_{1QQ}+B_{2QQ\text{H}}) \qquad \text{for} \quad B_{1QQ}+B_{2QQ\text{H}}<0. \label{3qhst3} \end{align} \noindent Equilibrium free energy \begin{equation} f_{3Q\text{H}}=-\frac{3A_Q^2} {4(B_{1Q}+B_{1QQ}+B_{2QQ\text{H}})}. \label{3qhfree} \end{equation} \noindent Amplitude of the magnetic moment \begin{equation} M_{3Q\text{H}}^2=-\frac{3A_Q}{B_{1Q}+B_{1QQ}+B_{2QQ\text{H}}}. \label{3qhmm} \end{equation} \subsection{Relative stability among helical SDWs} The relative stability among the incommensurate 1$\mib{Q}$, 2$\mib{Q}$, and 3$\mib{Q}$ helical SDWs has been determined by comparing stability conditions (\ref{1qhst1})-(\ref{1qhst2}), (\ref{2qhst1})-(\ref{2qhst2}) and (\ref{3qhst1})-(\ref{3qhst3}), and the equilibrium free energies (\ref{1qhfree}), (\ref{2qhfree}), and (\ref{3qhfree}). The obtained magnetic phase diagram is shown in Fig.~3 in the space of expansion coefficients $B_{1QQ}/B_{1Q}$ and $B_{2QQ\text{H}}/B_{1Q}$ for $A_Q < 0$ and $B_{1Q} > 0$ for which the solutions of helical SDWs exist. \begin{figure} \includegraphics{fig3} \caption{\label{fig3} Magnetic phase diagram for the helical SDWs for $A_Q < 0$ and $B_{1Q} > 0$. 
The 1$\mib{Q}$ helical (1$\text{QH}$), 2$\mib{Q}$ helical (2$\text{QH}$), and 3$\mib{Q}$ helical (3$\text{QH}$) phases are shown in the space of expansion coefficients $B_{1QQ}/B_{1Q}$ and $B_{2QQ\text{H}}/B_{1Q}$.} \end{figure} The relative stability among the helical SDWs has been found to have the same feature as that of the relative stability among the commensurate SDWs and among the linear SDWs. In the 1$\mib{Q}$ phase ($0 < B_{1Q} <|B_{1QQ}+B_{2QQ\text{H}}|/2$), the 1$\mib{Q}$ helical SDW is the only stable structure. In the 2$\mib{Q}$ phase ($0 < -(B_{1QQ}+B_{2QQ\text{H}})/2 < B_{1Q} < -(B_{1QQ}+B_{2QQ\text{H}})$), both the 1$\mib{Q}$ and 2$\mib{Q}$ helical SDWs are stable, but the latter has a lower free energy and larger amplitude of the magnetic moment. In the 3$\mib{Q}$ phase ($0 < (B_{1QQ}+B_{2QQ\text{H}})/2 < B_{1Q},\; 0< -(B_{1QQ}+B_{2QQ\text{H}}) < B_{1Q}$), all the three helical SDWs are stable, but the 3$\mib{Q}$ state yields the lowest free energy and the largest amplitude of the magnetic moment. Neutron diffraction experiments~\cite{Tsu89} on cubic $\gamma$-Fe$_{100-x}$Co$_{x}$ ($x < 4$) alloy precipitates in Cu showed a magnetic satellite peak for wave vector $\mib{Q}=(0.1,0,1)2\pi/a$. The magnetic structure was suggested to be a helical SDW but has not been determined precisely. This is because the neutron diffraction analysis cannot distinguish between the 1$\mib{Q}$ and 3$\mib{Q}$ states~\cite{Kou63} when the crystal structure of the $\gamma$-Fe precipitates is properly cubic and the distribution of domains is isotropic. The present finding that the 3$\mib{Q}$ helical MSDW is always stable as compared with the 1$\mib{Q}$ and 2$\mib{Q}$ SDWs when the 3$\mib{Q}$ solution exists suggests that the 3$\mib{Q}$ helical MSDW should be taken into consideration in addition to the 1$\mib{Q}$ helical SDW in the analysis of the magnetic structure of cubic $\gamma$-Fe. 
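The structure of free energy (\ref{hfree}) that underlies these results can be verified in one step, recorded here for clarity; it is implicit in the substitution of eq.~(\ref{lmhq}) into eq.~(\ref{icfree}):

```latex
% For the helical amplitude (\ref{lmhq}), with orthonormal unit vectors
% \hat{e}_{jk} and \hat{e}_{jm},
\begin{equation*}
\mib{m}^2(\mib{Q}_j)
 =\frac{|\mib{m}(\mib{Q}_j)|^2}{2}
  \bigl(\hat{\mib{e}}_{jk}-\text{i}\hat{\mib{e}}_{jm}\bigr)\cdot
  \bigl(\hat{\mib{e}}_{jk}-\text{i}\hat{\mib{e}}_{jm}\bigr)
 =\frac{|\mib{m}(\mib{Q}_j)|^2}{2}
  \bigl(1-2\text{i}\,\hat{\mib{e}}_{jk}\cdot\hat{\mib{e}}_{jm}-1\bigr)=0,
\end{equation*}
```

so the $B_{2Q}$ term of eq.~(\ref{icfree}) drops out of $f_{\text{H}}$. Similarly, $|\mib{m}(\mib{Q}_i)\cdot\mib{m}(\mib{Q}_j)|^2 =|\mib{m}(\mib{Q}_i)\cdot\mib{m}^{*}(\mib{Q}_j)|^2 =|\mib{m}(\mib{Q}_i)|^2|\mib{m}(\mib{Q}_j)|^2/4$, which is the origin of the combination $B_{2QQ\text{H}}=(B_{2QQ}+B_{3QQ})/4$.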
One might ask why the 3$\mib{Q}$ MSDW is stable in a wide range of the parameter region (\ref{3qhst2}) or (\ref{3qhst3}), while the previous phenomenological theories concerning the Heisenberg model~\cite{Yoshi59,Kaplan59,Kaplan60} predict the 1$\mib{Q}$ helical SDW ground state. The physical reason for this is as follows. For simplicity, we first consider the free energy for the helical SDW states, eq.~(\ref{hfree}), without the mode-mode coupling term (the term proportional to $B_{1QQ}+B_{2QQ\text{H}}$). The free energy for the 3$\mib{Q}$ state is then three times lower than that for the 1$\mib{Q}$ state ($f_{3Q\text{H}}=3f_{1Q\text{H}}$), as is seen from eqs.~(\ref{1qhfree}) and (\ref{3qhfree}). This free energy gain is caused by the increase in the amplitudes of the local magnetic moments, as is seen from eqs.~(\ref{1qhmm}) and (\ref{3qhmm}). This is characteristic of the itinerant electron system. In the localized model reported by Yoshimori~\cite{Yoshi59} and Kaplan,~\cite{Kaplan59,Kaplan60} this mechanism of energy gain is forbidden because of the constraint of constant amplitudes of the local magnetic moments, so that the 1$\mib{Q}$ state is realized. Under the constraint of a constant amplitude of local magnetic moments, the present theory also predicts the 1$\mib{Q}$ helical SDW as the stable structure, which is consistent with the theory presented by Yoshimori and Kaplan. When the mode-mode coupling term is positive, it suppresses the increase in the amplitudes of local moments of the 3$\mib{Q}$ state (see eqs.~(\ref{1qhmm}) and (\ref{3qhmm})). As a result, the 3$\mib{Q}$ MSDW is stable only when the coefficient of the mode-mode coupling term is smaller than a critical value; otherwise, the 1$\mib{Q}$ helical SDW is stable. This condition is given by inequality (\ref{3qhst2}). 
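The argument above can be checked numerically (an illustrative sketch; $A_Q=-1$ and $B_{1Q}=1$ are arbitrary units, and $C$ denotes the mode-mode coupling $B_{1QQ}+B_{2QQ\text{H}}$):

```python
# Without the mode-mode coupling (C = 0), the 3Q helical state gains three
# times the 1Q free energy, eqs. (\ref{1qhfree}) and (\ref{3qhfree});
# the gain disappears exactly at the stability boundary C = 2*B1Q of
# inequality (\ref{3qhst2}).  Parameter values are illustrative only.

A_Q, B1Q = -1.0, 1.0

def f_1QH():
    return -A_Q**2 / (4.0 * B1Q)

def f_3QH(C):
    # C = B1QQ + B2QQH, eq. (\ref{3qhfree})
    return -3.0 * A_Q**2 / (4.0 * (B1Q + C))

assert abs(f_3QH(0.0) - 3.0 * f_1QH()) < 1e-12  # threefold gain at C = 0
assert abs(f_3QH(2.0 * B1Q) - f_1QH()) < 1e-12  # crossing at C = 2*B1Q
assert f_3QH(1.0) < f_1QH() < 0                 # 3Q favoured for C < 2*B1Q
assert f_3QH(3.0) > f_1QH()                     # 1Q favoured for C > 2*B1Q
```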
\subsection{Relative stability among linear and helical SDWs} In the previous two subsections, we examined the relative stability among the linear SDWs and that among the helical SDWs, separately, assuming incommensurate conditions (\ref{cond1})-(\ref{cond3}) for the wave vectors. Free energy (\ref{icfree}) having such incommensurate wave vectors, however, allows for both linear and helical SDWs in the common space of the expansion coefficients. In order to discuss their relative stability, we present, in this subsection, a magnetic phase diagram allowing for both linear and helical SDWs. \begin{figure} \includegraphics{fig4} \caption{\label{fig4}Magnetic phase diagram for the incommensurate SDWs for $A_Q < 0$, $B_{1Q}>0$ and $B_{2QQ\text{H}}/B_{1Q}=1$. The phases of the 1$\mib{Q}$ linear SDW (1$\text{Q}$), the 2$\mib{Q}$ linear MSDW (2$\text{Q}$), the 3$\mib{Q}$ linear MSDW (3$\text{Q}$), the 1$\mib{Q}$ helical SDW (1$\text{QH}$), the 2$\mib{Q}$ helical MSDW (2$\text{QH}$), and the 3$\mib{Q}$ helical MSDW (3$\text{QH}$) are shown in the space of expansion coefficients $B_{1QQ}/B_{1Q}$ and $B_{2Q}/B_{1Q}$. Coexistence lines between the linear and helical SDWs are indicated by solid lines.} \end{figure} Comparing the equilibrium free energies for the linear and helical SDWs, we have obtained magnetic phase diagrams~\cite{fig4comment} for $A_Q < 0$ and $B_{1Q} > 0$ for which both the linear and helical SDWs are stable. Figure~\ref{fig4} shows an example of the magnetic phase diagram for $B_{2QQ\text{H}}/B_{1Q}=1$ in the space of expansion coefficients $B_{1QQ}/B_{1Q}$ and $B_{2Q}/B_{1Q}$, where $A_Q < 0$ and $B_{1Q} > 0$. We see that the 3$\mib{Q}$ linear (3$\text{Q}$) and 3$\mib{Q}$ helical (3$\text{QH}$) MSDWs occupy most of the region $-2 < B_{1QQ}/B_{1Q} < 1$. This arises from the fact that the 3$\mib{Q}$ state is stable when the mode-mode coupling term $B_{1QQ}$ or $B_{1QQ}+B_{2QQ\text{H}}$ is relatively small, as discussed in \S 4.4. 
Although we presented an example of the magnetic phase diagram for $B_{2QQ\text{H}}/B_{1Q}=1$ in Fig.~4, changing the value of $B_{2QQ\text{H}}/B_{1Q}$ does not alter the global features of the magnetic phase diagram as long as it does not become exceedingly large. Thus, we discuss the possible magnetic structures of $\gamma$-Fe on the basis of the magnetic phase diagram in Fig.~4. According to the ground-state electronic-structure calculations by Kakehashi \textit{et al.}~\cite{Kak02}, the 3$\mib{Q}$ linear MSDW is stabilized for lattice constants $6.8 < a < 7.0$ a.u. This result can be explained by the existence of the 3$\mib{Q}$ linear MSDW phase in Fig.~4. Note that the 3$\mib{Q}$ linear MSDW solution is extended to the region of the 3$\mib{Q}$ helical MSDW in Fig.~4. Thus, there is another possibility that the latter is stabilized once the 3$\mib{Q}$ helical MSDW is taken into account in the electronic-structure calculations. It is highly desirable to investigate the relative stability between the 3$\mib{Q}$ linear and helical MSDWs in the first-principles ground-state calculations of $\gamma$-Fe. \section{Summary and Discussion} We have investigated the relative stability among various SDW structures in fcc transition metals on the basis of a Ginzburg-Landau type of free energy with terms up to the fourth order in magnetic moments. We have obtained magnetic phase diagrams in the space of expansion coefficients for both commensurate and incommensurate wave vectors, and discussed their implications for the magnetism of cubic $\gamma$-Fe. In both the commensurate and incommensurate cases, we proved that the 3$\mib{Q}$ state is always stable as compared with the corresponding 2$\mib{Q}$ and 1$\mib{Q}$ states when there is a solution of the 3$\mib{Q}$ state. The energy gain of the 3$\mib{Q}$ state is caused by a change in the amplitude of the magnetic moment when an additional mode $\mib{Q}$ is introduced. Accordingly, we have the relation $M_{3Q} > M_{2Q} > M_{1Q}$. 
This is characteristic of itinerant magnets and has a profound effect on the magnetic phase diagram. In localized systems, only the 1$\mib{Q}$ helical SDW is possible because of the fixed amplitudes of local magnetic moments. On the basis of the magnetic phase diagrams for the commensurate case (Fig.~1) and for the incommensurate case (Figs.~2-4), we have discussed the possible magnetic structures of cubic $\gamma$-Fe. According to the ground-state electronic-structure calculations,~\cite{Mry91,Uhl92,Kor96,Byl98,Byl991,Byl992,Kno00, Sjo02,Kak99,Kak02} the magnetism of $\gamma$-Fe depends sensitively on the volume; the first-kind AF state appears for lattice constants $a \lesssim 6.5$ a.u., SDW structures for lattice constants 6.5 $\lesssim a \lesssim$ 7.0 a.u., and the ferromagnetic state for $a \gtrsim 7.0$ a.u. The magnetic structures for 6.5 $\lesssim a \lesssim$ 7.0 a.u. are under debate, and there is a wide diversity in the predicted results. The ground-state electronic-structure calculations by Kakehashi \textit{et al.}~\cite{Kak02} and those by Fujii \textit{et al.}~\cite{Fuj91} predicted the commensurate 3$\hat{\mib{Q}}$ state to appear for lattice constants $a \leq$ 6.8 a.u. This result can be explained by the existence of the commensurate 3$\hat{\mib{Q}}$ phase, as shown in Fig.~1, specifically in the region of the phase diagram with $\tilde{B}_{1QQ} \approx 2(B_{1Q}+\tilde{B}_{2Q})$ and $\tilde{B}_{1QQ} > 0$. For larger lattice constants, $6.8 \le a \le 7.0$ a.u., the MD calculations~\cite{Kak02} for $\gamma$-Fe predicted the incommensurate 3$\mib{Q}$ linear MSDW state with $\mib{Q}=(0.6,0,0)2\pi/a$, $(0,0.6,0)2\pi/a$, and $(0,0,0.6)2\pi/a$. This can be explained by the existence of the 3$\mib{Q}$ linear MSDW phase in Fig.~4. It should be noted, however, that there is another possibility of the 3$\mib{Q}$ helical phase since the 3$\mib{Q}$ linear MSDW solution is extended to the region of the 3$\mib{Q}$ helical phase. 
It is desirable to examine the relative stability between the 3$\mib{Q}$ linear and helical states for the above wave vector by means of the ground-state electronic-structure calculations. Experimentally, the SDW of cubic $\gamma$-Fe was found for wave vector $\mib{Q}=(0.1,0,1)2\pi/a$~\cite{Tsu89}. It was suggested that a helical spin configuration is the more likely structure of the SDW, on the basis of the observation that there were no appreciable indications of a strain wave with 2$\mib{Q}$ and there was no third-harmonic component. Following that work, most of the ground-state calculations for $\gamma$-Fe concentrated on finding a wave vector that minimizes the energy within the 1$\mib{Q}$ helical structure. It should be emphasized, however, that the 3$\mib{Q}$ helical MSDW with $\mib{Q}=(0.1,0,1)2\pi/a$, $(1,0.1,0)2\pi/a$, and $(0,1,0.1)2\pi/a$ is also consistent with the experimental results. This is because the neutron diffraction analysis cannot distinguish between the 1$\mib{Q}$ and 3$\mib{Q}$ states~\cite{Kou63} when the crystal structure of the $\gamma$-Fe precipitates is properly cubic and the distribution of domains is isotropic. The MD approach presented by Kakehashi and co-workers~\cite{Kak98,Kak99,Kak02} can predict the ground state without assuming the magnetic structure at the beginning. The wave-vector resolution for the magnetic structure in the MD calculations, however, is $\delta \ge 0.2$ (in units of $2\pi/a$) at the present stage; therefore, it cannot reproduce the MSDW with the observed incommensurate component ($\delta=0.1$).~\cite{Tsu89} One needs more accurate ground-state electronic-structure calculations to allow for the possibility of the 3$\mib{Q}$ helical MSDW with the experimental wave vector. 
Regarding the consistency between theory and experiment, it should also be noted that although the experimentally suggested magnetic structure is a helical SDW,~\cite{Tsu89} one should not exclude the possibility of the commensurate~\cite{Fuj91, Kak02} and linear~\cite{Kak02} MSDWs that were found in the ground-state calculations for $\gamma$-Fe, because of the strong dependence of the magnetism of $\gamma$-Fe on the volume and strain. Experimentally, the SDW of cubic $\gamma$-Fe is found in a narrow range of lattice constants close to that of Cu, and the volume dependence of the magnetic structure of cubic $\gamma$-Fe has not been investigated. It is also noted that the possibility of a small lattice distortion is suggested at the onset of the 1$\mib{Q}$ SDW in the $\gamma$-Fe precipitates in Cu,~\cite{Nao04} which might change the stable structure of $\gamma$-Fe. In the present phenomenological analysis, we focused on the magnetism of fcc transition metals, specifically, that of cubic $\gamma$-Fe. Because the spin-orbit coupling effects are small in these systems, we neglected the anisotropic terms $C(l,l^{\prime},l^{\prime\prime},l^{\prime\prime\prime})$ in free energy (\ref{freefcc}). In order to examine the effect of anisotropy, we also calculated the magnetic phase diagrams including the anisotropic terms in the free energy. We found two main effects. First, inclusion of the anisotropic terms partially removes the degeneracy of each SDW state; the most stable states become the SDWs having the longitudinal and transverse polarizations with respect to the $x$, $y$, and $z$ axes, or the superposition of the three longitudinally (transversely) polarized states. Note that the longitudinal and transverse SDWs there remain degenerate with each other. 
Second, the phase boundary between SDW states is subject to a small displacement due to the anisotropy, but the global features of the magnetic phase diagrams in Figs.~\ref{fig1}-\ref{fig4} remain qualitatively the same as long as the anisotropic terms are sufficiently small; the conclusions of the present work are not changed by considering the anisotropic terms. \begin{acknowledgments} We are grateful to Professor Y. Tsunoda for valuable discussions on the experimental aspects of the SDW states of the cubic $\gamma$-Fe precipitates in Cu. \end{acknowledgments}
\section{Introduction} Recently, organic semiconductors have been attracting considerable attention for their use in electronic devices like organic light-emitting diodes and organic solar cells. \cite{Brutting} Carrier transport in molecular solids can be described by hopping transitions between neighboring molecules, and the mobility is considered to be strongly influenced by the electrostatic energy distribution on ionized molecules. \cite{Bassler_93} In amorphous molecular solids, the electrostatic energy at each molecule is different because the polarization originating from the surrounding molecules fluctuates if the molecular orientation and arrangement are distributed. \cite{Dunlap_96,Novikov_94,Young_95,Novikov_98,Seki_01} A Gaussian distribution of the site energy is expected from the central limit theorem, and the variance $\sigma^2$ characterizes the site energy disorder. \cite{Bassler_93,Dunlap_96,Novikov_94,Young_95,Seki_01} As a result of energetic disorder, the mobility deviates from the Arrhenius law, and its logarithm scales with the reciprocal square of the temperature. In analyzing experiments and interpreting computer simulation results, the low-field drift mobility in disordered organic solids has been commonly expressed in the form \cite{Seki_01,Parris_01,Lukyanov_10,Fishchuk_13,Baranovskii14} \begin{align} \mu_{\rm eff} \propto \exp[-E_a/(k_{\rm B} T)-C_d \sigma^2 /(k_{\rm B}T)^2], \label{eq:scaling1} \end{align} with a parameter $E_a$ characterizing the activation energy. $C_d$ is a numerical constant independent of $\sigma$ and temperature $T$. $k_{\rm B}$ is the Boltzmann constant. The expression given by Eq. (\ref{eq:scaling1}) has been frequently used to determine $\sigma$ from experimental data by plotting $\ln \mu_{\rm eff}$ against $1/T^2$. 
\cite{Tessler14,Bassler_93,Baranovskii14,Ochse99,Bleyl99,Hertel_08} In order to determine $\sigma$, the numerical value of $C_d$ should be known in advance, and it is important to theoretically determine $C_d$ to extract the correct value of $\sigma$ from experimental data. By means of simulations that assumed the Gaussian density of states and a carrier transport model based on phonon-assisted tunneling and hopping (Miller-Abrahams (MA) process), \cite{Ambegaokar_71,Bassler_93} the numerical parameter $C_d$ was found to be equal to $0.44$ in 3 dimensions (3D). \cite{Bassler_93} In 1 dimension (1D), $C_d=1$ with an extra weak $\sigma$-dependence is obtained for the same model by analytical exact calculation. \cite{Cordes_01} Clearly, the value of $C_d$ depends on the dimensionality and the coordination number. In principle, the parameter $C_d$ may be influenced by elementary transition rates. The carrier transport in organic solids can be regarded as a series of self-exchange reactions \cite{Soos_00,Seki_01,Verbeek_92} and the elementary transition rate of a self-exchange reaction in solution is expressed by the Marcus equation. \cite{Marcus_56,Marcus_64} The Marcus equation is equivalent to the small polaron model in organic solids by reinterpreting the reorganization energy. \cite{Holstein_59,HOLSTEIN_59_2} The reorganization energy in solution mainly originates from the coupling between the charge and solvent dipoles. In organic solids, it originates from the vibronic coupling in addition to the coupling between the charge and surrounding dipoles. Recently, the Marcus equation has been applied to study carrier transport in disordered molecular solids. \cite{Soos_00,Seki_01,Verbeek_92,Baranovskii14} In 1D, $C_d=3/4$ was obtained by analytical exact calculation based on the mean first passage time using the Marcus equation and the Gaussian density of states. \cite{Seki_01} This value is different from $C_d=1$ obtained using the MA process. 
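The extraction procedure based on Eq. (\ref{eq:scaling1}) can be illustrated with a short numerical sketch (added here for illustration; the synthetic data assume $E_a=0$, $\sigma=0.1$ eV, and the 3D MA value $C_d=0.44$ quoted above, and are not experimental values). With $E_a=0$, $\ln \mu_{\rm eff}$ is linear in $1/T^2$ with slope $-C_d\sigma^2/k_{\rm B}^2$, so a known $C_d$ converts the fitted slope into $\sigma$:

```python
# Sketch of the sigma-extraction from Eq. (1): synthetic ln(mu) data,
# linear in 1/T^2, converted back to sigma using an assumed C_d.

kB = 8.617e-5            # Boltzmann constant (eV/K)
C_d, sigma = 0.44, 0.1   # assumed constant and disorder strength (eV)

temps = [200.0, 250.0, 300.0, 350.0]                 # temperatures (K)
x = [1.0 / T**2 for T in temps]                      # 1/T^2 axis
y = [-C_d * (sigma / kB)**2 / T**2 for T in temps]   # ln(mu/mu0), E_a = 0

# Two-point slope suffices here because the model is exactly linear in 1/T^2.
slope = (y[-1] - y[0]) / (x[-1] - x[0])
sigma_fit = kB * (-slope / C_d) ** 0.5
assert abs(sigma_fit - sigma) < 1e-6  # recovered disorder strength (eV)
```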
For higher dimensions, the value of $C_d$ is still controversial. The obtained values vary between $1/8$ and $0.6$, and there are some reports that $C_d$ depends on the value of the reorganization energy. \cite{Cottaar_11,Fishchuk_03,Fishchuk_13,Radin_15,Baranovskii14} In this manuscript, we study the effective mobility for the 2D square lattice (coordination number $z=4$) and the 3D cubic lattice (coordination number $z=6$) using the Marcus equation and the Gaussian density of states. The effective mobility is approximately obtained by applying an effective medium approximation (EMA). In general, the self-consistency equation obtained by EMA is expressed as an integral equation. Here, the integral has been evaluated numerically, and an analytical expression has also been obtained by further approximating the integral. The result is expressed as a simple scaling form given by Eq. (\ref{eq:scaling1}). The validity of this approximation is checked by comparison with the original self-consistency equation. The EMA employed in this study is known to give the exact results for nearest-neighbor hopping transport in periodic lattices both in the limit of $z=2$ (one-dimensional periodic lattice) and $z \rightarrow \infty$. \cite{Haus_87,Kehr_96} However, the EMA results are approximate for other values of the coordination number. To assess the quality of the EMA approximation, we have performed kinetic Monte-Carlo simulations and compared the results with those obtained by EMA. In Sec. II, we show EMA results. In Sec. III, the results of EMA are compared with those obtained by kinetic Monte-Carlo simulations. In Sec. IV, we discuss our results, and in Sec. V we apply them to analyze experimental data. The conclusion is given in Sec. VI. 
\section{Theory} \label{sec:II} When carrier transport occurs by incoherent hopping transitions of a small polaron between adjacent molecules, the transition rate from the site denoted by $i$ to that denoted by $j$ is given by the Marcus equation, \cite{Marcus_56,Marcus_64} \begin{align} \Gamma_{ij} (\Delta E_i) = \frac{2\pi}{\hbar}\frac{J^2}{\sqrt{4\pi \lambda k_{\rm B} T}} \exp \left(- \frac{\left(\Delta E_i+\lambda\right)^2}{4\lambda k_{\rm B} T} \right), \label{eq:Marcus} \end{align} where $\Delta E_i=E_j-E_i$, $E_i$ and $E_j$ are the site energies, $\hbar$ is the Planck constant divided by $2\pi$, $J$ is the transfer integral, and $\lambda$ is the reorganization energy. In solid phases, the reorganization energy can be governed by both the vibronic coupling \cite{Levich_59} and the dielectric relaxation of the surroundings. \cite{Marcus_56,Marcus_64} For many molecular solids, the value of the reorganization energy is in the range $\lambda \sim 3$--$15\, k_{\rm B} T$. \cite{Bredas_04} The density of states of $E_i$ is assumed to obey the Gaussian distribution, \begin{align} g(E_i)=\frac{1}{\sqrt{2\pi \sigma^2}} \exp \left(-\frac{E_i^2}{2 \sigma^2}\right). \label{eq:siteenergy} \end{align} The mean energy $\langle E_i \rangle$ can in principle be set to an arbitrary value, since the Marcus equation depends only on the site energy difference. Here, we set $\langle E_i \rangle=0$. For the Gaussian density of states, the mean square displacement of a particle is known to be proportional to time, except for a certain non-stationary period. This behavior is confirmed by our simulations, as will be described later. If another form of the density of states is considered, described by a heavy-tailed exponential function, the transient non-stationary period is prolonged. \cite{Barkai_98,Harvey_91,Berlin_93} During the non-stationary period, the mean square displacement is not proportional to time and the diffusion coefficient is no longer a constant.
\cite{Barkai_98,Harvey_91,Berlin_93} Here, we focus on the effect of random energies on the diffusion constant of normal diffusion and will not study the effect of a heavy-tailed distribution leading to anomalous diffusion. The transition rate $\Gamma (0)$ in the absence of the site energy distribution is obtained as \begin{align} \Gamma (0) = \frac{2\pi}{\hbar}\frac{J^2}{\sqrt{4\pi \lambda k_{\rm B} T}} \exp \left(- \frac{\lambda}{4 k_{\rm B} T} \right), \label{eq:Gamma0} \end{align} where the activation energy of hopping is given by $\lambda/4$. Below, we consider the mobility of a single carrier on a hypercubic lattice. The coordination number of the lattice is denoted by $z$; we have $z=2d$ for a $d$-dimensional hypercubic lattice. A random site energy drawn from the distribution of Eq. (\ref{eq:siteenergy}) is assigned to each site. Because the Marcus rate depends on the site energies, the carrier mobility differs for each realization of the random site energies. The effective mobility can be defined as the ensemble average. In the EMA, a self-consistency condition is imposed to obtain the effective transition rate. The relation between the diffusion constant and the transition rate in the absence of the site energy distribution is given by $D_0=a^2 \Gamma(0)$, where $a$ is the lattice constant. The effective diffusion constant can be expressed using the effective transition rate as $D_{\rm eff}=a^2 \Gamma_{\rm eff}$. The ratio becomes \begin{align} \frac{D_{\rm eff}}{D_0}=\frac{\Gamma_{\rm eff}}{\Gamma(0)}. \label{eq:ratioD} \end{align} In the absence of the site energy distribution, the mobility satisfies the Einstein relation in the zero-field limit, $D_0=\mu_0 k_{\rm B} T/e$. The effective mobility also satisfies the Einstein relation $D_{\rm eff}=\mu_{\rm eff} k_{\rm B} T/e$ in 1 dimension in the zero-field limit. \cite{Derrida} In higher dimensions, the Einstein relation has been numerically confirmed under certain conditions in the dilute limit.
\cite{Haus_87} Since we are interested in the zero-field mobility and the Einstein relation holds under linear response, we can safely assume \begin{align} \frac{\mu_{\rm eff}}{\mu_0}=\frac{\Gamma_{\rm eff}}{\Gamma(0)} \label{eq:ratiomu} \end{align} and calculate $\Gamma_{\rm eff}/\Gamma(0)$ to obtain the mobility ratio $\mu_{\rm eff}/\mu_0$, where $\Gamma(0)$ is given by Eq. (\ref{eq:Gamma0}). The Einstein relation results from linear response theory for stationary processes, so it is applicable when the external electric field is sufficiently small. \cite{Toda_92} The precise condition for the field to be weak depends on the energetic disorder. \cite{Richert_89,Bouchaud_89,Derrida} A stronger electric field dependence was found for the effective diffusion constant than for the effective mobility. \cite{Richert_89,Bouchaud_89} It should also be noted that the Einstein relation does not hold at short times, before the process becomes stationary. This period again depends on the degree of energetic disorder. \cite{Schirmacher,BERLIN_96,Berlin_93,Berlin_99,Barkai_98,Harvey_91} We confirm the stationarity of the processes considered in this study by analyzing the simulation results obtained over wide ranges of time. In the simplest EMA, random energies are assigned to two neighboring sites; the ensemble average of the single transition rate connecting these sites is calculated, while all other transitions are described by an effective transition rate. The self-consistency condition is that the average over the different realizations of the random energies of the two neighboring sites reproduces the effective transition rate. When a single transition rate between a pair of neighboring sites is allowed to fluctuate and these sites are embedded in the effective medium, the two random sites should be statistically equivalent. As shown in Appendix A, the EMA can be simplified if the rate is symmetrized.
\cite{Haus_87,Kehr_96} In view of the detailed balance, the symmetrized rate can be given by \begin{align} \Gamma^{\rm sym} = \rho_i^{\rm(eq)} \Gamma_{ij}, \label{eq:symmetricrates} \end{align} where we abbreviated $\Gamma_{ij}^{\rm sym}$ by $\Gamma^{\rm sym}$. The abbreviation will not introduce confusion since only a single transition rate fluctuates. The equilibrium occupation probability at site $i$, denoted by $\rho_i^{\rm(eq)}$, can be expressed as \begin{align} \rho_i^{\rm(eq)}=\frac{\exp[-E_i/(k_{\rm B} T)]}{\langle \exp[- E_i/(k_{\rm B} T)] \rangle}= \exp\left[-\frac{E_i}{k_{\rm B} T} -\frac{1}{2}\left(\frac{\sigma}{k_{\rm B} T}\right)^2 \right]. \label{eq:rhoeq} \end{align} By using the Marcus hopping rate, $\Gamma^{\rm sym}$ can be explicitly written as \begin{align} \Gamma^{\rm sym}=\frac{2\pi}{\hbar}\frac{J^2}{\sqrt{4\pi \lambda k_{\rm B} T}} \exp \left(- \frac{\left(E_j-E_i\right)^2}{4\lambda k_{\rm B} T}- \frac{E_j+E_i}{2k_{\rm B} T} -\frac{\lambda}{4k_{\rm B} T}- \frac{\sigma^2}{2(k_{\rm B} T)^2}\right). \label{eq:symMarcus} \end{align} The self-consistency condition is given by (see Appendix A)\cite{Kirkpatrick_73,Haus_87,Kehr_96} \begin{align} \left\langle \frac{\Gamma_{\rm eff}-\Gamma^{\rm sym}}{(z/2-1) \Gamma_{\rm eff}+ \Gamma^{\rm sym}} \right\rangle=0, \label{eq:selfconsistent} \end{align} where $z$ is the coordination number, $\Gamma_{\rm eff}$ denotes the effective transition rate, and $\langle \cdots \rangle$ denotes the ensemble average expressed by \begin{align} \langle \cdots \rangle = \int_{-\infty}^\infty dE_i \int_{-\infty}^\infty dE_j \frac{1}{2\pi \sigma^2} \exp \left( -\frac{E_i^2+E_j^2}{2 \sigma^2} \right) \cdots. \label{eq:av} \end{align} When $z=2$ (1D), Eq. (\ref{eq:selfconsistent}) reduces to \cite{Kehr_96} \begin{align} \frac{1}{\Gamma_{\rm eff}}= \left\langle \frac{1}{\Gamma^{\rm sym}} \right\rangle.
\end{align} The result is the same as the exact one obtained using the mean first-passage time, \cite{Seki_01} \begin{align} \frac{\Gamma_{\rm eff}}{\Gamma(0)} = \exp\left[-\frac{3}{4} \left(\frac{\sigma }{k_{\rm B}T}\right)^2 \right], \label{eq:scaling1_1d} \end{align} where $\Gamma(0)$ is given by Eq. (\ref{eq:Gamma0}) and is proportional to $\exp[-\lambda/(4 k_{\rm B} T)]/\sqrt{\lambda k_{\rm B} T}$. To solve the self-consistency condition analytically for $z>2$, we rewrite it as \begin{align} \frac{1}{z/2-1}\left\langle 1- \frac{(z/2)\, \Gamma^{\rm sym}}{(z/2-1) \Gamma_{\rm eff}+ \Gamma^{\rm sym}} \right\rangle=0. \label{eq:selfcc} \end{align} By rearrangement, we finally obtain \begin{align} \frac{2}{z} = \left\langle \frac{1}{1+(z/2-1)\Gamma_{\rm eff}/\Gamma^{\rm sym} } \right\rangle. \label{eq:selfcons_basic} \end{align} Here, we note that the factor $1/[1+(z/2-1)\Gamma_{\rm eff}/\Gamma^{\rm sym}]$ resembles the Fermi-Dirac distribution function, a resemblance which we will exploit below. In order to see the pure influence of the random site energy, we introduce a normalized transition rate defined by \begin{align} \Gamma_{\rm r} (\Delta E_i)=\frac{\Gamma_{ij} (\Delta E_i)}{\Gamma (0)} = \exp \left(- \frac{(\Delta E_i)^2}{4\lambda k_{\rm B} T}- \frac{\Delta E_i}{2k_{\rm B} T} \right). \label{eq:Gammar} \end{align} We can express $\Gamma^{\rm sym}/\Gamma_{\rm eff}$ as \begin{align} \frac{\Gamma^{\rm sym}}{\Gamma_{\rm eff}}=\frac{\Gamma_{\rm r} (\Delta E_i)\exp[-E_i/(k_{\rm B} T)]}{G_{\rm eff}}, \label{eq:convert} \end{align} where we defined \begin{align} G_{\rm eff}=\Gamma_{\rm eff} \langle \exp[-E_i/(k_{\rm B} T)] \rangle/\Gamma (0).
\label{eq:Geff} \end{align} Equation (\ref{eq:selfcons_basic}) can be reexpressed as \begin{align} \frac{2}{z} = \left\langle \frac{1}{1+\exp \left[\left(E_i-\eta(\Delta E_i) \right)/(k_{\rm B} T)\right] } \right\rangle, \label{eq:selfcons_basic1} \end{align} where $\eta(\Delta E_i)$ is defined by \begin{align} \eta(\Delta E_i)&= - k_{\rm B} T \ln \left[(z/2-1) G_{\rm eff}/\Gamma_{\rm r} (\Delta E_i)\right] \label{eq:chemp}\\ &=- k_{\rm B} T \ln \left[\left(\frac{z}{2}-1\right) G_{\rm eff}\right] - \frac{(\Delta E_i)^2}{4\lambda}- \frac{\Delta E_i}{2}. \label{eq:chemp1} \end{align} Equation (\ref{eq:selfcons_basic1}) can be further rearranged into \begin{align} \frac{2}{z} = \left\langle \frac{1}{1+\exp \left[\left(E_i+\frac{\Delta E_i}{2}+\frac{(\Delta E_i)^2}{4\lambda}-\eta_0 \right)/(k_{\rm B} T)\right] } \right\rangle, \label{eq:selfcons_basic1_r} \end{align} where $\eta_0$ is defined by \begin{align} \eta_0=- k_{\rm B} T \ln \left[\left(\frac{z}{2}-1\right) G_{\rm eff}\right] . \label{eq:eta0} \end{align} The quantity inside $\left\langle \cdots \right\rangle$ in Eq. (\ref{eq:selfcons_basic1_r}) can be approximated as $1$ when $E_i+\Delta E_i/2+(\Delta E_i)^2/(4\lambda)$ is smaller than $\eta_0$ and decreases to zero as $E_i+\Delta E_i/2+(\Delta E_i)^2/(4\lambda)$ increases above $\eta_0$. In this sense, $\eta_0$ plays a role similar to that of the chemical potential in the Fermi-Dirac distribution function. Note that the value of $\eta_0$ can be determined for given values of $G_{\rm eff}$ and $z$. The percolation path for a given value of $\eta_0$ consists of random energies satisfying $E_i+\Delta E_i/2+(\Delta E_i)^2/(4\lambda)\leq\eta_0 $. The interpretation of EMA results in terms of a percolation path was previously discussed for the transition rates used to study ion transport. \cite{Schirmacher} We also note that Eq.
(\ref{eq:av}) can be rewritten as \begin{align} \langle \cdots \rangle = \int_{-\infty}^\infty d\Delta E_i \int_{-\infty}^\infty dE_i \frac{1}{2\pi \sigma^2} \exp \left( -\frac{(E_i+\Delta E_i/2)^2}{\sigma^2} - \frac{\Delta E_i^2}{4 \sigma^2} \right) \cdots. \label{eq:av1} \end{align} The average with respect to $E_i$ is given by a Gaussian function whose maximum is at $-\Delta E_i/2$. We need different approximations to evaluate the integration with respect to $E_i$, depending on the relative values of the maximum position $-\Delta E_i/2$ and $\eta(\Delta E_i)$. The condition $\eta(\Delta E_i)<-\Delta E_i/2$ can be expressed as \begin{align} \ln \left[ \frac{\Gamma_{\rm r} (\Delta E_i)}{(z/2-1)G_{\rm eff}} \right] < -\frac{\Delta E_i}{2 k_{\rm B} T} . \label{eq:cond1} \end{align} For the Marcus rate, Eq. (\ref{eq:cond1}) can be expressed using Eq. (\ref{eq:Gammar}) as \begin{align} \exp \left(- \frac{(\Delta E_i)^2}{4\lambda k_{\rm B} T} \right) < \left(\frac{z}{2}-1\right)G_{\rm eff} . \label{eq:cond2_1} \end{align} We note that Eq. (\ref{eq:cond2_1}) holds for $z\geq 4$, at least when $\sigma$ is small so that $G_{\rm eff} \sim 1$. Therefore, $\eta(\Delta E_i)<-\Delta E_i/2$ is the relevant regime for $z\geq 4$. When $\eta(\Delta E_i)<-\Delta E_i/2$, we can employ the saddle-point method, evaluating the integrand at $E_i=-\Delta E_i/2$, to reduce the double integration in Eq. (\ref{eq:selfcons_basic1}) to a single integration, \begin{align} \frac{2}{z} = \frac{1}{2\sqrt{\pi \sigma^2}} \int_{-\infty}^\infty d\Delta E_i \frac{\exp \left[ - \Delta E_i^2/\left(4 \sigma^2\right)\right]} {1+(z/2-1) G_{\rm eff}\exp \left[-\Delta E_i/\left(2k_{\rm B} T\right)\right]/\Gamma_{\rm r} (\Delta E_i) } . \label{eq:selfcons_basic2} \end{align} We numerically confirm the solution of Eq. (\ref{eq:selfcons_basic2}) by comparison with that of the original self-consistency equation given by Eq. (\ref{eq:selfconsistent}) in Fig. \ref{fig:1}. When $\lambda/(k_{\rm B} T)=10$, we find quite good agreement.
When $\lambda/(k_{\rm B} T)=3$, some deviation is observed. \begin{figure} \includegraphics[width=1\columnwidth]{Marcus_EMA} \caption{$\Gamma_{\rm eff} /\Gamma (0)$ plotted as a function of $\sigma^2/(k_{\rm B} T)^2$. (a) 2D ($z=4$) and (b) 3D ($z=6$). The solid lines represent the scaling relation with $C_d=1/2$ obtained using EMA and given by Eq. (\ref{eq:selfcons_basic3}). Black circles and the long-dashed line indicate semi-analytical EMA results obtained for $\lambda/(k_{\rm B} T)=10$ by numerically evaluating Eq. (\ref{eq:selfconsistent}) (double integration) and Eq. (\ref{eq:selfcons_basic2}) (single integration), respectively. Red squares and the red short-dashed line indicate analogous semi-analytical EMA results obtained for $\lambda/(k_{\rm B} T)=3$. } \label{fig:1} \end{figure} Furthermore, when $\sigma$ is small we can again employ the saddle-point method and obtain $2/z \sim 1/[1+(z/2-1) G_{\rm eff}]$, where we have used $\Gamma_{\rm r} (0)=1$; this gives $G_{\rm eff}=1$. By inserting the definition of $G_{\rm eff}$ given by Eq. (\ref{eq:Geff}), we obtain a scaling relation, \begin{align} \Gamma_{\rm eff} /\Gamma (0) \approx 1/\langle \exp[-E_i/(k_{\rm B} T)] \rangle = \exp\left[-\frac{1}{2}\left(\frac{\sigma}{k_{\rm B} T}\right)^2\right], \label{eq:selfcons_basic3} \end{align} where the transition rate in the absence of disorder, $\Gamma(0)$, is given by Eq. (\ref{eq:Gamma0}). As shown in Fig. \ref{fig:1}, the simple scaling relation of Eq. (\ref{eq:selfcons_basic3}) gives a result very close to that obtained from the original self-consistency equation, Eq. (\ref{eq:selfconsistent}), when $\lambda/(k_{\rm B} T)=10$. When $\lambda/(k_{\rm B} T)=3$, the accuracy of the scaling relation is reduced. In the following, we examine the validity of the scaling relation by using kinetic Monte-Carlo simulations.
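For illustration, the reduced self-consistency condition can be solved numerically with a few lines of code. Evaluating the $E_i$ average in Eq. (\ref{eq:selfcons_basic1_r}) at the peak of its Gaussian weight, $E_i=-\Delta E_i/2$, leaves $2/z = \langle 1/\{1+(z/2-1)G_{\rm eff}\exp[\Delta E_i^2/(4\lambda k_{\rm B} T)]\}\rangle$, where $\Delta E_i$ is Gaussian with variance $2\sigma^2$. The sketch below (our own minimal illustration, not the code used for Fig. \ref{fig:1}; function names are ours, and energies are measured in units of $k_{\rm B}T$) solves this condition for $G_{\rm eff}$ by bisection and converts the result to $\Gamma_{\rm eff}/\Gamma(0)=G_{\rm eff}\exp[-\sigma^2/(2(k_{\rm B}T)^2)]$ via Eq. (\ref{eq:Geff}):

```python
import math

def solve_geff(z, sigma, lam, n=4001, span=8.0, tol=1e-10):
    """Solve the reduced EMA self-consistency condition
        2/z = < 1 / (1 + (z/2-1) G exp[dE^2/(4 lam)]) >,
    where dE is Gaussian with variance 2*sigma^2.
    All energies are in units of kB*T.  Returns G_eff."""
    if sigma == 0.0:
        return 1.0
    w = math.sqrt(2.0) * sigma              # std. dev. of dE = E_j - E_i
    de = [(-span + 2.0 * span * i / (n - 1)) * w for i in range(n)]
    gauss = [math.exp(-x * x / (4.0 * sigma * sigma)) for x in de]
    norm = 2.0 * math.sqrt(math.pi) * sigma  # normalization of the dE Gaussian
    h = de[1] - de[0]

    def avg(G):
        c = z / 2.0 - 1.0
        f = [g / (1.0 + c * G * math.exp(x * x / (4.0 * lam)))
             for x, g in zip(de, gauss)]
        # trapezoidal rule over the truncated (+-span std. dev.) range
        return h * (sum(f) - 0.5 * (f[0] + f[-1])) / norm

    # avg(G) decreases monotonically in G; avg(0)=1 > 2/z >= avg(1),
    # so the root lies in [0, 1] and bisection converges.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if avg(mid) > 2.0 / z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mobility_ratio(z, sigma, lam):
    """Gamma_eff/Gamma(0) = G_eff * exp(-sigma^2/2), from Eq. (eq:Geff)."""
    return solve_geff(z, sigma, lam) * math.exp(-0.5 * sigma * sigma)
```

Within this sketch, $G_{\rm eff}=1$ at $\sigma=0$ and decreases with increasing $\sigma$, so the resulting ratio stays at or below the simple scaling form of Eq. (\ref{eq:selfcons_basic3}), consistent with the comparison shown in Fig. \ref{fig:1}.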
\section{Simulation results} \label{sec:III} The simulation is carried out on a square lattice ($z=4$) or a cubic lattice ($z=6$), with the lattice constant assumed to be $a=1$. A particle is initially placed at site (0,0) or (0,0,0), respectively. The energy at this site ($E_i$) and the energies at all nearest neighbor sites ($E_j$, $j=1,2,\cdots, z$) are sampled from the normal distribution with zero mean and standard deviation $\sigma$, Eq. (\ref{eq:siteenergy}). The transition rates to the nearest neighbor sites, $\Gamma_{ij}$, are calculated from Eq. (\ref{eq:Marcus}), where the frequency factor $\nu_0=\left(2\pi/\hbar\right)J^2/\sqrt{4\pi \lambda k_{\rm B} T}$ is set equal to one. It is randomly decided to which of the nearest neighbor sites the particle will hop, with the probability of each hop being proportional to the corresponding transition rate $\Gamma_{ij}$. The time of the hop is sampled from an exponential distribution with the mean value $\tau=\left( \Gamma_{\rm tot}\right)^{-1}$, where $\Gamma_{\rm tot}=\sum_{j=1}^z \Gamma_{ij}$. The selected hop is then executed, and the procedure of sampling energies for new nearest neighbor sites (if not sampled before), calculating the transition rates, selecting the next hop, and so on, is repeated. The simulation run is carried out until the assumed total time $t_{\rm sim}$ is reached, and the squared distance of the particle from the origin, $r^2(t_{\rm sim})$, is then recorded. The energies assigned to the lattice sites are kept in memory for the whole duration of the simulation run. The simulation is repeated for $\sim 10^4$ independent runs to obtain the mean value $\langle r^2(t_{\rm sim}) \rangle$. The effective diffusion constant, relative to $D_0$, is then calculated as \begin{align} \frac{D_{\rm eff}}{D_0}=\frac{\langle r^2(t_{\rm sim}) \rangle}{t_{\rm sim} a^2 z \nu_0 \exp \left[-\lambda/(4 k_{\rm B} T) \right]}. \label{eq:sim1} \end{align} $D_{\rm eff}/D_0$ is essentially equivalent to $\Gamma_{\rm eff}/\Gamma(0)$ (cf. Eq.
(\ref{eq:ratioD})). The simulation time $t_{\rm sim}$ has to be sufficiently long so that the long-time limit of Eq. (\ref{eq:sim1}) is reached. We analyzed the dependence of $D_{\rm eff}/D_0$ on $t_{\rm sim}$ for each set of the parameters, and found that it shows a decreasing trend at small values of $t_{\rm sim}$. For the final results presented in Fig. \ref{fig:2}, sufficiently long simulation times were chosen, for which this decreasing trend could no longer be observed. \begin{figure} \includegraphics[width=1\columnwidth]{MarcusMC_2D_3D} \caption{$\Gamma_{\rm eff} /\Gamma (0)$ plotted as a function of $\sigma^2/(k_{\rm B} T)^2$. (a) 2D ($z=4$) and (b) 3D ($z=6$). The line represents the scaling relation given by Eq. (\ref{eq:selfcons_basic3}) obtained using EMA. The crosses, circles, squares, and triangles indicate the kinetic Monte-Carlo simulation results for $\lambda/(k_{\rm B} T)=15, 10, 5, 3$, respectively. The dashed line in (b) indicates the result of fitting to Eq. (\ref{eq:fitt}) when $\lambda/(k_{\rm B} T)=10$; $C_d=0.42$ is obtained from the fitting. } \label{fig:2} \end{figure} The simulation results are compared with the EMA results in Fig. \ref{fig:2}. For 2D ($z=4$), the simulation results and the scaling relation given by Eq. (\ref{eq:selfcons_basic3}) coincide for $\lambda/(k_{\rm B} T) \geq 10$. When the value of $\lambda/(k_{\rm B} T)$ is below $10$, the simulation results for $D_{\rm eff} /D_0$ depend on $\lambda/(k_{\rm B} T)$ and fall below the line drawn using Eq. (\ref{eq:selfcons_basic3}). For 3D ($z=6$), the results of the kinetic Monte-Carlo simulations are independent of $\lambda/(k_{\rm B} T)$ when $\lambda/(k_{\rm B} T) \geq 10$. Unlike in the 2D case ($z=4$), the line drawn using Eq. (\ref{eq:selfcons_basic3}) is now below the simulation results.
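The kinetic Monte-Carlo procedure described above can be sketched in compact form as follows (a minimal illustration of the algorithm, not the production code used for Fig. \ref{fig:2}; function names are ours, energies are in units of $k_{\rm B}T$, and $\nu_0=a=1$ as in the text):

```python
import math
import random

def marcus_rate(dE, lam):
    """Marcus rate, Eq. (eq:Marcus), with nu_0 = 1; energies in units of kB*T."""
    return math.exp(-(dE + lam) ** 2 / (4.0 * lam))

def kmc_msd(dim, sigma, lam, t_sim, n_runs=200, seed=1):
    """Mean squared displacement <r^2(t_sim)> on a dim-dimensional
    hypercubic lattice with Gaussian site energies (std. dev. sigma)."""
    rng = random.Random(seed)
    steps = []                              # the 2*dim nearest-neighbor steps
    for axis in range(dim):
        for s in (+1, -1):
            e = [0] * dim
            e[axis] = s
            steps.append(tuple(e))
    msd = 0.0
    for _ in range(n_runs):
        energy = {}                         # site energies kept for the whole run
        def E(site):
            if site not in energy:
                energy[site] = rng.gauss(0.0, sigma)
            return energy[site]
        pos, t = (0,) * dim, 0.0
        while True:
            Ei = E(pos)
            nbrs = [tuple(p + s for p, s in zip(pos, st)) for st in steps]
            rates = [marcus_rate(E(nb) - Ei, lam) for nb in nbrs]
            tot = sum(rates)
            dt = rng.expovariate(tot)       # waiting time, mean 1/Gamma_tot
            if t + dt > t_sim:
                break                       # position at t_sim is the current one
            t += dt
            r, acc = rng.random() * tot, 0.0
            for nb, rate in zip(nbrs, rates):
                acc += rate
                if r <= acc:                # hop chosen with probability rate/tot
                    pos = nb
                    break
        msd += sum(c * c for c in pos)
    return msd / n_runs

def diffusion_ratio(dim, sigma, lam, t_sim, **kw):
    """D_eff/D_0 according to Eq. (eq:sim1), with a = nu_0 = 1 and z = 2*dim."""
    z = 2 * dim
    return kmc_msd(dim, sigma, lam, t_sim, **kw) / (t_sim * z * math.exp(-lam / 4.0))
```

Without disorder ($\sigma=0$) the ratio is unity up to statistical error; with disorder it decreases toward the long-time plateau, and, as discussed above, $t_{\rm sim}$ must be long enough for the decreasing transient to die out.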
If we assume that the activation energy is not influenced by the random energies and is given by $E_a=\lambda/4$, we obtain $C_d=0.42$ by fitting to \begin{align} D_{\rm eff}/D_0=\exp\left[-C_d \sigma^2/(k_{\rm B} T)^2\right] \label{eq:fitt} \end{align} when $\lambda/(k_{\rm B} T) \geq 10$. This value is smaller than $C_d=1/2$ obtained from the scaling relation given by Eq. (\ref{eq:selfcons_basic3}). When $\lambda/(k_{\rm B} T)$ is below $10$, the simulation results for $\Gamma_{\rm eff} /\Gamma (0)$ depend on $\lambda/(k_{\rm B} T)$ and approach the line drawn using Eq. (\ref{eq:selfcons_basic3}) as the value of $\lambda/(k_{\rm B} T)$ decreases. The results of the EMA show a systematic deviation from the simulation results depending on $\lambda/(k_{\rm B} T)$ and the coordination number $z$, although the magnitude of this deviation is not large. The deviation could originate from the use of the simplest version of the EMA, in which only a single transition rate is under the influence of the random energy; the random energies at the other sites are taken into account only through the representative transition rate of the effective medium. The accuracy of this approximation depends on the coordination number and the value of the reorganization energy, as shown in Fig. \ref{fig:2}. \section{Discussion} The effective mobility relative to $\mu_0$ is independent of $\lambda/(k_{\rm B} T)$ in 1D. \cite{Seki_01} For higher dimensions ($z>2$), $\mu_{\rm eff}/\mu_0$ depends on $\lambda/(k_{\rm B} T)$ when $\lambda/(k_{\rm B} T)<10$. When $\lambda/(k_{\rm B} T)\geq 10$, Eq. (\ref{eq:fitt}) with $C_d=1/2$ reproduces the simulation results for 2D ($z=4$), and $C_d=0.42$ is obtained by fitting to the simulation results for 3D ($z=6$). So far, various values of $C_d$ have been reported for the Marcus transition rate by assuming Eq. (\ref{eq:scaling1}) in 3D.
Using a different form of the EMA self-consistency equation, Fishchuk {\it et al.} obtained $C_d=1/8$ for 3D when $\lambda/2>\sigma$. \cite{Fishchuk_03} Later, it was suggested that the value of $C_d$ varies between $0.25$ and $0.44$ depending on $\lambda/\sigma$. \cite{Fishchuk_13} Recently, a scaling form of Eq. (\ref{eq:scaling1}) with $C_d=1/2$ was proposed using the concept of fat percolation. \cite{Cottaar_11} In the fat percolation theory, $E_a$ may contain a contribution from the random site energy and can be different from $\lambda/4$. The results of the fat percolation theory were compared to the numerical results obtained using the master equation method. \cite{Cottaar_11} The numerical values of $C_d$ obtained for the simple cubic lattice were in the range $0.44$--$0.69$ when $E_a$ was regarded as a free fitting parameter. \cite{Cottaar_11} The values of $C_d$ determined from fitting can thus be influenced by the assumed values of $E_a$. We share the conclusion of the fat percolation theory that the scaling holds with $C_d=1/2$ for the simple cubic lattice. There remain subtle issues regarding how $E_a=\lambda/4$ and $C_d=1/2$ should be corrected for the simple cubic lattice, for which a $16 \%$ smaller value of $C_d$ is obtained by fitting to the results of kinetic Monte-Carlo simulations using $E_a=\lambda/4$ for $\lambda/(k_{\rm B} T)\geq 10$. In this study, an analytical expression was derived approximately from the self-consistency equation of the EMA. In the fat percolation theory, an additional dependence of $E_a$ on $\sigma$ can be considered; \cite{Cottaar_11} the corresponding correction term is small compared to the accuracy of the EMA used in this study. For simplicity, we set $E_a=\lambda/4$ when determining $C_d$ from the kinetic Monte-Carlo simulations. More elaborate theories are required to study such deviations. Very recently, $E_a=\lambda/4$ and $C_d=1/4$ have been suggested as an upper bound using the generalized effective medium theory.
\cite{Radin_15} We can obtain $E_a=\lambda/4$ and $C_d=1/4$ by taking the $z \rightarrow \infty$ limit in the EMA (see Appendix B). For the simple cubic lattice we have $z=6$, which is too small to be regarded as the $z \rightarrow \infty$ limit. As a result, the EMA result for $z=6$ is very different from that obtained in the limit $z \rightarrow \infty$. We focused on the effective mobility in the limit of low carrier concentration. At high carrier concentration, one should note that carrier transitions are not allowed if the target sites are occupied. When the effective mobility is obtained under a steady state at high carrier concentration, the low-energy states are filled. Since the part of the density of states below a certain energy is mainly occupied, the density of unoccupied states differs from the total density of states, which includes the occupied states. The carrier mobility increases with increasing carrier concentration when this filling effect sets in. \cite{Cottaar_11,Fishchuk_13,Lu_15} Recently, it has been debated whether $C_d$ depends on the ratio between $\lambda$ and $\sigma$ at high carrier concentration. \cite{Cottaar_11,Fishchuk_13,Lu_15} In Ref. \onlinecite{Fishchuk_13}, a dependence of $C_d$ on the ratio between $\lambda$ and $\sigma$ was obtained by Monte-Carlo simulations and an effective medium theory with an averaging method different from that employed here. At sufficiently low carrier concentration, their results and ours should coincide. Unfortunately, since the concentration dependence of $C_d$ is unclear, the results of Ref. \onlinecite{Fishchuk_13} cannot be directly compared with ours. \section{ANALYSIS OF EXPERIMENTAL DATA} In this Section, the theoretical results obtained in the present study are applied to analyze experimental data. We show two examples of such an analysis, in which we interpret hole mobilities measured in 2D and 3D systems.
We assume that the effective mobility can be expressed as \begin{align} \mu_{\rm eff}=\frac{C_{\mu}}{\lambda^{1/2} \left(k_{\rm B} T\right)^{3/2}}\exp\left[-\frac{\lambda}{4k_{\rm B}T}- C_d \left(\frac{\sigma}{k_{\rm B} T}\right)^2 \right], \label{eq:conclusion1} \end{align} where $C_{\mu}$ is a constant independent of $T$, $\lambda$, and $\sigma$. For the analysis of the 2D system, we use $C_d=0.5$, as obtained from both the EMA and the Monte-Carlo simulations at $\lambda \geq 10 k_{\rm B} T$. For the 3D system, we use $C_d=0.42$, obtained from the simulations when $\lambda \geq 10 k_{\rm B} T$. Using the experimental data, we determine the values of the disorder parameter $\sigma$ and compare them with those obtained by the conventional method, in which the Miller-Abrahams (MA) rate is used to describe the charge carrier transitions instead of the Marcus rate. The MA rate is expressed as $\Gamma_{ij} (\Delta E_i) = \Gamma_0$ for $\Delta E_i\leq 0$ and $ \Gamma_{ij} (\Delta E_i) = \Gamma_0 \exp \left[- \Delta E_i/(k_{\rm B} T) \right] $ for $\Delta E_i> 0$, where $\Gamma_0$ is a constant independent of $T$ and $\Delta E_i$. As shown by Monte-Carlo simulations, when the MA rate is used to model the hopping transitions, the effective mobility for the cubic lattice is well described by \begin{align} \mu_{\rm eff}=C_{\mu}' \exp\left[- C_{\rm MA} \left(\frac{\sigma_{\rm MA}}{k_{\rm B} T}\right)^2 \right], \label{eq:fitting2} \end{align} where $C_{\rm MA}=0.44$. It should be noted that the activation energy $E_a$ does not appear in Eq. (\ref{eq:fitting2}); the charge carrier transport was interpreted in this case as an exclusively disorder-controlled ($E_a=0$) process. In the present study, we obtained $C_d=0.42$ with $E_a=\lambda/4$ and an additional algebraic $T$-dependence under the condition $\lambda \geq 10 k_{\rm B} T$.
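The way $\sigma$ is extracted from Eq. (\ref{eq:conclusion1}) can be illustrated as follows. Taking the logarithm gives $\ln \mu_{\rm eff} + \frac{3}{2}\ln T + \lambda/(4 k_{\rm B} T) = {\rm const} - C_d\, \sigma^2/(k_{\rm B} T)^2$, so a plot of the left-hand side against $1/T^2$ is linear with slope $-C_d \sigma^2/k_{\rm B}^2$. The sketch below is our own illustration with synthetic data (not the experimental data analyzed below); it recovers $\sigma$ from mobilities generated by Eq. (\ref{eq:conclusion1}):

```python
import math

KB = 8.617333262e-5                  # Boltzmann constant in eV/K

def extract_sigma(T, mu, lam, C_d):
    """Least-squares slope of y = ln(mu) + 1.5*ln(T) + lam/(4*KB*T)
    versus x = 1/T**2; returns the disorder parameter sigma in eV."""
    x = [1.0 / t ** 2 for t in T]
    y = [math.log(m) + 1.5 * math.log(t) + lam / (4.0 * KB * t)
         for t, m in zip(T, mu)]
    n = len(T)
    xm, ym = sum(x) / n, sum(y) / n
    slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
             / sum((xi - xm) ** 2 for xi in x))
    return KB * math.sqrt(-slope / C_d)   # slope = -C_d*sigma^2/KB^2

# synthetic check: generate mobilities from Eq. (eq:conclusion1), then recover sigma
lam, sigma, C_d = 0.3, 0.09, 0.42         # eV, eV, and the 3D value of C_d
T = [200.0, 225.0, 250.0, 275.0, 300.0]   # temperatures in K (illustrative)
mu = [1.0 / (math.sqrt(lam) * (KB * t) ** 1.5)
      * math.exp(-lam / (4 * KB * t) - C_d * (sigma / (KB * t)) ** 2)
      for t in T]
print(extract_sigma(T, mu, lam, C_d))     # ~0.09, the input sigma
```

Because the synthetic data are exactly linear in $1/T^2$ after the rearrangement, the input $\sigma$ is recovered to machine precision; for real data, the quality of the fit also reflects the assumed values of $\lambda$ and $C_d$.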
\begin{table} \caption{\label{tab:table1} Reorganization energy and disorder parameters obtained from the temperature dependence of the mobilities reported in Ref. \onlinecite{Hoffmann_13}. } \begin{ruledtabular} \begin{tabular}{lcccc} copolymer\footnote{Ref. \onlinecite{Hoffmann_13}.}& $\lambda$ [eV]\footnotemark[1]& $\sigma$ [eV]\footnote{The values obtained using Eq. (\ref{eq:conclusion1}).}& $\sigma_{\rm MA}$ [eV]\footnote{The values obtained using Eq. (\ref{eq:fitting2}).}& $\sigma/\sigma_{\rm MA}$ [\%]\\ \hline 1 & 0.3 & 0.095&0.109&87\\ 3& 0.3 & 0.098&0.102&96\\ 7 &0.2 & 0.074&0.089&83\\ 9 & 0.3 & 0.065&0.091&71\\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure} \includegraphics[width=0.5\columnwidth]{Marcus_3Danalysis} \caption{Mobility plotted as a function of $1/T$ [1/K]. Squares, triangles, circles, and diamonds indicate the experimental data obtained in \mbox{Ref. \onlinecite{Hoffmann_13}} for compounds 1, 3, 7, and 9, respectively. The lines represent the results of fitting using Eq. (\ref{eq:conclusion1}) with $C_d=0.42$ and the values of $\lambda$ shown in Table \ref{tab:table1}. } \label{fig:3} \end{figure} In recent experiments, both the reorganization energy and the effective hole mobility were measured in conjugated copolymers. \cite{Hoffmann_13} Hole transport in conjugated copolymers can be regarded as a random walk in a 3D system. For all copolymers, the values of the reorganization energy were estimated to be in the range of $0.2$--$0.3$ eV, as summarized in Table \ref{tab:table1}. These values approximately satisfy $\lambda \geq 10 k_{\rm B} T$. Therefore, Eq. (\ref{eq:conclusion1}) with $C_d=0.42$ is applicable. In Ref. \onlinecite{Hoffmann_13}, the experimental data were interpreted by assuming either exclusively polaronic ($\sigma=0$, $E_a\neq0$) or exclusively disorder-controlled ($E_a=0$) transport for the holes.
It seems more natural to assume that the hole transport is both affected by the disorder of the medium ($\sigma\neq0$) and displays a non-zero activation energy that originates from the reorganization energy. The latter was measured optically in Ref. \onlinecite{Hoffmann_13}, separately from the time-of-flight experiments performed to determine the effective mobility. We analyze 4 types of conjugated alternating phenanthrene indenofluorene copolymers, denoted by 1, 3, 7, and 9 in Ref. \onlinecite{Hoffmann_13}. The reorganization energy obtained from an analysis of fluorescence spectra is $\lambda=0.3$ eV for copolymers 1, 3, and 9 and $\lambda=0.2$ eV for copolymer 7. We fit Eq. (\ref{eq:conclusion1}) to the experimental data, as illustrated in Fig. \ref{fig:3}, and determine the values of $\sigma$, which are listed in Table \ref{tab:table1} together with the values of $\sigma_{\rm MA}$ reported in Ref. \onlinecite{Hoffmann_13}. The values of $\sigma$ are $4$--$29$ \% smaller than $\sigma_{\rm MA}$. These results indicate that when the reorganization energy is ignored, the disorder parameter $\sigma$ can be significantly overestimated. Regarding the question of whether the hole transport is polaronic or disorder-controlled, we note that the determined values of $\sigma$ and the thermal activation energy of polaron transport, $E_a=\lambda/4$, are comparable. In this sense, both the reorganization energy and the energetic disorder affect the effective mobility. As an example of 2D charge carrier transport, we consider the hole transport in smectic liquid crystals. Smectic liquid crystals form layered structures, and holes are expected to move within a layer. We analyze the temperature dependence of the hole mobility in the 2D smectic mesophase of the biphenyl derivative 6O-BP-6 reported in Ref. \onlinecite{Ohno_03}. In the temperature range shown in Fig. \ref{fig:4}, the liquid crystal is in the SmE phase, in which the molecules form a rectangular lattice in each layer.
For the reorganization energy, we assume $\lambda=0.3$ eV, a typical value for organic molecules. This value satisfies $\lambda \geq 10 k_{\rm B} T$, so we apply Eq. (\ref{eq:conclusion1}) with $C_d=1/2$ obtained for 2D carrier transport. By analyzing the experimental data, we obtain $\sigma=0.089$ eV, which is 19\% smaller than $\sigma_{\rm MA}=0.11$ eV obtained in Ref. \onlinecite{Ohno_03}. Our value of $\sigma$ is closer to the range $0.05$--$0.06$ eV, which is considered a typical range of the disorder parameter characterizing hole transport in smectic liquid crystals. \cite{Ohno_03} \begin{figure} \includegraphics[width=0.5\columnwidth]{Marcus_2Danalysis} \caption{2D hole mobility in the SmE phase of a liquid crystal plotted as a function of $1/T$ [1/K]. The circles indicate experimental data taken from Ref. \onlinecite{Ohno_03}. The solid line is the result of fitting using Eq. (\ref{eq:conclusion1}) with $C_d=1/2$ and $\lambda=0.3$ eV. } \label{fig:4} \end{figure} \section{Conclusion} \label{sec:VI} Using an effective medium approximation (EMA), we have analytically derived the scaling relation given by Eq. (\ref{eq:conclusion1}). Equation (\ref{eq:conclusion1}) describes the effective charge carrier mobility when the elementary transition rate is given by the Marcus equation and the density of states is Gaussian. We have also performed kinetic Monte-Carlo simulations for 2D ($z=4$, square lattice) and 3D ($z=6$, cubic lattice) to obtain the parameter $C_d$ by fitting. Our results can be summarized as follows. Previously, $C_d=3/4$ was derived for 1D systems. \cite{Seki_01} We have now obtained $C_d=1/2$ for 2D ($z=4$) and $C_d=0.42$ for 3D ($z=6$) when $\lambda/(k_{\rm B} T) \geq 10$. The last value was obtained by kinetic Monte-Carlo simulations and is somewhat lower than our analytical result ($C_d=1/2$) obtained for the 3D system.
We note that the value of $C_d$ for 1D systems is very different from those obtained for lattices of higher dimensionality. \cite{Seki_16} This result reflects the unique nature of the trajectories of mobile particles in one-dimensional periodic lattices. In one dimension, if a transition to a new site does not occur because of a high barrier, the mobile particle jumps back to the previously occupied site, but it will eventually succeed in passing the barrier after many trials and a sufficiently long time. When the standard deviation $\sigma$ of the energetic disorder is increased in 1D, the growth of the mean square displacement is suppressed by repeated attempts to overcome the high barriers. In contrast, in 2D and 3D, transitions over high barriers can be avoided by changing the direction of the particle motion. The large difference between the $C_d$ value for 1D and those for 2D and 3D can probably be explained by these considerations. The kinetic Monte-Carlo simulations confirmed the value $C_d=1/2$ obtained from the EMA for the 2D system. On the other hand, we see a 16\% difference in $C_d$ between the theory and the simulation in 3D. This difference could originate from the adoption of the simplest EMA, where a single transition rate fluctuates in the effective medium. Although the effect of the coordination number can be partly taken into account by the representative random transition rate in the effective medium, the accuracy decreases when going from 2D to 3D. The value of $C_d=0.42$ for 3D (cubic lattice) is close to $C_d=0.44$ of the MA process. \cite{Bassler_93} In 1D, $C_d=0.75$ is obtained using the Marcus equation, while $C_d=1$ is obtained for the MA process. \cite{Cordes_01,Seki_01} These results indicate that the difference decreases with increasing coordination number and suggest that the universal scaling relation of the form given by Eq. 
(\ref{eq:conclusion1}) for $z>2$ could be less sensitive to the type of elementary transition rate than in 1D. Recently, a similar scaling relation was proposed for the MA process in a different context. \cite{Seki_Bagchi_2015,Seki_16} There is a subtle issue concerning the determination of the value of $C_d$ for 2D and 3D systems. Previously, the value of $C_d$ for the MA process was determined by assuming that the activation energy $E_a$ is zero, because the activation energy associated with the reorganization energy is absent. Although the reorganization energy is absent, an activation energy induced by the energetic disorder, $E_a=[1-(1/\sqrt{2})] \sqrt{\pi} \sigma$, was recently derived by applying the EMA with the MA process to 2D systems. \cite{Seki_16} The disorder-induced activation energy is important when $\sigma \leq k_{\rm B} T$. Further theoretical studies of this effect are required, especially for 3D systems. It should also be noted that $\Gamma_{\rm eff} /\Gamma (0)$ is insensitive to the value of $\lambda/(k_{\rm B} T)$, irrespective of the coordination number, when $\lambda/(k_{\rm B} T) \geq 10$. However, when $\lambda/(k_{\rm B} T) < 10$, $\Gamma_{\rm eff} /\Gamma (0)$ depends on the value of $\lambda/(k_{\rm B} T)$ both for 2D ($z=4$) and 3D ($z=6$). This dependence can be seen both in the simulation results and in the results obtained by numerically evaluating the self-consistency equation of the EMA. According to the Marcus rate expression given by Eq. (\ref{eq:Marcus}), the dependence of the transition rate on $\Delta E_i$ increases as the value of $\lambda$ decreases. As a result, the effective rate is more affected by the site energy distribution when $\lambda$ is small. 
The effective rate in the absence of the site energy distribution is given by $\exp[-\lambda/(4 k_{\rm B} T)]/\sqrt{\lambda T}$, but the $\lambda$-dependence may be modified under the strong influence of the site energy distribution when $\lambda/(k_{\rm B} T)$ is not sufficiently large. Interestingly, such an extra $\lambda$-dependence is absent in the exact result for 1D ($z=2$). \cite{Seki_01} Again, the one-dimensional result is different from those in higher dimensions. We have obtained the effective mobility in the limit of low carrier density. At high carrier density, parts of the density of states are occupied by carriers, and the distribution of unoccupied states is thereby distorted. This trap-filling effect can be important under device operating conditions. In Eq. (\ref{eq:scaling1}), $E_a$ and $C_d$ may depend on the carrier concentration if the concentration is above a threshold value. \cite{Baranovskii14,Cottaar_12,Fishchuk_13} The results in this manuscript are therefore valid as long as the carrier concentration is below such a threshold. We are not aware of previously reported results for 2D ($z=4$). Our result obtained for 2D may thus be useful in analyzing real charge carrier transport processes, beyond theoretical interest. In general, molecular solids can be highly anisotropic in structure. \cite{Jakobsson,Stehr_11} The carrier transport can also be anisotropic, reflecting this structure. In this study, we used the Marcus equation, assuming the classical high-temperature limit of quantum transport between localized states. We assumed incoherent hopping of a polaron formed as a result of localization due to electron-phonon coupling in organic solids. In studies of the high charge mobility in molecular crystals such as pentacene and rubrene, the assumption of hopping transport between localized states might be inadequate. 
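The rejection-free kinetic Monte-Carlo scheme referred to in this work can be sketched as follows. This is a minimal illustration only: the lattice size, the parameter values, and the unit Marcus prefactor are illustrative choices, not those used for the results reported here.

```python
# Minimal rejection-free kinetic Monte-Carlo sketch: a single carrier hopping
# on a 2D square lattice (z = 4) with Marcus rates and a Gaussian density of
# states. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
L = 32                                # lattice size (periodic boundaries)
KBT, LAM, SIGMA = 0.025, 0.3, 0.05    # k_B T, lambda, sigma [eV]
energy = rng.normal(0.0, SIGMA, size=(L, L))  # Gaussian site energies

def marcus_rate(dE):
    """Marcus transition rate with the prefactor set to 1."""
    return np.exp(-(dE + LAM) ** 2 / (4.0 * LAM * KBT))

def trajectory(steps=2000):
    """Propagate one carrier; return (squared displacement, elapsed time)."""
    x = y = 0
    t = 0.0
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(steps):
        e0 = energy[x % L, y % L]
        rates = np.array([marcus_rate(energy[(x + dx) % L, (y + dy) % L] - e0)
                          for dx, dy in moves])
        total = rates.sum()
        t += rng.exponential(1.0 / total)    # exponential waiting time
        k = rng.choice(4, p=rates / total)   # hop chosen with prob. ~ rate
        x, y = x + moves[k][0], y + moves[k][1]
    return x * x + y * y, t

r2, t = trajectory()
print(r2 / (4.0 * t))  # diffusion-coefficient estimate, D = <r^2>/(4t) in 2D
```

Averaging over many trajectories and disorder realizations, converting $D$ to a mobility via the Einstein relation, and repeating for a range of temperatures yields data of the kind to which the scaling form is fitted.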
Recently, the influence of delocalized states and dynamic disorder on the effective mobility has been studied extensively. \cite{Troisi_03,Troisi_11,Troisi_09} At low temperatures, band transport disturbed by phonon scattering contributes to the particle diffusion in addition to the phonon-assisted hopping.\cite{Grover71,Kitahara76,Troisi_11} If the temperature is sufficiently low so that the wave functions are delocalized, both the localization and the intrinsic transfer rates depend on the inhomogeneous disorder, the dimensionality, and the temperature, and can be anisotropic. \cite{Moix_13,Lee_15,Chuang_16} The effect of dimensionality on the temperature dependence of the effective mobility at low temperatures requires further theoretical investigation of coherence dephasing. \acknowledgments This work was supported by JSPS KAKENHI Grant Number 15K05406. One of us (M.W.) acknowledges support from the National Science Center of Poland (Grant No. DEC-2013/09/B/ST4/02956). \newpage \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} \section*{Appendix A. Derivation of Eq. (\ref{eq:selfconsistent}) for a symmetrized rate} When the EMA is formulated using a symmetrized transition rate,\cite{Haus_87,Kehr_96} an additional approximation is introduced for the symmetrization. We denote the position on a hypercubic lattice by $\vec{r}_i$. Transitions between neighboring sites can be designated by the displacement vector $\vec{\ell}_k$, where $k$ runs from $1$ to the coordination number $z$. We consider random site energies at the origin, denoted by $\vec{r}_0$, and at a neighboring lattice site, denoted by $\vec{r}_1=\vec{\ell}_1$. The transitions between these sites are given by the Marcus equation, Eq. (\ref{eq:Marcus}), and are expressed by $\Gamma_{0,1}(\Delta E_0)$ with $\Delta E_0=E_{\vec{\ell}_1}-E_0$ for the transition from the origin and by $\Gamma_{1,0}(\Delta E_1)$ with $\Delta E_1=E_0-E_{\vec{\ell}_1}$ for the transition from $\vec{r}_1$. 
We study the effective transition rate for the time evolution of the density $\rho(\vec{r}_i,t)$ expressed by \cite{Haus_87,Kehr_96} \begin{align} \frac{\partial}{\partial t} \rho(\vec{r}_i,t)&=\int_0^t dt_1 \Gamma_{\rm eff}^{\rm unsym} (t-t_1) \sum_{k=1}^z \rho(\vec{r}_i+\vec{\ell}_k,t_1)- z\int_0^t dt_1 \Gamma_{\rm eff}^{\rm unsym} (t-t_1) \rho(\vec{r}_i,t_1)- \nonumber \\ & \left[\Gamma_{j,1-j}(\Delta E_j)\rho(\vec{r}_j,t)-\int_0^t dt_1 \Gamma_{\rm eff}^{\rm unsym} (t-t_1)\rho(\vec{r}_j,t_1) \right] (\delta_{i,j}-\delta_{i+j,1})(\delta_{i,0}-\delta_{i,1}). \label{eq:apA1} \end{align} The initial condition is given by $\rho(\vec{r}_i,0)=\delta_{i,0}$. $\Gamma_{\rm eff}^{\rm unsym} (t)$ indicates the effective transition rate for the original unsymmetrized rate. By the Laplace transformation, we obtain \begin{align} s \hat{\rho}(\vec{r}_i,s)-\delta_{i,0}&=\hat{\Gamma}_{\rm eff}^{\rm unsym} (s) \sum_{k=1}^z \hat{\rho}(\vec{r}_i+\vec{\ell}_k,s)- z \hat{\Gamma}_{\rm eff}^{\rm unsym} (s) \hat{\rho}(\vec{r}_i,s)- \nonumber \\ & \left[\Gamma_{j,1-j}(\Delta E_j)\hat{\rho}(\vec{r}_j,s)-\hat{\Gamma}_{\rm eff}^{\rm unsym} (s)\hat{\rho}(\vec{r}_j,s) \right] (\delta_{i,j}-\delta_{i+j,1})(\delta_{i,0}-\delta_{i,1}), \label{eq:apA2} \end{align} where $\hat{f} (s)$ denotes the Laplace transform of an arbitrary function $f(t)$. In the above, $\Gamma_{0,1}(\Delta E_0)$ and $\Gamma_{1,0}(\Delta E_1)$ are not equal. The calculation of the effective rate then requires the inversion of a $2\times 2$ matrix equation, and the final expression is tedious. A simpler expression can be obtained by introducing a symmetrized rate. To formulate the EMA with a symmetrized rate, we define the reduced density by \begin{align} Q(\vec{r}_i,t)=\rho(\vec{r}_i,t)/\rho_i^{\rm(eq)} \label{eq:apA3} \end{align} and note that \begin{align} \Gamma_{i,j}(\Delta E_i) \rho(\vec{r}_i,t)=\Gamma^{\rm sym} Q(\vec{r}_i,t), \label{eq:apA4} \end{align} where $\Gamma^{\rm sym}$ is given by Eq. (\ref{eq:symmetricrates}). 
If we introduce $\Gamma_{\rm eff,i}^{\rm sym} (t)=\Gamma_{\rm eff}^{\rm unsym} (t) \rho_i^{\rm(eq)}$ according to Eq. (\ref{eq:apA3}), Eq. (\ref{eq:apA2}) can be rigorously rewritten using $\Gamma_{\rm eff,i}^{\rm sym} (t)$, but $\Gamma_{\rm eff,i}^{\rm sym} (t)$ is not homogeneous. Instead, we introduce $\Gamma_{\rm eff} (t) \approx \Gamma_{\rm eff}^{\rm unsym} (t) \langle \rho_i^{\rm(eq)} \rangle$. Since we have $\langle \rho_i^{\rm(eq)} \rangle=1$, we obtain $\Gamma_{\rm eff}(t) \approx \Gamma_{\rm eff}^{\rm unsym}(t)$. Under this approximation, Eq. (\ref{eq:apA2}) can be expressed as \begin{multline} s \hat{Q}(\vec{r}_i,s)-\delta_{i,0} =\hat{\Gamma}_{\rm eff} (s) \sum_{k=1}^z \hat{Q}(\vec{r}_i+\vec{\ell}_k,s)- z \hat{\Gamma}_{\rm eff} (s) \hat{Q}(\vec{r}_i,s)- \\ \left[\Gamma^{\rm sym}\hat{Q}(\vec{r}_j,s)-\hat{\Gamma}_{\rm eff} (s)\hat{Q}(\vec{r}_j,s) \right] (\delta_{i,j}-\delta_{i+j,1})(\delta_{i,0}-\delta_{i,1})+ s \left( 1 - \rho_i^{\rm(eq)} \right) \hat{Q}(\vec{r}_j,s). \label{eq:apA5} \end{multline} The effective rate $\Gamma_{\rm eff}$ obtained from Eq. (\ref{eq:apA5}) can be regarded approximately as the effective rate $\Gamma_{\rm eff}^{\rm unsym}$ of Eq. (\ref{eq:apA2}). The above equation has the common structure of the simplest EMA, except for the last term, which vanishes in the limit $s\rightarrow 0$. Equation (\ref{eq:selfconsistent}) can be derived from Eq. (\ref{eq:apA5}) by applying the usual procedure. \cite{Haus_87,Kehr_96} \renewcommand{\theequation}{B.\arabic{equation}} \setcounter{equation}{0} \section*{Appendix B. Derivation of the upper limit} We rewrite Eq. (\ref{eq:selfconsistent}) as \begin{align} \frac{1}{\Gamma_{\rm eff}}= \left\langle \frac{z/2}{(z/2-1) \Gamma_{\rm eff}+ \Gamma^{\rm sym}} \right\rangle. 
\label{eq:ap1} \end{align} We use a systematic expansion expressed by \begin{align} \left(X+Y \right)^{-1}=X^{-1}-X^{-1}Y\left(X+Y \right)^{-1} > X^{-1}-X^{-1}Y X^{-1}, \label{eq:ap2} \end{align} where $X$ and $Y$ are arbitrary functions and $Y>0$ is assumed. By applying the expansion to Eq. (\ref{eq:ap1}) with $X=(z/2-1) \Gamma_{\rm eff}$ and $Y=\Gamma^{\rm sym}$, we obtain \begin{align} \frac{1}{\Gamma_{\rm eff}} >\frac{z}{z-2} \frac{1}{\Gamma_{\rm eff}} - \left(\frac{z}{z-2}\frac{1}{\Gamma_{\rm eff}} \right)^2 \frac{2}{z} \left\langle \Gamma^{\rm sym} \right\rangle + \cdots . \label{eq:ap3} \end{align} The expansion becomes more accurate as the coordination number increases, $z \gg 1$. By rearrangement, Eq. (\ref{eq:ap3}) can be expressed as \begin{align} \Gamma_{\rm eff} <\frac{z}{z-2} \langle \Gamma^{\rm sym} \rangle . \label{eq:expsol} \end{align} By using the Marcus rate equation, we obtain \begin{align} \frac{\Gamma_{\rm eff}}{\Gamma(0)} < \left(\frac{z}{z-2}\right)\frac{1}{1+\sigma^2/(\lambda k_{\rm B} T)} \exp\left(-\frac{\sigma^2}{4 (k_{\rm B} T)^2} \right) . \label{eq:expapprox} \end{align} The upper limit is close to the one proposed recently using a different method. \cite{Radin_15}
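The self-consistency condition Eq. (\ref{eq:ap1}) and the bound Eq. (\ref{eq:expsol}) can be checked numerically by a fixed-point iteration over sampled symmetrized rates. The sketch below is illustrative: the parameter values are arbitrary, and the symmetrized rate is modeled simply as $\sqrt{\Gamma_{ij}\Gamma_{ji}}$ for Marcus rates with independent Gaussian site energies, which is our assumption about the form of Eq. (\ref{eq:symmetricrates}).

```python
# Numerical check (illustrative parameters): solve the EMA self-consistency
# 1/G = < (z/2) / ((z/2 - 1) G + Gsym) > by fixed-point iteration, with the
# symmetrized rate modeled as sqrt(G_ij G_ji) for Marcus rates and Gaussian
# site energies (an assumed form), then verify G < z/(z-2) * <Gsym>.
import numpy as np

rng = np.random.default_rng(1)
z, KBT, LAM, SIGMA = 6, 0.025, 0.3, 0.05  # coordination, k_B T, lambda, sigma
Ei = rng.normal(0.0, SIGMA, 200_000)
Ej = rng.normal(0.0, SIGMA, 200_000)
# sqrt(Gamma(dE) Gamma(-dE)) = exp(-lam/(4 kBT)) * exp(-dE^2/(4 lam kBT))
gsym = np.exp(-LAM / (4 * KBT)) * np.exp(-(Ej - Ei) ** 2 / (4 * LAM * KBT))

G = gsym.mean()                    # initial guess
for _ in range(200):               # fixed-point iteration of Eq. (ap1)
    G = 1.0 / np.mean((z / 2) / ((z / 2 - 1) * G + gsym))

bound = z / (z - 2) * gsym.mean()  # upper limit, Eq. (expsol)
print(G < bound)                   # the self-consistent rate obeys the bound
```

By Jensen's inequality applied to Eq. (\ref{eq:ap1}), the self-consistent $\Gamma_{\rm eff}$ in fact satisfies $\Gamma_{\rm eff} \leq \langle \Gamma^{\rm sym} \rangle$, so the looser bound with the factor $z/(z-2)>1$ is always respected in such a check.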
\section{Introduction} \label{sec:introduction} In this paper, an extensive treatment of type-2 fuzzy sets and membership functions is presented that is more general than those found in the literature. Moreover, the potential of fuzzy logic and probability theory is discussed, as well as how real-life random and fuzzy uncertainties may be represented mathematically. \noindent The paper is organized as follows: Section~\ref{sec:intro:FL} provides an extensive discussion of fuzzy logic and the potential of this logic in representing linguistic uncertainties. Section~\ref{sec:probabilistic_logic} discusses probability theory. Section~\ref{sec:probability_versus_fuzziness} presents various formulations of random and fuzzy uncertainties.% \begin{comment} Table~\ref{table:mathematical_notations} gives the frequently-used mathematical notations. Note that throughout this paper capital letters for sets and small letters for functions are used. A subscript $n$ is used to indicate the type of a fuzzy set or membership function.% \setlength{\tabcolsep}{5pt} \begin{table} \caption{Frequently-used mathematical notations.} \label{table:mathematical_notations} \begin{tabularx}{\linewidth}{l|X} \hline $x$ & variable\\ \hline \end{tabularx} \end{table} \end{comment} \section{Fuzzy logic} \label{sec:intro:FL} Two main concepts that mathematical logic deals with are \emph{sets} and \emph{propositions} \cite{Shoenfield:1967}. Fuzzy logic was introduced by Zadeh in the 1960s \cite{Zadeh:1965,Zadeh:1968} to extend the concepts of set theory and propositional calculus, which by then were analyzed only through classical logic. Fuzzy logic is a continuous multi-valued logic system and may be considered a generalization of classical logic. The linguistic variable, a concept unique to fuzzy logic, allows this logic to serve as a basis for computations based on verbal information \cite{Zadeh:2008,Zadeh:1975,Zadeh:1975-2}. 
Moreover, inspired by the way humans reason upon imprecise information, fuzzy logic maps an imprecise concept into one with a higher precision \cite{Zadeh:1999}.% A set in classical logic is a crisp concept: a mathematical object (e.g., a number, partition, matrix, variable, \ldots) either ``belongs to'' the set, i.e., the degree of membership of the mathematical object to the set is $1$, or ``does not belong to'' the set, i.e., the degree of membership of the mathematical object to the set is $0$. Therefore, a crisp set $\pazocal{C}$ may be expressed as a collection of mathematical objects, e.g., \begin{align} \label{eq:crisp_set} \pazocal{C} = \left\{x_1,x_2, \ldots , x_n \right\}. \end{align} Similarly, a proposition in classical logic is either ``true'' (may be quantified by a crisp value $1$) or ``false'' (may be quantified by a crisp value $0$). In fuzzy logic, sets are fuzzy concepts \cite{Zimmermann:1996}. Our main focus is on the general case of type-$n$ fuzzy sets, with $n=1, 2, 3, \ldots$. 
To motivate this, we start with an example.% \subsection*{Opening example} \begin{figure} \centering \psfrag{age}[][][.8]{Age} \psfrag{MD}[][][.8]{Membership degree} \psfrag{PMD}[][][.8]{Primary membership degree} \psfrag{PMD3}[][][.8][14]{\hspace*{5ex} Primary membership degree} \psfrag{SMD}[][][.8]{Secondary membership degree} \psfrag{a3}[][][.8][-25 ]{Age} \psfrag{40}[][][.7]{$40$} \psfrag{0}[][][.7]{$0$} \psfrag{1}[][][.7]{$1$} \psfrag{9}[][][.7]{\hspace*{-2.5ex}$0.9$} \psfrag{983}[][][.7]{$0.98$} \psfrag{98}[][][.7]{\hspace*{-50ex}$0.98$} \psfrag{57}[][][.7]{\hspace*{-3.8ex}$0.57$} \psfrag{82}[][][.7]{$0.82$} \psfrag{83}[][][.7]{\hspace*{-5ex} {\color {red} $0.83$}} \psfrag{88}[][][.7]{{\color{red}$0.88$}} \psfrag{27}[][][.7]{$27$} \psfrag{y}[][][.8]{young} \includegraphics[width = .55\linewidth]{crisp_young} \caption{Using crisp sets for quantifying \emph{young}.} \label{fig:young_crisp_set} \vspace*{4ex} \includegraphics[width = .55\linewidth]{type_1_young} \caption{Using type-$1$ fuzzy sets for quantifying \emph{young}.} \label{fig:young_type1_fuzzy_set} \vspace*{4ex} \includegraphics[width = .55\linewidth]{type_2_young} \caption{Using type-2 fuzzy sets for quantifying \emph{young}.} \label{fig:young_type2_fuzzy_set} \vspace*{2ex} \includegraphics[width = .65 \linewidth]{type_2_young_3D} \caption{3D representation of the type-2 membership function for quantifying \emph{young}, represented for the specific age of 27.} \label{fig:young_3D_type_2_MF} \vspace*{2ex} \end{figure} $\rhd$ Suppose that based on the information ``Felix is 27'', we would like to answer the question: \emph{(To what extent) is Felix young?} Since \emph{young} is a qualitative concept, while $27$ is quantitative, these concepts should be bridged by quantifying \emph{young} and matching the quantitative information about Felix's age to the quantified definition of \emph{young}.% One may define \emph{young} with the graph represented in Figure~\ref{fig:young_crisp_set}, which 
corresponds to a definition that categorizes people into two crisp sets, \emph{young} and \emph{not young}, i.e., people at an age below $40$ belong to the set \emph{young} and otherwise they are \emph{not young}. Then, in quantitative terms, ``Felix belongs to the set \emph{young} with a membership degree of $1$'', and in qualitative terms, ``Felix is \textbf{certainly} young''. Alternatively, \emph{young} may be quantified via the graph shown in Figure~\ref{fig:young_type1_fuzzy_set}, where instead of \emph{certainly young} and \emph{certainly old}, ages vary in a spectrum, i.e., the degree of membership to the set \emph{young} varies in $[0,1]$ instead of $\{0 , 1\}$. Then, quantitatively, ``Felix belongs to the set \emph{young} with a membership degree of $0.9$''. Qualitatively, ``Felix is \textbf{to a high extent} young''. \emph{Young} is then quantified using a type-$1$ fuzzy set. Next, suppose that the border of the curve that defines \emph{young} is not strictly known. Figure~\ref{fig:young_type2_fuzzy_set} is an example where the intensity of the color black indicates our certainty about where any point at the border of the separating curve may lie in the given 2-dimensional plane. By just looking at the vertical axis (which is called the \emph{primary} membership degree), one may reply ``Felix belongs to the set \emph{young} with a \emph{primary} membership degree in the interval $[0.57,0.98]$'', or ``Felix \textbf{may to a high extent} be young''. Now consider a third dimension that quantifies the intensity of the color black with real values between $0$ and $1$, i.e., black corresponds to $1$ and white corresponds to $0$ (see Figure~\ref{fig:young_3D_type_2_MF}). This dimension represents the \emph{secondary} membership degree. Then one can respond ``Felix is young with a \emph{primary} membership degree varying in the interval $[0.57, 0.98]$, and a \emph{secondary} membership degree varying in the interval $[0,1]$''. 
For instance, ``Felix's age corresponds to a secondary membership degree of $0.83$ for the primary membership degree of $0.88$''. A type-$2$ fuzzy set (see \cite{Liang:1999,Mendel:2014,Hagras:2007_2} for more details on this type of fuzzy sets) has been used to quantify \emph{young} in this case. More generally, if one interprets the word \emph{young} with a curve with blurry borders that extend in $3$ (instead of $2$) dimensions, i.e., the graph of \emph{young} can be re-illustrated in $4$ dimensions considering \emph{primary}, \emph{secondary}, and \emph{tertiary} membership degrees, the qualitative word \emph{young} has actually been quantified using a type-$3$ fuzzy set. This concept can further be extended to type-$n$ fuzzy sets, for which the corresponding membership functions can be illustrated in $n + 1$ dimensions, and there are $n$ membership degrees involved. $\lhd$% \vspace*{-2ex} \subsection*{Mathematical Discussion} Next, the concept of type-$n$ fuzzy sets will be detailed mathematically. In contrast to a crisp set, a mathematical object belongs to any type-$n$ fuzzy set, with $n=1,\,2,\,\ldots$, with a primary, secondary, \ldots membership degree varying in $[0,1]$. Similarly, in fuzzy logic a proposition may ``to a certain extent'' be true or false. To link the two theories of fuzzy propositions and fuzzy sets, one may consider two fuzzy sets, the ``set of true propositions'' and the ``set of false propositions''. Any new proposition, which corresponds to one or several mathematical variables called membership degrees, can belong to both these sets at the same time with certain degrees of membership.% Generally, a membership function $ f^{[n]}: \textrm{dom}\left( f^{[n]} \right) \rightarrow [0,1]$ together with its domain, $\textrm{dom}\left( f^{[n]} \right)$, which itself can be a fuzzy set, characterizes a fuzzy set. 
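The opening example can also be sketched in code. The piecewise-linear functional forms and breakpoints below are illustrative choices, not the curves shown in the figures: a type-1 membership function for \emph{young}, and an interval type-2 version obtained from a lower and an upper type-1 bound on the primary membership degree.

```python
# Illustrative sketch of the "young" example: a type-1 membership function
# and an interval type-2 extension built from lower/upper type-1 bounds.
# The breakpoints (20, 45, etc.) are hypothetical, not read off the figures.
def mu_young(age, a=20.0, b=45.0):
    """Type-1 membership: 1 below a, 0 above b, linear in between."""
    if age <= a:
        return 1.0
    if age >= b:
        return 0.0
    return (b - age) / (b - a)

def mu_young_type2(age):
    """Interval type-2: the primary membership degree lies between a lower
    and an upper bound; the secondary degree is 1 on that interval."""
    lower = mu_young(age, a=15.0, b=40.0)
    upper = mu_young(age, a=25.0, b=50.0)
    return lower, upper

print(mu_young(27))        # 0.72: Felix is to a high extent young
print(mu_young_type2(27))  # (0.52, 0.92): primary degree in an interval
```

A general (non-interval) type-2 set would additionally attach a secondary membership function over the interval returned by `mu_young_type2`, in the spirit of the third dimension in the 3D figure.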
For a type-1 fuzzy set, \begin{align} \label{eq:fuzzy_set_type1} \pazocal{F}^{[1]}=\bigg\{ \Big(x_1,\mu^{[1]}_1\Big), \Big(x_2,\mu^{[1]}_2\Big), \ldots, \Big(x_n,\mu^{[1]}_n\Big) \bigg\}, \end{align} the corresponding type-1 membership function\footnote{A membership function is named after the corresponding fuzzy set, i.e., they both are of the same type.} $f^{[1]}$ generates the degree of membership of mathematical objects $x_i$ within $\pazocal{C}$ to the fuzzy set $\pazocal{F}^{[1]}$, i.e., \begin{align} \label{eq:fuzzy_membership_function_type1} f^{[1]}: \Big\{x_1, x_2, \ldots , x_n \Big\} \rightarrow [0,1]: \ x_i \mapsto \mu^{[1]}_i \end{align} or equivalently, \begin{align} \label{eq:fuzzy_membership_function_type1_eq} \mu^{[1]}_i = f^{[1]}(x_i),\quad i = 1, 2, \ldots, n. \end{align} In comparison with a crisp set, a type-1 fuzzy set $\pazocal{F}^{[1]}$ is composed of pairs of mathematical objects and their degrees of membership to $ \pazocal{F}^{[1]}$ (see \eqref{eq:fuzzy_set_type1}). \begin{remark} One may generalize the above discussion by re-defining the crisp set $\pazocal{C}$ in \eqref{eq:crisp_set} as a special case of a type-1 fuzzy set, where $\mu^{[1]}_i=1$, $i=1, 2, \ldots, n$. This means that the corresponding type-1 membership function of $\pazocal{C}$ is the unit function. This definition may even further be generalized (see the following discussions for more details), i.e., a crisp set may be re-defined as a type-$n$ fuzzy set, with $n = 1,2, \ldots$, with all the membership functions of order $1, \ldots, n$ equal to the unit function. \end{remark} For data-driven approaches in engineering problems, large amounts of information expressed in human language may be used to generate fuzzy sets. The degree to which a fuzzy set can handle the uncertainties and vagueness that exist in this information depends on the type of the fuzzy set. 
The main difference between type-1 fuzzy sets and fuzzy sets of type 2 and higher lies in the domain of their corresponding membership functions, i.e., the domain of a type-1 membership function is a crisp set $\pazocal{C}$ (see \eqref{eq:fuzzy_set_type1}), or equivalently, a type-1 fuzzy set with the unit function as its membership function, while the domain of a type-$n$ membership function with $n = 2, 3, \ldots$ is a union of various fuzzy sets of type $n-1$.% More specifically, a type-2 fuzzy set $\pazocal{F}^{[2]}$, \begin{align} \label{eq:fuzzy_set_type2} \pazocal{F}^{[2]} = \Bigg\{ \bigg(\Big(x_1,\mu^{[1]}_{1,1}\Big),\mu^{[2]}_{1,1} \bigg), &\bigg(\Big(x_1,\mu^{[1]}_{2,1}\Big),\mu^{[2]}_{2,1} \bigg), \nonumber\\ \ldots, & \bigg(\Big(x_1,\mu^{[1]}_{m_1,1}\Big),\mu^{[2]}_{m_1,1} \bigg), \nonumber\\ &\vdots \nonumber\\ \bigg(\Big(x_n,\mu^{[1]}_{1,n}\Big),\mu^{[2]}_{1,n} \bigg), &\bigg(\Big(x_n,\mu^{[1]}_{2,n}\Big),\mu^{[2]}_{2,n} \bigg), \nonumber\\ \ldots, & \bigg(\Big(x_n,\mu^{[1]}_{m_n,n}\Big),\mu^{[2]}_{m_n,n} \bigg) \Bigg\}, \end{align} corresponds to a type-2 membership function $f^{[2]}$, which generates the secondary degree of membership of any mathematical object $\Big(x_i,\mu^{[1]}_{j,i}\Big)$ within the type-1 fuzzy set $\pazocal{C} \times \pazocal{R}$ to the type-2 fuzzy set $\pazocal{F}^{[2]}$, where $\pazocal{R} \subseteq [0,1]$, and $i=1, \ldots, n$, $j = 1, \ldots, m_i$. 
Therefore, \begin{align} \label{eq:fuzzy_membership_function_type2} f^{[2]} : \bigg\{ &\Big(x_1,\mu^{[1]}_{1,1}\Big), \Big(x_1,\mu^{[1]}_{2,1}\Big), \ldots, \Big(x_1,\mu^{[1]}_{m_1,1}\Big), \nonumber\\ &\ldots \nonumber\\ &\Big(x_n,\mu^{[1]}_{1,n}\Big), \Big(x_n,\mu^{[1]}_{2,n}\Big), \ldots, \Big(x_n,\mu^{[1]}_{m_n,n}\Big) \bigg\}\rightarrow [0,1]: \nonumber\\ &\left( x_i, \mu^{[1]}_{j,i} \right) \mapsto \mu^{[2]}_{j,i}, \end{align} or equivalently, \begin{align} \label{eq:fuzzy_membership_function_type2_eq} \mu^{[2]}_{j,i} = f^{[2]}\Big(\left( x_i, \mu^{[1]}_{j,i} \right)\Big), \qquad i=1,\ldots,n,\quad j=1,\ldots, m_i. \end{align} We can reformulate the domain of $f^{[2]}$ as: \begin{align} \textrm{dom}\left( f^{[2]} \right) := \Big\{ &\Big(x_1,\mu^{[1]}_{1,1}\Big),\ \ldots, \Big(x_n,\mu^{[1]}_{1,n}\Big) \Big\} \cup \\ \Big\{ &\Big(x_1,\mu^{[1]}_{2,1}\Big),\ \ldots, \Big(x_n,\mu^{[1]}_{2,n}\Big) \Big\} \cup \nonumber\\ &\vdots \nonumber\\ \Big\{ &\Big(x_1,\mu^{[1]}_{m_1,1}\Big),\ \ldots, \Big(x_n,\mu^{[1]}_{m_1,n}\Big) \Big\} \cup \nonumber\\ &\vdots \nonumber\\ \Big\{ &\Big(x_k,\mu^{[1]}_{m_k,k}\Big) \Big\} = \bigcup_{i=1}^n \bigcup_{j=1}^{m_i} \left\{ \left( x_i, \mu^{[1]}_{j,i} \right) \right\} , \nonumber \end{align} assuming that \[ \displaystyle\argmax_i\ \{ m_i \} = k. \] This shows that the domain of the type-$2$ membership function $f^{[2]}$ is the union of $m_k$ type-$1$ fuzzy sets. Therefore, a type-$2$ fuzzy set also corresponds to $m_k$ type-$1$ membership functions $f^{[1]}_j$ that generate the primary membership degrees $\mu^{[1]}_{j,i}$, i.e., \begin{align} \label{eq:primary_membership_functions} \mu^{[1]}_{j,i} = f^{[1]}_j(x_i),\qquad i = 1, \ldots , n, \quad j = 1, \ldots , m_i. 
\end{align} The domains of these type-$1$ membership functions are $\pazocal{C}$ or a subset of $\pazocal{C}$. In summary, to identify a type-$2$ fuzzy set $\pazocal{F}^{[2]}$ on a domain $\pazocal{C}$ of the independent variables $x_i$, one needs to know the corresponding type-$2$ membership function $f^{[2]}$ and the domain of this function. Equivalently, one should know $f^{[2]}$ and the $m_k$ type-$1$ membership functions $f^{[1]}_j$. From \eqref{eq:fuzzy_set_type2}, a type-2 fuzzy set $\pazocal{F}^{[2]}$ is composed of pairs, where each pair itself includes a pair, consisting of the independent variable $x_i$ and a primary membership degree $\mu^{[1]}_{j,i}$ of $x_i$, and a secondary membership degree.% Generally speaking, a type-$n$ fuzzy set $\pazocal{F}^{[n]}$ includes mathematical objects of the form: \[ \Bigg( \ldots \bigg( \left( x_i , \mu^{[1]}_{j_{n-1},\ldots,j_1,i} \right), \mu^{[2]}_{j_{n-1},\ldots,j_1,i} \bigg) \ldots \bigg), \mu^{[n]}_{j_{n-1},\ldots,j_1,i} \Bigg) \] with $j_1 \in \left\{1, \ldots, m_{i,1}\right\}$, \ldots, $j_{n - 1} \in \left\{1, \ldots, m_{i,n -1}\right\} $, and corresponds to a type-$n$ membership function $f^{[n]}$, which generates the $n^{\textrm{th}}$ membership degrees $ \mu^{[n]}_{j_{n-1},\ldots,j_1,i} $ of any mathematical object that belongs to the union of $m_{k_n}$ fuzzy sets of type $n - 1$, with \[ k_n = \displaystyle \argmax_i\{ m_{i , n - 1} \}. \] A type-$n$ fuzzy set corresponds to $\max_i\{ m_{i , n - 1} \}$ membership functions of type $n - 1$, to $\max_i\{ m_{i , n - 2} \} $ membership functions of type $n - 2$, \ldots, and to $\max_i\{ m_{i , 1} \}$ type-$1$ membership functions. One should know all these membership functions, as well as the unique type-$n$ membership function, to identify $\pazocal{F}^{[n]}$.% \vspace*{-2ex} \subsection*{Interpretation for real-life problems} \label{sec:physical_interpretation} Based on the discussions given above, the following conclusions can be made. 
For an independent variable $x_i \in \pazocal{C}$, the \emph{extent} to which it belongs to a type-$1$ fuzzy set $\pazocal{F}^{[1]}$ is characterized by one uncertainty, which is specified by the degree of membership of $x_i$ to $\pazocal{F}^{[1]}$.% Any type-$2$ fuzzy set $\pazocal{F}^{[2]}$ corresponds to $m_k$ type-$1$ membership functions and, hence, to the $m_k$ corresponding type-$1$ fuzzy sets. Therefore, $\forall x_i \in \pazocal{C}$, the \emph{extent} to which $x_i $ belongs to $\pazocal{F}^{[2]}$ is characterized by two uncertainties: \begin{compactitem} \item Uncertainty about the extent to which $x_i$ belongs to the union or either of the $m_k$ corresponding type-$1$ fuzzy sets. This is quantified by the primary membership degrees. \item Uncertainty about the extent to which $x_i$, which belongs with specific primary membership degrees to the type-$1$ fuzzy sets, belongs to $\pazocal{F}^{[2]}$. This is quantified by the secondary membership degree. \end{compactitem}% Generally speaking, the \emph{extent} to which $x_i \in \pazocal{C}$ belongs to a type-$n$ fuzzy set $\pazocal{F}^{[n]}$ is characterized by $n$ uncertainties: \begin{compactitem} \item Uncertainty about the extent to which $x_i$ belongs to the union or either of the corresponding $\max_i\{ m_{i , 1} \}$ type-$1$ fuzzy sets. This is quantified by the primary membership degrees. \item[\vdots] \item Uncertainty about the extent to which $x_i$ belongs to the union or either of the corresponding $\max_i\{ m_{i ,n-1} \}$ fuzzy sets of type $n-1$. This is quantified by the $(n-1)^\textrm{th}$ membership degrees. \item Uncertainty about the extent to which $x_i$ belongs to $\pazocal{F}^{[n]}$. This is quantified by the $n^\textrm{th}$ membership degree. 
\end{compactitem} \section{Probabilistic logic} \label{sec:probabilistic_logic} \begin{figure} \psfrag{T}[][][1]{True set $\pazocal{T}$} \psfrag{F}[][][1]{False set $\pazocal{F}$} \psfrag{W}[][][1]{World set $\pazocal{W}$} \psfrag{u}[][][.8]{\hspace*{38ex} {\color{black}$\pazocal{T} \cap \pazocal{F}$}} \psfrag{n}[][][.8]{\hspace*{54ex} {\color{black}$\pazocal{T} \backslash \pazocal{F}$}} \psfrag{p}[][][.8]{ \hspace*{25ex} {\color{black}$\pazocal{F} \backslash \pazocal{T}$} } \includegraphics[width = \linewidth]{fuzzy_logic_true_false_sets_overlap_2} \caption{The sets of true $\pazocal{T}$ and false $\pazocal{F}$ propositions in fuzzy logic may have an overlap, i.e., $\pazocal{T} \cap \pazocal{F} \neq \emptyset$. Moreover, some mathematical objects that belong to $\pazocal{F} \backslash \pazocal{T}$ or $\pazocal{T} \backslash \pazocal{F}$ may not necessarily have a membership degree of $1$.} \label{fig:fuzzy_T_F_overlap} \end{figure} In this section, probabilistic logic \cite{Nilsson:1986} is compared with fuzzy logic. With the analogy explained before, a proposition in mathematical logic may correspond to a mathematical object that can belong to either (crisp or fuzzy) sets of true or false propositions. For the sake of brevity, the discussions in this section are presented using the concept of sets only. 
To indicate the correspondence between sets and propositions, the notations $\pazocal{T}$ (referring to \emph{true}) and $\pazocal{F}$ (referring to \emph{false}) are used for the sets that build up the possible world\footnote{The possible world of the event $E$ is a world set that embraces all possible realizations of $E$.}, $\pazocal{W}$, of an event $E$.% \begin{figure} \psfrag{T}[][][1]{True set $\pazocal{T}$} \psfrag{F}[][][1]{False set $\pazocal{F}$} \psfrag{W}[][][1]{World set $\pazocal{W}$} \psfrag{f}[][][.8]{\hspace*{50ex} {\color{white}$\pazocal{T}$}} \psfrag{t}[][][.8]{ \hspace*{25ex} {\color{white}$\pazocal{F}$} } \includegraphics[width = \linewidth]{probabilistic_logic_true_false_sets_2} \caption{The sets of true $\pazocal{T}$ and false $\pazocal{F}$ propositions in probabilistic logic do not have any overlap, and any mathematical object that does not belong to one of $\pazocal{F}$ and $\pazocal{T}$ belongs to the other set with a probability of $1$.} \label{fig:probabilistic_T_F_no_overlap} \end{figure} A main difference between fuzzy logic and probabilistic logic is that for the former, $\pazocal{F}$ and $\pazocal{T}$ are fuzzy sets, while for the latter, they are crisp sets. Therefore, the borders of $\pazocal{F}$ and $\pazocal{T}$ in fuzzy logic are exposed to uncertainties. Hence, there may be an overlap between these two sets, i.e., we may have $\pazocal{T} \cap \pazocal{F} \neq \emptyset$ (see Figure~\ref{fig:fuzzy_T_F_overlap}), while in probabilistic logic we necessarily have $\pazocal{T} \cap \pazocal{F}= \emptyset$ (see Figure~\ref{fig:probabilistic_T_F_no_overlap}).% Any realization of the repeatable event $E$ can correspond to a mathematical object $x$. In probabilistic logic, if the event $E$ is repeated in $100$ experiments and $x$ goes to $\pazocal{F}$ in $\Pi_1$ experiments and to $\pazocal{T}$ in $\Pi_2$ experiments, then $\Pi_1 + \Pi_2 =100$. 
The normalized values $\pi_1$ and $\pi_2$ of the natural values $\Pi_1$ and $\Pi_2$ are called the probabilities of $x$ belonging to $\pazocal{F}$ and $\pazocal{T}$, respectively. In fuzzy logic, if out of $100$ experiments $x$ goes to $\pazocal{F}$ in $M_1$ experiments and to $\pazocal{T}$ in $M_2$ experiments, then $M_1 + M_2$ is not necessarily $100$ (it may be larger than $100$). The reason is that in some experiments $x$ may go to the overlap $\pazocal{T} \cap \pazocal{F}$, i.e., the experiment is doubly counted in both $M_1$ and $M_2$. The normalized values $\mu_1$ and $\mu_2$ of the real values $M_1$ and $M_2$ are called the degrees of membership of $x$ to the fuzzy sets $\pazocal{F}$ and $\pazocal{T}$, respectively.% To generalize, the summation $\sum_{i=1}^{s_\textrm{c}} \pi_i$ of the probabilities $\pi_i$ of $x$ belonging to $s_\textrm{c}$ crisp sets that build up the possible world $\pazocal{W}$ of the event $E$ is necessarily $1$, while the sum $\sum_{i=1}^{s_\textrm{f}} \mu_i$ of the degrees of membership of $x$ to $s_\textrm{f}$ fuzzy sets that build up $\pazocal{W}$ may be smaller than, equal to, or larger than $1$. \section{Mathematical formulation of uncertainties in real-life events} \label{sec:probability_versus_fuzziness} In real life, an event $E$ may be prone to various types of imprecision and, hence, uncertainties. \subsection*{Uncertainties before realization of an event} Suppose that the world set $\pazocal{W}$ of possible realizations of an event $E$ is known, i.e., we know and can measure (or estimate the value of) all the characteristics that can define and distinguish any specific position within $\pazocal{W}$ (this is called a \emph{state} in real-life engineering problems).% Before $E$ is realized, there is always uncertainty about which possible realization is going to occur.
If the realization of $E$ is measurable (e.g., a temperature, which can be measured directly using a thermostat), the realization is \emph{precise in value}, is determined by the same measurable characteristics as those that define the world set, and hence its position in the world set can strictly be distinguished (see the left-hand side plot in Figure~\ref{fig:measurable_and_nonmeasurable_realizations}). Therefore, the uncertainty is one-fold, i.e., uncertainty about which possible measurement from the world set will be realized in an occurrence of $E$. In this case, if one knows and can measure (or estimate the value of) all the effective factors (these are called the controllable and uncontrollable inputs in real-life engineering problems) that may play a role in any realization of $E$ (or, equivalently, in any realization of the characteristics that define the world set $\pazocal{W}$), then probabilistic logic can determine the probability of occurrence of any possible realization. In short, when all the concepts and characteristics involved in the procedure of realization of an event are measurable or \emph{precise in value}, probabilistic logic can handle the uncertainties. In case the realization of $E$ is non-measurable (e.g., comfort, which cannot be measured directly using a measurement device), this realization is \emph{imprecise in value}. Then the uncertainty is two-fold, i.e., uncertainty about which possible non-measurable realization in the world set is going to occur, and uncertainty about the exact position of the realization in the world set. Systems in real life work with measurable concepts. For instance, the non-measurable instruction ``when the room's comfort is low, decrease the temperature'' should be transformed into a measurable instruction for an air conditioning system, e.g., ``when the room's temperature is between $23$ and $25$ degrees Celsius, decrease the temperature by $3$ degrees Celsius''.
This transforms a concept that is \emph{imprecise in value} into one that is \emph{precise in meaning}, and reduces the original uncertainty about the exact position of the concept in the world set $\pazocal{W}$ to an uncertainty about the exact position of the concept in a known subset of $\pazocal{W}$. Fuzzy logic can transform a realization that is imprecise in value into one that is precise in meaning by assigning a membership function (of, generally, type $n$) to the characteristics that are imprecise in value. \begin{figure} \psfrag{W}[][][1]{$\pazocal{W}$} \includegraphics[ width = \linewidth ]{measurable_versus_nonmeasurable_realizations} \caption{When the realization of an event is measurable, it will take an exact position (illustrated by the black dot in the left-hand side plot) in the world set. When the realization of an event is non-measurable, it will take position in a subset (colored in grey in the right-hand side plot) of the world set, while its exact position is uncertain. The corresponding subsets are defined to make the non-measurable realization precise in meaning. Although the subsets illustrated in this plot are crisp, they may in general be fuzzy.} \label{fig:measurable_and_nonmeasurable_realizations} \end{figure} It is important to note that although the transformation from a realization that is imprecise in value into one that is precise in meaning aims at reducing the uncertainty from the entire world set to a subset of it, further uncertainties may or may not exist in the exact position of the borders of these subsets. For instance, when a specific type of search-and-rescue robot is deployed to a burning building, the domains of temperature for which the robot is functional and dysfunctional should be determined. Knowing the materials, sensors, ... used in the construction of the robot, one can determine the temperature at which this robot or any other robot of this type will become dysfunctional.
In this case, the subsets functional and dysfunctional are crisp, and the transformation to a realization that is precise in meaning using membership functions (fuzzy logic) is identical to the transformation using probability functions (probabilistic logic). Distinguishing the exact borders of these two subsets for a human search-and-rescuer is more challenging, since humans are not as \emph{homogeneous} as identical robots produced in a factory. Therefore, these borders may vary from person to person, and the resulting subsets of functional and dysfunctional can become fuzzy. Then the only right tool for transforming the non-measurable realization into one that is precise in meaning is fuzzy logic. \subsection*{Uncertainties after realization of an event} After $E$ is realized, in case the realization of $E$ is measurable (e.g., a temperature, which can be measured directly using a thermostat), the realization is \emph{precise in value} and its position in the world set can strictly be distinguished. When the realization of $E$ is non-measurable (e.g., comfort, which cannot be measured directly using a measurement device), the realization is imprecise in value. Therefore, there is uncertainty about its position in the world set. The realization should first be quantified to become precise in meaning, and only then can a subset of the world set that embeds all possible positions of the realization, together with the degrees to which the realization is positioned at these possible positions, be determined. \bibliographystyle{ieeetr}
\section{Introduction} \begin{figure*}[!ht] \centering \includegraphics[width=0.99\linewidth]{images/Novel_views.pdf} \caption{Visualization of novel view rendering in the proposed method on two samples of Structured3D and Replica360. A plausible viewpoint image with 3D consistency is synthesized at positions different from the camera position of the input.} \label{fig:visualise} \end{figure*} Omnidirectional cameras have become more easily accessible, with a growing number of panoramas shared on media and $360^\circ$ datasets released. $360^\circ$ cameras can capture complete environments in a single shot, which makes $360^\circ$ imagery alluring in many computer vision tasks, and they are becoming increasingly popular and widespread in the computer vision community. The omnidirectional $360^\circ$ field-of-view captured by these devices is appealing for tasks such as robust, omnidirectional SLAM \cite{won2020omnislam,sumikura2019openvslam}, scene understanding and layout estimation \cite{jin2020geometric, sun2021hohonet,wang2021led2,zeng2020joint}, or VR photography and video \cite{omniphotos,serrano2019motion}. While many techniques have been proposed to synthesize novel views from perspective image(s) as the input, prior work rarely considers the panorama image as a single source for modeling and rendering. Although perspective images can be acquired conveniently, constructing a full scene from them requires a set of densely sampled views. Furthermore, additional camera variables are essential for estimating relative poses and matching \cite{hsu2021moving}. Synthesizing novel views with parallax provides immersive 3D experiences \cite{shum2005virtual}. Traditional computer vision solutions employ reconstruction techniques (e.g., structure from motion \cite{hartley2004camera} and image-based rendering \cite{shum2007rendering,shum1999rendering}) using a set of densely captured images.
However, these approaches suffer from the computational cost of matching and reconstruction, in both time and memory. Recent developments in this field focus on deep learning methods for their strong capability of modeling 3D geometry and rendering new frames \cite{hsu2021moving}. In recent years, neural network-based rendering methods have developed rapidly, and the neural radiance field (NeRF) \cite{mildenhall2021nerf} is a promising method for synthesizing photorealistic views. However, NeRF requires tens to hundreds of images with known relative positions and identical shooting conditions as input, and capturing them is a laborious and time-consuming process \cite{hara2022enhancement}. Accordingly, various efforts have been made to reduce the number of input images \cite{yu2021pixelnerf,wang2021ibrnet,trevithick2020grf,dietnerf} or ease the shooting conditions \cite{martin2021nerf,wang2021nerf,lin2021barf,jeong2021self}. We attempt to learn a 3D scene model from a single $360^\circ$ image. Learning NeRF from a single $360^\circ$ image is advantageous because we do not need to align the shooting conditions between images. Furthermore, we do not need to know the relative positions between images because we use only one image, which contains a wealth of omnidirectional information \cite{hara2022enhancement}. OmniNeRF \cite{hsu2021moving} is a prior study of this approach; however, it relies only on the neighborhood interpolation capability of the multi-layer perceptron to complete the missing regions caused by occlusion. The single source image does not contain enough information to infer the occluded regions and the opposite side of objects \cite{hara2022enhancement}; thus, the results are degraded. The method of \cite{hara2022enhancement} tries to complete the missing regions of the reprojected images by using a self-supervised generative model. However, it fails when there are large missing areas in the reprojected images.
Only a few NeRF methods have been proposed to take advantage of depth measurements simultaneously with color within the volumetric rendering pipeline \cite{deng2022depth,neff2021donerf}. In this work, we explore depth as an additional, cheap source of supervision to guide the geometry learned by OmniNeRF. We propose to extract the depth information simultaneously when the input image is reprojected at other camera positions. We observe that incorporating depth information improves the learned geometry considerably compared to using color information alone. OmniNeRF is still estimated per scene and cannot benefit from prior knowledge from other images and objects. Prior knowledge is needed when the scene reconstruction problem is underdetermined. 3D reconstruction systems struggle when regions of an object are never observed. This is particularly problematic when rendering an object at significantly different poses. Unobserved regions during training become visible when rendering a scene with an extreme baseline change. A view synthesis system should generate plausible missing details to fill in the gaps. Even a regularized NeRF learns poor extrapolations to unseen regions due to its lack of prior knowledge. We also exploit the consistency principle utilized in DietNeRF \cite{dietnerf}: objects share high-level semantic properties between their views. Image recognition models learn to extract many such high-level semantic features, including object identity. We transfer prior knowledge from pre-trained image encoders learned on highly diverse 2D single-view image data to the view synthesis problem. In the single-view setting, such encoders are frequently trained on millions of realistic images, as in ImageNet \cite{deng2009imagenet}. CLIP is a recent multi-modal encoder that is trained to match images with captions in a massive web scrape containing 400M images \cite{radford2021learning}.
Due to the diversity of its data, CLIP showed promising zero- and few-shot transfer performance to image recognition tasks. CLIP and ImageNet models also contain prior knowledge useful for novel view synthesis. Our contributions in this paper are as follows: \begin{itemize} \item We propose 360FusionNeRF, a neural scene representation framework based on OmniNeRF that can be estimated from only a single RGB-D $360^\circ$ panoramic image and can generate views with unobserved regions. \item In addition to minimizing NeRF’s mean squared error losses at known poses in pixel-space, 360FusionNeRF penalizes a geometric loss via the auxiliary depth of the projected images and a self-supervised semantic consistency loss via activations of CLIP’s Vision Transformer. \item We demonstrate qualitatively and quantitatively that our proposed method results in a generalizable scene representation and improves perceptual quality. \end{itemize} \section{Related Work} \subsection{Neural 3D Rendering} Neural Radiance Fields (NeRFs) \cite{mildenhall2021nerf} have demonstrated encouraging progress for view synthesis by learning an implicit neural scene representation. Since its inception, tremendous efforts have been made to improve its quality \cite{verbin2021ref,guo2022nerfren,suhail2022light,chen2022aug}, speed \cite{muller2022instant,sun2022direct,fridovich2022plenoxels}, artistic effects \cite{wang2022clip,fan2022unified,jain2022zero}, and generalization ability \cite{wang2021ibrnet,liu2022neural}. Specifically, Mip-NeRF \cite{barron2021mip} proposes to cast a conical frustum instead of a single ray for anti-aliasing. Mip-NeRF 360 \cite{barron2022mip} further extends it to unbounded scenes with efficient parameterization. KiloNeRF \cite{reiser2021kilonerf} speeds up NeRF by adopting thousands of tiny MLPs. MVSNeRF \cite{chen2021mvsnerf} extracts a 3D cost volume and renders high-quality images from novel viewpoints on unseen scenes.
DS-NeRF \cite{deng2022depth} adopts additional depth supervision to improve the reconstruction quality. RegNeRF [34] proposes a normalizing flow and depth smoothness regularization. DietNeRF \cite{dietnerf} utilizes the CLIP embeddings to add semantic constraints for unseen views. PixelNeRF \cite{yu2021pixelnerf} utilizes a ConvNets encoder to extract context information by large-scale pre-training and successfully renders novel views from a single input. However, it can only work on simple objects (e.g., ShapeNet) \cite{xu2022sinnerf}, while the results on complex scenes remain unknown. Furthermore, the approach relies on the availability of the entire reference image for supervision, but the reprojected images are incomplete in a panoramic setting. SinNeRF \cite{xu2022sinnerf} also proposes a multi-supervision NeRF, but its approach is very object-centric. In our work, we focus on complex scene reconstruction with panoramic images. \subsection{360 Panorama View Synthesis} OmniNeRF \cite{gu2022omni} synthesizes novel fish-eye projection images, using spherical sampling to improve the quality of results. 360Roam \cite{huang2022360roam} is a scene-level NeRF system that can synthesize images of large-scale indoor scenes in real-time and support VR roaming. PanoHDR-NeRF \cite{gera2022casual} presents a pipeline to predict the full HDR radiance of an indoor scene without using special hardware, careful scanning of the scene, or intricately calibrated camera configurations. In contrast, this paper focuses on synthesizing novel views from a single equirectangular $360^\circ$ RGB-D panorama. OmniNeRF \cite{hsu2021moving} learns an entire scene from a single $360^\circ$ RGB-D image without the need to set relative positions or identify shooting conditions.
However, it relies only on the neighborhood interpolation capability of the multi-layer perceptron to complete the missing regions caused by occlusion and zooming, which leads to artifacts, and the image quality is greatly reduced when moving away from the camera position of the input image \cite{hara2022enhancement}. An alternative method to NeRF, Pathdreamer \cite{koh2021pathdreamer}, synthesizes novel views from a single $360^\circ$ RGB-D image. However, it has the issue of low 3D consistency in the synthesized views due to its reliance on 2D image-to-image translation. In \cite{hara2022enhancement}, a self-supervised trained generative model completes the missing regions of the reprojected images of OmniNeRF, and the completed images are utilized for training the NeRF. They introduce a method to train NeRF while dynamically selecting a sparse set of completed images, to reduce the discrimination error of the synthesized views with real images. However, when there are large missing regions that exceed the image completion capabilities, it fails to synthesize plausible views. \section{Proposed Method} \subsection{Preliminaries} Neural Radiance Fields (NeRFs) \cite{mildenhall2021nerf} synthesize images by sampling 5D coordinates (location $(x, y, z)$ and viewing direction $(\theta, \phi)$) along camera rays and mapping them to color $(r, g, b)$ and volume density $\sigma$. \cite{mildenhall2021nerf} first proposes using coordinate-based multi-layer perceptron networks (MLPs) to parameterize this function and then uses volumetric rendering techniques to alpha composite the values at each location to obtain the final rendered images. OmniNeRF \cite{hsu2021moving} generates multiple images at virtual camera positions from a single $360^\circ$ RGB-D image and utilizes these images to train NeRF.
A set of 3D points is generated from the given RGB-D panorama, and then these 3D points are reprojected into multiple omnidirectional images that correspond to different virtual camera locations. The generated omnidirectional images are likely to be imperfect, as there might be gaps and cracks between pixels due to occlusion or limited resolution. OmniNeRF solves this problem by taking advantage of the pixel-based prediction property of its MLP model, which takes a single pixel rather than an entire image as the input. At each discrete sample on the ray $r(t) = o + td$, where $o$ and $d$ denote the ray origin and ray direction, the final RGB values $C(r)$ are optimized from an aggregation of colors $c_i$ and opacities $\sigma_i$. A positional encoding technique~\cite{tancik2020fourier} is applied to rays for capturing high-frequency information. The function of color composition follows the rule in volume rendering~\cite{max1995optical}: \begin{equation} \hat{C}(r) = \sum_{i=1}^N T_i \left(1-\exp(- \sigma_i \delta_i)\right) c_i, \\ \end{equation} where $T_i = \exp \left( - \sum_{j=1}^{i-1}\sigma_j \delta_j \right)$ and $\delta_i = t_{i+1} - t_i$ is the interval between two adjacent samples. Volume sampling is performed hierarchically, in a `coarse' and a `refined' stage. The coarse and refined networks are identical except for the process of sampling pixels on a ray. At the coarse stage, $N_c$ intervals are uniformly sampled along the ray, while at the refined stage, $N_f$ intervals are decided in accordance with the densities from the coarse stage. Both predictions are supervised by the ground-truth color. The radiance field is optimized by minimizing the mean squared error between the rendered color and the ground-truth color, \begin{equation} \label{eqn:color-loss} \mathcal{L}_{\text{Color}} = \sum_{r \in R_{i}} \| C(r) - \hat{C}(r) \|^2 \end{equation} where $R_i$ is the set of input rays during training. \begin{figure*}[ht!]
\centering \includegraphics[width=0.99\linewidth]{images/Methodology.pdf} \caption{The input data includes a panorama and an auxiliary depth map. We generate new training images and depth maps with various virtual camera poses. Since information might be missing after re-projection, we could only have partial pixels in each augmented training image. The projected RGB images are utilized for color supervision, while the depth maps are used for geometric supervision. The novel views contribute toward maintaining semantic consistency using the CLIP ViT.} \label{fig:method} \end{figure*} \subsection{Challenges with OmniNeRF} \subsubsection{Overfitting to Training Views} Conceptually, OmniNeRF is trained by mimicking the image-formation process at observed poses. With many training views, the MLP in OmniNeRF recovers accurate textures and occupancy that allow interpolations to new views. The high-frequency representational capacity allows OmniNeRF to overfit to each input view. Fundamentally, the plenoptic function representation suffers from a near-field ambiguity \cite{zhang2020nerf++} where distant cameras each observe significant regions of space that no other camera observes. In this case, the optimal scene representation is underdetermined. Degenerate solutions can also exploit the view dependence of the radiance field. \cite{dietnerf} pointed out that while a rendered view from a pose near a training image has reasonable textures, it is skewed incorrectly and has cloudy artifacts from incorrect geometry. As the geometry is not estimated correctly, a distant view contains almost none of the correct information. High-opacity regions block the camera. Without supervision from any nearby camera, opacity is sensitive to random initialization. \subsubsection{No Generalization to Unseen Views} As OmniNeRF is estimated from scratch per scene, it has no prior knowledge about natural objects such as common symmetries and object parts.
The fundamental challenge is that NeRF receives no supervisory signal from $\mathcal{L}_{\text{Color}}$ in the unobserved regions and instead relies on the inductive bias of the MLP for any inpainting. We want to introduce prior knowledge that allows NeRF to exploit bilateral symmetry for plausible completions. \subsection{Geometric Supervision} Directly overfitting the reference images leads to a corrupted neural radiance field collapsing towards the provided views. We start by adopting the depth prior to reconstruct reasonable 3D geometry. \subsubsection{Extracting Depth Ground Truths} We follow the projection procedure from OmniNeRF \cite{hsu2021moving} to extract depth information simultaneously with the view. First, all pixels can be projected to a uniform sphere by their 2D coordinates. For a pixel $(x, y)$ on the panorama, its vertical and horizontal viewing angles can be defined by $\theta = \pi y / H $, $\phi = 2 \pi x / W $, where $H$ and $W$ are the height and width of the panorama. The coordinate center is the current camera position, namely the ray origin. Likewise, a ray direction is a unit vector from the center to the sphere. Therefore, a novel panoramic view and the corresponding depth information can be determined by moving the camera to a new position and examining what would be sampled on the new sphere by the emitted rays based on the above equations. Not all pixels are supposed to be visible from the new viewpoint. The key to the projection mechanism is to verify which parts of the ground truth will be visible to a given ray origin. \subsubsection{Volumetric Rendering} Similar to color rendering, the depth can be represented with the volume density using: \begin{equation} \hat{D}(r) = \sum_{i=1}^N T_i \left(1-\exp(- \sigma_i \delta_i)\right) t_i, \\ \end{equation} where $T_i = \exp \left( - \sum_{j=1}^{i-1}\sigma_j \delta_j \right)$ and $\delta_i = t_{i+1} - t_i$ is the interval between two adjacent samples.
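As a concrete reference for the two rendering sums above, a minimal NumPy sketch is given below; the function name and toy inputs are illustrative, and the samples are parameterized by interval edges for simplicity rather than the exact sampling scheme of the implementation.

```python
import numpy as np

def render_ray(t, sigma, color):
    """Composite color C(r) and depth D(r) along one ray.

    t:     (N+1,) sample positions, so delta_i = t[i+1] - t[i]
    sigma: (N,)   densities at the N samples
    color: (N, 3) RGB values at the N samples
    """
    delta = np.diff(t)                    # delta_i = t_{i+1} - t_i
    alpha = 1.0 - np.exp(-sigma * delta)  # opacity of each interval
    # T_i = exp(-sum_{j<i} sigma_j delta_j): transmittance up to sample i
    T = np.exp(-np.concatenate([[0.0], np.cumsum(sigma * delta)[:-1]]))
    w = T * alpha                         # per-sample compositing weights
    C = (w[:, None] * color).sum(axis=0)  # rendered color \hat{C}(r)
    D = (w * t[:-1]).sum()                # rendered depth \hat{D}(r)
    return C, D

# A single dense sample at t = 2 should dominate the composite.
t = np.array([0.0, 1.0, 2.0, 3.0])
sigma = np.array([0.0, 0.0, 50.0])        # only the last interval is dense
color = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [1.0, 0.5, 0.25]])
C, D = render_ray(t, sigma, color)        # C ~ [1, 0.5, 0.25], D ~ 2.0
```

Note that the same weights $w_i = T_i(1-\exp(-\sigma_i\delta_i))$ composite both color and depth, which is why depth supervision constrains the same density field as the color loss.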
\subsubsection{Optimization} The network parameters $\theta$ are optimized using a set of RGB-D frames, each of which has color, depth, and camera pose information. $\mathcal{L}_{\text{Color}}$ in Equation \ref{eqn:color-loss} acts as the photometric loss. The geometric loss is the absolute difference between the predicted and true depths, normalized by the depth variance \cite{sucar2021imap} to discourage weights with high uncertainty: \begin{equation} \mathcal{L}_{\text{Geo}} = \sum_{r \in R}\frac{\lvert \hat{D}(r)-D(r) \rvert}{\sqrt{\hat{D}_{var}(r)}}, \end{equation} where \(\hat{D}_{var}(r) = \sum_{i=1}^{N} T_i(1 - \exp(-\sigma_i\delta_i))(\hat{D}(r) - t_i)^2\) is the depth variance along the ray. \subsection{Semantic Consistency} Unlike the geometry pseudo labels, where we enforce consistency in 3D space, pseudo semantic labels are adopted to regularize the 2D image fidelity. Concretely speaking, we introduce a global structure prior supported by a pre-trained ViT network. This guidance helps 360FusionNeRF render visually pleasing results in each view. Vision transformers (ViTs) have been proven to be an expressive semantic prior, even between images with misalignment \cite{tumanyan2022splicing} \cite{amir2021deep}. Similar to \cite{xu2022sinnerf}, we propose to adopt a pre-trained ViT for global structure guidance, which enforces semantic consistency between unseen views. Although pixel-wise misalignment exists between the views, we agree with the observation by \cite{xu2022sinnerf} that the extracted representation of the ViT is robust to this misalignment and provides supervision at the semantic level. Intuitively, this is because the content and style of the two views are similar, and a deep network is capable of learning invariant representations. Here we adopt CLIP-ViT \cite{radford2021learning}, a vision transformer trained to match images with captions on a massive web-scale dataset. In practice, CLIP produces normalized image embeddings.
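Since the embeddings are unit-normalized, the semantic term reduces to a scaled dot product; the sketch below uses random stand-in vectors rather than actual CLIP features, and the function name and $\lambda$ value are illustrative assumptions.

```python
import numpy as np

def semantic_consistency(e1, e2, lam=0.1):
    """lam * phi(I1)^T phi(I2) after L2-normalizing both embeddings."""
    e1 = e1 / np.linalg.norm(e1)
    e2 = e2 / np.linalg.norm(e2)
    return lam * float(e1 @ e2)

rng = np.random.default_rng(0)
e = rng.standard_normal(512)               # stand-in for a ViT embedding
same_view = semantic_consistency(e, e)     # identical views -> lam * 1
opposite = semantic_consistency(e, -e)     # anti-aligned embeddings -> -lam
```

On unit vectors, the dot product equals the cosine similarity, which is the simplification referred to in the next paragraph.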
When the embedding is a unit vector, $\mathcal{L}_{\text{SC}}$ simplifies to the cosine similarity up to a constant and a scaling factor that can be absorbed into the loss weight $\lambda$: \begin{equation} \mathcal{L}_{\text{SC}}(I_1, I_2) = \lambda \phi(I_1)^T \phi(I_2), \end{equation} where $\phi(\cdot)$ is the normalized image embedding and $I_1$, $I_2$ are unseen views. \subsection{Final Pipeline} Generating $\mathcal{L}_{\text{SC}}$ requires volume rendering, which is computationally expensive. Hence, semantic consistency is computed at a lower resolution of the views. Further, as observed by \cite{dietnerf}, $\mathcal{L}_{\text{SC}}$ converges faster than $\mathcal{L}_{\text{Color}}$ and $\mathcal{L}_{\text{Geo}}$. Hence, $\mathcal{L}_{\text{SC}}$ is minimized once every $k$ iterations, while $\mathcal{L}_{\text{Color}} + \lambda_{\text{Geo}} \mathcal{L}_{\text{Geo}}$ is minimized at every iteration. \section{Experiments} \begin{figure*}[!ht] \centering \includegraphics[width=0.99\linewidth]{images/Qualitative.pdf} \caption{Qualitative comparison of OmniNeRF and the proposed method on Replica360, Matterport3D, and Structured3D datasets.
} \label{fig:qualitative} \end{figure*} \begin{table*} \setlength{\tabcolsep}{8.5pt} \centering \begin{tabular}{@{}l | c c c | c c c | c c c} \toprule & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$\\ \midrule Structured3D & \multicolumn{3}{c}{S1} & \multicolumn{3}{c}{S2} & \multicolumn{3}{c}{S3} \\ \midrule OmniNeRF & 26.41 & 0.8259 & 0.2860 & 22.29 & 0.8627 & 0.2614 & 29.72 & 0.8903 & 0.2472 \\ 360FusionNeRF (ours) & \textbf{28.05} & \textbf{0.8734} & \textbf{0.2260} & \textbf{23.45} & \textbf{0.8731} & \textbf{0.2570} & \textbf{30.20} & \textbf{0.9061} & \textbf{0.2047} \\ \midrule Matterport3D & \multicolumn{3}{c}{M1} & \multicolumn{3}{c}{M2} & \multicolumn{3}{c}{M3} \\ \midrule OmniNeRF & 25.01 & 0.8860 & 0.2720 & \textbf{25.61} & 0.8000 & 0.2994 & 19.01 & 0.8481 & 0.2948 \\ 360FusionNeRF (ours) & \textbf{26.88} & \textbf{0.8934} & \textbf{0.2573} & 25.53 & \textbf{0.8336} & \textbf{0.2575} & \textbf{19.13} & \textbf{0.8622} & \textbf{0.2748} \\ \midrule Replica360 & \multicolumn{3}{c}{R1} & \multicolumn{3}{c}{R2} & \multicolumn{3}{c}{R3} \\ \midrule OmniNeRF & 28.83 & 0.9226 & 0.2715 & 30.61 & 0.9374 & 0.3385 & 27.39 & 0.8865 & 0.3701 \\ 360FusionNeRF (ours) & \textbf{32.76} & \textbf{0.9451} & \textbf{0.2116} & \textbf{33.23} & \textbf{0.9520} & \textbf{0.2790} & \textbf{27.61} & \textbf{0.8951} & \textbf{0.3184} \\ \bottomrule \end{tabular} \caption{Quantitative evaluation of each novel view synthesis method on 3 scenes from each dataset.} \label{tab:quantitative} \vspace{-1em} \end{table*} \subsection{Dataset} We test our method on both synthetic and real-world datasets. In this work, all the panorama images are under equirectangular projection at a resolution of $512 \times 1024$ for the Structured3D dataset and $1024 \times 2048$ for the Replica360 and Matterport3D datasets.
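For reference, the pixel-to-ray mapping for equirectangular panoramas at these resolutions follows $\theta = \pi y / H$ and $\phi = 2 \pi x / W$ from the projection step; a minimal sketch is shown below, where the axis convention is one common choice and not necessarily the implementation's.

```python
import numpy as np

def pixel_to_direction(x, y, W, H):
    """Map an equirectangular pixel (x, y) to a unit ray direction."""
    theta = np.pi * y / H      # polar angle: 0 at the top row, pi at the bottom
    phi = 2.0 * np.pi * x / W  # azimuth: wraps once around the horizon
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# Mid-height pixel on the left edge of a 1024 x 512 panorama
# lies on the horizon and maps to a horizontal unit direction.
d = pixel_to_direction(x=0, y=256, W=1024, H=512)
```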
We randomly select 3 scenes from each dataset and perform a quantitative and qualitative analysis on them. \subsubsection{Structured3D} The Structured3D dataset \cite{zheng2020structured3d} contains 3,500 synthetic apartments (scenes) with 185,985 photorealistic panoramic renderings. As the original virtual environment is not publicly accessible, we utilized the rendered panoramas directly. \subsubsection{Matterport3D} The Matterport3D dataset \cite{chang2017matterport3d} is a large-scale indoor real-world $360^\circ$ dataset, captured by Matterport’s Pro 3D camera in 90 furnished houses (scenes). The dataset provides 10,800 RGB-D panorama images, where we find the RGB-D signals near the polar region are missing. \subsubsection{Replica360} The Replica360 dataset \cite{straub2019replica} contains 18 highly photo-realistic 3D indoor scene reconstructions at room and building scale. \subsection{Implementation Details} \label{subsec:impl_details} The Adam optimizer \cite{kingma2015ba} is used for the overall training process. The learning rate is initialized to $5\times10^{-4}$ and then exponentially reduced to $5\times10^{-5}$. The model is trained for 200,000 epochs for each experiment with a batch size of 1,400 on a DGX A100 GPU. We set $N_c = 64$ and $N_f = 128$ in the coarse and refined networks. The network architecture is identical to that of OmniNeRF \cite{hsu2021moving}. \subsection{Qualitative Evaluation} We qualitatively validate the novel view synthesis using a single $360^\circ$ RGB-D image. Figure \ref{fig:qualitative} compares the novel views synthesized by the proposed method and OmniNeRF. Each column corresponds to a scene of a dataset, and each row contains the results of a method. One can see that our method best preserves both geometry and perceptual quality. In the Replica360 sample, we can see that the vase has been blurred in the OmniNeRF predictions, while its shape has been well restored by our method.
OmniNeRF also creates artifacts on the walls, which are much reduced in our results. Even though the chair is thin, our method has reproduced its shape well. The Matterport3D dataset has panoramas blurred at the poles, which makes it difficult for OmniNeRF to reproduce objects near the ceiling or the floor. As seen, the ceiling light is blurred into the background by OmniNeRF, while it can still be identified in our method. Even the face of the idol and the cups are much clearer and have not lost their shape compared to OmniNeRF. In the view synthesis for the Structured3D sample, the texture of the walls has been lost in the case of OmniNeRF, while it is well maintained by our method. Artifacts are prevented near the bookshelf, and even transparent objects like the glass cup have not collapsed as in the case of OmniNeRF. \subsection{Quantitative Evaluation} We quantitatively evaluated each method using the following three evaluation metrics: \subsubsection{PSNR} Peak signal-to-noise ratio expresses the mean-squared error in log space. This metric evaluates the performance of the input image reconstruction. We calculate the PSNR between the reference image and the synthesized image at the position of the input image. \subsubsection{SSIM} The Structural Similarity Index Measure \cite{wang2004image} quantifies the degradation of image quality in the reconstructed image; higher is better. However, it often disagrees with human judgements of similarity \cite{zhang2018unreasonable}. \subsubsection{LPIPS} Deep CNN activations mirror aspects of human perception. We measure the perceptual image quality using LPIPS \cite{zhang2018unreasonable}, which computes the MSE between normalized features from all layers of a pre-trained VGG encoder \cite{simonyan2015very}. We extract three scenes each from the Structured3D, Matterport3D and Replica360 datasets. Table \ref{tab:quantitative} presents the evaluation results for each scene.
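For reference, PSNR follows directly from the MSE; a minimal sketch for images scaled to $[0, 1]$ is shown below (SSIM and LPIPS require their dedicated reference implementations and are omitted):

```python
import numpy as np

def psnr(img, ref, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((img - ref) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((4, 4))
noisy = ref + 0.1           # uniform error of 0.1 -> MSE = 0.01
value = psnr(noisy, ref)    # 10 * log10(1 / 0.01) = 20 dB
```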
In almost all scenes, the proposed method outperforms OmniNeRF in terms of PSNR, SSIM and LPIPS, which indicates that it synthesizes more plausible views with features close to those of the dataset. The lower performance of both models on Matterport3D may be due to the distortion caused by the blurring of the panoramic images at the poles. In M2, OmniNeRF achieves a better PSNR score, while our method performs better on both SSIM and LPIPS. Under uncertainty, blurry renderings tend to outperform sharp but incorrect renderings on average error metrics such as MSE and PSNR. Arguably, perceptual quality and sharpness are better metrics than pixel error for graphics applications such as photo editing and virtual reality, where plausibility is emphasized. \section{Conclusions} This paper proposes a method for synthesizing novel views by learning a neural radiance field from a single $360^\circ$ image. The proposed method reprojects the input image to $360^\circ$ images at other camera positions and estimates their depth maps. A geometric loss and a semantic consistency loss are introduced in addition to the color loss. Experiments indicate that the proposed method synthesizes plausible novel views while preserving the features of both artificial and real-world scenes. These results confirm the effectiveness of geometric and semantic supervision for panoramic novel view synthesis. \section{Acknowledgment} This research was supported by grants from NVIDIA and utilized NVIDIA SDKs (CUDA Toolkit, TensorRT, and Omniverse). \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} Neural word embeddings \cite{bengio2006,collobert2008,mikolov2013efficient} have received much attention in the distributional semantics community, and have shown state-of-the-art performance in many natural language processing tasks. While they have been compared with co-occurrence based models in simple similarity tasks at the word level \cite{levy2014linguistic,baroni2014don}, we are aware of only one work that attempts a comparison of the two approaches in compositional settings \cite{blacoe2012comparison}, and this is limited to additive and multiplicative composition, compared against composition via a neural autoencoder. The purpose of this paper is to provide a more complete picture regarding the potential of neural word embeddings in compositional tasks, and meaningfully compare them with the traditional distributional approach based on co-occurrence counts. We are especially interested in investigating the performance of neural word vectors in compositional models involving general mathematical composition operators, rather than in the more task- or domain-specific deep-learning compositional settings they have generally been used with so far (for example, by \newcite{socher2012semantic}, \newcite{kalchbrenner-blunsom2013CVSC} and many others). In particular, this is the first large-scale study to date that applies neural word representations in tensor-based compositional distributional models of meaning similar to those formalized by \newcite{coecke2010}. We test a range of implementations based on this framework, together with additive and multiplicative approaches \cite{mitchell2008vector}, in a variety of different tasks. Specifically, we use the verb disambiguation task of \newcite{grefenstette2011experimental} and the transitive sentence similarity task of \newcite{KartSadrQPL} as small-scale focused experiments on pre-defined sentence structures. 
Additionally, we evaluate our vector spaces on paraphrase detection (using the Microsoft Research Paraphrase Corpus of \newcite{dolan2005microsoft}) and dialogue act tagging using the Switchboard Corpus (see e.g.~\cite{Stolcke.etal00}). In all of the above tasks, we compare the neural word embeddings of \newcite{mikolov2013efficient} with two vector spaces, both based on co-occurrence counts and produced by standard distributional techniques, as described in detail below. The general picture we get from the results is that in almost all cases the neural vectors are more effective than the traditional approaches. We proceed as follows: Section \ref{sec:meaning-representation} provides a concise introduction to distributional word representations in natural language processing. Section \ref{sec:compositional-models} takes a closer look at the subject of compositionality in vector space models of meaning and describes the range of compositional operators examined here. In Section \ref{sec:semantic-spaces} we provide details about the vector spaces used in the experiments. Our experimental work is described in detail in Section \ref{sec:experiments}, and the results are discussed in Section \ref{sec:discussion}. Finally, Section \ref{sec:concl-future-work} provides conclusions. \section{Meaning representation} \label{sec:meaning-representation} There are several approaches to the representation of word, phrase and sentence meaning. As natural languages are highly creative and it is very rare to see the same sentence twice, any practical approach dealing with large text segments must be \emph{compositional}, constructing the meaning of phrases and sentences from their constituent parts. The ideal method would therefore express not only the similarity in meaning between those constituent parts, but also between the results of their composition, and do this in ways which fit with linguistic structure and generalisations thereof.
\paragraph{Formal semantics} \label{sec:formal-semantics} Formal approaches to the semantics of natural language have long built upon the classical idea of compositionality -- that the meaning of a sentence is a function of the meanings of its parts \cite{frege1892sense}. In compositional type-logical approaches, predicate-argument structures representing phrases and sentences are built from their constituent parts by $\beta$-reduction within the lambda calculus framework \cite{montague1970universal}: for example, given a representation of \emph{John} as $\mathit{john}'$ and \emph{sleeps} as $\lambda x.\mathit{sleep}'(x)$, the meaning of the sentence ``John sleeps'' can be constructed as $\lambda x.\mathit{sleep}'(x)(\mathit{john}') = \mathit{sleep}'(\mathit{john}')$. Given a suitable pairing between words and their semantic representations, this method can produce structured sentential representations with broad coverage and good generalisability (see e.g.~\cite{Bos2008STEP2}). The above logical approach is extremely powerful because it can capture complex aspects of meaning such as quantifiers and their interaction (see e.g.~\cite{copestake2005minimal}), and enables inference using well-studied and well-developed logical methods (see e.g.~\cite{bos2000first}). \paragraph{Distributional hypothesis} \label{sec:distr-hypoth} However, such formal approaches are less able to express \emph{similarity} in meaning. We would like to capture the intuition that while \textit{John} and \textit{Mary} are distinct, they are rather similar to each other (both are human) and dissimilar to words such as \textit{dog}, \textit{pavement} or \textit{idea}. The same applies at the phrase and sentence level: ``dogs chase cats'' is similar in meaning to ``hounds pursue kittens'', but less so to ``cats chase dogs'' (despite the lexical overlap). Distributional methods provide a way to address this problem.
By representing words and phrases as vectors or tensors in a (usually high-dimensional) vector space, one can express similarity in meaning via a suitable distance metric within that space (usually cosine distance); furthermore, composition can be modelled via suitable linear-algebraic operations. \paragraph{Co-occurrence-based word representations} \label{sec:distr-repr} One way to produce such vectorial representations is to directly exploit \newcite{harris1954distributional}'s intuition that semantically similar words tend to appear in similar contexts. We can construct a vector space in which the dimensions correspond to contexts, usually taken to be words as well. The word vector components can then be calculated from the frequency with which a word has co-occurred with the corresponding contexts within a window of words of predefined length. \begin{table}[b!] \centering \begin{tabular}{lrrr} \toprule & philosophy & book & school \\ \midrule Mary & 0 & 10 & 22 \\ John & 4 & 60 & 59 \\ girl & 0 & 19 & 93 \\ boy & 0 & 12 & 164 \\ idea & 10 & 47 & 39 \\ \bottomrule \end{tabular} \caption{Word co-occurrence frequencies extracted from the BNC \cite{leech1994claws4}.} \label{tab:comparison} \end{table} Table~\ref{tab:comparison} shows five 3-dimensional vectors for the words \textit{Mary}, \textit{John}, \textit{girl}, \textit{boy} and \textit{idea}. The words \textit{philosophy}, \textit{book} and \textit{school} label the vector space dimensions. As the vector for \textit{John} is closer to \textit{Mary} than it is to \textit{idea} in the vector space---a direct consequence of the fact that \textit{John}'s contexts are similar to \textit{Mary}'s and dissimilar to \textit{idea}'s---we can infer that \textit{John} is semantically more similar to \textit{Mary} than to \textit{idea}.
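To make this concrete, here is a minimal sketch of a similarity computation over the raw counts of Table~\ref{tab:comparison}, using cosine similarity on unweighted counts (as discussed below, weighted values are normally used in practice, and the exact similarity ordering on such tiny toy counts depends on the weighting scheme):

```python
import numpy as np

# Co-occurrence counts from Table 1 (dimensions: philosophy, book, school).
vectors = {
    "Mary": np.array([0.0, 10.0, 22.0]),
    "John": np.array([4.0, 60.0, 59.0]),
    "girl": np.array([0.0, 19.0, 93.0]),
    "boy":  np.array([0.0, 12.0, 164.0]),
    "idea": np.array([10.0, 47.0, 39.0]),
}

def cosine(u, v):
    """Cosine similarity, the usual distance metric in distributional models."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

For instance, on these counts \textit{boy} and \textit{girl} come out far more similar to each other than either is to \textit{idea}.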
Many variants of this approach exist: performance on word similarity tasks has been shown to be improved by replacing raw counts with weighted values (e.g.~mutual information)---see \cite{turney2010frequency} and below for discussion, and \cite{kiela-clark:2014:CVSC} for a detailed comparison. \paragraph{Neural word embeddings} \label{sec:neural-embedding} Deep learning techniques exploit the distributional hypothesis differently. Instead of relying on observed co-occurrence frequencies, a neural language model is trained to maximise an objective function related to, for example, the probability of observing the surrounding words in some context \cite{mikolov2013distributed}: \begin{align} \frac{1}{T}\sum^{T}_{t=1}\sum_{-c \leq j \leq c, j\neq0} \log p(w_{t+j}|w_t) \label{eq:objective-func} \end{align} \noindent Optimizing the above function produces vectors which maximise the conditional probability of observing words in a context around the target word $w_t$, where $c$ is the size of the training window and $w_1, w_2, \cdots, w_T$ is a sequence of words forming a training instance. The resulting vectors therefore capture the distributional intuition and can express degrees of lexical similarity. This method has an obvious advantage over the co-occurrence approach: since the context is now \textit{predicted}, the model can in principle be much more robust to data sparsity, which is always an important issue for co-occurrence-based word spaces. Additionally, neural vectors have proven successful in other tasks \cite{mikolov2013linguistic}, since they seem to encode not only attributional similarity (the degree to which similar words are close to each other), but also relational similarity \cite{turney2006similarity}.
For example, it is possible to extract the singular:plural relation (\textit{apple}:\textit{apples}, \textit{car}:\textit{cars}) using vector subtraction: \begin{align*} \overrightarrow{\mathit{apple}} - \overrightarrow{\mathit{apples}} \approx \overrightarrow{\mathit{car}} - \overrightarrow{\mathit{cars}} \end{align*} Perhaps even more importantly, semantic relationships are preserved in a very intuitive way: \begin{align*} \overrightarrow{\mathit{king}} - \overrightarrow{\mathit{man}} \approx \overrightarrow{\mathit{queen}} - \overrightarrow{\mathit{woman}} \end{align*} allowing the formation of analogy queries such as $\overrightarrow{\mathit{king}} - \overrightarrow{\mathit{man}} + \overrightarrow{\mathit{woman}} = \mathtt{?}$, obtaining $\overrightarrow{\mathit{queen}}$ as the result.\footnote{\newcite{levy2014linguistic} improved \newcite{mikolov2013linguistic}'s method of retrieving relational similarities by changing the underlying objective function.} Both neural and co-occurrence-based approaches have advantages over classical formal approaches in their ability to capture lexical semantics and degrees of similarity; their success at extending this to the sentence level and to more complex semantic phenomena, though, depends on their applicability within compositional models, which is the subject of the next section. \section{Compositional models} \label{sec:compositional-models} Compositional distributional models represent the meaning of a sequence of words by a vector, obtained by combining the meaning vectors of the words within the sequence using some vector composition operation. In a general classification of these models, one can distinguish three broad cases: simplistic models which combine word vectors irrespective of their order or relation to one another, models which exploit linear word order, and models which use grammatical structure.
The first approach combines word vectors by vector addition or point-wise multiplication \cite{mitchell2008vector}---as this is independent of word order, it cannot capture the difference between the two sentences ``dogs chase cats'' and ``cats chase dogs''. The second approach has generally been implemented using some form of deep learning; it captures word order, but does not necessarily respect the grammatical structure of the sentence. Here, one recursively builds and combines vectors for subsequences of words within the sentence using e.g.~autoencoders \cite{socher2012semantic} or convolutional filters \cite{KalchbrennerACL2014}. We do not consider this approach in this paper because, as mentioned in the introduction, its vectors and composition operators are task-specific: they are trained directly to achieve specific objectives in certain pre-determined tasks. We are instead interested in vectors and composition operators that work for \textit{any} compositional task, and which can be combined with results in linguistics and formal semantics to provide generalisable models that canonically extend to complex semantic phenomena. The third (i.e.~the grammatical) approach promises a way to achieve this, and has been instantiated in various ways in the work of \newcite{baroni2010nouns}, \newcite{grefenstette2011experimental}, and \newcite{KartSadrCOLING}. \paragraph{General framework} Formally, we can specify the vector representation of a word sequence $w_1 w_2 \cdots w_n$ as the vector $\overrightarrow{s} = \overrightarrow{w_1} \star \overrightarrow{w_2} \star \cdots \star \overrightarrow{w_n}$, where $\star$ is a vector operator, such as addition $+$, point-wise multiplication $\odot$, tensor product $\otimes$, or matrix multiplication $\times$. In the simplest compositional models (the first approach described above), $\star$ is $+$ or $\odot$, e.g.~see \cite{mitchell2008vector}.
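The simple additive and multiplicative models can be sketched directly; the toy random vectors below are for illustration only. Note how, as stated above, both operators are insensitive to word order:

```python
import numpy as np
from functools import reduce

# Toy word vectors; in practice these come from a distributional or neural space.
rng = np.random.default_rng(42)
vec = {w: rng.random(4) for w in ("dogs", "chase", "cats")}

def compose(words, op):
    """Fold a binary operator (np.add or np.multiply) over the word vectors."""
    return reduce(op, (vec[w] for w in words))

additive = compose(["dogs", "chase", "cats"], np.add)
multiplicative = compose(["dogs", "chase", "cats"], np.multiply)
```

Both operators give identical vectors for ``dogs chase cats'' and ``cats chase dogs'', illustrating why these models cannot distinguish the two sentences.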
Grammar-based compositional models (the third approach) are based on a generalisation of the notion of vectors, known as \emph{tensors}. Whereas a vector $\overrightarrow{v}$ is an element of an atomic vector space $V$, a tensor $\overline{z}$ is an element of a tensor space $V \otimes W \otimes \cdots \otimes Z$. The number of tensored spaces is referred to as the \emph{order} of the space. Using a general duality theorem from multi-linear algebra \cite{bourbaki}, it follows that tensors are in one-to-one correspondence with multi-linear maps; that is, we have: \[ \overline{z} \in V \otimes W \otimes \cdots \otimes Z \ \cong \ f_{\overline{z}} \colon V \to W \to \cdots \to Z \] In such a tensor-based formalism, the meanings of nouns are vectors and the meanings of predicates such as adjectives and verbs are tensors. The meaning of a string of words is obtained by applying the compositions of the multi-linear map duals of the tensors to the vectors. For the sake of demonstration, take the case of an intransitive sentence ``Sbj Verb'': the meaning of the subject is a vector $\overrightarrow{\text{Sbj}} \in V$ and the meaning of the intransitive verb is a tensor $\overline{\text{Verb}} \in V \otimes W$. The meaning of the sentence is obtained by applying $f_{\overline{\text{Verb}}}$ to $\overrightarrow{\text{Sbj}}$, as follows: \[ \overrightarrow{\mbox{Sbj Verb}} = f_{\overline{\text{Verb}}} (\overrightarrow{\text{Sbj}}) \] By tensor-map duality, the above becomes equivalent to the following, where composition has now become the familiar notion of matrix multiplication, that is, $\star$ is $\times$: \[ \overline{\text{Verb}} \times \overrightarrow{\text{Sbj}} \] In general, for words with tensors of order higher than two, $\star$ becomes a generalisation of $\times$, referred to as \emph{tensor contraction}; see e.g.~\newcite{KartsaklisEMNLP}.
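A minimal sketch of tensor contraction for the intransitive case above, using numpy; the dimensionalities and tensor values are illustrative assumptions (real spaces have hundreds of dimensions):

```python
import numpy as np

# Toy spaces: V and W are both taken to be 3-dimensional here.
sbj = np.array([0.5, 0.2, 0.9])            # vector for "Sbj" in V
verb = np.array([[0.1, 0.0, 0.4],          # order-2 tensor for "Verb" in V (x) W
                 [0.3, 0.7, 0.2],
                 [0.6, 0.1, 0.5]])

# Applying the multi-linear map dual of the verb tensor to the subject
# is matrix multiplication, i.e. contraction over the shared V index.
sentence = np.einsum("v,vw->w", sbj, verb)

# The same contraction generalises to higher-order tensors, e.g. a
# transitive verb as an order-3 tensor contracted with subject and object:
obj = np.array([0.4, 0.8, 0.1])
tverb = np.ones((3, 3, 3)) * 0.1           # placeholder order-3 verb tensor
tsentence = np.einsum("i,ijk,k->j", sbj, tverb, obj)
```

The `einsum` subscript notation makes the contraction indices explicit, which is convenient when moving between order-2 and order-3 verb tensors.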
Since the creation and manipulation of tensors of order higher than 2 is difficult, one can work with simplified versions of tensors, faithful to their underlying mathematical basis; these have found intuitive interpretations, e.g.~see \newcite{grefenstette2011experimental}, \newcite{KartSadrQPL}. In such cases, $\star$ becomes a combination of a range of operations such as $\times$, $\otimes$, $\odot$, and $+$. \input{table-comp-methods.tex} \paragraph{Specific models} In the current paper we will experiment with a variety of models. In Table~\ref{tbl:comp-methods}, we present these models in terms of their composition operators, with a reference to the main paper in which each model was introduced. For the simple compositional models the sentence is a string of any number of words; for the grammar-based models, we consider simple transitive sentences ``$\mbox{Sbj Verb Obj}$'' and introduce the following abbreviations for the concrete methods used to build a tensor for the verb: \begin{enumerate} \item $\overline{\text{Verb}}$ is a verb matrix computed using the formula $\sum_i \overrightarrow{\text{Sbj}_i} \otimes \overrightarrow{\text{Obj}_i}$, where $\overrightarrow{\text{Sbj}_i}$ and $\overrightarrow{\text{Obj}_i}$ are the subjects and objects of the verb across the corpus. These models are referred to as \emph{relational} \cite{grefenstette2011experimental}; they are generalisations of the predicate semantics of transitive verbs, from pairs of individuals to pairs of vectors. The models reduce the order-3 tensor of a transitive verb to an order-2 tensor (i.e.~a matrix). \item $\widetilde{\text{Verb}}$ is a verb matrix computed using the formula $\overrightarrow{\text{Verb}} \otimes \overrightarrow{\text{Verb}}$, where $\overrightarrow{\text{Verb}}$ is the distributional vector of the verb. These models are referred to as \emph{Kronecker}, which is the term sometimes used to denote the outer product of tensors \cite{grefenstette2011gems}.
This model also reduces the order-3 tensor of a transitive verb to an order-2 tensor. \item The models of the last five lines of the table use the so-called \emph{Frobenius} operators from categorical compositional distributional semantics \cite{KartSadrCOLING} to expand the relational matrices of verbs from order 2 to order 3. The expansion is obtained by either copying the dimension of the subject into the space provided by the third tensor, hence referred to as \emph{Copy-Sbj}, or copying the dimension of the object into that space, hence referred to as \emph{Copy-Obj}; furthermore, we can take the addition, multiplication, or outer product of these, which are referred to as \emph{Frobenius-Add}, \emph{Frobenius-Mult}, and \emph{Frobenius-Outer} \cite{KartSadrQPL}. \end{enumerate} \section{Semantic word spaces} \label{sec:semantic-spaces} Co-occurrence-based vector space instantiations have received a lot of attention from the scientific community (see \cite{kiela-clark:2014:CVSC,polajnar-clark:2014:EACL} for recent studies). We instantiate two co-occurrence-based vector spaces with different underlying corpora and weighting schemes. \paragraph{GS11} \label{sec:ppmi} Our first word space is based on a typical configuration that has been used extensively in the past for compositional distributional models (see below for details), so it will serve as a useful baseline for the current work. In this vector space, the co-occurrence counts are extracted from the British National Corpus (BNC) \cite{leech1994claws4}. As basis words, we use the most frequent nouns, verbs, adjectives and adverbs (POS tags \texttt{SUBST}, \texttt{VERB}, \texttt{ADJ} and \texttt{ADV} in the BNC XML distribution\footnote{\url{http://www.natcorp.ox.ac.uk/}}). The vector space is lemmatized, that is, it contains only ``canonical'' forms of words. In order to weight the raw co-occurrence counts, we use positive point-wise mutual information (PPMI).
The component value for a target word $t$ and a context word $c$ is given by: \begin{equation*} \operatorname{PPMI}(t, c)= \max\left( 0, \log \frac{p(c|t)}{p(c)} \right) \end{equation*} \noindent where $p(c|t)$ is the probability of word $c$ given $t$ in a symmetric window of length 5 and $p(c)$ is the probability of $c$ overall. Vector spaces based on point-wise mutual information (or variants thereof) have been successfully applied in various distributional and compositional tasks; see e.g. \newcite{grefenstette2011experimental}, \newcite{mitchell2008vector}, \newcite{levy2014linguistic} for details. PPMI has been shown to achieve state-of-the-art results \cite{levy2014linguistic} and is suggested by the review of \newcite{kiela-clark:2014:CVSC}. Our use here of the BNC as a corpus and of a window length of 5 is based on the previous use and good performance of these parameters in a number of compositional experiments \cite{grefenstette2011experimental,grefenstette2011gems,mitchell2008vector,KartSadrCOLING}. \paragraph{KS14} In this variation, we train a vector space from the ukWaC corpus\footnote{\url{http://wacky.sslmit.unibo.it/}} \cite{ukwac}, originally using as a basis the 2,000 content words with the highest frequency (but excluding a list of stop words as well as the 50 most frequent content words, since these exhibit low information content). The vector space is again lemmatized. As context we consider a 5-word window from either side of the target word, while as our weighting scheme we use local mutual information (i.e.~point-wise mutual information multiplied by raw counts). In a further step, the vector space is normalized and projected onto a 300-dimensional space using singular value decomposition (SVD). In general, dimensionality reduction produces more compact word representations that are robust against potential noise in the corpus \cite{Landauer,schutze1997ambiguity}.
SVD has been shown to perform well on a variety of tasks similar to ours \cite{baroni2010nouns,KartSadrQPL}. \paragraph{Neural word embeddings (NWE)} \label{sec:neur-word-embedd} For our neural setting, we used the skip-gram model of \newcite{mikolov2013distributed} trained with negative sampling. The specific implementation that was tested in our experiments was a 300-dimensional vector space learned from the Google News corpus and provided by the \texttt{word2vec}\footnote{\url{https://code.google.com/p/word2vec/}} toolkit. Furthermore, the \texttt{gensim} library \cite{rehurek_lrec} was used for accessing the vectors. In contrast to the previously described co-occurrence vector spaces, this version is \textit{not} lemmatized. The negative sampling method improves the objective function of Equation \ref{eq:objective-func} by introducing negative examples to the training algorithm. Assume that the probability of a specific $(c,t)$ pair of words (where $t$ is a target word and $c$ another word in the same context as $t$), coming from the training data, is denoted as $p(D=1|c,t)$. The objective function is then expressed as follows: \begin{equation} \label{eq:neg1} \prod\limits_{(c,t)\in D}p(D=1|c,t) \end{equation} \noindent That is, the goal is to set the model parameters in a way that maximizes the probability of all observations coming from the training data. Assume now that $D'$ is a set of randomly selected incorrect $(c',t')$ pairs that do not occur in $D$; then Equation \ref{eq:neg1} above can be recast in the following way: \begin{equation} \label{eq:neg2} \prod\limits_{(c,t)\in D}p(D=1|c,t) \prod\limits_{(c',t')\in D'}p(D=0|c',t') \end{equation} In other words, the model tries to distinguish a target word $t$ from random draws that come from a noise distribution. In the implementation we used for our experiments, $c$ is always selected from a 5-word window around $t$.
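In implementations of this model, $p(D=1|c,t)$ is commonly parameterised as a sigmoid of the dot product of the context and target vectors; the following sketch of the resulting per-pair log objective makes only that standard modelling assumption (vector values are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pair_log_objective(t_vec, c_vec, neg_vecs):
    """Log of the negative-sampling objective for one (c, t) pair:
    log sigma(c . t) + sum over noise contexts c' of log sigma(-c' . t).
    Higher values mean the model better separates the observed context
    from the random negative draws."""
    positive = np.log(sigmoid(np.dot(c_vec, t_vec)))
    negative = sum(np.log(sigmoid(-np.dot(n, t_vec))) for n in neg_vecs)
    return positive + negative

# A context aligned with the target scores higher than an orthogonal one:
t = np.array([1.0, 0.0])
noise = [np.array([0.0, 1.0])]
assert pair_log_objective(t, np.array([1.0, 0.0]), noise) > \
       pair_log_objective(t, np.array([0.0, 1.0]), noise)
```
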
More details about the negative sampling approach can be found in \cite{mikolov2013distributed}; the note of \newcite{goldberg2014word2vec} also provides an intuitive explanation of the underlying setting. \section{Experiments} \label{sec:experiments} Our experiments explore the use of the vector spaces above, together with the compositional operators described in Section~\ref{sec:compositional-models}, in a range of tasks all of which require semantic composition: verb sense disambiguation; sentence similarity; paraphrasing; and dialogue act tagging. \subsection{Disambiguation} \label{sec:disamb} We use the transitive verb disambiguation dataset described in \newcite{grefenstette2011experimental}\footnote{This and the sentence similarity dataset are available at \url{http://www.cs.ox.ac.uk/activities/compdistmeaning/}}. This dataset consists of ambiguous transitive verbs together with their arguments, landmark verbs that identify one of the verb senses, and human judgements that specify how similar the disambiguated sense of the verb in the given context is to each of the landmarks. This is similar to the intransitive dataset described in \cite{mitchell2008vector}. Consider the sentence ``system meets specification''; here, \textit{meets} is the ambiguous transitive verb, and \textit{system} and \textit{specification} are its arguments in this context. Possible landmarks for \emph{meet} are \textit{satisfy} and \textit{visit}; for this sentence, the human judgements show that the disambiguated meaning of the verb is more similar to the landmark \textit{satisfy} and less similar to \textit{visit}. The task is to estimate the similarity of the sense of a verb in a context to a given landmark. To get our similarity measures, we compose the verb with its arguments using one of our compositional models; we do the same for the landmark and then compute the cosine similarity of the two vectors.
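As an illustration of this procedure, the sketch below uses hypothetical toy vectors and point-wise multiplication as the compositional operator; any of the compositional models above could be substituted:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical toy vectors; in the experiments these come from the
# GS11, KS14 or NWE spaces.
vec = {
    "system":        np.array([0.8, 0.1, 0.3]),
    "meet":          np.array([0.4, 0.6, 0.2]),
    "specification": np.array([0.7, 0.2, 0.4]),
    "satisfy":       np.array([0.5, 0.5, 0.3]),
    "visit":         np.array([0.1, 0.9, 0.6]),
}

def compose(sbj, verb, obj):
    """Point-wise multiplicative composition of a transitive sentence."""
    return vec[sbj] * vec[verb] * vec[obj]

sentence = compose("system", "meet", "specification")
sim_satisfy = cosine(sentence, compose("system", "satisfy", "specification"))
sim_visit = cosine(sentence, compose("system", "visit", "specification"))
# With these toy values the contextualised verb is closer to "satisfy",
# mirroring the human judgements for this example.
assert sim_satisfy > sim_visit
```
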
We evaluate the performance by averaging the human judgements for the same verb, argument and landmark entries, and calculating Spearman's correlation between the average values and the cosine scores. As a baseline, we compare this with the correlation produced by using only the verb vector, without composing it with its arguments. \input{table-wsd-results.tex} Table~\ref{tab:wsd-results} shows the results of the experiment. NWE \textit{copy-object} composition yields the best correlation with the human judgements, and the top performance across all vector spaces and models, with a Spearman~$\rho$ of 0.456. For the KS14 space, the best result comes from \textit{Frobenius outer} (0.350), while the best operator for the GS11 space is \textit{point-wise multiplication} (0.348). For simple point-wise composition, only multiplicative GS11 and additive NWE improve over their corresponding verb-only baselines (but both perform worse than the KS14 baseline). With tensor-based composition in co-occurrence-based spaces, \textit{copy subject} yields lower results than the corresponding baselines. Other composition methods, except \textit{Kronecker} for KS14, improve over the verb-only baselines. Finally, we should note that, despite the small training corpus, the GS11 vector space performs comparatively well: for instance, the \textit{Kronecker} model improves on the previously reported score of 0.28 \cite{grefenstette2011gems}. \subsection{Sentence similarity} \label{sec:sentence-similarity} In this experiment we use the transitive sentence similarity dataset described in \newcite{KartSadrQPL}. The dataset consists of transitive sentence pairs and a human similarity judgement\footnote{The textual content of this dataset is the same as that of \cite{KartsaklisEMNLP}; the difference is that the dataset of \cite{KartSadrQPL} has updated human judgements, whereas the previous dataset used the original annotations of the intransitive dataset of \cite{lapata2010}.}.
The task is to estimate a similarity measure between two sentences. As in the disambiguation task, we first compose word vectors to obtain sentence vectors, then compute their cosine similarity. We average the human judgements for identical sentence pairs to compute a correlation with the cosine scores. \input{table-sent-sim-results.tex} Table~\ref{tab:sent-sim-results} shows the results. Again, the best performing vector space is KS14, but this time with \textit{addition}: the Spearman~$\rho$ correlation score with averaged human judgements is 0.732. Addition was also the means by which the other vector spaces achieved their top performance: GS11 and NWE scored 0.682 and 0.689 respectively. None of the tensor-based composition models outperformed addition. KS14 performs worse with tensor-based methods here than the other vector spaces do. However, GS11 and NWE, except for \textit{copy subject} in both and \textit{Frobenius multiplication} for NWE, improved over their verb-only baselines. \subsection{Paraphrasing} \label{sec:paraphrasing} In this experiment we evaluate our vector spaces on a mainstream paraphrase detection task. Specifically, we obtain classification results on the Microsoft Research Paraphrase Corpus \cite{dolan2005microsoft} in the following way: we construct vectors for the sentences of each pair; if the cosine similarity between the two sentence vectors exceeds a certain threshold, the pair is classified as a paraphrase, otherwise as not a paraphrase. For this experiment and that of Section~\ref{sec:dialogue-act-tagging} below, we investigate only the addition and point-wise multiplication compositional models, since at their current stage of development tensor-based models can only efficiently handle sentences of fixed structure. Nevertheless, the simple point-wise compositional models still allow for a direct comparison of the vector spaces, which is the main goal of this paper.
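The thresholding scheme just described can be sketched as follows (additive composition shown; the toy vectors and threshold value are illustrative assumptions):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sentence_vector(words, vectors):
    """Additive composition of word vectors into a sentence vector."""
    return np.sum([vectors[w] for w in words], axis=0)

def is_paraphrase(s1, s2, vectors, threshold):
    """Classify a sentence pair as a paraphrase when the cosine similarity
    of the two composed sentence vectors exceeds the threshold."""
    return cosine(sentence_vector(s1, vectors),
                  sentence_vector(s2, vectors)) > threshold

# Toy vectors for a quick demonstration:
vecs = {
    "cats":  np.array([1.0, 0.0]),
    "dogs":  np.array([1.0, 0.1]),
    "chase": np.array([0.0, 1.0]),
    "sleep": np.array([-1.0, 0.2]),
}
assert is_paraphrase(["cats", "chase"], ["dogs", "chase"], vecs, 0.9)
assert not is_paraphrase(["cats", "chase"], ["cats", "sleep"], vecs, 0.9)
```

In the experiments the threshold itself is not fixed a priori but tuned on held-out development data, as described next.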
For each vector space and model, a number of different thresholds were tested on the first 2000 pairs of the training set, which we used as a development set; in each case, the best-performing threshold was selected for a \textit{single} run of our ``classifier'' on the test set (1726 pairs). Additionally, we evaluate the NWE model with a lemmatized version of the corpus, so that the experimental setup is maximally similar for all vector spaces. The results are shown in the first part of Table~\ref{tbl:mspr}. \input{table-par-results.tex} Additive NWE gives the highest performance, with both lemmatized and un-lemmatized versions outperforming the GS11 and KS14 spaces. In the un-lemmatized case, the accuracy of our simple ``classifier'' (0.73) is close to the state-of-the-art range. The state-of-the-art result (0.77 accuracy and 0.84 F-score\footnote{F-scores use the standard definition $F = 2(\mathit{precision} * \mathit{recall}) / (\mathit{precision} + \mathit{recall})$.}) at the time of this writing was obtained using 8 machine translation metrics and three constituent classifiers \cite{madnani2012re}. The multiplicative model gives lower results than the additive model across all vector spaces. The KS14 vector space shows the steadiest performance, with a drop in accuracy of only 0.04 and no drop in F-score, while for the GS11 and NWE spaces both accuracy and F-score dropped by more than 0.20. \subsection{Dialogue act tagging} \label{sec:dialogue-act-tagging} As our last experiment, we evaluate the word spaces on a dialogue act tagging task \cite{Stolcke.etal00} over the Switchboard corpus \cite{godfrey1992switchboard}. Switchboard is a collection of approximately 2500 dialogs over a telephone line by 500 speakers from the U.S. on predefined topics.\footnote{The dataset and a Python interface to it are available at \url{http://compprag.christopherpotts.net/swda.html}} The experiment pipeline follows \cite{milajevs-purver:2014:CVSC}.
The input utterances are preprocessed so that the parts of interrupted utterances are concatenated \cite{webb2005dialogue}. Disfluency markers and commas are removed from the utterance raw texts. For GS11 and KS14 the utterance tokens are POS-tagged and lemmatized; for NWE, we test the vectors in both a lemmatized and an un-lemmatized version of the corpus.\footnote{We use \texttt{WordNetLemmatizer} of the NLTK library \cite{bird2006nltk}.} We split the training and testing utterances as suggested by \newcite{Stolcke.etal00}. Utterance vectors are then obtained as in the previous experiments; they are reduced to 50 dimensions using SVD and a $k$-nearest-neighbour classifier is trained on these reduced utterance vectors (the 5 closest neighbours by Euclidean distance are retrieved to make a classification decision). The results are shown in the second part of Table~\ref{tbl:mspr}. Un-lemmatized NWE \textit{addition} gave the best accuracy (0.63) and F-score (0.60) (averaged over tag classes), i.e.~similar results to \cite{milajevs-purver:2014:CVSC}---although note that the dimensionality of our NWE vectors is 10 times lower than theirs. \textit{Multiplicative} NWE outperformed the corresponding model in \cite{milajevs-purver:2014:CVSC}. In general, addition consistently outperforms multiplication for all the models. Lemmatization dramatically lowers tagging accuracy: the lemmatized GS11, KS14 and NWE models perform much worse than un-lemmatized NWE, suggesting that morphological features are important for this task. \section{Discussion} \label{sec:discussion} Previous comparisons of co-occurrence-based and neural word vector representations vary widely in their conclusions. 
While \newcite{baroni2014don} conclude that ``context-predicting models obtain a thorough and resounding victory against their count-based counterparts'', this seems to contradict, at least at first sight, the more conservative conclusion of \newcite{levy2014linguistic} that ``analogy recovery is not restricted to neural word embeddings [\ldots] a similar amount of relational similarities can be recovered from traditional distributional word representations'' and the findings of \newcite{blacoe2012comparison} that ``shallow approaches are as good as more computationally intensive alternatives'' on phrase similarity and paraphrase detection tasks. It seems clear that neural word embeddings have an advantage when used in tasks for which they have been trained; our main questions here are whether they outperform co-occurrence-based alternatives across the board, and which approach lends itself better to composition using general mathematical operators. To partially answer these questions, we can compare model behaviour against the baselines in \textit{isolation}. For the disambiguation and sentence similarity tasks the baseline is the similarity between verbs only, ignoring the context---see above. For the paraphrase task, we take the global vector-based similarity reported in \cite{mihalcea2006corpus}: 0.65 accuracy and 0.75 F-score. For the dialogue act tagging task the baseline is the accuracy of the bag-of-unigrams model in \cite{milajevs-purver:2014:CVSC}: 0.60. Sections~\ref{sec:disamb} and \ref{sec:sentence-similarity} show that although the best choice of vector representation might vary, for small-scale tasks all methods give fairly competitive results.
The choice of compositional operator seems to be more important and more task-specific: while a tensor-based operation (Frobenius copy-object) performs best for verb disambiguation, the best result for sentence similarity is achieved by a simple additive model, with all other compositional methods behaving worse than the verb-only baseline in the KS14 case. GS11 and NWE, on the other hand, outperform their baselines with a number of compositional methods, although both of them achieve lower performance than KS14 overall. Based on the small-scale experiment results alone, one could conclude that there is little significant difference between the two ways of obtaining vectors. GS11 and NWE show similar behaviour in comparison to their baselines, while it is possible to tune a co-occurrence-based vector space (KS14) and obtain the best result. Large-scale tasks reveal another pattern: the GS11 vector space, which behaves stably on the small scale, lags behind the KS14 and NWE spaces in the paraphrase detection task. In addition, NWE consistently yields the best results. Finally, only the NWE space was able to provide adequate results on the dialogue act tagging task. Table~\ref{tab:summary} summarizes model performance with regard to baselines. \input{table-summary.tex} \section{Conclusion} \label{sec:concl-future-work} In this work we compared the performance of two co-occurrence-based semantic spaces with vectors learned by a neural network in compositional settings. We carried out two small-scale tasks (word sense disambiguation and sentence similarity) and two large-scale tasks (paraphrase detection and dialogue act tagging). On small-scale tasks, where the sentence structures are predefined and relatively constrained, NWE gives better or similar results to the count-based vectors. Tensor-based composition does not always outperform simple compositional operators, but in most cases gives results within the same range.
On large-scale tasks, neural vectors are more successful than the co-occurrence-based alternatives. However, this study does not reveal whether this is because of their neural nature, or just because they are trained on a larger amount of data. The question of whether neural vectors outperform co-occurrence vectors therefore requires further detailed comparison to be entirely resolved; our experiments suggest that this is indeed the case in large-scale tasks, but the difference in size and nature of the original corpora may be a confounding factor. In any case, it is clear that the neural vectors of the \texttt{word2vec} package perform steadily off-the-shelf across a large variety of tasks. The size of the vector space (3 million words) and the available code-base that simplifies access to the vectors make this set a good and safe choice for future experiments. Of course, even better performance can be achieved by training neural language models specifically for a given task (see e.g.~\newcite{KalchbrennerACL2014}). The choice of compositional operator (tensor-based or a simple point-wise operation) depends strongly on the task and dataset: tensor-based composition performed best in the verb disambiguation task, where the verb senses depend strongly on the arguments of the verb. However, it seems to depend less on the nature of the vectors themselves: in the disambiguation task, tensor-based composition proved best for both co-occurrence-based and neural vectors; in the sentence similarity task, where point-wise operators proved best, this was again true across vector spaces. \section*{Acknowledgements} We would like to thank the three anonymous reviewers for their fruitful comments. Support by EPSRC grant EP/F042728/1 is gratefully acknowledged by Milajevs, Kartsaklis and Sadrzadeh.
Purver is partly supported by ConCreTe: the project ConCreTe acknowledges the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET grant number 611733. \bibliographystyle{acl}
\section{Introduction and Summary} \label{s1} In this volume there is an introduction to cosmology and CMB physics by Sourdeep, and one on the inflationary paradigm by Wands. They summarize the synergy between theory and observations that has produced spectacular advances in our understanding of the universe in the last decades. The emergence of a ``concordance model'' is a remarkable success of cosmology and of the theory of General Relativity on which the current paradigm relies. However, the widely accepted Hot Big Bang scenario, regarded as the ``standard model of cosmology'', contains important limitations, already manifest in its name. The model encompasses a phase in the very early universe in which the density of matter and the space-time curvature grow unboundedly, blowing up at the big bang singularity. The big bang is {\em not} a prediction, but the result of applying the theory {\em beyond its domain of validity}. When the energy density and curvature approach the Planck scale, the predictions of General Relativity are unreliable; the {\em quantum} aspects of the gravitational degrees of freedom are expected to dominate in that regime. This chapter provides a possible quantum gravity extension of the well-established cosmological model from the perspective of loop quantum gravity. Loop quantum cosmology (LQC) arises from the application of the principles of loop quantum gravity (LQG) \cite{lqg} to cosmology. The goal is to quantize the {\em sector} of General Relativity containing the symmetries of cosmological space-times, by following the physical ideas and mathematical tools underlying LQG, presented in detail in the chapter by Sahlmann. Restricting attention to cosmology presents several advantages. The existence of underlying symmetries largely simplifies technical issues, and allows one to overcome mathematical difficulties that are hard to handle in more generic situations.
Yet, the structure is rich enough to contain deep conceptual issues in quantum gravity: What happens to space and time when matter density and curvature reach the Planck scale? Does the big bang singularity persist? What is the meaning of time in the Planck era? How do classical General Relativity and a smooth space-time description arise in the low-energy regime? What is the scale at which quantum gravity effects become subdominant? Does quantum gravity have anything to contribute to the origin of cosmic structures and to the inflationary scenario? On the other hand, the astonishing advances in theoretical and observational cosmology in the last years have been able to relate observations to theories of the very early universe. Cosmology thus offers an interesting arena in which quantum gravity can make contact with other theories such as inflation, and probably provides the most promising avenue to confront quantum gravity ideas with observations. But the restriction to cosmological settings also leads to important limitations. In principle, it is not guaranteed that the result of quantizing a symmetry-reduced sector of General Relativity will reproduce the same physics as the restriction of a full quantum gravity theory to symmetric scenarios. Symmetry reduction often entails a drastic simplification, and one may lose important features of the theory by restricting the symmetry prior to quantization. However, it has been extremely useful in several areas of physics, when the complexity of the problem under consideration made it difficult to find solutions without introducing additional inputs. The Oppenheimer-Snyder model of black hole formation, or the Dirac quantization of the hydrogen atom, are examples that were able to encode the key physical ingredients of the problem, in spite of the severe symmetry reduction. Quantum cosmology may well be another example, if it is constructed choosing carefully the key ingredients from full quantum gravity.
It is likely that predictions from quantum cosmology will not agree in every detail with those obtained from full quantum gravity applied to cosmological scenarios, but we expect it to capture the main aspects of the complete theory. As in the previous examples, quantum cosmology can provide valuable information about the correct way to quantize gravity, and be as useful as the hydrogen atom has been for quantum mechanics. This chapter provides a brief and pedagogical summary of the advances in Loop Quantum Cosmology, with some emphasis on recent results. They can be divided into three parts, which are in one-to-one correspondence to the three sections in which the chapter is divided: 1) Quantization of cosmological space-times; 2) Inhomogeneous perturbations in LQC; 3) LQC extension of the inflationary scenario. In the remainder of this introduction we summarize the content of each of these sections and provide a global picture.\\ {\bf 1) Quantization of cosmological space-times.} General Relativity is a totally constrained theory, in the sense that the full Hamiltonian generating dynamics is required to vanish. Something similar happens in classical electromagnetism, where {\em part} of the Hamiltonian, the piece that generates gauge transformations, is a constraint. In General Relativity the constraint turns out to be the {\em full} Hamiltonian, reflecting the background independence of the theory. Dirac provided the conceptual framework to quantize constrained systems. At the quantum level, physical states have to be annihilated by the operator corresponding to the classical Hamiltonian, $\hat{\cal C}\, \Psi=0$, and all the physics has to be extracted from this equation. The quantum state $\Psi$ is the wave function of the physical fields, including the gravitational field itself, and classical quantities such as the metric, energy density, curvature tensor, are represented by quantum operators on the physical Hilbert space ${\cal{H}}_{\rm phy}$ it belongs to. 
The non-trivial mathematical problem is to make sense of and solve the quantum constraint equation, and the underlying cosmological symmetries largely facilitate this task. The next conceptual issue is to obtain the familiar time evolution that we normally use in physics from this time-less or `frozen' formalism. At the quantum level we do not have a classical metric telling us which directions in the manifold are time-like; all we have is a probability distribution $\Psi$ over different metrics. A useful strategy has been to follow a `relational-time' approach, in which one of the physical variables of the problem plays the role of time, and the rest evolve with respect to it. By using a {\em massless scalar field as this internal time}, it is possible to construct the Hilbert space of physical states satisfying the quantum constraints, and a precise mathematical framework has been developed to study the resulting quantum geometry \cite{abl}. It has been shown that all the operators representing physical quantities such as the energy density, space-time curvature, etc., {\em remain bounded on the physical Hilbert space}, even in the deep Planck regime. This is the mathematical sense in which the singularity is resolved in LQC. The physical picture that emerges from the abstract formalism is the following. When the energy density of the universe is comparable to the Planck energy density, the quantum properties of the space-time geometry become important and dominate. A sort of quantum repulsive degeneracy force appears at such extreme densities, which precludes the universe from continuing to contract and forces the quantum space-time to expand again once the maximum energy density has been attained, replacing the big bang singularity by a {\em quantum bounce}.
This maximum energy density is proportional to $\hbar^{-1}$, in analogy with the finite ground-state energy of the hydrogen atom, which prevents the collapse of the electron onto the proton as a consequence of the Heisenberg uncertainty principle. When the energy density and curvature become smaller than approximately one percent of the Planck scale, the quantum effects of gravity become rapidly negligible and classical General Relativity provides an excellent approximation. The resulting quantum dynamics has been analysed in detail and has provided important insights into the behaviour of physics in the Planck regime. The ability to incorporate non-perturbative quantum corrections that completely dominate the evolution in the Planck regime and resolve the big bang singularity and, at the same time, disappear in the low energy regime to recover agreement with the classical description, is a highly non-trivial result of LQC. Remarkably, some global aspects of the evolution of the quantum geometry can be encoded in simple {\em effective equations}. Those equations provide a smooth space-time metric that approximates the full quantum evolution of the quantum space-time. They have a form similar to the equations arising in General Relativity, but include new terms, proportional to $\hbar$, that make the effective trajectory depart from the classical one around the Planck era. The effective dynamics provides an excellent approximation to the quantum evolution, even at Planckian densities, provided the quantum state is chosen to be highly peaked on a classical trajectory in the low energy regime where General Relativity provides a good approximation.\\ {\bf 2) Inhomogeneous perturbations in quantum cosmology.} As emphasized in the chapters by Souradeep and Wands, the theory of inhomogeneous perturbations (of matter and gravitational degrees of freedom) propagating in classical cosmological space-times has been a key mathematical tool in modern cosmological research.
One of the deepest insights in cosmology is the idea that the cosmic structures (galaxy clusters, super-clusters, etc.) that we see today originated in the very early universe by a process of {\em amplification of quantum fluctuations by the cosmological expansion}, as explained in the context of cosmic inflation in the chapter by Wands. In the inflationary scenario, this occurs when the energy density in the universe was close to the GUT scale, $(10^{16}\, {\rm GeV})^4$, around 12 orders of magnitude below the Planck energy density. Quantum gravity effects of the background space-time metric are subdominant at those scales, and the theory of quantized fields propagating in a {\em classical} background appears to be the appropriate mathematical framework to work out physical predictions. However, earlier in the evolution of the universe, when the curvature and energy density are close to the Planck scale, quantum gravity effects are expected to be important, and they should not be ignored. To have a complete picture of the evolution of cosmic inhomogeneities that encompasses the Planck regime we need to learn how quantum fields propagate on a {\em quantum cosmological space-time} \cite{akl,aan2}. The goal of the second section of this chapter is to review the construction of such a theory. The detailed description of quantum cosmologies provided by LQC is the suitable arena. The construction of QFT on quantum cosmologies follows closely the guiding principle behind LQC: first carry out a truncation of the classical theory adapted to the given physical problem, and then quantize it by using LQG techniques. The sector of the classical theory of interest is {\em extended} in this part to a cosmological background {\em plus first order inhomogeneous perturbations on it}.
The resulting framework originates from first principles, under the assumption that inhomogeneities behave as {\em test fields} on the quantum geometry, and it should provide a bridge between quantum gravity and QFT on curved space-times. Therefore, it is suitable for facing important conceptual questions such as: What are the concrete approximations under which the familiar quantum field theory (QFT) in classical space-times arises from this more complete description? What are the precise aspects of the quantum geometry that are `seen' by the quantum fields propagating on it? Does the resulting QFT make sense for trans-Planckian modes? These issues will be discussed in some detail in Section~\ref{sec:3}. In Section~\ref{sec:4}, this framework is applied to the study of gauge invariant cosmic perturbations and phenomenological consequences are worked out. \\ {\bf 3) LQC extension of the inflationary scenario.} The inflationary scenario occupies the leading position in accounting for the origin of the cosmic inhomogeneities observed in the Cosmic Microwave Background (CMB) and large scale structure. This success is mainly rooted in the economy of assumptions, the elegant mechanism that generates the {\em cosmic inhomogeneities from vacuum quantum fluctuations}, a subtle interplay between quantum mechanics and classical gravitation, and particularly the non-trivial agreement with observations. Inflation is, however, an effective theory, and it is expected that a more fundamental theory will complete it. Examples of open questions that the more complete theory should answer are: What is the nature of the scalar inflaton field? Is there a single field or several, as in multi-field models? What is the specific shape of the inflaton potential? These questions originate in particle physics, and unfortunately at this stage LQC does not have much to contribute.
There are, in addition, important issues related to gravitation: What is the evolution of the space-time before inflation? In General Relativity the big bang singularity is unavoidable in inflationary scenarios \cite{bgv}. Is there a quantum gravity scenario in which the singularity is resolved {\em and} in which the evolution generically finds an inflationary phase compatible with observations, i.e. {\em without a fine-tuning of its parameters}? Such a scenario would allow one to extend the inflationary space-times all the way back to the Planck era. Moreover, one could then use the quantum theory of cosmological perturbations on quantum space-times described in Section~\ref{sec:3} to extend the analysis of cosmic inhomogeneities to include Planck scale physics. Section~\ref{sec:4} will review the arguments showing that such an extension is possible in LQC, where one can construct a {\em conceptual} completion of the inflationary theory from the quantum gravity point of view, in which Planck scale physics can be included in the study of cosmological perturbations. The importance of this extension goes, however, beyond the conceptual domain and may open a window for phenomenological consequences. \\ To summarize, this chapter will review recent advances in the completion of the quantization program underlying LQG when restricted to the cosmological sector. We shall explore how the singularity of the homogeneous background is avoided, and how the abstract theoretical framework can descend to make contact with phenomenology. Although many open issues still remain, at the present time there is a solid body of knowledge, based on a rigorous mathematical framework. This body of knowledge, combined with analytical and numerical techniques, provides an avenue from the big bang singularity resolution to concrete observations of the CMB and galaxy distributions.
Due to space restrictions, there are some topics that we shall not cover in this chapter, such as the path integral formulation and its relation with spin foams \cite{ach}, spin foam cosmology \cite{vidotto}, the Gowdy models \cite{hybrid1,hybrid2,hybrid3,hybrid4,hybrid5}, or numerical issues \cite{brizuela}. Nor do we provide a review of all the existing ideas to study LQC effects on cosmic perturbations. See \cite{pert_tensor1, ns_inflation, barrau1, barrau2, barrau3, barrau4, bojowald&calcagni, barrau5, madrid, wilson-ewin} for different approaches to that problem. Further information can be found in the reviews \cite{asrev}, \cite{lqcreview}, \cite{singh-numerical} and \cite{calcagni}. Our convention for the metric signature is $-+++$; we set $c=1$ but keep $G$ and $\hbar$ explicit in our expressions, to emphasize gravitational and quantum effects. When numerical values are shown, we use Planck units. \section{Quantization of cosmological backgrounds} \label{sec:2} In this section we shall consider the quantum theory of the homogeneous background within the context of Loop Quantum Cosmology. First we shall discuss what it means for a cosmological model to be quantized or, to use the standard nomenclature, to define a {\em quantum cosmology}. Just as with the quantization of any mechanical system, such as the hydrogen atom, the first step is to cast the model to be quantized in a Hamiltonian language. That is, one has to identify configuration variables $q^i$ and their corresponding momenta $p_j$, with the property that the Poisson bracket is $\{q^i,p_j\}=\delta^i_j$. The next step in the quantization process is to find a Hilbert space ${\cal H}$ and operators $\hat{q}^i$ and $\hat{p}_j$ satisfying $[ \hat{q}^i, \hat{p}_j ]=i\hbar\, \delta^i_j$.
Then one has to define an operator $\hat{H}$ corresponding to the Hamiltonian (and to other physically relevant observables), in order to define dynamics through the Schr\"odinger equation: $i\hbar\,\partial_t \Psi= \hat{H}\Psi$. When the classical system under consideration is a {\em totally constrained system}, both the classical description and the corresponding quantization are more subtle. Here the dynamical variables are subject to a constraint ${\cal C}(q,p)=0$. Furthermore, there is no Hamiltonian defining dynamics, and the canonical transformations generated by the constraint ${\cal C}$ are interpreted as {\em gauge}. That is, points on the phase space connected by a canonical transformation generated by the constraint are physically equivalent. Thus, the curve on phase space made out of all the physically equivalent points represents a {\em gauge orbit} and can be identified with a point on the true, {\em physical} phase space. Observables will be those functions $f(q,p)$ that are constant along the gauge orbits (i.e. satisfying $\{f,{\cal C}\}=0$). Since there is no true dynamics, the system is said to possess a {\em frozen dynamics}. A natural question is whether one can extract some `dynamics' from the frozen formalism. In some cases, one can use one of the variables (or an appropriately selected function) as an internal time $T(q,p)$, with respect to which the gauge orbit can be described in terms of a relational dynamics (that is, where the `dynamics' is described by correlations between the variable $T$ and the rest of the variables). Let us now review the quantization process when we have a totally constrained system. The first step is to define a {\em kinematical Hilbert space} ${\cal H}_{\mathrm{kin}}$. This space serves as an arena for the implementation of the constraint, which is now required to be represented as a self-adjoint operator $\hat{\cal C}$ on ${\cal H}_{\mathrm{kin}}$.
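As a simple illustration of this scheme (a textbook toy model of our own, not part of the cosmological construction), one can deparametrize a familiar system and watch the constraint reproduce ordinary quantum dynamics:

```latex
% Toy totally constrained system: a parametrized nonrelativistic particle
% with phase space $(t,p_t;x,p_x)$ and the single constraint
\begin{equation}
  {\cal C} \,=\, p_t + \frac{p_x^2}{2m} \,\approx\, 0\, .
\end{equation}
% On wavefunctions $\Psi(x,t)$, with $\hat{p}_t=-i\hbar\,\partial_t$ and
% $\hat{p}_x=-i\hbar\,\partial_x$, the condition $\hat{\cal C}\,\Psi=0$ reads
\begin{equation}
  i\hbar\,\partial_t \Psi \,=\, -\frac{\hbar^2}{2m}\,\partial_x^2\, \Psi\, ,
\end{equation}
% i.e. the Schr\"odinger equation: the variable $t$, a priori on the same
% footing as $x$, emerges as a relational time. The massless scalar $\phi$
% will play the analogous role in quantum cosmology.
```

The frozen formalism thus contains the usual time evolution, read off relationally from correlations between $t$ and $x$.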
Not all states in the kinematical Hilbert space are regarded as physical. The condition that selects the physical states was put forward by Dirac and has the form, \nopagebreak[3]\begin{equation} \hat{\cal C}\cdot\Psi_{\mathrm{phy}}=0\, .\label{dirac-cond} \end{equation} Once one has found the physical states $\Psi_{\mathrm{phy}}$ (which might or might not belong to ${\cal H}_{\mathrm{kin}}$), one needs to specify an inner product $\langle\cdot|\cdot\rangle_{\mathrm{phy}}$ in order to construct ${\cal H}_{\mathrm{phy}}$, the {\em physical} Hilbert space. Physical observables will be operators $\hat{F}$ that leave the space of physical states invariant. This translates into the condition $[\hat{F},\hat{\cal C}]=0$. In some cases, when there is an internal time variable $T$, one can recast the Dirac condition (\ref{dirac-cond}) as an `evolution' equation where $T$ plays the role of time, as in the Schr\"odinger equation. One interesting feature of the simplest cosmological models is that they are totally constrained systems, so the general framework we have outlined is applicable. Moreover, one can complete the quantization program and obtain a complete physical description where a massless scalar field $\phi$ plays the role of internal relational time. One can then pose physical questions pertaining to observables of cosmological interest, such as the Hubble parameter and curvature scalars. Interestingly, for the simplest models, one can indeed find {\em two} different, inequivalent quantizations. The first one corresponds to the so-called Wheeler-De Witt (WDW) quantization that was put forward by De Witt and Misner in the 60's. The second corresponds precisely to the one we shall consider here in detail, known as loop quantum cosmology. As we shall describe in more detail later, the basic difference between these two programs lies in the choice of kinematical Hilbert space ${\cal H}_{\mathrm{kin}}$.
The choice made by De Witt and others was, in a sense, the most natural one, resembling the Schr\"odinger quantum mechanics that has been so useful to describe many physical systems. On the other hand, the choice one makes in LQC is somewhat exotic from the perspective of standard quantum mechanics, but is selected when the underlying symmetries pertinent to the gravitational field are seriously taken into account. The second and physically most important difference between these two representations is that their predictions regarding the fate of the classical singularity are radically different. While the WDW theory predicts that the singularity remains, as characterized by the behavior of the expectation values of physically relevant operators such as the energy density, in the case of LQC the singularity is generically avoided. Instead of a big bang (or big crunch) one has a bounce connecting a contracting branch with an expanding one; the energy density and curvature scalars are bounded from above, so that physics is well defined throughout the intrinsic dynamical evolution of the quantum state describing the universe. Let us now briefly describe the structure of the remainder of this section. In the first part, we study in detail the $k$=0 FLRW model with vanishing cosmological constant, and discuss some of its main features. In the second part we discuss other models. The first is the closed $k$=1 model, also without a cosmological constant. Next, we briefly discuss $k$=0 FLRW models with a cosmological constant and some anisotropic models. In the third part, we introduce the so-called effective equations. We give a brief introduction to the subject and discuss in detail the case of the $k$=0 FLRW model. Next we consider the $k$=1 case, followed by a discussion of anisotropic effective space-times, including the Bianchi I, II and IX models.
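As a foretaste of the effective equations introduced in the third part, the following numerical sketch (our own illustration, in Planck units) checks that the bouncing solution of the standard $k$=0 effective Friedmann equation with a massless scalar field, $H^2=\frac{8\pi G}{3}\rho\,(1-\rho/\rho_{\rm max})$, keeps the energy density bounded by $\rho_{\rm max}\approx 0.41\,\rho_{\rm Pl}$ and reaches it precisely at the bounce:

```python
import numpy as np

# Effective k=0 LQC dynamics with a massless scalar (Planck units, G = 1).
# Assumed inputs: the effective Friedmann equation
#   H^2 = (8 pi G / 3) rho (1 - rho/rho_max),   rho = p_phi^2 / (2 V^2),
# whose bouncing solution is V(t) = V_b sqrt(1 + 24 pi G rho_max t^2).
G = 1.0
rho_max = 0.41                        # critical density, ~0.41 in Planck units
p_phi = 1000.0                        # conserved momentum of the scalar field
V_b = p_phi / np.sqrt(2.0 * rho_max)  # bounce volume: rho(V_b) = rho_max

def V(t):
    return V_b * np.sqrt(1.0 + 24.0 * np.pi * G * rho_max * t**2)

def rho(t):
    return p_phi**2 / (2.0 * V(t)**2)

t = np.linspace(-5.0, 5.0, 200001)
H = np.gradient(V(t), t[1] - t[0]) / (3.0 * V(t))      # H = Vdot / (3V)
lhs = H**2
rhs = (8.0 * np.pi * G / 3.0) * rho(t) * (1.0 - rho(t) / rho_max)

print("max |lhs - rhs| :", np.max(np.abs(lhs - rhs)))  # ~0: equation holds
print("rho at bounce   :", rho(0.0), " (= rho_max)")   # density is bounded
```

At late times the solution approaches the classical stiff-fluid behaviour $V\propto t$, while near $t=0$ the contracting branch is smoothly connected to an expanding one; there is no singularity at any finite time.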
\subsection{$k=0$ FLRW, singularity resolution} \label{sec:2.a} The simplest model that one can consider is a $k$=0 homogeneous and isotropic FLRW cosmological model foliated by 3-manifolds $\Sigma$ that are topologically ${\mathbb{R}}^3$. In order to find a Hamiltonian description for the model, we have to start with an action principle. Due to homogeneity, the action is not well defined unless one introduces and fixes a fiducial cell ${\cal V}$. This will play the role of a co-moving volume. We can introduce a flat fiducial metric $\mathring{q}_{ab}$ on $\mathbb{R}^3$ with respect to which the coordinate volume of ${\cal V}$ is $V_o = \int_{\cal V}\, \sqrt{\mathring{q}}\, {\rm d}^3\! x$. Without loss of generality, in what follows we shall set $V_o=1$. The flat FLRW spacetime is described by the metric \nopagebreak[3]\begin{equation} {\rm d} s^2 = - N^2 {\rm d} t^2 + a(t)^2 {\rm d} {\bf x}^2 \end{equation} where $N$ is the lapse function, $\mathring{q} \leftrightarrow {\rm d} {\bf x}^2$ is the flat fiducial metric, and $a$ is the {\em scale factor} of the universe. Now, the action principle is \nopagebreak[3]\begin{equation} \label{action} S = \frac{1}{16 \pi G} \, \int {\rm d} t\,\int_{\cal V} {\rm d}^3\! x \sqrt{|g|}\, R = \frac{1}{16 \pi G} \, \int {\rm d} t\, N\, a^3\, R ~, \end{equation} with $R$ the scalar curvature of the spacetime. The gravitational part of the phase space consists of $a$ and its conjugate momentum, which is found to be: $$P_a = - \frac{3}{4 \pi G N} \, a \,\dot a\, .$$ In this simplest model, the matter we shall consider is a homogeneous massless scalar field $\phi$.
The action for such a field is: $$ S_{\rm matt}=\frac{1}{2}\int{\rm d} t\,\frac{a^3\dot{\phi}^2}{N}\, .$$ From this, the momentum $p_{(\phi)}$ associated with the scalar field is $p_{(\phi)}= \frac{\dot{\phi}\,a^3}{N}$, and the Hamiltonian constraint that defines the `dynamics' is then, \nopagebreak[3]\begin{equation} \label{WDW-HC} {\cal C}_{\rm tot}= \frac{2\pi G}{3}\frac{P_a^2}{a} - \frac{1}{2}\frac{p_{(\phi)}^2}{a^3}\approx 0\, . \end{equation} To summarize, the phase space is four dimensional with coordinates $(a,P_a;\phi,p_{(\phi)})$, satisfying $\{a,P_a\}=1$ and $\{\phi,p_{(\phi)}\}=1$. In the standard Wheeler-De Witt approach, the next step is to take the kinematical Hilbert space to consist of `wavefunctions' $\Psi_{\mathrm{wdw}}=\Psi(a,\phi)$ of the `configuration' variables $(a,\phi)$. In this case, the operators are represented in the usual fashion, as $\hat{a}\cdot\Psi(a,\phi)= a\,\Psi(a,\phi)$ and $\hat{P}_a\cdot\Psi(a,\phi)=-i\hbar\,\partial_a\Psi(a,\phi)$, and similarly for the other variables. Then, one promotes the constraint (\ref{WDW-HC}) to an operator, and finds solutions to the Dirac condition (\ref{dirac-cond}). This has been described in detail in \cite{aps3,acs}. In order to define the corresponding phase space in loop quantum cosmology, we need to follow a few more steps. The first is to introduce a new set of variables for the gravitational degrees of freedom. As explained in Sahlmann's contribution to this volume, loop quantum gravity, and consequently LQC, is based on a connection $A$ and its corresponding momentum $E$, a generalization of the magnetic potential and electric field of electromagnetism. Let us then write the phase space in terms of these so-called Ashtekar-Barbero variables. First, introduce a fiducial triad $\mathring{e}^a_i$ and co-triad $\mathring{\omega}^i_a$ compatible with $\mathring{q}_{ab}$.
The conjugate phase space variables are the $SU(2)$ connection $A^i_a = \Gamma^i_a + \gamma K^i_a$ and the densitized triad $E^a_i$ satisfying \nopagebreak[3]\begin{equation} \{ A^i_a(x),E^b_j(y)\} = 8 \pi G\, \gamma\, \delta^b_a \delta^i_j \delta^3(x,y)\, . \end{equation} Here $\Gamma^i_a$ is the spin connection measuring the intrinsic curvature (which vanishes in the $k=0$ model), $\gamma$ is the Barbero-Immirzi parameter and $K^i_a$ is the extrinsic curvature 1-form related to the extrinsic curvature $K_{ab}$ as $K^i_a = e^{b i} K_{ab}$, with $e^a_i$ the un-densitized triad. Due to the underlying symmetries of the homogeneous isotropic spacetimes we are considering, these variables can be written as \cite{abl} \nopagebreak[3]\begin{equation}\label{AE_defs} A^i_a \, = \, c \, \mathring{\omega}^i_a \quad ; \quad E^a_i \, = \, p \, \sqrt{\mathring{q}} \, \mathring{e}^a_i \, . \end{equation} Thus, the dynamical variables in the isotropic cosmological regime are $p$ and $c$. The relationship between the `triad' $p$ and the scale factor is, \nopagebreak[3]\begin{equation} \label{pa2} |p| = a^2 \, . \end{equation} The connection component is related to the rate of change of the scale factor as \nopagebreak[3]\begin{equation} \label{cdota} c = \gamma \, \frac{\dot a}{N}\, , \end{equation} a relation that holds only on the physical solutions of General Relativity (GR). The gravitational part of the phase space is characterized by the conjugate variables $c$ and $p$ satisfying \nopagebreak[3]\begin{equation} \{c,p\} = \frac{8 \pi G \gamma}{3}\, , \end{equation} and the complete phase space has coordinates $(c,p;\phi,p_{(\phi)})$. The dynamics thus found in the Hamiltonian language is completely equivalent to the standard description based on Einstein's equations.
To see that, one can find the Hubble parameter $H=\dot{a}/{a}=\dot{p}/(2p)$ by computing $\dot{p}=\{p,{\cal C}\}$, where ${\cal C}$ is now written in terms of the variables $(c,p;\phi,p_{(\phi)})$ (see (\ref{grav-const-cp}) below). From there one can write the standard Friedmann equation: $H^2=\frac{8\pi G}{3}\, \rho$, with $\rho=p_{(\phi)}^2/(2V^2)$ and $V=a^3$ the physical volume of the cell. Let us now consider the issue of quantization. As previously discussed, the choice of kinematical Hilbert space in LQC is different from the WDW case. That is, we do not expect to represent $\hat{c}$ and $\hat{p}$ as multiplication and derivation, for example. The idea instead is to construct a quantum theory that is closest to the quantization used in loop quantum gravity, as discussed for instance in \cite{lqg}. This means in particular a different choice of kinematical Hilbert space. Recently this {\it polymeric} quantization for cosmological models has been shown to be unique when invariance under diffeomorphisms is imposed \cite{ach4} (in complete analogy with the corresponding results in full LQG \cite{lost,cf}). The new strategy is the following. Instead of re-writing the Hamiltonian constraint (\ref{WDW-HC}) in terms of the $(c,p)$ variables, one starts with the full expression of the Hamiltonian constraint, in terms of the variables $A$ and $E$. Then, one uses the simplification given by Eq.~(\ref{AE_defs}). As it turns out, the choice of the polymeric Hilbert space as the kinematical arena for the implementation of the constraint (following the LQG route to quantization) has the important feature that it does {\em not} admit the $\hat{c}$ operator. That is, only exponential functions of the gravitational connection $c$, such as \nopagebreak[3]\begin{equation} h_k^{(\lambda_c)} = \cos (\lambda_c \, c/2)\, \mathbb{I} + 2 \, \sin (\lambda_c \, c/2)\, \tau_k\, , \end{equation} become well defined. These objects have the geometrical interpretation of being `holonomies', or parallel transports of the connection $A$.
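To make the structure of these holonomies concrete, here is a quick numerical check (our own illustration; we take $\tau_k=-\frac{i}{2}\sigma_k$, the standard basis of the $su(2)$ algebra) that $h_k^{(\lambda_c)}$ is an $SU(2)$ matrix for every value of $c$, with matrix elements that are trigonometric, hence almost periodic, functions of $c$:

```python
import numpy as np

# Holonomy h_k = cos(theta) I + 2 sin(theta) tau_k, with theta = lambda_c c / 2
# and tau_k = -i sigma_k / 2 (standard su(2) generators; our convention).
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
tau = [-0.5j * s for s in sigma]
I2 = np.eye(2, dtype=complex)

def holonomy(theta, k):
    """h_k^{(lambda_c)} for theta = lambda_c * c / 2."""
    return np.cos(theta) * I2 + 2.0 * np.sin(theta) * tau[k]

for k in range(3):
    for theta in np.linspace(0.0, 4.0 * np.pi, 25):
        h = holonomy(theta, k)
        assert np.allclose(h @ h.conj().T, I2)       # unitary
        assert np.isclose(np.linalg.det(h), 1.0)     # determinant one: SU(2)

# The trace, Tr h = 2 cos(lambda_c c / 2), is an almost periodic function of c.
print("all sampled holonomies lie in SU(2)")
```

The key point for what follows is that only these bounded, almost periodic combinations of $c$ survive in the quantum theory, not $c$ itself.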
These functions generate an algebra of so-called almost periodic functions, whose elements are of the form $\exp(i \lambda_c \, c/2)$. The resulting kinematical Hilbert space is then $L^2({\mathbb R}_{\mathrm{Bohr}},{\rm d} \mu_{\mathrm{Bohr}})$, the space of square integrable functions on the Bohr compactification of the real line. Despite the name of this space, its nature is straightforward to understand. For instance, the eigenstates of $\hat p$, labelled by $|\mu\rangle$, satisfy $\langle \mu_1|\mu_2 \rangle = \delta_{\mu_1,\mu_2}$. This is to be contrasted with the usual Schr\"odinger representation where, instead of the Kronecker delta, one has the Dirac delta. In particular, these eigenstates are {\it normalized} and constitute a basis for the kinematical Hilbert space ${\cal H}_{\mathrm{poly}}$. This constitutes the main difference from the standard Schr\"odinger representation, where the eigenstates of momentum, $\hat{p}\,|\mu\rangle = \mu\,|\mu\rangle$, are {\it not} normalized and satisfy $\langle\nu|\mu\rangle = \delta(\mu-\nu)$. Note also that these plane-wave states are {\it not} a basis for the $L^2(\mathbb{R},{\rm d} x)$ Hilbert space. There exists an important result in mathematical physics stating that, for a finite dimensional phase space, the Schr\"odinger Hilbert space is the only representation of the canonical commutation relations satisfying certain regularity conditions. This result goes under the name of the Stone-Von Neumann uniqueness theorem \cite{afw}. Thus, one could have imagined that, since the system has a finite number of degrees of freedom, both the WDW and the LQC representations should be equivalent. However, that expectation is not realized. The {\it polymeric} representation used in LQC and the standard one are unitarily inequivalent.
This is due to a crucial property of the LQC operators, implying that polymer quantum mechanics does not satisfy some of the regularity conditions that go into the hypothesis of the Stone-Von Neumann theorem. To explore those properties further, let us consider the action of the two fundamental operators on the eigenstates $|\mu\rangle$, \nopagebreak[3]\begin{equation} \label{p_act} \hat p\,| \mu \rangle = \frac{8 \pi \gamma \ell_{\rm Pl}^2}{6} \mu\, |\mu \rangle \quad ; \quad {\widehat{\exp(i \lambda_c \, c/2)}}\, |\mu\rangle = |\mu + \lambda_c \rangle\, . \end{equation} Note that the `displacement' operator ${\widehat{\exp(i \lambda_c \, c/2)}}$ is not continuous as $\lambda_c\to 0$, since the states $|\mu\rangle$ and $|\mu+\lambda_c\rangle$ are always orthogonal to each other, for all values of $\lambda_c > 0$. Also note that a basis of the polymer Hilbert space is uncountable, as the label $\mu$ for the eigenstates can take any value on the real line. In order to obtain the quantum constraint, the key step is to rewrite the classical gravitational constraint with field strength $F_{ab}^k$ as, \nopagebreak[3]\begin{equation} \label{eq:cgrav} C_{\mathrm{grav}} = - \gamma^{-2} \int_{\cal V} {\rm d}^3 x\, \epsilon_{ijk} \,\frac{E^{ai}E^{bj}}{\sqrt{|\det E|}}\, F_{ab}^k ~ \end{equation} Further, one writes the field strength $F_{ab}^k$ in terms of holonomies and triads and then quantizes (where we have chosen $N=1$). The matter part of the constraint is quantized in the regular way, using the Schr\"odinger representation. A further simplification is to choose $N=a^3=V$ from the very beginning. If we rewrite the line element with this choice we have ${\rm d} s^2=-a^6{\rm d}\tau^2 + a^2 {\rm d} {\bf x}^2$, for which the classical constraint reads, \nopagebreak[3]\begin{equation} p_{(\phi)}^2 - \frac{3}{4\pi G\gamma^2}\, p^2 c^2 =0\, .
\label{grav-const-cp} \end{equation} With this choice, the gravitational part of the constraint takes the form, \nopagebreak[3]\begin{equation} \label{eq:cgrav2} {\cal C}_{\mathrm{grav}} = - \gamma^{-2} {\epsilon^{ij}}_{k} \,\mathring{e}^{a}_{i}\mathring{e}^{b}_{j}\, F_{ab}^k ~ \end{equation} The field strength can be classically written in terms of a trace of holonomies around a square loop $\Box_{ij}$, considered over a face of the elementary cell, with its area shrinking to zero: \nopagebreak[3]\begin{equation} \label{F} F_{ab}^k\, = \, -2\,\lim_{Ar_\Box \rightarrow 0} \,\, {\rm Tr\,}\, \left(\frac{h^{(\lambda_c)}_{\Box_{ij}}-\mathbb{I} }{\lambda_c^2} \right) \,\, \tau^k\, \mathring{\omega}^i_a\,\, \mathring{\omega}^j_b\, = \,\lim_{\lambda_c \rightarrow 0} {\epsilon^k}_{ij}\, \mathring{\omega}^i_a\,\, \mathring{\omega}^j_b\,\left( \frac{\sin^2(\lambda_c c)}{\lambda_c^2}\right) \end{equation} with \nopagebreak[3]\begin{equation} h^{(\lambda_c)}_{\Box_{ij}}=h_i^{(\lambda_c)} h_j^{(\lambda_c)} (h_i^{(\lambda_c)})^{-1} (h_j^{(\lambda_c)})^{-1}\, . \end{equation} Since the underlying geometry in the quantum theory resulting from LQG is discrete, the loop $\Box_{ij}$ can be shrunk at most to the area given by the minimum eigenvalue of the area operator in LQG: $\Delta = \tilde\kappa\, \ell_{\rm Pl}^2$, with $\tilde\kappa$ of order one. Note that it has been standard in the LQC literature to choose $\tilde\kappa= 2 \sqrt{3} \pi \gamma$ \cite{abl}, but it can also be taken as a parameter to be determined \cite{acs}. The area of the loop with respect to the physical metric is $\lambda_c^2 |p|$. Requiring this physical area of the loop $\Box_{ij}$ to equal the quantum area gap given by LQG, we are led to set $\lambda_c = \sqrt{\Delta/|p|}$. Since $\lambda_c$ is now a function of the triad, the action of $\exp(i \lambda_c(p) c)$ becomes complicated on states in the triad ($\mu$) basis.
However, its action in the volume ($\nu$) basis is very simple: it drags the state by a unit affine parameter. It is then convenient to introduce the variable ${\rm b} := \frac{c}{|p|^{1/2}}$, such that $\lambda_c c = \lambda_{\rm b} {\rm b}$, where $\lambda_{\rm b} := \sqrt{\Delta}$ is the new affine parameter. Note that ${\rm b}$ is the variable conjugate to $\nu$, satisfying $\hbar \{{\rm b},\nu\} = 2$, where $\nu$ labels the eigenstates of the volume operator \nopagebreak[3]\begin{equation} \hat V \, |\nu\rangle = 2 \pi \ell_{\rm Pl}^2 \gamma |\nu| \, |\nu\rangle ~. \end{equation} The action of the exponential operator then becomes very simple: \nopagebreak[3]\begin{equation} \widehat{\exp(i \lambda_c c/2)} \, |\nu\rangle ~ = ~\widehat{\exp(i \lambda_{\rm b} {\rm b}/2)} \, |\nu\rangle ~ = ~ |\nu + \lambda_{\rm b}\rangle ~. \end{equation} In what follows we shall consider $\lambda_{\rm b}$ to be a constant, and shall denote it simply by $\lambda$. Further, all of the identities used to write the classical constraint in terms of holonomies remain unaffected, and the quantum constraint operator on wave functions $\tilde{\Psi}(\nu,\phi)$ of $\nu$ and $\phi$ is obtained: \nopagebreak[3]\begin{equation} \label{hc4} \partial_\phi^2\, \tilde{\Psi}(\nu,\phi) = 3\pi G\, |\nu|\, \frac{\sin\lambda{\rm b}}{\lambda}\, |\nu|\, \frac{\sin\lambda{\rm b}}{\lambda}\, \tilde{\Psi}(\nu,\phi)\, . \end{equation} Writing out the explicit action of the operator $\sin \lambda{\rm b}$, (\ref{hc4}) simplifies to: \nopagebreak[3]\begin{eqnarray} \label{hc5} \partial_\phi^2 \,\tilde{\Psi} (\nu, \phi) &=& 3\pi G\, \nu\, \frac{\sin\lambda{\rm b}}{\lambda}\, \nu\, \frac{\sin\lambda{\rm b}}{\lambda}\, \tilde{\Psi}(\nu,\phi) \nonumber\\ &=&\frac{3\pi G}{4\lambda^2}\, \nu \left[\, (\nu+2\lambda) \tilde\Psi(\nu+4\lambda) - 2\nu \tilde\Psi(\nu) + (\nu -2\lambda) \tilde\Psi(\nu-4\lambda)\, \right]\nonumber\\ &=:& \Theta_{(\nu)}\, \tilde\Psi(\nu,\phi)\, .
\label{Quant-Const} \end{eqnarray} The geometrical part, $\Theta_{(\nu)}$, of the constraint is a difference operator in steps of $4\lambda$. Schematically, \nopagebreak[3]\begin{equation} C^+(\nu)\, \Psi(\nu + 4 \lambda) + C^0(\nu)\, \Psi(\nu) + C^-(\nu)\, \Psi(\nu - 4 \lambda) = \hat C_{\mathrm{matt}} \, \Psi(\nu)\, , \end{equation} where $C^{\pm}$ and $C^0$ are functions of $|\nu|$ \cite{aps2}. Note that the equivalent of the Wheeler-De Witt equation is now a {\em difference} equation in the geometrical variable, instead of a differential equation. Physical states then correspond to solutions of the quantum constraint (\ref{Quant-Const}); one further restricts to the positive frequency part, satisfying the `Schr\"odinger equation', \nopagebreak[3]\begin{equation} -i\hbar \,\partial_\phi\, \Psi(\nu,\phi)=\sqrt{\Theta}\,\Psi(\nu,\phi)\, .\label{schr-eq} \end{equation} Furthermore, they should be symmetric under $\nu\to -\nu$ and have finite norm under the inner product, \nopagebreak[3]\begin{equation} \langle\Psi_1|\Psi_2\rangle = \sum_\nu\, \overline{\Psi}_1(\nu,\phi_o)\, |\nu|^{-1}\, \Psi_2(\nu,\phi_o)\, , \end{equation} where the constant $\phi_o$ is arbitrary. As discussed above, these physical states can be interpreted as solutions to `evolution equations' with respect to the internal time $\phi$. The next step is to define relational observables that have a clear interpretation in terms of $\phi$. For instance, one can define the operator $\hat{V}_{\phi_0}$, corresponding to {\em the volume $V$ when the scalar field takes the value $\phi_0$}.
One can indeed define such Heisenberg operators by the standard prescription: \nopagebreak[3]\begin{equation} \hat{V}|_{\phi_0}\cdot\Psi_{\mathrm{phy}}(\nu,\phi):=e^{i\sqrt{\Theta}(\phi-\phi_0)}\,\hat{V}\,e^{-i\sqrt{\Theta}(\phi-\phi_0)}\,\Psi_{\mathrm{phy}}(\nu,\phi)\, , \end{equation} where $\hat{V}$ is the standard Schr\"odinger operator (acting by multiplication in this case). In this manner one can define operators corresponding to the matter energy density $\hat{\rho}_{\phi_0}$ and to curvature scalars, all with a clear interpretation as being defined at `time $\phi_0$'. As it turns out, one can perform a Fourier transform into the variable conjugate to $\nu$, and the resulting quantum constraint, a differential equation, can be solved exactly \cite{acs}. This allows one to obtain closed expressions for the expectation values of the Heisenberg operators. Let us now describe this {\em solvable} model within LQC. \vskip0.5cm \noindent{\em Solvable loop quantum cosmology (SLQC)}. We now wish to work in the ${\rm b}$ representation, in which the geometrical part of the quantum constraint also becomes a differential operator. Since $\tilde\Psi(\nu,\phi)$ has support on the `lattice' $\nu = 4n\lambda$, and since ${\rm b}$ is canonically conjugate to $\nu$, its Fourier transform $\Psi({\rm b},\phi)$ has support on the continuous interval $(0, \pi/\lambda)$: \nopagebreak[3]\begin{equation} \Psi({\rm b},\phi) := \sum_{\nu=4n\lambda}\, e^{\f{i}{2} \nu{\rm b}}\,\, \tilde\Psi(\nu,\phi); \quad \hbox{\rm so that} \quad \tilde\Psi(\nu, \phi) = \frac{\lambda}{\pi}\, \int_0^{\pi/\lambda} \!\! {\rm d}{\rm b}\, e^{- \f{i}{2} \nu{\rm b}}\,\, \Psi({\rm b},\phi)\, .\end{equation} From the form (\ref{hc4}) of the constraint it is clear that it becomes a second order differential operator in the ${\rm b}$-representation. Let us set $\tilde\chi(\nu,\phi) = (\lambda/\pi \nu)\,\tilde{\Psi}(\nu,\phi)$.
Then, on $\chi({\rm b},\phi)$, the constraint (\ref{hc4}) becomes \nopagebreak[3]\begin{equation} \label{hc7} \partial^2_\phi \, {\chi}({\rm b},\phi) = \alpha^2 \, \left(\frac{\sin \lambda{\rm b}}{\lambda}\, \partial_{\rm b}\right)^2\,\, {\chi}({\rm b},\phi) \end{equation} with $\alpha=\sqrt{12\pi G}$. Note, however, that \emph{we did not} arrive at (\ref{hc7}) simply by replacing ${\rm b}$ in the expression of the classical constraint by $\sin\lambda{\rm b}/\lambda$, as is often done. Rather, (\ref{hc7}) results directly from the `improved' LQC constraint if one begins with a harmonic time coordinate already in the classical theory. To simplify the constraint further, let us set \nopagebreak[3]\begin{equation}\label{x} x = \frac{1}{\alpha}\, \ln (\tan \frac{\lambda{\rm b}}{2}),\quad \hbox{\rm or}\quad {\rm b} = \frac{2}{\lambda}\, \tan^{-1}\, (e^{\alpha\, x})\end{equation} so that $x$ ranges over $(-\infty,\infty)$. Then (\ref{hc7}) becomes just the Klein-Gordon equation \nopagebreak[3]\begin{equation} \label{hc8}\partial^2_\phi\,\, \chi(x,\phi) = \partial_x^2\,\,\chi(x,\phi) =: -\Theta\,\, \chi(x,\phi)\, .\end{equation} The physical Hilbert space is given by positive frequency solutions to (\ref{hc8}), i.e. solutions satisfying \nopagebreak[3]\begin{equation} \label{hc9} -i \partial_\phi \chi(x,\phi) = \sqrt{\Theta}\, \chi (x,\phi)\, . \end{equation} We can again express the solutions in terms of their initial data and decompose them into left and right moving modes $\chi(x,\phi)= \chi_L(x_+)+ \chi_R(x_-)$. Since there are no fermions in the model, the two orientations of the triad are indistinguishable, and $\chi(x,\phi)$ must satisfy the symmetry requirement $\chi(-x,\phi) = -\chi(x,\phi)$. Thus, we can write $\chi(x,\phi) = (F(x_+) - F(x_-))/\sqrt{2}$, where $F$ is an arbitrary `positive frequency solution'. To be precise, $F(x)$ is a positive momentum function, i.e.
with a Fourier transform that has support on the positive axis. With such a choice, the solution to the constraint equation becomes a positive frequency solution. The physical inner product is given by, \nopagebreak[3]\begin{eqnarray} (\chi_1, \chi_2)_{\rm phy} &=& -i\int_{\phi =\phi_0} [\bar\chi_1(x,\phi)\partial_\phi \chi_2(x,\phi) -(\partial_\phi \bar\chi_1(x,\phi))\chi_2(x,\phi)] \, {\rm d} x\\ &=&i\int_{-\infty}^\infty [\partial_x \bar F_1(x_+) F_2(x_+) -\partial_x \bar F_1(x_-) F_2(x_-)] \, {\rm d} x ~. \label{inner-prod} \end{eqnarray} The action of the operator $\hat{p}_{(\phi)}$ on physical states is then: $\hat{p}_{(\phi)}\, \chi = -i\hbar\, \partial_\phi \chi \equiv \hbar\sqrt{-\partial_x^2}\; \chi$. We can now compute the expectation values and fluctuations of fundamental operators such as $\hat V|_{\phi_o}$ and $\hat p_{(\phi)}$. For {\it any} state in the physical Hilbert space the expectation value of the volume operator at `time $\phi$' is given by \nopagebreak[3]\begin{equation} \langle\hat{V}\rangle_\phi := (\chi, \hat{V}|_\phi\chi)_{\rm phy} = 2\pi \gamma \ell_{\rm Pl}^2 (\chi ,|\hat \nu| \chi)_{\rm phy} \end{equation} where $|\hat \nu|$ is the absolute value operator obtained from \nopagebreak[3]\begin{equation} \hat \nu=-\frac{2\lambda}{\alpha}\cosh(\alpha x)i\partial_x \, . \end{equation} Using the inner product \eqref{inner-prod} the expectation value of $|\hat \nu|$ is given by \nopagebreak[3]\begin{eqnarray} (\chi,|\hat \nu| \chi)_{\rm phy} &=&i\int_{-\infty}^\infty [\partial_x \bar F(x_+)( \hat \nu F(x_+)) -\partial_x \bar F(x_-)(-\hat \nu F(x_-))] \, {\rm d} x \nonumber \\ &=& \frac{2\lambda}{\alpha}\int_{-\infty}^\infty [\partial_x \bar F(x_+)\cosh(\alpha x)\partial_x F(x_+) +\partial_x\bar F(x_-)\cosh(\alpha x)\partial_x F(x_-)]\, {\rm d} x \nonumber \\ &=& \frac{4\lambda}{\alpha}\int_{-\infty}^\infty \left|\frac{{\rm d} F}{{\rm d} x}\right|^2 \cosh(\alpha(x-\phi)) \, {\rm d} x \, .
\end{eqnarray} From these expressions one can find the expectation values of certain relational (Heisenberg) operators. For instance, the expectation value of the volume operator, at time $\phi$, takes the form, \nopagebreak[3]\begin{equation} \langle\hat{V}\rangle_\phi = V_+\,e^{\, \alpha \,\phi} +V_-\,e^{-\alpha\,\phi} \label{v-exp} ~, \end{equation} with $V_\pm$ constants that depend on the details of the initial (normalized) wave-function: \nopagebreak[3]\begin{equation} V_{\pm} = \frac{4 \pi \gamma \ell_{\rm Pl}^2 \lambda}{\alpha}\int \left|\frac{{\rm d} F}{{\rm d} x}\right|^2\,e^{\mp\alpha\, x} {\rm d} x\, . \end{equation} From (\ref{v-exp}), it follows that the expectation value of the volume $\hat{V}|_\phi$ is large at both very early and very late times and has a non-zero global minimum $$V_{\mathrm{min}} = 2 (V_+ V_-)^{1/2}\, .$$ The {\it bounce} occurs at time $$ \phi_{\rm b}^V = (2\, \alpha)^{-1} \ln(V_-/V_+)\, . $$ Around $\phi = \phi_{\rm b}^V$, the expectation value of the volume $\langle \hat V\rangle_\phi$ is symmetric. Thus we see that {\em all} states undergo a {\em big bounce} that replaces the big bang. In the WDW quantization, by contrast, the expected volume reaches zero as $\phi\to \pm\infty$, so in that sense one still reaches the singularity. Another important observable to consider is the energy density $\hat{\rho}|_{\phi_0}$. Interestingly, this quantity possesses an absolute upper bound on the physical Hilbert space. Let us now see how this bound on the energy density arises. Fix any state $\chi (x,\phi) = (1/\sqrt{2}) (F(x_+) - F(x_-))$ in $\mathcal{H}_{\rm phy}$. Let us work in the Schr\"odinger picture at a fixed instant of time, say $\phi_0$. Then, it follows that $\rho=\langle\hat{\rho}|_{\phi_0}\rangle$ is given by \cite{acs}, \nopagebreak[3]\begin{equation} \label{rhobound} \rho = \frac{3}{8\pi\gamma^2 G}\,\, \frac{1}{\lambda^2}\,\,\, \frac{\left[\int_{-\infty}^{\infty}\!
{\rm d} x |\partial_x F|^2\right]^2} {\left[\int_{-\infty}^{\infty}\! {\rm d} x |\partial_x F|^2\,\, \cosh (\alpha x) \right]^2}\, \end{equation} where the integrals are performed at $\phi=\phi_0$. Since $\cosh (\alpha x) \ge 1$, it follows that the ratio of the two integrals is bounded above by 1. This immediately implies that there is an absolute bound given by \nopagebreak[3]\begin{equation} \langle\hat{\rho}\rangle_\phi \le \rho_{\rm max} \quad\quad {\rm with} \quad \quad \rho_{\rm max} := \frac{3}{8\pi\gamma^2 G}\, \frac{1}{\lambda^2}\, . \end{equation} It is interesting to note that this quantity scales as the inverse square of the {\em loop quantum geometry scale} $\lambda$. Thus, in the limit $\lambda\to 0$, where we expect to recover the WDW theory, the density becomes unbounded. That is precisely what is found in the complete quantization of the WDW theory \cite{acs}. Using the standard choice for $\lambda$ in LQC, namely $\lambda^2 = 4\pi\sqrt{3}\,\gamma\ell_{\rm Pl}^2$, we obtain:\\ $ \rho_{\rm max} = (\sqrt{3})/(32\pi^2\gamma^3 G^2 \hbar)\approx 0.41 \rho_{\rm Pl}$, where we have used the standard value of $\gamma$ coming from the black hole entropy computation in LQG, namely $\gamma\approx 0.237$. In a similar fashion, it is straightforward to see that one can also bound the expectation value of the `Hubble parameter' operator $\hat{H}|_{\phi}$ on physical states. In this case the bound takes the form $\langle \hat{H}\rangle_{\phi}< 1/(2\lambda\gamma)$. Note that, just as in the case of the energy density, the bound on the Hubble parameter is inversely proportional to the loop quantum cosmology scale $\lambda$. In the limit $\lambda \to 0$, the corresponding quantity becomes unbounded. Let us now summarize the main features of the complete quantization of this simple cosmological model: \begin{enumerate} \item The bounce is not restricted to semi-classical states but occurs for states in a dense sub-space of the physical Hilbert space.
\item There exists a supremum of the expectation value of the energy density. This maximum allowed density is $\rho_{\rm max} = \sqrt{3}/(32 \pi^2 \gamma^3 G^2 \hbar)$. We note that the existence of an absolute maximum of the energy density in this cosmological model implies a non-singular evolution, in terms of physical quantities. The singularity is therefore resolved. \item When curvatures become much smaller than the Planck curvature (or for $\rho \ll \rho_{\rm max}$) the expectation values of the Dirac observables agree with the values obtained from classical GR. \item For states which are semi-classical at late times, i.e. those which lead to a large classical universe, the backward evolution leads to a quantum bounce in which the energy density of the field becomes arbitrarily close to $\rho_{\rm max} \approx 0.41 \rho_{\mathrm{Pl}}$. \item States that evolve to be semiclassical at late times, as determined by the dispersion in canonically conjugate observables, have to evolve from states that also had semiclassical properties before the bounce (even though there might be some asymmetry in their relative fluctuations, without affecting semiclassicality) \cite{cs2,kp2,cm1}. Semiclassicality is preserved to a remarkable degree across the bounce. \end{enumerate} This concludes our discussion of the quantization of the homogeneous background in the case that the matter content is a massless scalar field. This is the simplest isotropic model and is completely solvable. The question now is how to generalize these results to other isotropic and anisotropic models. That will be the subject of the next subsection. \subsection{Other cosmologies} \label{sec:2.b} \subsubsection{$k$=1 FLRW} \label{sec:2.b.1} There are several generalizations one might consider away from the $k$=0, $\Lambda$=0 FLRW cosmology. The simplest case is the $k$=1 FLRW cosmological model \cite{apsv,warsaw1,ck2}.
Even though it is not phenomenologically favored, it is important since it represents a spatially closed model that in the classical theory has both an expanding and a contracting phase, continuously joined by a `recollapse' point where $H=\frac{\dot{a}}{a}=0$. Therefore, it is an important test whether one can recover the classical recollapse from the quantum theory. The spacetimes under consideration are of the form $M=\Sigma\times \mathbb{R}$, where $\Sigma$ is a topological three-sphere $\mathbb{S}^3$. It is standard to endow $\Sigma$ with a fiducial basis of one-forms ${}^o\!\omega^i_a$ and vectors ${}^o\!e^a_i$. The fiducial metric on $\Sigma$ is then ${}^o\!q_{ab}:= {}^o\!\omega^i_a\,{}^o\!\omega^j_b\,k_{ij}$, with $k_{ij}$ the Killing-Cartan metric on su(2). Here, the fiducial metric ${}^o\!q_{ab}$ is the metric of a three-sphere of radius $a_0$. The volume of $\Sigma$ with respect to ${}^o\!q_{ab}$ will be denoted by $V_0=2\pi^2\,a_0^3$. We also define the quantity $\ell_0:=V_0^{1/3}$. It can be written as $\ell_0=:\vartheta\, a_0$, where the quantity $\vartheta:=(2\pi^2)^{1/3}$ will appear in many expressions. The isotropic and homogeneous connections and triads can be written in terms of the fiducial quantities as follows, \nopagebreak[3]\begin{equation} A_a^i=\frac{c}{\ell_0}\,{}^o\!\omega^i_a\qquad ;\qquad E^a_i=\frac{p}{\ell^2_0}\sqrt{{}^o\!q}\,{}^o\!e^a_i\, . \end{equation} Here, $c$ is dimensionless and $p$ has dimensions of length squared. The metric and extrinsic curvature can be recovered from the pair $(c,p)$ as follows: $ q_{ab}=\frac{|p|}{\ell^2_0}\,{}^o\!q_{ab}$, and $\gamma K_{ab}=\left(c-\frac{\ell_0}{2}\right)\frac{|p|}{\ell^2_0}\,{}^o\!q_{ab}$. Note that the total volume $V$ of the hypersurface $\Sigma$ is given by $V=|p|^{3/2}$. From here, one can calculate the curvature $F^k_{ab}$ of the connection $A_a^i$ on $\Sigma$ as $F^k_{ab}=\frac{c^2-2\vartheta c}{\ell^2_0}\;{\epsilon_{ij}}^k\,{}^o\!\omega^i_a\,{}^o\!\omega^j_b$.
The only relevant constraint is the Hamiltonian constraint, which has the form, \nopagebreak[3]\begin{equation} {\cal C}_{\textrm{grav}}=-\frac{3}{8\pi G\gamma^2}\,\sqrt{|p|}\left[(c-\vartheta)^2 + \gamma^2\vartheta^2\right]\, . \end{equation} It is convenient to also use the variables \cite{acs}: ${\rm b}:=c/|p|^{1/2}$ and $V=|p|^{3/2}$. The quantity $V$ is just the volume of $\Sigma$ and ${\rm b}$ is its canonical conjugate, $\{{\rm b},V\} = 4\pi G\gamma$. We can then compute the evolution equations of $V$ and ${\rm b}$ in order to find interesting geometrical scalars. Then, \nopagebreak[3]\begin{equation} \dot{V}=\{V,{\cal C}_{\textrm{grav}}\}= \frac{3}{\gamma}\left({\rm b} V - \vartheta V^{2/3}\right) \end{equation} from which, using the constraint equation ${\cal C}= {\cal C}_{\textrm{grav}} + {\cal C}_{\textrm{matt}}\approx 0$ with ${\cal C}_{\textrm{matt}}=V\rho$, we can find the standard Friedman equation, $H^2:=\left(\frac{\dot{V}}{3V}\right)^2=\frac{8\pi G}{3} \,\rho-\frac{\vartheta^2}{V^{2/3}}\, .$ The basic strategy of loop quantization, just as in the $k$=0 case, is that the effects of quantum geometry are manifested by means of holonomies around closed loops that carry information about the field strength of the connection. In order to define the quantum theory, taking again $N=a^3$, one can work in the $\nu$ representation and define operators associated to the curvature and the spin connection, to arrive at a difference operator $\Theta_{(k=1)}$ of the form, \nopagebreak[3]\begin{eqnarray} \partial^2_\phi \Psi(\nu,\phi) &=& \Theta_{(k=1)}\, \Psi(\nu,\phi) \nonumber \\ &=& -\Theta \Psi(\nu,\phi) + \frac{3\pi G}{\lambda^2}\,\nu\left[\sin^2\left(\frac{\lambda\vartheta}{\tilde{K}\nu^{1/3}}\right)+(1+\gamma^2)\left(\frac{\lambda\vartheta}{\tilde{K}}\right)\right]\,\Psi(\nu,\phi)\, , \end{eqnarray} with $\tilde{K}=2\pi\gamma\ell_{\rm Pl}$.
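As a quick classical consistency check of the Friedman equation just written, one can locate the recollapse point $H^2=0$ for a massless scalar field, for which $\rho = p_{(\phi)}^2/(2V^2)$. The sketch below is an illustration with $G=1$ and a hypothetical value of $p_{(\phi)}$.

```python
import numpy as np

# Sketch (G = 1, hypothetical p_phi): the classical k=1 Friedman equation
#   H^2 = (8 pi G / 3) rho - vartheta^2 / V^{2/3},  rho = p_phi^2 / (2 V^2),
# vanishes at the recollapse volume V_rec = (4 pi G p_phi^2 / (3 vartheta^2))^{3/4}.

G = 1.0
theta = (2 * np.pi ** 2) ** (1.0 / 3.0)   # vartheta = (2 pi^2)^{1/3}
p_phi = 1000.0                            # hypothetical scalar-field momentum

def H_squared(V):
    rho = p_phi ** 2 / (2 * V ** 2)
    return (8 * np.pi * G / 3) * rho - theta ** 2 / V ** (2.0 / 3.0)

V_rec = (4 * np.pi * G * p_phi ** 2 / (3 * theta ** 2)) ** 0.75
print(f"V_rec = {V_rec:.3f},  H^2(V_rec) = {H_squared(V_rec):.2e}")
```

Below the recollapse volume the universe can expand ($H^2>0$); above it, $H^2$ would be negative, which is the classical turning point.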
Numerical solutions of this equation were studied in detail in \cite{apsv} for sharply peaked states, and were shown to possess not only a bounce very close to the critical density $\rho_{\rm max}$, but also a recollapse at a density and volume very close to the classical values. Thus, this model provides a very striking example of a quantum gravitational system that possesses satisfactory UV and IR behavior. The relative dispersion of $\hat{V}|_\phi$ does increase, but the increase is very small: for a universe that undergoes a classical recollapse at $\approx$ 1 Mpc, a state that nearly saturates the uncertainty bound initially, with uncertainties in $\hat{p}_\phi$ and $\hat{V}|_\phi$ spread equally, still has a relative dispersion in $\hat{V}|_\phi$ of only $\approx 10^{-6}$ after some $10^{50}$ cycles \cite{apsv}. The expectation value of the volume exhibits a quantum bounce, which occurs at $\rho=\rho_{\rm max}$ up to corrections of order $\ell_{\rm Pl}^2/V^{2/3}_{\rm bounce}$. For universes that grow to macroscopic sizes, the correction is totally negligible. For example, for a universe which grows to a maximum volume of $1\,{\rm Gpc}^3$, the volume at the bounce is approximately $10^{117}\ell_{\rm Pl}^3$. On the other hand, the numerical simulations show that one indeed recovers the recollapse with very high precision for semiclassical states that reach large volumes \cite{apsv}. An important lesson that this model teaches us is that energy density and curvature, and not the size of the universe at the bounce (which, as we have seen, can be very large in Planck units), are the relevant quantities that define the Planck scale. One should also note that, while semiclassical states alternate between the Planck scale (UV) and the low density, large volume GR regime (IR), states that are `truly quantum' --or far from semiclassical-- might have a bounce at a density much lower than $\rho_{\rm max}$, and not grow to large volumes before recollapse.
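For reference, the critical density appearing here is the $\rho_{\rm max}$ quoted earlier; with the standard choices $\lambda^2=4\pi\sqrt{3}\,\gamma\ell_{\rm Pl}^2$ and $\gamma = 0.237$ it can be evaluated directly in Planck units:

```python
import numpy as np

# Quick check of the quoted numbers, in Planck units (G = hbar = c = 1):
# with lambda^2 = 4 pi sqrt(3) gamma l_Pl^2 and gamma = 0.237, the critical
# density rho_max = 3/(8 pi G gamma^2 lambda^2) reduces to the closed form
# sqrt(3)/(32 pi^2 gamma^3), which should come out close to 0.41 rho_Pl.

gamma = 0.237
lam_sq = 4 * np.pi * np.sqrt(3) * gamma            # lambda^2 in units of l_Pl^2
rho_max = 3 / (8 * np.pi * gamma ** 2 * lam_sq)    # in units of rho_Pl
rho_max_closed = np.sqrt(3) / (32 * np.pi ** 2 * gamma ** 3)
print(f"rho_max = {rho_max:.4f} rho_Pl")
```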
There exists another quantization in which the curvature is not obtained by means of closed holonomies, but rather by approximating the {\it connection} by open holonomies, as is done in anisotropic models with non-trivial curvature \cite{ck2}. The structure of the constraints is different, but its quantum solutions have not yet been explored numerically. Let us comment on the quantization of the $k$=-1 case. Some early attempts to find such a quantization were put forward in \cite{k=-1,szulc}, but those efforts still suffer from some drawbacks, such as the absence of essential self-adjointness. A quantization based on open holonomies as in \cite{ck2} is still to be constructed. \subsubsection{FLRW with $\Lambda\neq 0$} \label{sec:2.b.2} The results found for a zero cosmological constant can be generalized to the case of a non-zero cosmological constant. For a massless scalar field and both signs of the constant, we have singularity resolution, in the sense that the big bang/crunch is replaced by a bounce, just as in the $\Lambda=0$ case. For simplicity we shall consider the $\Lambda<0$, $k$=0 case, but the results can be generalized to $k$=1 as well. The Hamiltonian constraint, for $N=1$, takes the form, \nopagebreak[3]\begin{equation} {\cal C}= \frac{p_{(\phi)}^2}{2V} -\frac{3}{8\pi G\gamma^2}\,{\rm b}^2 V +\frac{\Lambda}{8\pi G}\, V\approx 0\, . \end{equation} One can solve the equations of motion and express the dynamics in terms of the scalar field $\phi$ as, \nopagebreak[3]\begin{equation} V(\phi)=\frac{\alpha\,p_{(\phi)}}{\sqrt{3|\Lambda|}}\;\frac{1}{\cosh[\alpha(\phi-\phi_o)]}\, . \end{equation} With this, there is a big bang singularity in the past, $\phi\to -\infty$, and a big crunch in the future, when $\phi\to \infty$.
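One can verify numerically that the trajectory above is consistent with the classical dynamics. For a massless scalar with $\Lambda<0$, a short computation using Hamilton's equations (not spelled out in the text) gives the relational equation $(dV/d\phi)^2=\alpha^2V^2-3|\Lambda|V^4/p_{(\phi)}^2$; the sketch below checks that the $1/\cosh$ solution satisfies it, with $G=1$ and hypothetical values of $p_{(\phi)}$ and $\Lambda$.

```python
import numpy as np

# Sketch (G = 1): check numerically that V(phi) = A / cosh(alpha (phi - phi_o)),
# with A = alpha p_phi / sqrt(3 |Lambda|), satisfies the relational equation
#   (dV/dphi)^2 = alpha^2 V^2 - 3 |Lambda| V^4 / p_phi^2
# implied by the Lambda < 0 constraint (illustrative parameter values).

alpha = np.sqrt(12 * np.pi)
p_phi, Lam = 100.0, -0.1        # hypothetical values
A = alpha * p_phi / np.sqrt(3 * abs(Lam))
phi_o = 0.0

phi = np.linspace(-1.0, 1.0, 200001)
V = A / np.cosh(alpha * (phi - phi_o))
dV = np.gradient(V, phi)        # second-order finite differences

lhs = dV ** 2
rhs = alpha ** 2 * V ** 2 - 3 * abs(Lam) * V ** 4 / p_phi ** 2
rel_err = np.max(np.abs(lhs - rhs)[1:-1]) / np.max(rhs)
print(f"max relative error = {rel_err:.2e}")
```

The maximum of $V(\phi)$ sits at $\phi=\phi_o$, reproducing the recollapse value $V_{\rm max}=\alpha p_{(\phi)}/\sqrt{3|\Lambda|}$ discussed next.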
There is a point of recollapse, when the volume reaches its maximum value $V_{\rm max}=({\alpha\,p_{(\phi)}})/({\sqrt{3|\Lambda|}})$, at $\phi=\phi_o$, with some resemblance to the $k$=1 case. The quantum constraint now takes the form, \nopagebreak[3]\begin{equation} \partial_\phi^2\Psi(\nu,\phi) = -\Theta\,\Psi(\nu,\phi) - \frac{\pi G\gamma^2|\Lambda|}{2}\,\nu^2 \Psi(\nu,\phi)\, , \end{equation} with $\Theta$ the operator corresponding to the $k$=0, $\Lambda$=0 case. The operator can be consistently defined, and the equation numerically solved \cite{bp}, to give a picture very similar to the $k$=1 case with vanishing cosmological constant. The big bang/crunch is replaced by a bounce, in such a way that a sharply peaked state goes through a series of bounces and recollapses in an almost periodic fashion. Let us now consider the $\Lambda>0$ case. The solution to the classical equations is slightly different from the negative case and takes the form \cite{ap,kp1}, \nopagebreak[3]\begin{equation} V(\phi)=\frac{\alpha\,p_{(\phi)}}{\sqrt{3|\Lambda|}}\;\frac{1}{\sinh[\alpha(\phi-\phi_o)]}\, . \end{equation} This is qualitatively very different from the previous case. Now, an expanding solution with a big bang singularity in the past, $\phi\to-\infty$, reaches an infinite volume for a {\it finite} value of $\phi$, namely when $\phi=\phi_o$. Similarly, there are contracting solutions that `start', at $\phi=\phi_o$, with an infinite volume and end in a big crunch singularity when $\phi\to \infty$. At the point $\phi=\phi_o$, the proper time diverges and the matter density vanishes. One can in fact continue the classical evolution past this `singular' point \cite{ap}. In the quantum theory, this new behavior manifests itself in the fact that the operator $\Theta_\Lambda$ fails to be essentially self-adjoint, and one has the freedom of choosing different self-adjoint extensions.
Interestingly enough, for all of them the evolution of semiclassical states is almost indistinguishable. Evolution is well defined past the point $\phi=\phi_o$ and the universe recollapses. As in all previous cases, the big bang/crunch singularity is replaced by a bounce. \subsubsection{Anisotropic Cosmologies} \label{sec:2.b.3.} Isotropic loop quantum cosmology, as we have seen, enjoys a very robust formulation; one has complete mathematical control over the quantum theory, one can make physical predictions using analytical or numerical tools, and one can therefore draw conclusions about the behavior of a background isotropic quantum geometry. The same is not true for anisotropic models. While the quantum constraints have been formulated in several cases, one does not have full mathematical control over their time evolution, and one has not been able to solve, even numerically, their dynamical evolution. In this part we shall summarize the formulation of the quantum models as we currently understand them. Let us consider a spacetime of the form $M=\Sigma\times\mathbb R$, where $\Sigma$ is a spatial 3-manifold which can be identified with the symmetry group of the chosen model and is endowed with a fiducial metric ${}^oq_{ab}$ and an associated fixed fiducial basis of 1-forms ${}^o\omega_a^i$ and vectors ${}^oe_i^a$. If $\Sigma$ is non-compact then we fix a fiducial cell, $\mathcal V$, adapted to the fiducial triads, with finite fiducial volume $V_0$. We also define $L_i$, the length of the $i$th side of the cell along ${}^oe_i$, so that $V_0=L_1L_2L_3$. For compact $\Sigma$ we choose $L_i=V_0^{1/3}$, with $i=1,2,3$.
Since all of the models in which we are interested are homogeneous, if we restrict ourselves to diagonal metrics one can fix the gauge in such a way that $A_a^i$ has 3 independent components, $c^i$, and $E_i^a$ has 3 independent components, $p_i$, \begin{equation} A_a^i=\frac{c^i}{L_i}{}^o\omega_a^i\quad \textrm{and}\quad E_i^a=\frac{p_iL_i}{V_0}\sqrt{{}^oq}{}\ ^oe_i^a \end{equation} where the $p_i$, in terms of the scale factors $a_i$, are given by $|p_i|=L_jL_ka_ja_k$ ($i\neq j\neq k$). Using $(c^i,p_i)$ for anisotropic models, the Poisson brackets can be expressed as $\{c^i,p_j\}=8\pi G\gamma\delta_j^i$. With this choice of variables and gauge fixing, the Gauss and diffeomorphism constraints are satisfied and the only remaining constraint is the Hamiltonian constraint \begin{equation}\label{FHC} \mathcal C_H=\int_\mathcal V N\left[-\frac{\epsilon^{ij}_{\ k}E_i^aE_j^b}{16\pi G\gamma^2\sqrt{|q|}}\left( F_{ab}^k-(1+\gamma^2)\Omega_{ab}^k\right)+\mathcal H_{\rm matter}\right]\textrm{d}^3x \, , \end{equation} with $N$ the lapse function, $\mathcal H_{\rm matter}=\rho V$ and $\Omega_{ab}$ the curvature of the spin connection $\Gamma_a^i$ compatible with the triads. Using a strategy similar to the isotropic case, the field strength $F_{ab}^k$ is given by \begin{equation} F_{ab}^k=2\lim_{{\rm Area}_\square\rightarrow 0}\epsilon_{ij}^{\ \ k}\textrm{Tr}\bigg(\frac{h_{\square_{ij}}^{\mu^\prime}-\mathbb I}{\mu^\prime_i\mu^\prime_j}\tau^k\bigg){}^o\omega_a^i{}^o\omega_b^j \, . \label{fs} \end{equation} The strategy for choosing the corresponding loops is slightly different from the isotropic case. We take $\mu_i^\prime=\bar\mu_i L_i$, where $\bar\mu_i$ is a dimensionless parameter and, by previous considerations, is equal to $\bar\mu_i=\lambda\sqrt{|p_i|}/\sqrt{|p_jp_k|}$ ($i\neq j\neq k$) \cite{awe2,madrid-bianchi}. For the Bianchi II and IX models, this strategy fails because the resulting operator is not almost periodic.
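A small consistency check of these variables (an illustration, assuming the standard relation $|p_1|=L_2L_3\,a_2a_3$ and its cyclic permutations): the combination $\sqrt{p_1p_2p_3}$ should reproduce the physical volume $a_1a_2a_3\,L_1L_2L_3$ of the fiducial cell.

```python
import numpy as np

# Sketch: with the assumed relation |p_i| = L_j L_k a_j a_k (i != j != k),
# sqrt(p_1 p_2 p_3) equals the physical cell volume a_1 a_2 a_3 L_1 L_2 L_3.
# All numbers below are illustrative.

L = np.array([1.0, 2.0, 0.5])          # hypothetical fiducial cell edge lengths
a = np.array([1.7, 0.9, 2.3])          # hypothetical scale factors

p = np.array([L[1] * L[2] * a[1] * a[2],
              L[2] * L[0] * a[2] * a[0],
              L[0] * L[1] * a[0] * a[1]])

V_from_p = np.sqrt(np.prod(p))
V_cell = np.prod(L) * np.prod(a)
print(f"sqrt(p1 p2 p3) = {V_from_p:.6f},  a1 a2 a3 L1 L2 L3 = {V_cell:.6f}")
```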
Therefore, one instead expresses the connection $A_a^i$ in terms of holonomies and then uses the standard definition of the curvature $F_{ab}^k$. The operators corresponding to the connection are given by \cite{awe3} \begin{equation} \hat c_i=\widehat{\frac{\sin\bar\mu_ic_i}{\bar\mu_i}}\, . \,\, \end{equation} Note that using this quantization method for the flat FLRW \cite{aps3} and Bianchi I \cite{awe2} models, one obtains the same result as the direct quantization of the curvature $F_{ab}^k$ (with proper identification of the parameters), but for closed FLRW it leads to a different quantum theory, one which is more compatible with the isotropic limit of Bianchi IX \cite{ck2, we, ck3}. We call the first method of quantization {\it curvature based quantization} and the second one {\it connection based quantization}. In the Bianchi II and Bianchi IX models the terms related to the curvatures, $F_{ab}^k$ and $\Omega_{ab}^k$, contain negative powers of $p_i$, which are not well defined operators. To solve this problem we use Thiemann's strategy, \begin{equation} |p_i|^{(\ell-1)/2}=-\frac{\sqrt{|p_i|}L_i}{4\pi G\gamma j(j+1)\tilde\mu_i\ell}\textrm{Tr}(\tau_i h_i^{(\tilde\mu_i)}\{h_i^{(\tilde\mu_i)-1},|p_i|^{\ell/2}\}) \, , \label{np} \end{equation} where $\tilde\mu_i$ is the length of a curve, $\ell \in (0,1)$, and $j\in \frac{1}{2}\mathbb{N}$ labels the representation. Therefore, for these three different operators we have three different curve lengths ($\mu,\mu^\prime,\tilde\mu$), where $\mu$ and $\tilde\mu$ can be arbitrary functions of $p_i$; for simplicity we choose all of them to be equal to $\mu^\prime$. We also have another free parameter in the definition of the negative powers of $p_i$, for which, for simplicity, we take $j=1/2$. Since the largest negative power of $p_i$ which appears in the constraint is $-1/4$, we take $\ell=1/2$, obtain $\widehat{|p_i|^{-1/4}}$ directly from Eq.~(\ref{np}), and then express the other negative powers in terms of it.
The eigenvalues of the operator $\widehat{|p_i|^{-1/4}}$ are given by \begin{equation} J_i(V,p_1,p_2,p_3)=\frac{h(V)}{V_c}\prod_{j\neq i}p_j^{1/4}\, , \end{equation} with \begin{equation} h(V)=\sqrt{V+V_c}-\sqrt{|V-V_c|},\,\, \textrm{ and } \,\,\, V_c=2\pi\gamma\lambda\ell_{\rm Pl}^2\, . \end{equation} By using these results and choosing some factor ordering, we can construct the total constraint operator. Note that different choices of factor ordering will yield different operators, but the main results remain essentially the same. By solving the constraint equation $\hat{\mathcal C}_H\cdot\Psi=0$, we can obtain the physical states and the physical Hilbert space $\mathcal H_{\rm phys}$. As a final step, one would need to identify the physical observables, which in our case would correspond to relational observables as functions of the internal time $\phi$. These steps have proven to be exceptionally difficult and have so far prevented one from solving the resulting difference equations numerically, even for the simplest case of Bianchi I. \\ \subsection{Effective Equations} \label{sec:2.c} When analyzing the numerical solutions of the $k$=0, $\Lambda$=0 FLRW model, the authors of \cite{aps3} noticed that sharply peaked states follow trajectories in the $(V,\phi)$ plane that have a bounce, and therefore do not satisfy the classical Einstein equations. Furthermore, they realized that the expectation value of $\hat{V}|_\phi$ {\it does indeed} follow a trajectory that satisfies (to a very good approximation) some equations that are now referred to as the {\it effective equations}. As it turns out, these effective equations can be derived from an effective Hamiltonian constraint ${\cal C}_{\rm eff}$. The question that arises then is how to derive, from the quantum theory defined by a quantum constraint $\hat{\cal C}$, the effective Hamiltonian. A second question pertains to the domain of validity of these effective equations.
That is, for which states and in which regimes are these equations a good approximation to the exact quantum dynamics? As we shall see in this part, for the models that are well understood, the effective equations describe very accurately the dynamics of appropriately defined semiclassical states. In the case of models for which we do not possess the full quantum dynamics, one can expect the effective theory to describe the quantum theory very well for semiclassical states far from the `deep quantum regime' (where it is expected to fail). Thus, in the anisotropic Bianchi I, II and IX models, the effective description that we shall consider here provides a description in which the singularity is also replaced by a bounce. Let us begin by briefly describing how one obtains this effective description from the quantum theory. The idea is to employ the geometric formulation of quantum mechanics \cite{as}, which provides an appropriate formalism from which one can find the effective Hamiltonian constraint ${\cal C}_{\mathrm{eff}}$ by computing the expectation value $\langle \hat{\cal C}\rangle_\psi$ of the quantum Hamiltonian constraint on an appropriately defined semiclassical state $\psi$. From that expression one can find the effective equations of motion by inserting ${\cal C}_{\mathrm{eff}}$ in Hamilton's equations: $\dot{q}=\{q,{\cal C}_{\mathrm{eff}}\}$ and $\dot{p}=\{p,{\cal C}_{\mathrm{eff}}\}$. Let us now be more precise. In the geometric formulation of quantum mechanics the space of quantum states is seen as a symplectic space $\Gamma_Q$, equipped with a symplectic structure $\Omega_Q$ that is given by the imaginary part of the Hermitian inner product $\langle\cdot,\cdot\rangle$ on $\mathcal{H}$. For each observable $\hat{F}$ one can define a function $\bar{F}:=\langle\hat{F}\rangle$ on normalized states. There is a corresponding Hamiltonian vector field for each such function, $X^\alpha_{\bar{F}}=\Omega_Q^{\alpha\beta}\partial_\beta\bar{F}$.
There is an interesting interplay between these vector fields and the vector one obtains by acting with the operator $\hat{F}$ on a state $\Psi$, \nopagebreak[3]\begin{equation} (\hat{F}\Psi)^\alpha=i\hbar\,X^\alpha_{\bar{F}}|_\Psi \end{equation} Furthermore, the commutator of observables in the Hilbert space and the corresponding {\it quantum} Poisson bracket $\{\bar{F},\bar{G}\}_Q:=\Omega^{\alpha\beta}_Q\partial_\alpha\bar{F}\partial_\beta \bar{G}$ satisfy the relation, \nopagebreak[3]\begin{equation} \langle[\hat{F},\hat{G}]\rangle = i\hbar\,\{\bar{F},\bar{G}\}_Q \end{equation} Thus, quantum dynamics can be seen as ordinary Hamiltonian dynamics on the quantum phase space $\Gamma_Q$, as defined by the corresponding vector field $X^\alpha_{\bar{H}}$. How can we then relate this quantum evolution to the classical evolution on the phase space $\Gamma$? The idea is to project the dynamics on $\Gamma_Q$ to $\Gamma$ by means of appropriate coordinate functions. To be precise, let us assume that the classical phase space $\Gamma$ has coordinates $(q^i,p_i)$. In the Hilbert space one has the corresponding operators $(\hat{q}^i,\hat{p}_j)$. Then, one can define the projection $\Pi:\Gamma_Q \to \Gamma$ as follows: $\Pi:\Psi \to (\bar{q}^i,\bar{p}_j)$. One can now, given a quantum dynamical trajectory $\Upsilon_t$ on $\Gamma_Q$, define the corresponding projected classical trajectory $\gamma_t$ in $\Gamma$ as: $\gamma_t=\Pi(\Upsilon_t)$. The question that arises then is whether one can find an {\it effective} Hamiltonian $H_{\rm eff}$, defined on the classical phase space (and therefore a function of $(q^i,p_j)$ and possibly some parameters), such that the trajectory $\gamma_t=(\bar{q}^i,\bar{p}_j)$ follows Hamilton's equations $\dot{\bar{q}}^i=\{q^i,H_{\rm eff}\}$ and $\dot{\bar{p}}_j=\{p_j,H_{\rm eff}\}$. For these conditions to be satisfied, one must choose a particular `initial state' in order to select a preferred trajectory $\Upsilon_t$.
In practice one looks for something simpler. In the so-called `embedding approach', one seeks an embedding $\Gamma \to \bar{\Gamma}_Q\subset\Gamma_Q$ of the finite dimensional phase space into the infinite dimensional quantum space $\Gamma_Q$ that is well suited to capture the quantum dynamics, in the sense that the dynamical evolution lies approximately within $\bar{\Gamma}_Q$. To define $\bar{\Gamma}_Q$, one prescribes a quantum state $\Psi_{\gamma^o}$ for each point $\gamma^o=(q_i^o,p_i^o)\in\Gamma$. A first requirement is that the embedding should be such that $q_i^o=\langle\Psi_{\gamma^o}\,\hat{q}_i\,\Psi_{\gamma^o}\rangle$ and $p_i^o=\langle\Psi_{\gamma^o}\,\hat{p}_i\,\Psi_{\gamma^o}\rangle$. The second condition is dynamical and non-trivial; it requires that the quantum Hamiltonian vector field be approximately tangent to $\bar{\Gamma}_Q$. If this is satisfied, one can project the exact quantum evolution $\Upsilon_t$ to $\bar{\Gamma}_Q$ to obtain $\bar{\Upsilon}_t$, and from this, project down to $\gamma_t=\Pi(\bar{\Upsilon}_t)$. It is natural to regard, as a candidate for $H_{\rm eff}$, the expectation value of the quantum Hamiltonian on the embedded submanifold: $H_{\rm eff}(q_i^o,p_j^o):= \langle\Psi_{\gamma^o}\hat{H}\Psi_{\gamma^o}\rangle$. One should note that for the ordinary harmonic oscillator, coherent states provide an exact dynamical embedding. That is, the exact quantum evolution lies within $\bar{\Gamma}_Q$ and the effective Hamiltonian coincides with the classical one; there are no {\it quantum} corrections to the dynamics for these states. Let us now consider some important cases in homogeneous loop quantum cosmology.
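The harmonic-oscillator statement above is easy to verify explicitly. In the sketch below (an illustration with $\hbar=m=\omega=1$, a truncated Fock space, and hypothetical initial data), the expectation value $\langle\hat x\rangle(t)$ in a coherent state is compared with the classical trajectory; the two agree to machine precision, reflecting the absence of quantum corrections for these states.

```python
import numpy as np

# Sketch (hbar = m = omega = 1, truncated Fock space): for the harmonic
# oscillator, <x>(t) in a coherent state follows the *classical* Hamiltonian
# flow, illustrating the exact dynamical embedding by coherent states.

N = 80                                   # Fock-space truncation (illustrative)
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)         # annihilation operator
x_op = (a + a.T) / np.sqrt(2)            # position operator

x0, p0 = 1.2, -0.7                       # hypothetical classical initial data
alpha = (x0 + 1j * p0) / np.sqrt(2)
log_fact = np.cumsum(np.log(np.maximum(n, 1)))          # log(n!)
c = np.exp(-abs(alpha) ** 2 / 2) * alpha ** n * np.exp(-0.5 * log_fact)

err = 0.0
for ti in np.linspace(0, 4 * np.pi, 200):
    psi = c * np.exp(-1j * (n + 0.5) * ti)              # exact evolution in Fock basis
    x_qm = np.real(np.conj(psi) @ x_op @ psi)
    x_cl = x0 * np.cos(ti) + p0 * np.sin(ti)            # classical trajectory
    err = max(err, abs(x_qm - x_cl))
print(f"max |<x>_quantum - x_classical| = {err:.2e}")
```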
\subsubsection{$k$=0 FLRW cosmology} \label{sec:2.c.1} Using the geometric methods of quantum mechanics just described, one can write an effective Hamiltonian which provides an excellent approximation to the behaviour of expectation values of Dirac observables in the numerical simulations \cite{vt}. The effective Hamiltonian will in principle also have contributions from terms depending on the properties of the state, such as its spread. The effect of these terms turns out to be negligible, as shown by the detailed numerical analysis of \cite{aps2,apsv}. Thus, the effective Hamiltonian constraint is, for $N$=1, \nopagebreak[3]\begin{equation} \label{effham} {\cal C}_{\rm eff}=\frac{3}{8 \pi G\gamma^2} \, \frac{\sin^2(\lambda\, {\rm b})}{\lambda^2} V - {\cal C}_{\mathrm{matt}} \, , \end{equation} which leads to modified Friedman and Raychaudhuri equations upon computing Hamilton's equations of motion (as we shall see below). Using (\ref{effham}) one finds that the energy density $\rho = {\cal C}_{\mathrm{matt}}/V$ equals $3 \sin^2(\lambda\, {\rm b})/(8 \pi G \gamma^2 \lambda^2)$. Since the latter reaches its highest possible value when $\sin^2(\lambda\, {\rm b})=1$, the density has a maximum given by \nopagebreak[3]\begin{equation} \rho_{\rm max}= \frac{3}{8 \pi G \gamma^2 \lambda^2}\, . \end{equation} Thus, we see that the maximum energy density obtained from the effective Hamiltonian is identical to the supremum $\rho_{\mathrm{sup}}$ for the density operator in $k$=0 LQC. The difference is, of course, that in the effective dynamics every trajectory undergoes a bounce and reaches the maximum possible density, while in the quantum theory not every state is close to the critical density at the quantum bounce. It is easy to solve for the dynamics defined by the effective Hamiltonian. The equations of motion are found using the effective constraint: $\partial_t F =: \dot{F} = \{F,{\cal C}\}$, with $t$ the cosmic time.
The only equation of motion different from the classical one (on the constraint surface) is \nopagebreak[3]\begin{equation} \dot V=\frac{3}{\gamma\lambda}V\sin{(\lambda{\rm b})}\cos{(\lambda{\rm b})}\, , \end{equation} leading to the modified Friedman equation for the Hubble parameter \nopagebreak[3]\begin{equation} H^2:=\left(\frac{\dot{a}}{a}\right)^2=\left(\frac{\dot{V}}{3V}\right)^2=\frac{8\pi G}{3}\,\rho\, \left(1-\frac{\rho}{\rho_{\rm max}}\right)\, ,\label{eff-fried} \end{equation} where $\rho_{\rm max}=\frac{9}{2\alpha^2}\frac{1}{\lambda^2}$ is the scalar field density at the bounce. For every trajectory there are quantum turning points at ${\rm b} =\pm\frac{\pi}{2\lambda}$, where $\dot{V}=0$, corresponding to a bounce. Note that, at the bounce, $\ddot V\vert_{{\rm b} =\frac{\pi}{2\lambda}}=2\,\alpha^2 V \rho_{\rm max} >0$, so the bounce corresponds to a minimum of the volume. Also, note that the Hubble parameter is absolutely bounded, $|H|\leq 1/(2\lambda\gamma)$, indicating that the congruence of cosmological observers can never have caustics, independently of the matter content. In the case of effective theories the proper time appears as a natural choice for an evolution parameter, but one can always look for internal, relational notions of time. Since $\dot{{\rm b}}\le 0$, one can choose ${\rm b}$ as a relational time in the effective theories, and consider the evolution with respect to ${\rm b}$. The advantage of this choice is that no external time variable is needed. Every trajectory with ${\rm b} >0$ has a bounce at ${\rm b} =\frac{\pi}{2\lambda}$, and this value tends to infinity as $\lambda\to 0$. In the effective theories, we consider the interval ${\rm b}\in [-\frac{\pi}{2\lambda},\frac{\pi}{2\lambda})$. One should note that all functions and observables in $\bar{\Gamma}_\lambda$ are periodic in ${\rm b}$ with period $\pi/\lambda$. It is then completely equivalent to regard the coordinate ${\rm b}$ as compactified on a circle.
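The bounce encoded in these equations can be checked with a short numerical integration for a massless scalar field. The sketch below (with illustrative, non-physical values of $G$, $\gamma$, $\lambda$, $p_\phi$) assumes the standard bracket $\{{\rm b},V\}=4\pi G\gamma$, which gives $\dot{\rm b}=-4\pi G\gamma(\rho+P)=-8\pi G\gamma\rho$ for $P=\rho$; the variable ${\rm b}$ then runs monotonically from near $\pi/\lambda$ down through the bounce at $\pi/(2\lambda)$.

```python
import numpy as np

G, gamma, lam, p_phi = 1.0, 0.2375, 0.1, 1.0     # illustrative values
rho_max = 3.0/(8*np.pi*G*gamma**2*lam**2)

def rho_of_b(b):                                  # density from the constraint
    return rho_max*np.sin(lam*b)**2

def rhs(y):
    V, b = y
    dV = 3.0/(gamma*lam)*V*np.sin(lam*b)*np.cos(lam*b)
    db = -8*np.pi*G*gamma*rho_of_b(b)             # massless scalar: P = rho
    return np.array([dV, db])

# start on the contracting branch, b in (pi/(2 lam), pi/lam)
b = 0.9*np.pi/lam
V = p_phi/np.sqrt(2*rho_of_b(b))                  # constraint: rho = p_phi^2/(2 V^2)
y = np.array([V, b])
dt, Vs, rhos, Hs = 1e-5, [], [], []
for _ in range(200000):                           # fixed-step RK4 integration
    k1 = rhs(y); k2 = rhs(y + dt/2*k1); k3 = rhs(y + dt/2*k2); k4 = rhs(y + dt*k3)
    y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    V, b = y
    Vs.append(V); rhos.append(rho_of_b(b))
    Hs.append(np.sin(2*lam*b)/(2*gamma*lam))      # H = Vdot/(3V)
    if b < 0.1*np.pi/lam:                         # well past the bounce: stop
        break
Vs, rhos, Hs = map(np.array, (Vs, rhos, Hs))

assert abs(rhos.max()/rho_max - 1.0) < 1e-4       # the bounce reaches rho_max
assert abs(Vs.argmin() - rhos.argmax()) <= 1      # minimum volume at maximum density
assert np.abs(Hs).max() <= 1/(2*lam*gamma) + 1e-12  # absolute Hubble bound
```

The trajectory contracts, bounces at $\rho=\rho_{\rm max}$, and re-expands, with the Hubble rate respecting the absolute bound throughout.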
The solutions are defined for every ${t}$ and are given by \cite{cv1}, \nopagebreak[3]\begin{equation} \cot{\lambda{\rm b}}=\frac{3{t}}{\gamma\lambda}\, ,\ \ \ V_{\lambda}({t} )=\frac{\alpha}{3}\,p_{\phi}\,\sqrt{\gamma^2\lambda^2+9{t}^2}\, , \end{equation} and \nopagebreak[3]\begin{equation} \phi_{\lambda} ({t} )=\phi_0 +\lambda\varphi +\frac{1}{\alpha}\ln{\frac{3{t}+\sqrt{\gamma^2\lambda^2+9{t}^2}} {3{t}_0+\sqrt{\gamma^2\lambda^2+9{t}_0^2}}}\, , \end{equation} so that $\phi_{\lambda}({t}_0)=\phi_0 +\lambda\varphi$ and the initial condition approaches the classical one (for ${t}={t}_0$) as $\lambda\to 0$. Note that $\phi_{\lambda}(0)\to\frac{{\rm sgn}\,{\rm b}}{\alpha}\ln{\lambda}$ as $\lambda\to 0$. Let us now consider an intrinsic description of the dynamics in terms of the scalar field. One can solve for $V$ as a function of $\phi$, \nopagebreak[3]\begin{equation} V_{\lambda}(\phi )=V_+e^{\alpha ({\rm sgn}{\rm b} ) (\phi -\phi ({t}_0))} +V_-e^{-\alpha ({\rm sgn}{\rm b} )(\phi -\phi ({t}_0))}\, , \end{equation} where $V_+=\frac{1}{2}(V_0+\sqrt{V_0^2-\beta^2})$ and $V_-=\frac{\beta^2}{4}(V_+)^{-1}$, with $V_0=V(\phi ({t}_0))$ and $\beta =\frac{1}{3}\gamma\lambda\alpha p_\phi$. Note that the effective theory recovers the quantum dynamics of $\langle\hat{V}\rangle|_\phi$ {\it exactly}, for all states of the physical Hilbert space. That is, there are no further quantum corrections to the dynamics of $V_\lambda(\phi)$. With this, one can see that the effective theory defines an effective homogeneous and isotropic spacetime metric, which takes the form, \nopagebreak[3]\begin{equation} ({\rm d} s^2)_{\rm eff} = - {\rm d}{t}^2 + a^2_{\rm eff}({t})\; {\rm d} {\bf x}^2 \end{equation} with $a_{\rm eff}({t})= \left(\frac{\alpha}{3}\right)^{\frac{1}{3}}\,p_{\phi}^{\frac{1}{3}}\, (\gamma^2\lambda^2+9\,{t}^2)^{\frac{1}{6}}$. It is immediate to see that in the $\lambda\to 0$ limit, one recovers the classical spacetime metric satisfying the Einstein equations.
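That $V_\lambda(t)$ solves the effective equation of motion can be checked symbolically: on the expanding branch $t>0$, the relation $\cot(\lambda{\rm b})=3t/(\gamma\lambda)$ gives $\sin(\lambda{\rm b})=\gamma\lambda/\sqrt{\gamma^2\lambda^2+9t^2}$ and $\cos(\lambda{\rm b})=3t/\sqrt{\gamma^2\lambda^2+9t^2}$, and substituting these into $\dot V=\frac{3}{\gamma\lambda}V\sin(\lambda{\rm b})\cos(\lambda{\rm b})$ reproduces $\dot V_\lambda(t)$. A minimal sympy sketch:

```python
import sympy as sp

t, lam, gam, alpha, p_phi = sp.symbols('t lambda gamma alpha p_phi', positive=True)
S = sp.sqrt(gam**2*lam**2 + 9*t**2)

# cot(lam b) = 3 t / (gamma lam) on the expanding branch t > 0:
sin_lb = gam*lam/S
cos_lb = 3*t/S

V = alpha*p_phi*S/3                           # analytic solution V_lambda(t)
lhs = sp.diff(V, t)                           # dV/dt
rhs = 3/(gam*lam)*V*sin_lb*cos_lb             # effective equation of motion for V
assert sp.simplify(lhs - rhs) == 0            # the solution satisfies it identically
```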
As we have seen, the quantum corrections captured by the effective Hamiltonian modify the Friedman equation in a non-trivial way, ensuring that quantum effects become important near the Planck scale, in such a way that a repulsive force is capable of stopping the collapsing universe and turning it around into an expanding phase. Let us explore a bit further how this quantum repulsive force can be seen. First, a modified Raychaudhuri equation can be written \cite{ps}, \nopagebreak[3]\begin{equation} \frac{\ddot{a}}{a}=-\frac{4\pi G}{3}\,\rho\left(1-4\, \frac{\rho}{\rho_{\rm max}}\right) -4\pi G\,P\left( 1-2\, \frac{\rho}{\rho_{\rm max}}\right)\, . \end{equation} It is also illustrative to write an equation for the rate of change of the Hubble parameter \cite{cs3}, \nopagebreak[3]\begin{equation} \dot{H}= -4\pi G (\rho + P)\left( 1-2\, \frac{\rho}{\rho_{\rm max}} \right)\, .\label{dotH} \end{equation} These equations imply that the matter conservation equation \nopagebreak[3]\begin{equation} \dot{\rho} + 3H\, (\rho + P) = 0\, , \end{equation} has the same form as in the classical theory, even when both the Friedman and Raychaudhuri equations suffer loop quantum corrections. From Eq.~(\ref{dotH}) one sees that, for matter satisfying the weak energy condition (WEC), there is a super-inflationary phase, corresponding to $\dot{H}>0$, whenever the matter density satisfies $\rho>\rho_{\rm max}/2$. Note that in the $\lambda\to 0$ limit, we recover the corresponding classical equations. Another system of interest, for the remaining sections of this chapter, is a scalar field subject to a potential ${\rm V}(\phi)$. Even for the simplest potential ${\rm{V}}(\phi)=m^2\phi^2/2$ the classical dynamics is drastically modified; after the big bang there is a `slow roll' inflationary period. A pressing question is how this dynamics gets modified in the effective LQC scenario.
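Equation (\ref{dotH}) is not independent of the others: it follows from the effective Friedman equation (\ref{eff-fried}) together with the conservation equation, for any matter content. The one-line derivation can be verified symbolically:

```python
import sympy as sp

t = sp.symbols('t')
G, rho_max = sp.symbols('G rho_max', positive=True)
rho, P, H = (sp.Function(s)(t) for s in ('rho', 'P', 'H'))

friedmann_rhs = sp.Rational(8, 3)*sp.pi*G*rho*(1 - rho/rho_max)  # = H^2
conservation = -3*H*(rho + P)                                    # = rho'

# differentiate H^2 = (8 pi G / 3) rho (1 - rho/rho_max) and eliminate rho'
Hdot = sp.solve(sp.Eq(sp.diff(H**2, t), sp.diff(friedmann_rhs, t)),
                H.diff(t))[0]
Hdot = Hdot.subs(rho.diff(t), conservation)

target = -4*sp.pi*G*(rho + P)*(1 - 2*rho/rho_max)
assert sp.simplify(Hdot - target) == 0      # recovers Eq. (dotH)
```

The same substitution in the $\lambda\to 0$ limit ($\rho_{\rm max}\to\infty$) gives back the classical $\dot H = -4\pi G(\rho+P)$.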
We know that every trajectory follows the effective Friedman equation (\ref{eff-fried}) and has a bounce when $\rho=\rho_{\rm max}$, followed by a period of superinflation. How does that behavior affect the presumed inflationary period occurring at much smaller densities? First note that in that case the energy density has the form $\rho=\dot{\phi}^2/2 + m^2\phi^2/2$, so there is a convenient way of depicting the bounce as the curve, in the $(\phi,\,\dot{\phi})$ plane, satisfying $\rho_{\rm max}=\dot{\phi}^2/2 + m^2\phi^2/2$. The dynamics is therefore bounded by this ellipse. The equation satisfied by the scalar field has the same form as in the classical case: $\ddot{\phi} + 3H\dot{\phi} + {\rm{V}}_{,\phi}=0$. One can solve these equations numerically \cite{svv,ck-inflation} and find that, after the superinflationary phase, the dynamics follows very closely the GR dynamics and exhibits an `attractor' behavior as well. As we shall see in later sections, this feature of the dynamics is responsible for phenomenologically relevant inflation being generic. Let us end this part with some comments. i) This set of effective equations has the property that one recovers General Relativity in the small density `IR' limit, and that it is independent of the fiducial ${\cal V}$. These are non-trivial requirements that impose strong conditions on the particular form of the quantum constraint operator \cite{cs1}. ii) Inverse volume effects can introduce modifications to the effective equations that have various consequences, such as the loss of the universal conservation equation for matter, and extra superinflationary corrections. However, the physical validity of such inverse volume corrections in the $k$=0 case is seriously challenged. iii) It has been shown that, for generic matter content, the LQC effective equations imply that strong singularities are generically resolved \cite{ps}.
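The bounce-plus-superinflation behavior for the $m^2\phi^2/2$ potential can be reproduced with a short numerical integration of $\ddot\phi+3H\dot\phi+m^2\phi=0$ on the effective background. As in the massless case, the sketch below evolves the background through ${\rm b}$, assuming the bracket $\{{\rm b},V\}=4\pi G\gamma$ so that $\dot{\rm b}=-4\pi G\gamma(\rho+P)=-4\pi G\gamma\dot\phi^2$; all parameter values are illustrative.

```python
import numpy as np

G, gamma, lam, m = 1.0, 0.2375, 1.0, 0.1      # illustrative values
rho_max = 3.0/(8*np.pi*G*gamma**2*lam**2)

def rhs(y):
    b, phi, dphi = y
    H = np.sin(2*lam*b)/(2*gamma*lam)          # Hubble rate from b
    return np.array([-4*np.pi*G*gamma*dphi**2,  # b' = -4 pi G gamma (rho + P)
                     dphi,
                     -3*H*dphi - m**2*phi])     # scalar field equation

# initial data at the bounce: rho = rho_max, b = pi/(2 lam)
phi0 = 1.0
dphi0 = np.sqrt(2*rho_max - m**2*phi0**2)
y = np.array([np.pi/(2*lam), phi0, dphi0])
dt, rhos, Hs = 1e-4, [], []
for _ in range(50000):                          # fixed-step RK4
    k1 = rhs(y); k2 = rhs(y + dt/2*k1); k3 = rhs(y + dt/2*k2); k4 = rhs(y + dt*k3)
    y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    b, phi, dphi = y
    rhos.append(0.5*dphi**2 + 0.5*m**2*phi**2)
    Hs.append(np.sin(2*lam*b)/(2*gamma*lam))
rhos, Hs = np.array(rhos), np.array(Hs)

assert rhos.max() <= rho_max*(1 + 1e-8)        # the density bound holds
assert np.all(Hs >= -1e-12)                    # expanding after the bounce
# H grows (superinflation) until rho = rho_max/2, then decreases
assert abs(rhos[Hs.argmax()]/rho_max - 0.5) < 0.05
```

After the superinflationary burst the trajectory settles onto the slowly varying, GR-like regime described in the text.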
iv) A consistency check for the validity of the effective equations pertains to the behavior of appropriately defined semiclassical states. Such states have been constructed and the predictions of the effective theory put to the test \cite{cm1}. It was shown that both the density at the bounce and the minimum value of the volume are very well described by the effective theory. \subsubsection{$k$=1 FLRW} \label{sec:2.c.2} Let us now turn to the isotropic closed FLRW model. As discussed before, there are two quantizations available for this model. Correspondingly, the effective equations will yield two inequivalent theories. For the first quantization, based on the curvature as defined by closed holonomies, and neglecting the so-called inverse triad corrections, one arrives at the following form of the effective Hamiltonian constraint, \nopagebreak[3]\begin{equation} \mathcal{C}_{\textrm{eff}}=-\frac{3}{8\pi G\gamma^2\lambda^2}V\left[\sin^2(\lambda{\rm b} - D)-\sin^2D+(1+\gamma^2)D^2\right]+\rho V \end{equation} with $D:=\lambda\vartheta/V^{1/3}$. We can now compute the equations of motion from the effective Hamiltonian as, $$\dot{V}=\{V,\mathcal{C}_{\textrm{eff}}\}=\{V,{\rm b}\}\frac{\partial\mathcal{C}_{\textrm{eff}}}{\partial{\rm b}}=\frac{3}{\lambda\gamma}V\sin(\lambda{\rm b} - D)\cos(\lambda{\rm b} - D)\, . $$ From here, we can find the expansion as, \nopagebreak[3]\begin{equation} \theta=\frac{\dot{V}}{V}=\frac{3}{\lambda\gamma}\sin(\lambda{\rm b} - D)\cos(\lambda{\rm b} - D)=\frac{3}{2\lambda\gamma}\sin2(\lambda{\rm b}-D)\label{exp-1}\, . \end{equation} From the above equation we can see that the Hubble parameter is also absolutely bounded, by $|H|=|\theta|/3\leq 1/(2\lambda\gamma)$.
We can now obtain the modified {\it effective Friedman equation} by computing $H^2=\frac{\theta^2}{9}$, \nopagebreak[3]\begin{equation} \begin{split} H^2 & = \frac{1}{\lambda^2\gamma^2}\left(\frac{8\pi G\gamma^2\lambda^2}{3}\rho+\sin^2D-(1+\gamma^2)D^2\right) \left(1-\frac{8\pi G\gamma^2\lambda^2}{3}\rho-\sin^2D+(1+\gamma^2)D^2\right)\\ &=\frac{8\pi G}{3}(\rho-\rho_1)\left(1-\frac{\rho-\rho_1}{\rho_{\rm max}}\right) \end{split}\label{eff-frid-1} \end{equation} where $\rho_1=\rho_{\rm max} [(1+\gamma^2)D^2-\sin^2D]$ and $\rho_{\rm max}=3/(8\pi G\gamma^2\lambda^2)$ is the {\it critical density} of the $k=0$ FLRW model. Let us now consider the other quantization, based on defining the connection using holonomies along open paths. As mentioned before, this is the only available route for anisotropic cosmologies with intrinsic curvature (such as Bianchi II and IX). The effective Hamiltonian constraint one obtains from that quantum theory \cite{ck2}, when neglecting inverse scale factor effects (as was done in \cite{apsv} and \cite{ps-fv}), is \nopagebreak[3]\begin{equation} \mathcal{C}_{\textrm{eff}}=-\frac{3}{8\pi G\gamma^2\lambda^2}V\left[(\sin\lambda{\rm b} - D)^2+\gamma^2 D^2\right]+\rho V\, . \end{equation} It is then straightforward to compute the corresponding effective equations of motion. In particular, by computing $\dot{V}=\{V,\mathcal{C}_{\textrm{eff}}\}$, we can find the expression for the expansion as \nopagebreak[3]\begin{equation} \theta=\frac{3}{\lambda\gamma}\cos\lambda{\rm b} \left(\sin\lambda{\rm b} - D\right)\label{exp-2}\, . \end{equation} Note that in this case the expansion (and hence the Hubble parameter) is not absolutely bounded, due to the presence of the term linear in $D$. An important feature of these effective equations is that they describe with great accuracy the expectation value of the volume during the numerical evolution of semiclassical quantum states \cite{apsv}.
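The contrast between the two quantizations is easy to see numerically: the curvature based expansion (\ref{exp-1}) is bounded by $3/(2\lambda\gamma)$ for any value of $D$, while the connection based expression (\ref{exp-2}) grows without bound as $D$ increases. A sketch with arbitrary illustrative values, scanning over ${\rm b}$ at fixed $D$:

```python
import numpy as np

gamma, lam = 0.2375, 1.0                    # illustrative values
b = np.linspace(-4*np.pi, 4*np.pi, 400001)  # scan of the variable b

def theta_curv(b, D):                       # curvature based quantization, Eq. (exp-1)
    return 3.0/(2*lam*gamma)*np.sin(2*(lam*b - D))

def theta_conn(b, D):                       # connection (open holonomy) based, Eq. (exp-2)
    return 3.0/(lam*gamma)*np.cos(lam*b)*(np.sin(lam*b) - D)

bound = 3.0/(2*lam*gamma)
for D in (0.0, 0.3, 2.0, 10.0):
    assert np.abs(theta_curv(b, D)).max() <= bound + 1e-9   # always bounded
# the connection based expansion exceeds the bound for large D
assert np.abs(theta_conn(b, 10.0)).max() > bound
```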
It is also worth noticing that for large values of the recollapse volume, the effective and the classical equations coincide. In the case of the connection based quantization \cite{ck2}, there are two different bounces, which approach the unique bounce of the curvature based equations when the universe grows to be large \cite{ck2}. Let us now consider the effective equations for anisotropic models. \subsubsection{Anisotropic Models: Bianchi I, II and IX} \label{sec:2.c.3} Considering the effective description of anisotropic models is interesting in view of the BKL conjecture \cite{bkl1,ahs}, which states that, locally, generic spacetimes approaching the classical singularity behave as a combination of Bianchi cosmological models. The effective Hamiltonian constraint for Bianchi I and II can be written in a single expression \cite{awe2,awe3,CKM}, \begin{align*} \label{H-BII} \mathcal{C}_{\rm BII} & = \frac{p_1p_2p_3}{8\pi G\gamma^2\lambda^2} \left[\frac{}{}\sin\bar\mu_1c_1\sin\bar\mu_2c_2+\sin\bar\mu_2c_2 \sin\bar\mu_3c_3+\sin\bar\mu_3c_3\sin\bar\mu_1c_1\right] \nonumber\\ & \quad + \frac{1}{8\pi G\gamma^2} \Bigg[\frac{\alpha(p_2p_3)^{3/2}}{\lambda\sqrt{p_1}}\sin\bar\mu_1c_1 -(1+\gamma^2)\left(\frac{\varepsilon p_2p_3}{2p_1}\right)^2 \Bigg] - \frac{p_\phi^2}{2} \approx 0 \, \end{align*} where the parameter $\varepsilon$ allows us to distinguish between Bianchi I ($\varepsilon$= 0) and Bianchi II ($\varepsilon$= 1). This Hamiltonian, together with the Poisson brackets $\{c^i,p_j\}=8\pi G\gamma\delta_j^i$ and $\{\phi,p_\phi\}=1$, gives the effective equations of motion. In these effective Hamiltonians the lapse is chosen as $N=V$.
In Bianchi IX we choose $N$=1 in order to include more inverse triad corrections; the effective Hamiltonian is then given by \cite{CKM} \begin{equation*} \label{H-BIX} \begin{split} \mathcal{C}_{\rm BIX}=&-\frac{V^4A(V)h^6(V)}{8\pi GV_c^6\gamma^2\lambda^{2}}\bigg(\sin\bar\mu_1c_1\sin\bar\mu_2c_2+\sin\bar\mu_1c_1\sin\bar\mu_3c_3\\ &+\sin\bar\mu_2c_2\sin\bar\mu_3c_3\bigg) +\frac{\vartheta A(V)h^4(V)}{4\pi GV_c^4\gamma^2\lambda}\bigg(p_1^2p_2^2\sin\bar\mu_3c_3+p_2^2p_3^2\sin\bar\mu_1c_1\\ &+p_1^2p_3^2\sin\bar\mu_2c_2\bigg) -\frac{\vartheta^2(1+\gamma^2)A(V)h^4(V)}{8\pi GV_c^4\gamma^2}\bigg(2V\bigg[p_1^2+p_2^2+p_3^2\bigg]\\ &-\bigg[(p_1p_2)^{4}+(p_1p_3)^{4} +(p_2p_3)^{4}\bigg]\frac{h^6(V)}{V_c^6}\bigg) +\frac{h^6(V)V^2}{2V^6_c}p_\phi^2 \approx 0 \end{split} \end{equation*} Let us discuss the issue of singularity resolution when these equations are studied numerically. i) All solutions have a bounce. In other words, singularities are resolved. In the closed FRW and the Bianchi IX models, there is an infinite number of bounces and recollapses due to the compactness of the spatial manifold. ii) One can have a different kind of bounce, dominated by the shear $\sigma$, but only in Bianchi II and IX. In Bianchi I, the dynamical contribution from matter is always bigger than the one from the shear, even in the solution which reaches the maximal shear at the bounce \cite{ac-bianchi}. iii) In the flat isotropic model all the solutions to the effective equations have a maximal density equal to the critical density, and a maximal expansion ($\theta^2_{\rm max} = 6\pi G \rho_{\rm max}$, i.e. $\theta_{\rm max} = 3/(2\gamma\lambda)$) when $\rho=\rho_{\rm crit}/2$. For the $k=1$ FRW model, every solution has its own maximum density, but in general the density is not absolutely bounded. In the effective theory which comes from the connection based quantization, the expansion can tend to infinity; in the other case, the expansion has the same bound as in the flat FRW model.
However, by adding some more corrections coming from the inverse triad terms, one can show that in both effective theories the density and the expansion actually take finite values. iv) For Bianchi I, in all the solutions $\rho$ and $\theta$ are bounded above by their values in the isotropic case, and $\sigma$ is bounded by $\sigma^2_{\max} = 10.125/(3\gamma^2\lambda^2)$ \cite{ps-bianchi,singh-gupt}. For Bianchi II, $\theta, \sigma$ and $\rho$ are also bounded, but by larger values than the ones in Bianchi I, i.e., there are solutions where the matter density is larger than the critical density. For solutions with point-like and cigar-like classical singularities \cite{ac-bianchi}, the density can achieve the maximal value ($\rho \approx 0.54\rho_{\rm Pl} $) as a consequence of the shear being zero at the bounce and the curvature being different from zero. v) For Bianchi IX the behaviour is the same as in closed FRW: if the inverse triad corrections are not used, the geometric scalars are not absolutely bounded; but if the inverse triad corrections are used then, on each solution, the geometric scalars are bounded, although there is no absolute bound valid for all the solutions \cite{CKM,singh-gupt}. \section{Inhomogeneous perturbations in LQC} \label{sec:3} The theory of quantized fields in curved space-times has become an essential tool in modern early-universe cosmology. In that framework one studies the behavior of quantum fields propagating in space-times with generic Lorentzian geometries, as in General Relativity. One expects this theory to describe accurately physical processes in situations where we are confident about the validity of its building blocks: a description of matter fields in terms of quantum field theories, and a space-time geometry given by a smooth, classical space-time metric. These assumptions are reasonable, for instance, during the inflationary era, in which the energy density and curvature are believed to be more than ten orders of magnitude below the Planck scale.
However, earlier in the history of the universe, closer to the Planck era, quantum gravity effects become important and the description of the space-time geometry in terms of a smooth metric is expected to fail. To include physics in the Planck regime, QFT in curved backgrounds needs to be generalized to a QFT in {\em quantum} space-times. The singularity-free quantum geometry provided by LQC, summarized in the previous section, provides a suitable arena to formulate such a theory, and the quantization of scalar fields on those quantum cosmologies was introduced by Ashtekar, Kaminski and Lewandowski in \cite{akl}, and further developed in \cite{aan2,puchta, dapor1,dapor2}. Having in mind the most interesting application of this framework, we summarize here the construction of the QFT of scalar and tensor metric perturbations propagating in a quantum FLRW universe, i.e. the {\em quantum gravity theory of cosmological perturbations}. For more detailed information, see \cite{akl, aan2}. As mentioned in the introduction of this chapter, the construction will follow the guiding principle that has been useful in the quantization of the background: first carry out a truncation of the classical theory to select the sector of General Relativity of interest, and then move to the quantum theory by using LQG techniques. Starting from General Relativity with a scalar field as matter source, we will truncate the phase space to the sector containing cosmological backgrounds {\em plus} inhomogeneous, gauge invariant, first order perturbations, and then write down the dynamical equations on that classical, reduced phase space. The main approximation behind this truncation, and underlying the subsequent quantization, is that the back-reaction of the inhomogeneous perturbations on the homogeneous degrees of freedom is neglected. The second step is to move to the quantum theory. Physical states will depend on the background homogeneous degrees of freedom as well as on the inhomogeneous ones.
Our basic approximation, however, enables us to write these quantum states as a tensor product of the homogeneous part, which will evolve independently of perturbations, and first order inhomogeneities thereon. The homogeneous part will therefore be the same as the quantum geometries obtained in the previous section, in which the big bang singularity is replaced by a bounce. The surprising result appears in the evolution of perturbations. Without further approximation, the evolution of inhomogeneities on those quantum geometries turns out to be {\em mathematically equivalent} to the quantum theory of those fields propagating on a {\em smooth} background characterized by a metric tensor. The components of that smooth metric, however, do not satisfy the classical Einstein equations. They are obtained from expectation values of certain combinations of background operators, and incorporate {\em all} the information of the underlying quantum geometry that is `seen' by perturbations. The message is that the propagation of inhomogeneous perturbations is not sensitive to all the details of the quantum space-time, but only to certain aspects, which appear precisely in a way that allows one to encode them in a smooth background metric. This is an unforeseen simplification that facilitates enormously the treatment of field theoretical issues. The last step is to develop the necessary tools to check the self-consistency of this construction. It is necessary to show that, in the physical situations under consideration, the Hilbert space of physical interest contains a large enough subspace in which the back-reaction of perturbations on the background is indeed negligible, in such a way that our initial truncation is justified. This should be done by comparing the expectation value of the Hamiltonian and stress-energy tensor for perturbations with that of the background fields.
Those computations will require techniques of regularization and renormalization.\\ \subsection{The classical framework} \label{sec:3.a} The goal of this subsection is to summarize the construction of the truncated theory of classical FLRW space-times coupled to a scalar field, plus gauge invariant, linear perturbations on them, and to write down the equations describing their dynamics. The reader is referred to the extensive literature for more details (see, for instance, \cite{reportbrandenberger}). We adopt here the Hamiltonian framework which, as shown in \cite{langlois}, is particularly transparent for the task of finding gauge invariant variables. It will also provide the appropriate arena to pass to the quantum theory in the next section. For simplicity and for physical interest, we work here with a spatially flat FLRW universe. The procedure can be divided into three steps: 1) Starting from the full phase space, expand the configuration variables and their conjugate momenta in perturbations, and truncate the expansion at first order. Expand also the constraints of the theory (the scalar and vector constraints) and keep only terms containing zero and first order perturbations. 2) Use the constraints linear in first order perturbations to find gauge invariant variables. Those variables coordinatize the so-called truncated, reduced phase space. 3) Use the part of the constraints quadratic in zero and first order perturbations to write down the dynamics. See \cite{aan2} for further details and subtle points of this construction. \subsubsection{The truncated phase space} \label{sec:3.a.1} Let us consider General Relativity coupled to a scalar field on a space-time manifold $M=\Sigma\times \mathbb{R}$, with $\Sigma=\mathbb{R}^3$. Due to the infinite volume of $\Sigma$, spatial integrals of homogeneous quantities will introduce infrared divergences.
To be able to write meaningful mathematical expressions, it is convenient to introduce a fiducial cell ${\cal V}$ and restrict all integrals to it. ${\cal V}$ can be chosen to be arbitrarily large, or at least larger than the observable universe. At the quantum level this will be equivalent to restricting to ${\cal V}$ the support of the test functions in operator valued distributions. We will work with ADM variables for the gravitational sector, where the canonically conjugate pairs consist of a positive definite 3-metric on $\Sigma$, $q_{ab}$, and its conjugate momentum $p^{ab}$ (the same analysis can be done in connection variables, by including the corresponding Gauss constraint; see \cite{dt,pert_scalar,ghtw,joao2}, \cite{aan2}). The full phase space $\Gamma$ consists of quadruples $\{q_{ab}(\vec{x}),p^{ab}(\vec{x}),\Phi(\vec{x}),\Pi(\vec{x})\}\in\Gamma$, where $\Pi(\vec{x})$ is the conjugate momentum of the scalar field $\Phi(\vec{x})$. Because we are interested in expanding around $\Gamma_{\rm hom} \subset \Gamma$, the (FLRW) isotropic and homogeneous sector of $\Gamma$, it is convenient to introduce a fiducial flat metric $\mathring{q}_{ab}$, and use it to raise and lower indices. We will denote by $\vec{x}=(x_1,x_2,x_3)$ the Cartesian coordinates defined by $\mathring{q}_{ab}$ on ${\cal V}$, by $\mathring{V}$ the volume of ${\cal V}$ with respect to $\mathring{q}_{ab}$, which we take equal to one to simplify the notation, and by $\mathring{q}=1$ the determinant of $\mathring{q}_{ab}$. Consider now curves $\gamma[\epsilon]$ in $\Gamma$ which pass through $\Gamma_{\rm hom}$ at $\epsilon=0$.
Expanding the phase space variables around $\epsilon=0$, we have: \nopagebreak[3]\begin{eqnarray} \label{expan} q_{ab}[\epsilon](\vec{x}) &=& a^2 \mathring{q}_{ab} + \epsilon\, \delta q^{(1)}_{ab}(\vec{x}) + \ldots + \frac{\epsilon^n}{n!}\, \delta q^{(n)}_{ab}(\vec{x}) + \ldots \nonumber\\ p^{ab}[\epsilon](\vec{x}) &=& \, \frac{P_{a}}{6\, a} \mathring{q}^{ab} + \epsilon\, \delta p^{ab\, (1)}(\vec{x}) + \ldots \, \nonumber\\ \Phi[\epsilon](\vec{x}) &=& \phi + \epsilon\, \varphi^{(1)}(\vec{x}) + \ldots \, , \nonumber \\ \Pi[\epsilon](\vec{x}) &=& \, p_{(\phi)} + \epsilon\, {\pi}^{(1)}(\vec{x})+ \ldots \end{eqnarray} It is convenient to consider the first order perturbations $\delta q^{(1)}_{ab}(\vec{x}), \, \delta p^{ ab \, (1)}(\vec{x}), \varphi^{(1)}(\vec{x}), {\pi}^{(1)}(\vec{x})$ as \emph{purely inhomogeneous} functions of $\vec{x}$, in the sense that the integral of any of them on ${\cal V}$ is zero. By truncating the above expansions at first order we obtain the {\em truncated} phase space, made of four pairs of conjugate variables: $\Gamma_{\rm{Trun}}=\{(a,P_{a},\phi, p_{(\phi)},\delta q^{(1)}_{ab}, \, \delta p^{ ab \, (1)}, \varphi^{(1)}, {\pi}^{(1)})\}=\Gamma_{\rm hom}\times\Gamma_1$, where the only non-zero Poisson brackets between the basic variables are: \nopagebreak[3]\begin{eqnarray} \label{pbs}\{a,\, P_{a}\} = 1, &\quad& \{\delta q^{(1)}_{ab}(\vec{x}_1),\, \delta p^{ cd \, (1)}(\vec{x}_2)\} = \delta^c_{(a}\, \delta_{b)}^d\, \bar\delta(\vec{x}_1,\vec{x}_2),\nonumber\\ \{\phi,\, p_{(\phi)}\} = 1, &\quad& \{\varphi^{(1)}(\vec{x}_1),\, {\pi}^{(1)}(\vec{x}_2)\} = \bar\delta(\vec{x}_1, \vec{x}_2), \end{eqnarray} where $\bar\delta(\vec{x}_1,\vec{x}_2) = (\delta(\vec{x}_1,\vec{x}_2) - 1)$ is the Dirac delta distribution on the space of purely inhomogeneous fields. From now on we will work only with first order perturbations, so we will omit the superscript $(1)$ to simplify the notation.
Because of the homogeneity of the background it is convenient to Fourier transform the perturbation fields and carry out the standard scalar-vector-tensor decomposition, in which the 6 degrees of freedom of $\delta q_{ab}$ are decomposed into two scalar, two vector, and two tensor modes (see e.g.\ \cite{langlois}, \cite{aan2} for details). Because perturbations are inhomogeneous, the restriction to the fiducial cell $\cal{V}$ is not strictly necessary, and one can avoid the artificial quantization of $\vec{k}$ that it introduces. However, from the physical point of view one can absorb modes with wavelength larger than the observable universe in the background. Therefore, we will consider that the Fourier integrals incorporate an infrared cut-off $k_o$ provided by the size of the observable universe. \subsubsection{Constraints and reduced phase space} \label{sec:3.a.2} A similar expansion to (\ref{expan}) can be carried out for the constraints. In General Relativity the Hamiltonian is a sum of constraints, the familiar scalar $\mathbb{S}[N]$ and vector $\mathbb{V}[\vec{N}]$ constraints. If $\gamma[\epsilon]$ is now a curve that lies in the constraint hypersurface of $\Gamma$, and intersects $\Gamma_{\rm hom}$ at $\epsilon=0$, by referring to the constraints collectively as ${\cal C}(q^{ab},p_{ab}, \Phi,\Pi)$ (suppressing the smearing fields for simplicity), we expand around $\epsilon=0$ to obtain a hierarchy of equations: \nopagebreak[3]\begin{equation} {\cal C}^{(0)}:={\cal C}|_{\epsilon =0}= 0,\quad {\cal C}^{(1)}:=\frac{d {\cal C}}{d\epsilon}|_{\epsilon=0} = 0, \quad \ldots\quad {\cal C}^{(n)}:=\frac{d^n {\cal C}}{d\epsilon^n}|_{\epsilon=0} = 0,\quad \ldots \end{equation} \begin{itemize} \item The zeroth-order constraint, ${\cal C}^{(0)}= 0$, is just the restriction of the full constraint to the homogeneous subspace $\Gamma_{\rm hom}$.
The zeroth-order vector constraint is trivially satisfied because of the gauge fixing on the zeroth-order variables, introduced by the use of the fiducial metric $\mathring{q}_{ab}$ in (\ref{expan}). The zeroth-order scalar constraint, $ {\mathbb{S}}_{0}$, is quadratic in zeroth-order variables and can be interpreted as the generator of the background dynamics. This dynamics is exactly the same as that of the unperturbed theory. \item First order constraints are linear in first order variables. They generate gauge transformations in $\Gamma_{\rm Trun}$ and, as usual, tell us that some of our degrees of freedom are not physical. Initially we have $6\, (\times \infty)$ degrees of freedom in $\delta q_{ab}(\vec{x})$, plus 1 degree of freedom in the scalar field $\varphi(\vec{x})$, a total of 7. As mentioned above, $\delta q_{ab}(\vec{x})$ is conveniently decomposed in Fourier space into two scalar, two vector, and two tensor modes. We have the scalar and three vector constraints, a total of 4. Therefore, the number of physical degrees of freedom is $7-4 =3$. There is an elegant systematic procedure to construct gauge invariant variables out of those 3 degrees of freedom, and we refer the reader to \cite{langlois} for details. It can be summarized as follows. In FLRW backgrounds, scalar perturbations are affected by the scalar constraint and only one of the vector constraints; they reduce the three scalar degrees of freedom that we have initially, two from gravity and one from the matter sector, to only one physical scalar mode. Vector perturbations are affected by two of the vector constraints, which completely kill the vector modes. In other words, in the absence of matter with vector degrees of freedom, as in the case we are studying, there are no physical vector perturbations. Tensor modes are not affected by any of the constraints and therefore the two original tensor modes are the physical ones, i.e. they are gauge invariant.
In summary, after imposing the constraints we are left with one scalar degree of freedom, which we choose to be the familiar Mukhanov variable $\mathcal{Q}$, and two tensor modes $\mathcal{T}^{(1)}$ and $\mathcal{T}^{(2)}$. They are gauge invariant variables and, together with their conjugate momenta, form the {\em reduced}, truncated phase space of first-order perturbations, $\tilde{\Gamma}_{\rm Trun}$. Equations ${\cal C}^{(n)}=0$ with $n>1$ do not add further constraints on first order perturbations. \item The second-order constraints in the full phase space $\Gamma$ involve terms quadratic in first-order perturbations as well as terms linear in second-order perturbations. When a second order constraint ${\cal C}^{(2)}$ is restricted to the truncated phase space $\tilde\Gamma_{\rm Trun}$, the terms containing second order perturbations are disregarded, and the resulting combination of quadratic terms in first-order perturbations with coefficients containing background quantities, $\tilde {\cal C}^{(2)}$, {\em is no longer a constraint}. The truncated second-order scalar constraint $\tilde {\mathbb{S}}_{2}$ is interpreted as the Hamiltonian that generates the dynamics of gauge invariant first-order perturbations. It has the form $\tilde {\mathbb{S}}_{2}=\tilde {\mathbb{S}}^{(\mathcal{Q})}_{2}+\tilde {\mathbb{S}}^{(\mathcal{T}^{(1)})}_{2}+\tilde {\mathbb{S}}^{(\mathcal{T}^{(2)})}_{2}$, which indicates that scalar and tensor modes evolve independently of each other, where \nopagebreak[3]\begin{equation} \label{pert-ham} \tilde{\mathbb{S}}_2^{(\mathcal{T})}[N]= \frac{N}{2 (2\pi)^3 }\,\, \int d^3 k \, \left( \frac{4 \kappa}{a^{3}}\, |\mathfrak{p}^{(\mathcal{T})}_{\vec{k}}|^2 + \frac{a\, k^2}{4 \kappa} |\mathcal{T}_{\vec{k}}|^2 \right)\, , \end{equation} with $\kappa=8\pi G$. The two tensor modes behave identically, and we have denoted them collectively by $\mathcal{T}$. For pedagogical reasons we only write down the expressions for tensor perturbations.
See \cite{aan2,aan3} for explicit expressions for scalar modes. In the above equations $\mathfrak{p}^{(\mathcal{T})}_{\vec{k}}$ are the conjugate momenta of $\mathcal{T}_{\vec{k}}$, with Poisson brackets $\{ \mathcal{T}_{\vec{k}},\mathfrak{p}^{(\mathcal{T})}_{-\vec{k}'} \}=(2\pi)^3\delta(\vec{k}-\vec{k}')$. Tensor perturbations, except for the constant factor $1/(2\sqrt{\kappa})$ that provides the appropriate dimensions, behave exactly as massless, free scalar fields (scalar perturbations $\mathcal{Q}_{\vec{k}}$ behave as a scalar field subject to a time dependent `emergent' potential). The (homogeneous) lapse function $N$ indicates the time coordinate one is using. For instance, $N=1$ corresponds to standard cosmic time $t$, $N=a$ to conformal time $\eta$, and $N=a^3 /p_{(\phi)}$ to choosing the scalar field $\phi$ as a time variable, which turns out to be the natural choice in the quantum theory. \end{itemize} To summarize, the phase space of physical interest is the reduced, truncated phase space $\tilde\Gamma_{\rm Trun}$ made of elements $ \{ (a,P_{a},\phi, p_{(\phi)}); (\mathcal{Q}_{\vec{k}},\mathfrak{p}^{(\mathcal{Q})}_{\vec{k}},\mathcal{T}^{(1)}_{\vec{k}}, \mathfrak{p}^{(\mathcal{T}^{(1)})}_{\vec{k}},\mathcal{T}^{(2)}_{\vec{k}}, \mathfrak{p}^{(\mathcal{T}^{(2)})}_{\vec{k}})\}\in \tilde\Gamma_{\rm Trun}$. The homogeneous degrees of freedom evolve with the zeroth-order Hamiltonian. This evolution takes place entirely in $\Gamma_{\rm hom}$, and is independent of perturbations, reflecting the main approximation of the truncated theory. The homogeneous dynamical trajectory can then be `lifted' to $\tilde\Gamma_{\rm Trun}$, providing a well-defined evolution of first-order perturbations on the homogeneous background. This evolution is specified by the Hamiltonian $\tilde {\mathbb{S}}_{2}$.
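As a cross-check of the classical dynamics, the Hamilton equations generated by (\ref{pert-ham}) for a single real tensor mode can be derived symbolically. The following sketch (an illustrative single-mode computation, not part of the original derivation) uses the lapse $N=a^3/p_{(\phi)}$ appropriate to the internal time $\phi$:

```python
import sympy as sp

# Symbols: background (a, p_phi) and one real tensor mode (T, p).
# This is a toy single-mode analogue of the Hamiltonian (pert-ham).
a, p_phi, k, kappa, T, p = sp.symbols('a p_phi k kappa T p', positive=True)

N = a**3 / p_phi                       # lapse for internal time phi
H = (N / 2) * (4*kappa/a**3 * p**2 + a*k**2/(4*kappa) * T**2)

# Hamilton's equations with respect to phi
dT = sp.simplify(sp.diff(H, p))        # 4*kappa*p/p_phi
dp = sp.simplify(-sp.diff(H, T))       # -a**4*k**2*T/(4*kappa*p_phi)
print(dT, dp)
```

The coefficients $4\kappa/p_{(\phi)}$ and $-k^2 a^4/(4\kappa\, p_{(\phi)})$ are exactly the ones that reappear, with background quantities promoted to operators, in the quantum evolution equations below.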
\subsection{Quantum theory of cosmological perturbations on a quantum FLRW \label{QFTQST}} \subsubsection{Quantization of $\tilde \Gamma_{\rm Trun}$} \label{3.b.1} In this section we pass to the quantum theory starting from the reduced, truncated phase space $\tilde\Gamma_{\rm Trun}$. The structure of the classical phase space $\tilde\Gamma_{\rm Trun}=\Gamma_{\rm hom}\times \tilde \Gamma_{1}$ suggests that in the quantum theory the total wave function $\Psi$ has the form \nopagebreak[3]\begin{equation} \label{tenpro} \Psi(a,\mathcal{T}_{\vec{k}}, \phi)=\Psi_0(a,\phi)\otimes \psi(\mathcal{T}_{\vec{k}}, \phi) \, . \end{equation} This product structure is maintained as long as the test field approximation holds. Because back-reaction is neglected, the background part $\Psi_0$ evolves independently of perturbations, and the solutions for $\Psi_0$ are the ones obtained in section \ref{sec:2}. When written in terms of the relational time $\phi$, they satisfy the equation $\hat{p}_{(\phi)} \Psi_0 \equiv -i\hbar\, \partial_\phi \Psi_0 = \hat{H}_0\Psi_0$, where the operator $\hat{H}_0\equiv \sqrt{\Theta}$ is obtained from expressions (\ref{hc5}) and (\ref{schr-eq}). The remaining task is to `lift' this trajectory to the full Hilbert space, by writing down the quantum theory for $\psi$ propagating on the quantum geometry specified by $\Psi_0$. The evolution of $\psi$ will be specified by the operator analogue of $\tilde {\mathbb{S}}_{2}^{(\mathcal{T})}$, which generates the dynamics in the classical phase space. In the classical theory $\tilde {\mathbb{S}}_{2}^{(\mathcal{T})}$ depends on the inhomogeneous degrees of freedom, but also on the homogeneous ones via the scale factor $a$. Therefore, in the quantum theory the corresponding operator will act on perturbations $\psi$ as well as on $\Psi_0$.
Our goal is to generalize the theory of QFT in curved space-times, in which, on the one hand, quantum fields propagate in an {\em evolving} classical FLRW geometry specified by $a_{\rm cl}(\eta)$ and, on the other hand, perturbations are commonly quantized using the Heisenberg picture. Therefore, to facilitate the comparison, we pass in this section to the Heisenberg picture. In obtaining the evolution equations for the operator $\hat\mathcal{T}_{\vec{k}}$ and its conjugate momentum we will use $\phi$ as internal time, because it is the evolution variable that appears naturally in the quantum theory, while standard cosmic or conformal time are represented by operators. Using the internal time $\phi$ corresponds to the lapse function $N =a^3 /p_{(\phi)}$ in the expression (\ref{pert-ham}). By choosing an appropriate factor ordering to convert it to an operator, we have (as is common in quantum theory, we are not free of factor-ordering ambiguities) \nopagebreak[3]\begin{eqnarray} \label{eqmotop} \partial_{\phi} \hat\mathcal{T}_{\vec{k}}(\phi) =\frac{i}{\hbar}[\hat\mathcal{T}_{\vec{k}}, \hat{\tilde{\mathbb{S}}}_{2}^{(\mathcal{T})}]&=& \, 4 \kappa \, (\hat{p}_{(\phi)}^{-1}\otimes \hat{\mathfrak{p}}^{(\mathcal{T})}_{\vec{k}} ) \, ; \nonumber \\ \partial_{\phi} \hat\mathfrak{p}^{(\mathcal{T})}_{\vec{k}}(\phi) =\frac{i}{\hbar} [ \hat \mathfrak{p}^{(\mathcal{T})}_{\vec{k}},\hat{\tilde{\mathbb{S}}}_{2}^{(\mathcal{T})}]&=&- \, \frac{ k^2}{4 \kappa} \, ( \hat{p}_{(\phi)}^{-1/2} \,\hat a^4(\phi) \, \hat{p}_{(\phi)}^{-1/2} \otimes \hat \mathcal{T}_{\vec{k}} ) \, .\end{eqnarray} These equations involve background operators as well as perturbations. However, the test field approximation allows us to `trace over' the background degrees of freedom.
This can be done by taking expectation values with respect to the background wave function $\Psi_0$ (in the Heisenberg picture) obtained in the previous section \nopagebreak[3]\begin{eqnarray} \label{eqmot} \partial_{\phi} \hat\mathcal{T}_{\vec{k}}&=& \, 4 \kappa \, \langle \hat H_0^{-1}\rangle \, \hat\mathfrak{p}^{(\mathcal{T})}_{\vec{k}} \, , \nonumber \\ \partial_{\phi} \hat\mathfrak{p}^{(\mathcal{T})}_{\vec{k}} &=& - \, \frac{ k^2}{4 \kappa} \, \langle \hat H_0^{-1/2} \, \hat a^4(\phi)\, \hat H_0^{-1/2} \rangle \, \hat \mathcal{T}_{\vec{k}} \, , \end{eqnarray} where background operators have been replaced by expectation values and, additionally, we have used the evolution equation $\hat{p}_{(\phi)} \Psi_0 = \hat{H}_0\Psi_0$. The test field approximation ensures that we are not losing any information when passing from (\ref{eqmotop}) to (\ref{eqmot}). These are the Heisenberg equations for perturbations, in which the coefficients are given by {\em expectation values of background operators in the quantum geometry specified by} $\Psi_0$. This is a quantum field theory of cosmological perturbations on a {\em quantum} FLRW universe. Note that the above equations are exact; no further approximation has been made beyond the test field approximation. In this theory, space-time geometry is not described by a unique classical metric; rather, it is characterized by a probability distribution $\Psi_0$ that contains the unavoidable quantum fluctuations. The propagation of perturbations is sensitive to those fluctuations, and not only to the mean effective trajectory $\langle \hat a \rangle$. However, it is remarkable that those effects can be encoded in a couple of expectation values of background operators: $\langle \hat{H}_0^{-1}\rangle$ and $\langle \hat{H}_0^{-\frac{1}{2}}\,\hat{a}^4(\phi)\, \hat{H}_0^{-\frac{1}{2}}\rangle$ \cite{akl, aan2}.
Borrowing the analogy from \cite{aan2}, this is similar to what happens in the propagation of light in a medium: the electromagnetic waves interact in a complex way with the atoms in the medium, but the net effect of those interactions can be codified in a few parameters, such as the refractive index. Similarly, although the final equations (\ref{eqmot}) depend in a simple way on the quantum geometry, it would have been very difficult to guess the precise `moments' of the quantum geometry that are involved in the evolution of perturbations. We can now compare the above evolution equations with the familiar quantum field theory of cosmological perturbations on classical FLRW geometries, in which the Heisenberg equations, when $\phi$ is used as time, are written in terms of the classical background quantities $a(\phi)$ and $p_{(\phi)}$ as \nopagebreak[3]\begin{equation} \label{claseqmot} \partial_{\phi} \hat\mathcal{T}_{\vec{k}} = \, \frac{4 \kappa}{p_{(\phi)}} \, \hat\mathfrak{p}^{(\mathcal{T})}_{\vec{k}} \, ; \quad \quad \partial_{\phi} \hat\mathfrak{p}^{(\mathcal{T})}_{\vec{k}} =- \, \frac{k^2}{4 \kappa} \, \frac{a(\phi)^4}{ p_{(\phi)}} \, \hat \mathcal{T}_{\vec{k}} \, .\end{equation} Comparing with (\ref{eqmot}) we see that the QFT in a quantum background $\Psi_0$ is {\em indistinguishable} from a QFT on a {\em smooth FLRW metric } \nopagebreak[3]\begin{equation} \tilde{g}_{ab}\, {\rm d} x^a {\rm d} x^b \equiv {\rm d}\tilde{s}^2 = - (\tilde{p}_{(\phi)})^{-2}\, \tilde{a}^6(\phi)\, {\rm d}\phi^2 + \tilde{a}(\phi)^2\, {\rm d} \vec{\mathrm{x}}^2 \end{equation} where \nopagebreak[3]\begin{equation} (\tilde{p}_{(\phi)})^{-1} = \langle \hat{H}_0^{-1}\rangle \quad\quad {\rm and} \quad\quad \tilde{a}^4 = \frac{\langle \hat{H}_0^{-\frac{1}{2}}\, \hat{a}^4(\phi)\, \hat{H}_0^{-\frac{1}{2}}\rangle}{\langle \hat{H}_0^{-1}\rangle}\, .
\end{equation} In terms of the more familiar conformal time used in cosmology, we have ${\rm d}\tilde{s}^2 = \tilde{a}^2(\tilde\eta)\, (-{\rm d}\tilde\eta^2 + \, {\rm d}\vec{x}^2)$, with ${\rm d}\tilde{\eta} = \tilde{a}^2(\phi)\, \tilde{p}_{(\phi)}^{-1}\, {\rm d}\phi$. This smooth metric captures all the information of the quantum geometry that is `seen' by perturbations. Note that its components contain $\hbar$ and that it does not satisfy the Einstein equations, not even the LQC effective equations. In terms of this smooth metric, we can write the Heisenberg equations (\ref{eqmot}) as a second order differential equation \nopagebreak[3]\begin{equation} \label{Teqn} \hat{\mathcal{T}}_{\vec{k}}^{\prime\prime} + 2 \frac{\tilde{a}^\prime}{\tilde{a}}\, \hat{\mathcal{T}}_{\vec{k}}^\prime + k^2 \hat{\mathcal{T}}_{\vec{k}} = 0 \, ,\end{equation} where the prime now denotes derivative with respect to $\tilde\eta$. This equation is mathematically equivalent to the familiar formulation of QFT in classical FLRW space-times, where all the effects of the quantum background geometry have been encoded in a {\em dressed, smooth metric tensor} $\tilde g_{ab}$. This unexpected mathematical analogy greatly simplifies the analysis, not only conceptually but also at the technical level. It allows one to extend well-established techniques from classical space-times to define the physical Hilbert space and the appropriate regularization and renormalization of composite operators on it (see \cite{aan2,aan3} for details of that construction). These are the necessary tools to make sense of the momentum integrals appearing in, e.g., the Hamiltonian $\hat{\tilde{\mathbb{S}}}_2$, which so far were formal, and to regularize the expectation value of the energy-momentum tensor in the physical Hilbert space.
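To see concretely that the dressed quantities are genuine `moments' of the quantum geometry rather than functions of $\langle \hat a\rangle$ alone, one can evaluate them on a caricature of a background state. The sketch below assumes a toy two-branch superposition with made-up weights, scale factors, and $\hat H_0$ values; it is only meant to illustrate that in general $\tilde a \neq \langle \hat a \rangle$:

```python
import numpy as np

# Toy background state: two semiclassical branches with weights w_i,
# scale factors a_i and H_0-"eigenvalues" h_i. All numbers are
# illustrative assumptions, not LQC results.
w = np.array([0.5, 0.5])
a = np.array([1.0, 2.0])
h = np.array([3.0, 5.0])

p_phi_tilde_inv = np.sum(w / h)                        # <H_0^{-1}>
a_tilde = (np.sum(w * a**4 / h) / p_phi_tilde_inv)**0.25
a_mean = np.sum(w * a)                                 # naive <a>

print(a_tilde, a_mean)   # the dressed scale factor differs from <a>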
\subsubsection{The physical Hilbert space \label{hilbertspace}} In this subsection we briefly summarize how techniques of regularization and renormalization from linear QFT on classical space-times can be extended to characterize the physical Hilbert space of cosmological perturbations on quantum backgrounds, and to regularize composite operators on it. Among the existing methods of regularization we will work in the adiabatic approach \cite{parker66, parker-fulling74}, which is particularly convenient for explicit computations, including the numerical implementation required in the next section. The spatial homogeneity and isotropy of our FLRW background allows us to expand the field operator $\hat \mathcal{T}(\vec{x},\tilde\eta)$ in Fourier modes (a similar construction holds for scalar perturbations) \nopagebreak[3]\begin{equation} \label{fieldexp} \hat \mathcal{T}(\vec{x},\tilde\eta)=\frac{1}{(2\pi)^3} \int {\rm d}^3k \left( \hat A_{\vec{k}} \, e_k(\tilde\eta)+\hat A^{\dagger}_{\vec{k}} \, e^{\star}_k(\tilde\eta) \right) \, e^{i \vec{k}\vec{x}} \, . \end{equation} The field operator $\hat \mathcal{T}(\vec{x},\tilde\eta)$ satisfies the equation of motion (\ref{Teqn}) as long as the mode functions $e_k(\tilde\eta)$ are solutions of the wave equation \nopagebreak[3]\begin{equation} \label{we} e''_k(\tilde\eta)+2 \frac{\tilde a'}{\tilde a}\, e'_k(\tilde\eta)+k^2 \, e_k(\tilde\eta)=0 \, ,\end{equation} where the prime indicates derivative with respect to $\tilde \eta$. The solutions $e_k(\tilde\eta)$ can be understood as `generalized positive frequency modes', because they play the role of the standard positive frequency solutions $e^{-i k t}/\sqrt{2 k}$ in Minkowski space-time.
The canonical commutation relations for the field operator $\hat \mathcal{T}(\vec{x},\tilde\eta)$ and its conjugate momentum imply \nopagebreak[3]\begin{equation} [\hat A_{\vec{k}} ,\hat A^{\dagger}_{\vec{k}'}] = i \hbar \, (2\pi)^3 \, \delta(\vec{k}-\vec{k}') \, \langle e_k(\tilde\eta),e_{k'}(\tilde\eta)\rangle^{-1} \, ; \quad [\hat A_{\vec{k}} ,\hat A_{\vec{k}'}]=0 \, , \end{equation} where \nopagebreak[3]\begin{equation}\langle e_k(\tilde\eta),e_{k'}(\tilde\eta)\rangle := \frac{ \tilde a^2}{4 \kappa}(e_k(\tilde\eta)e'^{\star}_{k'}(\tilde\eta)-e'_k(\tilde\eta)e^{\star}_{k'}(\tilde\eta)) \, . \end{equation} Therefore, if we impose the normalization condition $\langle e_k(\tilde\eta),e_{k}(\tilde\eta)\rangle= i$, $\hat A_{\vec{k}}$ and $\hat A^{\dagger}_{\vec{k}}$ will satisfy the familiar commutation relations of creation and annihilation operators. Note that the scalar product $\langle e_k(\tilde\eta),e_{k'}(\tilde\eta)\rangle$ is constant in time if $e_k(\tilde\eta)$ and $e_{k'}(\tilde\eta)$ are solutions of (\ref{we}). The Hilbert space is then constructed as follows. The vacuum state $|0\rangle$ (associated with the set of generalized positive frequency modes $e_k$) is defined as the state annihilated by all $\hat A_{\vec{k}}$. The associated Fock space $\mathcal{H}_1$ arises from the repeated action of the creation operators $\hat A^{\dagger}_{\vec{k}}$ on the vacuum. It is important to notice that the vacuum state constructed in this way is {\em translationally and rotationally invariant}, as can be checked, e.g. by explicit construction of the two point function. It is clear from the construction that a different choice of the generalized positive frequency basis $e_k$ in (\ref{fieldexp}) provides different $\hat A_{\vec{k}}$ and $\hat A^{\dagger}_{\vec{k}}$ operators, and therefore a {\em different definition of the vacuum state}. None of those vacua is preferred as compared to the others.
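The constancy of the scalar product along solutions can also be checked numerically. The sketch below integrates a damped mode equation of the form (\ref{Teqn}) for an arbitrary toy profile of $\tilde a(\tilde\eta)$ (an assumption made purely for illustration; it is not the LQC dressed scale factor) and monitors $\langle e_k, e_k\rangle$:

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa = 8 * np.pi        # kappa = 8 pi G in Planck units (G = 1)
k = 5.0                  # co-moving wavenumber (toy value)

def a_tilde(eta):        # toy dressed scale factor, illustrative only
    return np.sqrt(1.0 + eta**2)

def a_prime(eta):
    return eta / np.sqrt(1.0 + eta**2)

def rhs(eta, y):         # complex ODE split into real and imaginary parts
    e, ep = y[0] + 1j*y[1], y[2] + 1j*y[3]
    epp = -2.0*(a_prime(eta)/a_tilde(eta))*ep - k**2*e
    return [ep.real, ep.imag, epp.real, epp.imag]

# WKB-type initial data at early times, normalized so that <e_k, e_k> = i
eta0 = -20.0
amp = np.sqrt(2*kappa/(a_tilde(eta0)**2 * k))
y0 = [amp, 0.0, 0.0, -k*amp]          # e = amp, e' = -i k amp
sol = solve_ivp(rhs, (eta0, 20.0), y0, rtol=1e-10, atol=1e-12,
                dense_output=True)

def norm(eta):           # Klein-Gordon scalar product <e_k, e_k>
    y = sol.sol(eta)
    e, ep = y[0] + 1j*y[1], y[2] + 1j*y[3]
    return (a_tilde(eta)**2/(4*kappa)) * (e*np.conj(ep) - ep*np.conj(e))

print(norm(-20.0), norm(0.0), norm(20.0))   # stays equal to i
```

The conservation holds to solver accuracy, in agreement with the analytical statement above.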
Moreover, different vacua may not even belong to the same Hilbert space, and the quantum theories constructed from each of them are in that case unitarily inequivalent. The existence of unitarily inequivalent quantizations is common in QFT in curved space-times (see e.g. \cite{waldbook}). In cosmological backgrounds, however, it is possible to add appropriate regularity conditions to the mode functions $e_k$ to select a preferred Hilbert space. The {\em adiabatic condition} \cite{parker66,parker69,parker-fulling74} in FLRW backgrounds imposes that, in the asymptotic limit in which the physical momentum $k/\tilde a$ is much larger than the energy scale provided by the space-time curvature, $E_R$, the modes $e_k$ must approach the Minkowski space-time positive frequency modes, $e^{-i k t}/\sqrt{2 k}$, {\em at an appropriate rate} (for a brief summary see, e.g. \cite{aan2}, and references cited there). The modes $e_k$ satisfying this condition are called modes of $N$th adiabatic order, and the associated vacuum an adiabatic vacuum of the same order, where the order is specified by the exact rate of approach to the Minkowskian solutions. Notice that the adiabatic condition does not single out a preferred vacuum, because there are many different families $e_k$ satisfying it to a given order (it imposes only an {\em asymptotic} restriction for large $(k/\tilde a)/E_R$). However, it is possible to show that if we restrict to adiabatic order $N\geq 2$, {\em all different vacua belong to the same Hilbert space} $\mathcal{H}_1$. (This is strictly true if we restrict our QFT to the compact fiducial cell $\cal V$. In the non-compact case one needs to be more precise about the sense in which the Hilbert space is unique \cite{waldbook}, because infra-red divergences appear.)
Additionally, if $N\geq 4$ there is a well-defined procedure to extract the physical, finite information from the formal expression of the operators of interest for us, the Hamiltonian and the stress-energy tensor, by subtracting ultra-violet divergences in a local and state-independent way, while respecting the covariance of the theory. The Hamiltonian operator generating time evolution (in conformal time), and the energy density $\hat \rho$, are related by \nopagebreak[3]\begin{equation} \hat{ \tilde {\mathbb{S}}}^{(\mathcal{T})}_{2,{\rm formal}}=\frac{1}{(2\pi)^3 }\,\, \int {\rm d}^3 k \, \left( \frac{2 \kappa}{\tilde{a}^{2}}\, |\hat \mathfrak{p}^{(\mathcal{T})}_{\vec{k}}|^2 + \frac{\tilde{a}^2 \, k^2}{8 \kappa} |\hat \mathcal{T}_{\vec{k}}|^2 \right)=\tilde a^4\int {\rm d}^3x \, \hat \rho^{(\mathcal{T})}_{\rm formal} \, . \end{equation} If $|0\rangle$ is a 4th-order adiabatic vacuum associated with a family of solutions $e_k(\tilde\eta)$, the renormalized expectation value of the energy density is given by \nopagebreak[3]\begin{equation} \langle 0|\hat \rho^{(\mathcal{T})}(\tilde\eta)| 0\rangle_{\rm ren} =\frac{\hbar}{8 \kappa \tilde{a}^2} \int \frac{{\rm d}^3k }{(2\pi)^3} \left[ |e'_k|^2+k^2 |e_k|^2 -\frac{4\kappa}{\tilde{a}^2} \, C^{(\mathcal{T})}(k,\tilde\eta)\right]\, ,\end{equation} with \nopagebreak[3]\begin{equation} C^{(\mathcal{T})}(k,\tilde\eta)=k+\frac{{\tilde{a}}'^2}{2 {\tilde{a}}^2 k}+\frac{4 {\tilde{a}}'^2 {\tilde{a}}''+\tilde{a} {\tilde{a}}''^2-2 \tilde{a} {\tilde{a}}'\, {\tilde{a}}^{'''}}{8 {\tilde{a}}^3 k^3} \, , \end{equation} where $C^{(\mathcal{T})}(k,\tilde\eta)$ are the subtraction terms provided by adiabatic regularization \cite{parker-fulling74}. The renormalized expression for the expectation value of the Hamiltonian operator is obtained from the previous three equations. The above subtractions make the expectation values of the Hamiltonian and energy density finite for {\em any state} in the Hilbert space of 4th-order adiabatic states, $\mathcal{H}_1$.
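A simple sanity check of the subtraction term: in a regime where $\tilde a$ is constant (Minkowski-like), the normalized modes $e_k=\sqrt{2\kappa/(\tilde a^2 k)}\, e^{-ik\tilde\eta}$ give $|e'_k|^2+k^2|e_k|^2 = 4\kappa k/\tilde a^2$, while $C^{(\mathcal{T})}$ reduces to its first term $k$, so the renormalized energy density vanishes mode by mode, as it should. A numerical sketch of this limit (the value of $\tilde a$ is arbitrary):

```python
import numpy as np

kappa = 8 * np.pi        # kappa = 8 pi G in Planck units (G = 1)
a = 1.7                  # constant dressed scale factor: Minkowski-like regime

rel_err = []
for k in [0.1, 1.0, 10.0, 1.0e3]:
    amp2 = 2*kappa / (a**2 * k)          # |e_k|^2 for the normalized mode
    bare = k**2 * amp2 + k**2 * amp2     # |e_k'|^2 + k^2 |e_k|^2
    C = k                                # adiabatic counterterm with a' = a'' = 0
    rel_err.append(abs(bare - 4*kappa*C/a**2) / bare)
print(rel_err)   # machine-precision zeros: the divergence cancels exactly
```

In a genuinely expanding $\tilde a(\tilde\eta)$ the cancellation is only asymptotic in $k$, which is precisely what makes the momentum integral convergent.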
Additionally, the procedure has the properties that any method of regularization/renormalization is expected to satisfy, enunciated in Wald's axioms \cite{waldbook}. Although strictly speaking the above expressions provide only quadratic forms in the Hilbert space, recent results indicate that they are expectation values of operator-valued distributions $\hat \rho^{(\mathcal{T})}$ and $\tilde {\mathbb{S}}^{(\mathcal{T})}_{2}$ in $\mathcal{H}_1$. In summary, our QFT in a quantum FLRW admits a straightforward extension of the adiabatic approach of linear QFT in classical backgrounds. The physical Hilbert space $\mathcal{H}_1$ is then singled out by restricting to 4th order adiabatic states. In addition, the adiabatic condition provides the necessary control on ultra-violet divergences that allows a systematic procedure to regularize the Hamiltonian and the stress-energy tensor on $\mathcal{H}_1$. This completes the formulation of the theory. \subsubsection{Criterion for self-consistency\label{selfconsistency}} The last step in the construction is to check whether the underlying approximation in our truncated theory, the test field approximation, is satisfied throughout the evolution. In our QFT in quantum space-times this question translates into checking whether the expectation value of the stress-energy tensor can be neglected when compared to the background one. However, in a homogeneous and isotropic background a sufficient condition for this to be satisfied is that the energy density in scalar and tensor perturbations, $\langle \hat \rho(\tilde\eta)\rangle$, be much smaller than the background energy density $\langle \rho_o \rangle $ {\em at any time} during the dynamical phase of interest \cite{aan2}. It is evident that one can always find states for perturbations for which that requirement is not satisfied.
Therefore, the relevant question is: is there a sufficiently large subspace of the physical Hilbert space for which the previous condition on the energy density is satisfied? If the answer is in the affirmative, then one has a self-consistent approach in which the test field approximation holds. This is a key question to ensure self-consistency, and it has to be answered when this framework is applied to a concrete physical problem, as we do in the next section. \subsection{Comments} The previous framework is well suited to address interesting conceptual questions arising in quantum gravity. For instance, when does standard QFT in curved space-times become a good approximation? Is it safe to use standard QFT during inflation? These questions can be answered straightforwardly because both theories have been written in the same form. From equation (\ref{Teqn}) it is clear that standard QFT is recovered in the regime in which the quantum aspects of the geometry can be neglected, and Section \ref{sec:2.c} provided the conditions under which this happens. When the background energy density $\langle \rho_o\rangle$ is below one thousandth of $\rho_{P\ell}$, quantum corrections become negligible and General Relativity becomes an excellent approximation. This is the regime in which standard QFT arises from the more fundamental framework presented in this section. Therefore, in the inflationary era, where $\langle \rho_o\rangle \lesssim 10^{-10} \rho_{P\ell}$, we expect the familiar QFT to be an excellent approximation. By construction, this framework encompasses the Planck regime and is suitable to discuss trans-Planckian issues and to distinguish real problems from apparent ones. In LQG there is a priori no impediment for trans-Planckian modes to exist. It may seem at first that the existence of a minimum area may preclude their existence, but quantum geometry is subtle and, for instance, there is no minimum value for volume or length.
In addition, if we pay attention to the construction of the background quantum theory, trans-Planckian quantities appear there without causing problems: the value of the momentum $p_{(\phi)}$ of the background scalar field $\phi$ is generally large in Planck units. However, the background energy density is {\em bounded above} by a fraction of the Planck energy density. Something similar happens in our quantum field theory. There, trans-Planckian modes are admitted {\em as long as the total energy density in perturbations remains small as compared to the background}. Keeping that energy density under control is the real trans-Planckian problem, which becomes a non-trivial issue in the deep Planck regime where the volume of the universe acquires its minimum value. \section{LQC extension of the inflationary scenario \label{sec:4}} The previous sections have summarized the physical ideas and mathematical tools necessary to undertake the quantization of the sector of General Relativity containing the symmetries of cosmological space-times, and the study of cosmic perturbations thereon. The goal of this section is to apply those techniques to extend the current picture of the evolution of our universe to include the Planck regime. The cosmological $\Lambda$CDM model with an early phase of inflation contains conceptual limitations that are dictated by the domain of applicability of the physical theories on which it is based: General Relativity and Quantum Field Theory. One needs a theory of quantum gravity to extend the model to include physics at the Planck era. Subsection \ref{sec:4.a} summarizes how, by introducing a scalar field with a suitable potential, LQC provides a space-time in which the big-bang singularity is resolved by the quantum effects of gravity, and in which an inflationary phase arises almost unavoidably at later times. In subsection \ref{sec:4.b} it is shown how the evolution of cosmological perturbations can be extended to include the pre-inflationary space-times provided by LQC.
In this sense the current scenario for the evolution of our universe and the genesis of cosmic inhomogeneities is extended all the way to the big bounce \cite{aan1}. This extension goes beyond the conceptual level, as there appears a narrow window in which the effects of Planck scale physics could be imprinted in the CMB and galaxy distributions, and concrete ideas connecting those effects with forthcoming observations have been proposed. \subsection{Inflation in LQC} \label{sec:4.a} As we have mentioned in previous sections, after the bounce there is a period of superinflation where $\dot{H}>0$ until the density reaches half its value at the bounce, after which one has $\dot{H}<0$. It was first hoped that this period of superinflation would be enough to account for the necessary number of {\it e-foldings} compatible with observations, but this period turns out to be too short when there is no potential for the scalar field. Thus, it is clear that one needs such a potential to compare the LQC predictions with the inflationary paradigm. The simplest case one can consider is a quadratic potential ${\rm{V}}(\phi)=(1/2)m^2\phi^2$, which has been extensively studied in the literature and is compatible with the 7-year WMAP observations \cite{wmap}. The existence of the bounce solves one of the conceptual challenges that the standard scenario, based on GR dynamics, poses. That is, in the GR dynamics there is always a past singularity, even in the presence of eternal inflation \cite{bgv}. The standard formalism is, therefore, conceptually incomplete. The question that we shall pose in this part is the following: Can we estimate how probable it is to have enough inflation for the cosmological background? Let us be more precise with the question. We know that every effective trajectory undergoes a bounce, and some of them will experience enough e-foldings and will be of phenomenological relevance.
Rather amazingly, WMAP has provided us with a small observational window for the scalar field at the onset of inflation \cite{wmap,as3}, written in terms of a reference time $t_{k_*}$ at which a reference mode $k_*$ used by WMAP exited the Hubble radius in the early universe. With a $4.5\%$ accuracy, the data are, in Planck units \cite{wmap,as3}: \[ \phi(t_{k_*})=\pm 3.15 \, ,\qquad \dot{\phi}(t_{k_*})=\mp 1.98 \times 10^{-7} \, , \qquad H(t_{k_*})=7.83\times 10^{-6} \, . \] We can now pose the question more precisely. Of all the solutions $\mathbb{S}$ to the effective equations in LQC, how many of them pass through the allowed interval? This poses yet another question. How are we going to `count' trajectories? Is there a canonical way of measuring them? A proposal to answer this question was put forward long ago \cite{ghs,hp}, based on the idea of using the Liouville measure on phase space, which is invariant under time evolution. The idea then is to compute the volume of $\mathbb{S}_{\rm wmap}$, those solutions that pass through the WMAP window, relative to the total volume of $\mathbb{S}$: \nopagebreak[3]\begin{equation} {\rm Prob} =\frac{{\rm Vol}(\mathbb{S}_{\rm wmap})}{{\rm Vol}(\mathbb{S})}\, .\label{prob-infla} \end{equation} In order to compute this probability, one has to be careful with the way one measures all possible trajectories (for a discussion see \cite{sw}). Let us begin with the kinematical phase space $(V,{\rm b};\phi,p_{(\phi)})$. The constraint surface $\bar{\Gamma}$ (as defined by the constraint ${\cal C}$) is three dimensional and can be given coordinates $(V,{\rm b},\phi)$. But on that surface the symplectic structure is degenerate and does not define a volume form. For that one has to go to the space of physical states, or {\it reduced} phase space $\hat{\Gamma}$, formed by the gauge orbits on the constraint surface.
An alternative is to perform a {\it gauge fixing} to select a two dimensional surface which is traversed only once by each gauge orbit. As we have seen in the previous section the evolution of the coordinate ${\rm b}$ is monotonic, so one can fix the gauge by selecting ${\rm b}={\rm b}_0$. With this choice, $\hat{\Gamma}$ has coordinates $(V,\phi)$. Now, the pullback $\hat{\Omega}$ of $\Omega$ to $\hat{\Gamma}$ defines the Liouville measure there. The problem is that, with respect to this measure, the volume of $\hat{\Gamma}$ is infinite! One has to define a procedure to `regularize' the integral to obtain finite results. The key observation is that, in the $k$=0 case we are considering, there is an extra gauge freedom that arises from the fact that the size of the fiducial cell ${\cal V}$ one starts with is arbitrary. This means that a rescaling of the cell ${\cal V}\to \ell^3{\cal V}$ should leave the physics invariant. This rescaling translates into a rescaling of the canonical variables as $V\to \ell^3 V$, $p_{(\phi)}\to \ell^3 p_{(\phi)}$, while ${\rm b}$ and $\phi$ remain invariant. However, this transformation on phase space does not leave the symplectic structure invariant, so it cannot be regarded as a canonical transformation. Still, one has to {\it gauge out} this symmetry in order to obtain truly physical quantities. The problem is that $\hat{\Omega}$ does not project down to the quotient. One possibility is to perform a further `gauge fixing' by selecting a cross section $\tilde{\mathbb{S}}$ of $\mathbb{S}$. For instance, one could choose a given value of the volume, $V=V_0$ (for our previous choice ${\rm b}={\rm b}_0$). One can then restrict the measure to the cross section $\tilde{\mathbb{S}}$ to obtain the measure ${\rm d}\tilde\mu$, which now depends only on $\phi$.
In effective LQC, one has \nopagebreak[3]\begin{equation} {\rm d}\tilde{\mu}=\left[\frac{3\pi}{\lambda^2}\,\sin^2(\lambda{\rm b}_0) - 8\pi^2\gamma^2{\rm{V}}(\phi)\right]^{1/2}\,{\rm d}\phi\, .\label{measure-mu} \end{equation} As expected, the GR measure is obtained by taking $\lambda\to 0$ \cite{ck-inflation}. Even though the construction involved the Liouville measure, which is invariant under Hamiltonian time evolution, the resulting measure ${\rm d}\tilde{\mu}$ on the space of physically distinguishable configurations $\tilde{\mathbb{S}}$ {\it depends on the choice of gauge fixing parameter} ${\rm b}_0$ in a non-trivial way \cite{ck-inflation,as3}. A choice of ${\rm b}_0$, in turn, fixes a value of the energy density $\rho=\rho_0$, which implies that the probability will depend on the energy density at which it is computed. In General Relativity there is no natural value of the density at which to compute the probability, other than the big bang itself. The problem is that the density is infinite there and the range of $\phi$ is unbounded, so the volume is also infinite. Another possibility would be to introduce a cut-off at, say, the Planck density \cite{klm}, but there is no reason to believe that GR is valid at that scale. In fact, one of the main lessons of loop quantum cosmology is that GR is not valid near the Planck scale (in energy density); the isotropic degrees of freedom are rather described by the effective LQC theory. In this description, there {\it is} a natural preferred density, which is precisely the density at the bounce, $\rho_{\rm max}$. Thus, in what follows we shall take the bounce as the natural point at which to compute the probability. The corresponding `gauge fixing' implementing this choice is ${\rm b}_0=\pi/(2\lambda)$.
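The classical limit quoted above can be verified symbolically: as $\lambda\to 0$ the factor $\sin^2(\lambda{\rm b}_0)/\lambda^2$ in (\ref{measure-mu}) tends to ${\rm b}_0^2$. A small sketch using a computer algebra system:

```python
import sympy as sp

lam, b0, gamma, V = sp.symbols('lambda b_0 gamma V', positive=True)

# Limit of the quantum-geometry factor in the measure density
factor = sp.limit(sp.sin(lam*b0)**2 / lam**2, lam, 0)   # b_0**2

# Assembling the lambda -> 0 (GR) form of d(mu)/d(phi)
mu_GR = sp.sqrt(3*sp.pi*factor - 8*sp.pi**2*gamma**2*V)
print(mu_GR)
```

This is only the formal $\lambda\to 0$ limit of the density of (\ref{measure-mu}); the physical statement that it reproduces the GR measure is the one established in \cite{ck-inflation}.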
Let us now rephrase the question that we initially posed at the beginning of this part: What is the number of solutions $\tilde{\mathbb{S}}_{\rm wmap}$ that pass through the observational WMAP window, relative to the total number of solutions $\tilde{\mathbb{S}}$ at the bounce? As explained before, the probability is computed using formula (\ref{prob-infla}), where the volume is now obtained by integrating a uniform distribution (as a function of $\phi$). The key to computing the probability is then a detailed knowledge of the global dynamics, for all possible values $\phi_B$ of the scalar field at the bounce. Extensive numerical evolutions have shown that almost all trajectories fall within the observational window. Only for the small window $-5.46< \phi_B < 0.934$, out of the total range $\phi_B\in [-7.44 \times 10^{5},7.44 \times 10^{5}]$, does the future dynamics lie {\it outside} the WMAP window \cite{as3}. The probability that the dynamics falls outside the observational window is therefore {\it less} than $3\times 10^{-6}$. To understand this, one can visualize the LQC dynamics as shown in Fig.~\ref{Fig:1}, where one considers a uniform distribution at the bounce and follows the dynamics. As can be easily seen, most trajectories funnel into a very small region that is precisely where the WMAP window is. Just before the onset of inflation the density is approximately $10^{-11}$ times smaller than the density at the bounce. At that density the allowed WMAP region is only $4\%$ of the total allowed range in $\phi$ \cite{as3}. Thus, as seen in the Figure, almost all of the trajectories starting at the bounce scale fall into a very small region at the onset of inflation \cite{ck-inflation}.
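As an order-of-magnitude check of the quoted bound, one can assume a strictly uniform distribution in $\phi_B$ over the full range (the actual computation uses the measure (\ref{measure-mu}) and the numerics of \cite{as3}); even this crude estimate lands at the same order of magnitude:

```python
# Back-of-the-envelope estimate assuming a uniform distribution in phi_B;
# the quoted bound < 3e-6 comes from the full measure and detailed numerics.
bad_lo, bad_hi = -5.46, 0.934        # window with insufficient inflation
total = 2 * 7.44e5                   # full range of phi_B at the bounce
p_fail = (bad_hi - bad_lo) / total
print(f"{p_fail:.2e}")               # a few times 1e-6
```

The smallness of this ratio is the quantitative content of the statement that sufficient inflation is generic in LQC.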
\begin{figure}[htb] \centerline{\includegraphics [scale=0.35]{Fig4.pdf}} \caption{In this figure we plot the exterior, maximal density surface $\rho_{\rm max}$ and a surface of constant density $\rho_{\textrm{onset}}\ll \rho_{\textrm{max}}$ (not drawn to scale, of course) on the $(\dot\phi,\phi)$ plane. Trajectories with a uniform distribution at the LQC bounce ellipsoid are plotted. Note that trajectories for which there is enough inflation get funnelled into a small region in the smaller $\rho_{\textrm{onset}}$ ellipse. Near this surface, the GR and LQC dynamics almost coincide.} \label{Fig:1} \end{figure} One should also note that this attractor feature of the global dynamics, together with the non-invariance of the measure ${\rm d}\tilde{\mu}$, explains why the probability is much smaller when computed in General Relativity at the onset of inflation \cite{gt,ck-inflation}. Let us summarize. In LQC it is natural to consider the bounce as the point at which to compute the probability of inflation. The global dynamics is such that most of the trajectories get funnelled into the small WMAP window at the onset of inflation, where the density is 11 orders of magnitude smaller than the density at the bounce. Thus, one can conclude that having enough inflation is generic in loop quantum cosmology for the homogeneous and isotropic background, when semiclassical states are considered. \subsection{Pre-inflationary evolution of cosmic perturbations} \label{sec:4.b} In this section we apply the quantum theory of cosmological perturbations on the quantum, pre-inflationary space-time to extend the study of cosmic inhomogeneities all the way back to the Planck era. In addition to the {\em conceptual} completion provided by the inclusion of Planck scale physics, the resulting framework opens an exciting avenue to extend observations into the Planck regime. Before entering into technical details, we summarize here the physical idea behind this possibility.
It has been known since the seminal work by Parker in the 1960's \cite{parker66,parker69} that a dynamical expansion of the universe is able to excite quanta, or `particles', of test fields out of an initial vacuum state. This phenomenon of particle creation is one of the main features of QFT in curved space-times, and plays a key role in black hole thermal radiance and in the generation of cosmic inhomogeneities during inflation. If $\vec{k}$ represents a co-moving Fourier mode of a test scalar field in FLRW, excitations in that mode may be created if the energy scale provided by the space-time scalar curvature is comparable to the physical wavelength $\lambda = 2\pi a/k$ at some time during the evolution. The amount of quanta created in each mode during a period of expansion depends on the details of the scale factor $a(t)$ as a function of time. Let us focus on the finite range of momenta that is accessible in cosmological observations. The previous argument tells us that, even if those modes are `born' in the ground state at the time of the bounce, particles may be created during the evolution. The resulting state, e.g. at the onset of inflation, would then depart from the vacuum state at that time as a consequence of the non-trivial evolution, and the spectrum of created particles will carry information about the pre-inflationary space-time geometry. Furthermore, it has been shown in the context of inflation that {\em the predictions for the CMB and the distribution of galaxies are sensitive to the details of the state describing perturbations at the onset of inflation} \cite{holman-tolley,agullo-parker,ganc,agullo-navarro-salas-parker}, and concrete observations have been proposed that could reveal information about that state \cite{halo-bias1,halo-bias2,halo-bias3}. In other words, those observations may reveal information about the propagation of perturbations {\em before} inflation, when quantum gravity corrections dominate.
In the inflationary scenario observable modes have wavelengths much smaller than the radius of curvature at the onset of inflation (in the cosmological argot, modes are deep inside the Hubble radius). The sometimes implicit assumption in inflationary physics is that, whatever happened before inflation, the wavelengths of interest were much smaller than the radius of curvature {\em at any time before inflation}. Under this assumption, the pre-inflationary dynamics of those modes is indistinguishable from evolution in Minkowski space-time, and the use of a vacuum state is justified. The relevant question is then: is this assumption accurate in the pre-inflationary background provided by LQC? More explicitly, consider modes with physical wavelength smaller than the radius of curvature at the beginning of inflation, and propagate them backward in time until the bounce. Do those wavelengths generically remain smaller than the radius of curvature of the dressed metric $\tilde g_{ab}$ during the entire pre-inflationary evolution? The detailed analysis of \cite{aan1,aan3} shows that the answer to this question is in the negative (see Fig.~\ref{Fig:2}). While short enough wavelengths (large enough momenta) always remain smaller than the curvature radius, there are modes which at some time during the evolution have physical size comparable to it. The evolution of those modes {\em is} sensitive to the space-time curvature, and the quantum state at the onset of inflation will depart from the vacuum. \begin{figure}[htb] \centerline{\includegraphics[width=10cm]{curvature_k.pdf}} \caption{This plot shows: i) the scalar curvature of the effective geometry (red solid line), ii) the physical momentum squared $(k/\tilde{a}(t))^2$, for $k=6$ (dotted black line) and $k=10$ (dashed black line), and iii) $(k_R/\tilde{a}(t))^2$, where $k_R$ is the co-moving scale associated with the maximum value of the curvature (dotted-dashed green line); all as functions of cosmic time $t$.
By convention, we choose the scale factor of the effective geometry to be one at the bounce, $\tilde a(0)=1$. Both axes are in Planck units. The curvature attains its maximum value at the bounce and decreases very fast after it. Modes with momentum $k$ larger than the scale of curvature at the bounce, $k > k_R$, have physical momentum larger than the curvature during the entire evolution (dashed black line). Those modes do not `feel' the curvature and evolve as if they were in Minkowski space-time. On the other hand, modes that at the bounce have physical momentum smaller than the curvature, $k<k_R$, quickly evolve to become of the same order as the curvature scale (black dotted line), and therefore their evolution will differ considerably from that in flat space. At later times those modes also become too energetic to feel the space-time curvature.} \label{Fig:2} \end{figure} Notice that in LQC the maximum value of the curvature takes place at the bounce time and this value is universal, fixed by the quantum geometry and independent of the form of the scalar field potential. If we call $k_R$ the co-moving scale associated with this maximum value of the curvature, we expect excitations with $k\lesssim k_R$ to be created during the evolution, concretely in the Planck regime near the bounce. On the other hand, for modes with $k\gg k_R$ the pre-inflationary dynamics has a negligible effect. From this qualitative discussion we may expect observable effects from Planck scale physics in the CMB and large scale structure if observations are accessible to modes $k$ around or smaller than the universal scale $k_R$ provided by LQC. In the remainder of this section we provide precise computations that support this qualitative physical picture. We start by specifying the initial conditions for both background and perturbations at the bounce.
We then evolve those perturbations until the end of slow-roll inflation, compute the resulting quantum state and the power spectrum for scalar and tensor perturbations, and study under what set of initial conditions quantum gravity corrections may be sizeable for observable modes. \subsubsection{Initial Conditions} \label{sec:4.b.1} In the standard inflationary paradigm one specifies `initial data' for the background and perturbations at the onset of slow-roll. From a fundamental point of view, it would be more satisfactory to impose initial conditions at the `beginning' rather than at an intermediate time in the evolution of the universe. In classical cosmology the `beginning' is the big bang singularity, and it is not possible to unambiguously define initial conditions at that time. In LQC the big bang is replaced by a quantum bounce where physical quantities do not blow up, providing a preferred time at which to specify initial data. In the test field approximation, the total wave function naturally decomposes as a product $\Psi=\Psi_0\otimes \psi$, and this form holds as long as the back-reaction of perturbations remains negligible. We therefore need to specify initial data for both $\Psi_0$ and $\psi$.\\ $\bullet$ {\em Background.} For computational purposes, it is convenient to make the following further simplification on the background dynamics. As described in section~\ref{sec:2.a}, the background wave function $\Psi_0$ can be chosen to be highly peaked along the entire evolution, including the deep Planck regime. The `peak' of that wave function describes an effective geometry characterized by the scale factor $\bar a(\phi)=\langle \hat a(\phi)\rangle$, which satisfies the effective equation (\ref{eff-fried}).
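The role played by the effective equation can be illustrated with a minimal numerical sketch. The snippet below integrates the standard effective LQC Friedmann equation $H^2=(8\pi/3)\,\rho\,(1-\rho/\rho_{\rm max})$ for a quadratic potential in Planck units; the mass $m$ and the initial value $\phi_B$ are illustrative choices, not values taken from the text:

```python
import numpy as np

# Forward-Euler integration of the effective LQC equations for a quadratic
# potential, in Planck units (G = 1):
#   H^2 = (8*pi/3) * rho * (1 - rho/rho_max),  rho = phidot^2/2 + V(phi)
#   phi'' + 3*H*phi' + m^2*phi = 0
# rho_max ~ 0.41 is the universal maximal density; m and phi_B are illustrative.
rho_max = 0.41
m = 1.2e-6
V = lambda phi: 0.5 * m**2 * phi**2

def evolve(phi_B, t_end, dt=0.01):
    # Start just past the bounce on the expanding branch (rho slightly below
    # rho_max), so the discretized equations do not sit at the H = 0 point.
    phi = phi_B
    phidot = np.sqrt(2.0 * (0.999 * rho_max - V(phi_B)))
    a, t = 1.0, 0.0          # convention: scale factor = 1 at the bounce
    while t < t_end:
        rho = 0.5 * phidot**2 + V(phi)
        H = np.sqrt(max(0.0, (8 * np.pi / 3) * rho * (1 - rho / rho_max)))
        a += a * H * dt
        phi += phidot * dt
        phidot -= (3 * H * phidot + m**2 * phi) * dt
        t += dt
    return a, 0.5 * phidot**2 + V(phi)

# The universe expands while the density drops many orders of magnitude,
# reproducing the kinetic-dominated post-bounce behavior described in the text.
a_f, rho_f = evolve(phi_B=1.2, t_end=50.0)
```

In this kinetic-dominated phase the numerical solution tracks the known closed form $\tilde a(t)\propto(1+\textrm{const}\,t^2)^{1/6}$, and the density redshifts away rapidly after the bounce.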
Because the dispersion of $\Psi_0$ remains very small during the evolution, it is convenient to ignore quantum fluctuations in our computations by making a `mean field' approximation, in which the expectation values of powers of background operators, such as $\hat a$ and $\hat H_o$, are replaced by the same powers of their expectation values. For instance, in the evolution of quantum inhomogeneities given by Eq. (\ref{Teqn}), this is equivalent to replacing $\tilde a \approx \bar a$. At the practical level this is an excellent approximation; e.g. numerical errors in simulations turn out to be larger than those introduced by the mean field approximation. In subsection \ref{sec:4.a} we described the effective pre-inflationary background arising in LQC for the representative example of a quadratic potential. In that effective geometry initial data is entirely specified by the value of the scalar field at the bounce, $\phi_B$, and, unless $\phi_B$ lies in a small region $R$ around $\phi_B=0$, the evolution generically finds an inflationary phase at late times compatible with WMAP observations \cite{wmap}. Therefore, we will choose $\Psi_0$ to be a state sharply peaked on an effective trajectory specified by a value of $\phi_B$ that lies outside the region $R$. The effect of choosing different values of $\phi_B$ can be understood using the effective equations (\ref{eff-fried}) together with numerical simulations. On the one hand, immediately after the bounce the background evolution is entirely dominated by quantum gravity effects, and it is largely insensitive to the concrete value of $\phi_B$. Except for very small momenta $k$, the time at which perturbations $\mathcal{Q}_k$ and $\mathcal{T}_k$ `feel' the space-time curvature is precisely just after the bounce (see Fig.~\ref{Fig:2}). Therefore, the features that those modes acquire during the evolution turn out to be quite insensitive to the value of $\phi_B$.
On the other hand, different values of $\phi_B$ do modify significantly the space-time geometry at later times. The larger $\phi_B$, the longer it takes to reach the end of slow-roll inflation, or, equivalently, the larger the amount of expansion of the universe between the bounce and the end of slow-roll. A larger amount of expansion implies that observable modes had larger physical momentum at the time of the bounce. Because by convention {\em we fix the scale factor at the bounce} $\bar a_B=1$ (rather than $\bar a_{\rm today}=1$), the effect of choosing different values of $\phi_B$ essentially translates into a change in the range of co-moving momenta $k$ relevant for observations, moving to larger $k$'s as $\phi_B$ increases. If $[k_{\rm{min}}$, $k_{\rm{max}}\approx 2000 k_{\rm{min}}]$ is the window covered by WMAP, we have, for instance, $k_{\rm{min}}\approx 2.8\times 10^{-3}$ for $\phi_B=1$, $k_{\rm{min}}\approx 0.14$ for $\phi_B=1.1$ and $k_{\rm{min}}\approx 8.2$ for $\phi_B=1.2$. The physical momentum $k/\bar a_{\rm today}$ of modes observed today is of course the same in all cases, but the convention $\bar a_B=1$ means that a different amount of expansion (i.e. a different $\phi_B$) translates into a different co-moving $k$ for those modes.\\ $\bullet$ {\em Perturbations.} As already occurs in classical space-times, quantum fields in quantum cosmological backgrounds do not admit a preferred state that we can call {\em the vacuum}. In backgrounds with a large enough number of isometries, e.g. Minkowski or de Sitter space-time, a preferred ground state can be singled out by imposing symmetry in combination with regularity conditions. In our quantum FLRW space-time we follow the same criteria, and look for quantum states $\psi$ invariant under the isometries of the background, spatial translations and rotations, with appropriate ultraviolet behavior.
In section \ref{hilbertspace} we summarized the construction of the Hilbert space $\mathcal{H}_1$ of 4th-order adiabatic states. In $\mathcal{H}_1$, the family of 4th-order adiabatic vacua is the preferred set of initial conditions selected by symmetry and regularity requirements. This is the set of initial data we choose for perturbations. As opposed to Poincar\'e or de Sitter invariance, symmetry under spatial translations and rotations is not restrictive enough to select a unique state, but it substantially narrows down the possibilities. The next subsection will summarize the time evolution of different choices of initial state within the family of 4th-order adiabatic vacua, and will show that quantities of interest, such as the power spectrum of observable modes, are all very similar. Physically, the choice of a 4th-order adiabatic vacuum at the time of the bounce corresponds to assuming `initial quantum homogeneity'. One is requiring that the portion of the universe corresponding to our observable patch at the time of the bounce is {\em as homogeneous as quantum mechanics allows}, i.e. only vacuum fluctuations of inhomogeneities are present. This is a strong assumption. The motivation comes from \cite{aan1,aan3}: \begin{itemize} \item In a universe containing a phase of inflation lasting at least 60 $e$-folds, the physical size of the observable universe was very small at the bounce time, $\lesssim 10 \ell_{\rm Pl}$, for the solutions of interest. \item The `quantum degeneracy force' responsible for the bounce has a diluting effect that may produce homogeneity at scales of the order of the Planck length at the bounce. This is the new ingredient that LQC provides at the time of the bounce to produce homogeneity at Planck scale distances. \item There is a precise sense in which the assumption of quantum homogeneity captures a quantum version of the Weyl curvature hypothesis \cite{penrose-weyl}.
\end{itemize} \subsubsection{Power Spectrum} \label{sec:4.b.2} Our task is to use the equations of the quantum theory summarized in section \ref{QFTQST} to compute the state of cosmic inhomogeneities at the end of the inflationary epoch, starting from the initial conditions specified above for background and perturbations at the time of the bounce. Due to computational limitations, it is convenient to restrict numerical simulations to backgrounds for which the bounce is kinetic energy dominated, where it has been shown that quantum fluctuations of $\Psi_0$ remain very small along the entire evolution. Several numerical simulations have been carried out for effective backgrounds with initial conditions $\phi_B\in(0.93,1.5)$, which turns out to be the most interesting range \cite{aan3}. It is not expected that new features appear for larger values of $\phi_B$, but computational limitations make it difficult to check this explicitly. For perturbations, simulations have been carried out using different choices of 4th-order adiabatic vacua, and the results are all very similar. Figs.~\ref{Fig:3} and \ref{Fig:4} are obtained by using the `obvious' or `standard' 4th-order vacuum at the bounce time $\tilde\eta_B$ (see \cite{aan2} for the precise definition), and they show the relevant information about the evolved state. \begin{figure}[htb] \centerline{\includegraphics[width=11cm]{particlenumber.pdf}} \caption{Number $n_k$ of scalar `excitations/particles' with comoving momentum $\vec{k}$ in the interval $[\vec{k},\vec{k}+d\vec{k}]$, per comoving unit volume, contained in the evolved state as compared to the BD vacuum during inflation. The plot is computed for $\phi_B=1.15$ and for the `obvious' 4th-order adiabatic vacuum at the bounce.
The horizontal axis is in Planck units.} \label{Fig:3} \end{figure} \begin{figure}[htb] \centerline{\includegraphics[width=10cm]{powerspectrum.pdf}} \caption{Ratio of the LQC power spectrum for scalar perturbations to the standard inflationary power spectrum. Crosses show the ratio for different values of $k$. The LQC power spectrum oscillates rapidly for small $k$. The solid curve averages over bins of width $\Delta k=0.5$. The inset shows a zoom-in of the interesting region around $k=9$.} \label{Fig:4} \end{figure} First of all, to gain intuition about the effect of the pre-inflationary evolution, we compare the evolved state with the natural vacuum during inflation, the so-called Bunch-Davies (BD) vacuum. Fig.~\ref{Fig:3} shows the number $n_k$ of `excitations/particles' with momentum $\vec{k}$ per comoving unit volume in space and momentum, contained in the evolved state relative to the BD vacuum during inflation. The plot is computed for $\phi_B=1.15$ but, as explained in subsection \ref{sec:4.b.1}, it is not altered by choosing a different value inside our family. Changing the value of $\phi_B$ has essentially the effect of shifting the location of the observationally relevant window $[k_{\rm min}, k_{\rm max}\approx 2000 k_{\rm min}]$ along the horizontal axis of the plot, which moves steadily to the right as $\phi_B$ increases. Fig.~\ref{Fig:3} is in good agreement with the qualitative arguments presented at the beginning of section \ref{sec:4.b}. Namely, the pre-inflationary evolution affects modes with low $k$, for which a considerable number of excitations has been `created'. On the contrary, modes with large $k$ remain in the ground state at the onset of inflation. As expected, for $k> k_R\approx 7.7$ (recall that $k_R$ is the comoving scale associated with the scalar curvature of the effective metric at the bounce), the number of BD particles contained in the evolved state is very close to zero.
Therefore, if $k_{\rm min} \gtrsim k_R$, which corresponds to $\phi_B\gtrsim 1.2$, the evolved state is indistinguishable from the BD vacuum for observable modes. For $\phi_B\lesssim 1.2$ the state at the onset of inflation differs significantly from the vacuum for modes in the interesting window and, as analyzed in detail in \cite{holman-tolley,agullo-parker,ganc,agullo-navarro-salas-parker}, those deviations have an important effect on the predictions of inflation for the spectrum of cosmic inhomogeneities, especially regarding non-Gaussianity. There exist concrete proposals for observables in the CMB \cite{halo-bias1,halo-bias3} and in the distribution of galaxies \cite{halo-bias1,halo-bias2} that should be sensitive to the effects of the created particles. A quantity of direct observational interest is the power spectrum of tensor and scalar perturbations \nopagebreak[3]\begin{equation} P_{\mathcal{T}}(k)=\hbar \frac{k^3}{2\pi^2} |e_k|^2 \, , \quad \quad P_s(k)=\hbar \frac{k^3}{2\pi^2} \left(\frac{\dot\phi}{H}\right)^2 |q_k|^2 \, , \end{equation} where all quantities are evaluated at the end of inflation, $H$ is the Hubble rate, and $q_k(t)$ and $e_k(t)$ are the Fourier modes of scalar and tensor perturbations, respectively. Fig.~\ref{Fig:4} shows the ratio of the LQC power spectrum computed with the evolved state to the standard inflationary power spectrum that assumes the BD vacuum, for scalar perturbations. The conclusions are similar to the ones obtained from Fig.~\ref{Fig:3}, namely for $\phi_B\gtrsim 1.2$ the power spectrum of observable modes is indistinguishable from the standard inflationary predictions. For smaller values of $\phi_B$ deviations become sizable for modes of observational interest. For instance, for $\phi_B=1.15$ we have $k_{\rm min}\approx 1$ and deviations from the standard prediction will appear for modes with $\ell\lesssim 30$ in the WMAP angular decomposition. These deviations are inside current uncertainties.
However, the fact that the state for perturbations differs from the BD vacuum opens a window to observe those effects. The analogous plot for tensor modes has the same form as Fig.~\ref{Fig:4}, and the conclusions are also the same \cite{aan3}. In particular, there are no important corrections to the tensor-to-scalar ratio, although the inflationary consistency relation, which relates the tensor-to-scalar ratio and the tensor spectral index, is modified \cite{aan3}. \subsubsection{Self-consistency} The last step is to check whether there exists a large enough set of physical states $\psi$ in the Hilbert space for which the truncation underlying our quantum theory, the test field approximation, holds during the entire evolution. This is an intricate question because: i) it requires detailed analytical control of the necessary regularization of states and composite operators on our Hilbert space; ii) numerical implementations of those techniques are necessary to check self-consistency {\em at any time during the evolution}, dealing with the subtleties of maintaining numerical control of the subtraction of quantities that tend rapidly to infinity, during a period that covers around $11$ orders of magnitude in energy density. Section \ref{QFTQST} summarized the necessary tools to check self-consistency and pointed out that a sufficient condition is that the energy density in perturbations $\langle \hat \rho \rangle$ be negligible compared to the background $\langle \hat \rho_o \rangle$ {\em at any time} during the evolution. Fig.~\ref{Fig:5} shows the result of the numerical evolution of the energy density for scalar perturbations (analogous results hold for tensor perturbations). The plot shows the ratio $\langle \hat \rho_{\mathcal{Q}} \rangle/\langle \hat \rho_o \rangle$ for a background corresponding to $\phi_B=1.23$ and the `obvious' 4th-order adiabatic vacuum specified at the bounce. This ratio remains small for the entire evolution, including the Planck regime.
The initial condition $\phi_B=1.23$ corresponds to $k_{\rm min}\approx 30$; therefore the number of excitations over the BD state in observable modes is negligible (see Fig.~\ref{Fig:3}) for this background. Additionally, there exists an analytical argument \cite{aan3} ensuring that, given a state for perturbations for which back-reaction is negligible, there exists a well-defined neighborhood of that state with the same property. Each of those states provides a state at the beginning of slow-roll indistinguishable from the BD vacuum. They therefore provide viable extensions of the standard inflationary scenario that include Planck scale physics \cite{aan1,aan3}. \begin{figure}[htb] \centerline{\includegraphics[width=10cm]{backreaction.pdf}} \caption{Ratio of the energy density of scalar perturbations to the background energy density as a function of cosmic time. The initial conditions were chosen as $\phi_B=1.23$ for the background, and the `obvious' 4th-order adiabatic vacuum at the bounce for perturbations. Slow-roll inflation starts about $3\times 10^5$ Planck seconds after the bounce. During the entire evolution the ratio remains small. This example constitutes a self-consistent extension of the evolution of cosmic inhomogeneities that includes the Planck era.} \label{Fig:5} \end{figure} For the range $\phi_B<1.2$ there are only upper bounds on $\langle \hat \rho_{\mathcal{Q}} \rangle$, which are far from optimal. At the present time there are no explicit computations for which the test field approximation is satisfied for $\phi_B$ in that window, and additional work is required to establish the self-consistency of our truncation scheme. \section{Conclusions} One of the most pressing sets of questions a quantum theory of gravity must face pertains to both theoretical and observational issues in cosmology. On the theoretical front, the standard model of cosmology is based on General Relativity and possesses an initial singularity, a signal that the theory breaks down at some point.
On the observational front, the CMB spectrum poses very stringent conditions on any theory of the early universe. One such scenario is given by the inflationary paradigm, which explains very successfully the detailed structure of the inhomogeneities seen in the CMB as an imprint of quantum fluctuations of certain fields just before the inflationary phase. Can one have a formalism that provides a satisfactory, nonsingular description both at the Planck scale and at the onset of inflation? Interestingly, loop quantum cosmology allows one to answer both questions in the affirmative. As we have described in this Chapter, when one considers the homogeneous degrees of freedom, the so-called `background geometry', the formalism provides precise singularity resolution, replacing the classical big bang with a big bounce. The dynamics of semiclassical states is very well described by an effective theory that captures the leading quantum gravity effects and allows one to describe the spacetime geometry in terms of an effective background metric. The inflationary scenario is very powerful in explaining in great detail many features of the observed CMB spectrum. It is, however, incomplete in various directions. In particular, it is based on General Relativity, where the spacetimes under consideration are past incomplete, that is, singular. As we have described in detail, one can indeed extend the scenario back in time to the Planck scale. For that one needs two new ingredients. The first one is a formalism that allows one to treat quantum perturbations of the spacetime metric propagating not on a classical spacetime, but rather on a {\em quantum} spacetime. The second ingredient involves consistency conditions that ensure that one can `evolve' the quantum perturbations back to the Planck scale without violating the approximations on which the formalism relies. As we have seen, one can indeed consistently consider the extension of the inflationary scenario.
Perhaps the most important question is whether this extension to the quantum bounce provides a window for Planck scale physics to be observed in the CMB. As we have described, the sector of the parameter space that has been explored provides predictions that are fully consistent with the standard inflationary scenario, under current observations. Further explorations are needed to decide whether the scenario provided by LQC is both consistent in the full parameter space and provides us with distinct testable predictions. \section*{Acknowledgements} We would like to thank A. Ashtekar, P. Singh and W. Nelson for discussions and collaboration. I.A. thanks the Marie Curie program of the EU for funding. This work was partly funded by DGAPA-UNAM IN103610, CONACyT CB0177840, and NSF PHY0854743 grants and by the Eberly Research Funds of Penn State.
\section{Introduction} Bagger, Lambert \cite{Bagger:2006sk,Bagger:2007jr,Bagger:2007vi} and Gustavsson \cite{Gustavsson:2007vu,Gustavsson:2008dy} discovered an interesting model for multiple M2-branes (which we will call the BLG model in the following) based on an algebraic structure called a Lie 3-algebra. Since membranes are expected to be the fundamental building blocks of M-theory, it is intriguing to ask how much the BLG model knows about M-theory. Important information about M-theory is contained in the structure of the eleven dimensional space-time superalgebra, or ``M-theory superalgebra'' \cite{Townsend:1995gp}. The BLG model is not space-time supersymmetric, at least not manifestly. However, since the fundamental membrane action is expected to have space-time supersymmetry, we may hope that the BLG model can be related to a gauge-fixed form of some manifestly space-time supersymmetric formulation. In this paper we show that most of the eleven dimensional space-time super-Poincar\'e algebra with central extensions can actually be constructed from the BLG model, and that it indeed captures important aspects of M-theory, namely the charges of BPS branes. One of the crucial ingredients in constructing the space-time superalgebra is the existence of a central element in the Lie 3-algebra on which the BLG model is based. The shifts of the bosonic as well as the fermionic fields along this central element are symmetries of the BLG model. The shift of the bosonic fields corresponds to translations in space-time, whereas the shift of the fermionic fields represents the non-linearly realized part of the space-time super-Poincar\'e algebra. Similar discussions of the worldvolume supersymmetry algebra of the BLG model, which is identified with the linearly realized part of the space-time supersymmetry, can be found in a recent paper \cite{Passerini:2008qt}.
We extend these results by including configurations which take values in non-trace elements (trace elements and non-trace elements are defined in section \ref{BLGmodel}) and obtain more central charges which provide necessary pieces of the M-theory superalgebra. The algebra and the central charges which arise by including the fermionic shift symmetry are our new results. One of our main interests is the charge of the five-brane constructed in \cite{Ho:2008nn,Ho:2008ve}, and such charges are obtained only by including the fermionic shift symmetry in the algebra. Five-brane charges are of particular interest because in the matrix model for M-theory \cite{Banks:1996vh} transverse five-branes are not seen in the superalgebra \cite{Banks:1996nn}. The space-time superalgebra of a deformed BLG model without central extensions was constructed in \cite{Gomis:2008cv}. Other aspects of BPS configurations for the worldvolume supersymmetry of the BLG model were studied in \cite{Hosomichi:2008qk,Jeon:2008bx,Krishnan:2008zm}. \section{Space-time superalgebra from multiple membranes} \subsection{The Bagger-Lambert-Gustavsson model} \label{BLGmodel} The Bagger-Lambert action, which was proposed as a description of multiple M2-branes \cite{Bagger:2007jr} (see also \cite{Bagger:2006sk,Bagger:2007vi,Gustavsson:2007vu,Gustavsson:2008dy}), has ${\cal N} = 8$ worldvolume supersymmetry. Furthermore, it has a novel gauge symmetry based on an algebraic structure called a Lie 3-algebra \cite{Filippov}. For a linear space ${\cal V} = \sum_{a=1}^{\dim {\cal V}} v_a T^a; v_a \in \mathbb{C}$, a Lie 3-algebra structure is defined by a multi-linear map which we call the 3-bracket $[*,*,*]$ : ${\cal V}^{\otimes 3} \rightarrow {\cal V}$ satisfying the following properties:\\ \ \\ 1. Skew-symmetry: \begin{equation} \label{skew} [A_{\sigma(1)}, A_{\sigma(2)} , A_{\sigma(3)}] = (-1)^{|\sigma|} [A_1, A_2, A_3]. \end{equation} 2.
Fundamental identity: \begin{eqnarray} \label{FI} &&[A_1, A_2, [B_1, B_2, B_3]] \nonumber \\ &=& [[A_1,A_2,B_1],B_2,B_3] + [B_1,[A_1,A_2,B_2],B_3] + [B_1,B_2,[A_1,A_2,B_3]].\nonumber\\ \end{eqnarray} A linear space endowed with a Lie 3-algebra structure will be called a Lie 3-algebra and typically denoted ${\cal A}$ in this paper. In terms of the basis $T^a$, the Lie 3-algebra can be expressed in terms of the structure constants $f^{abc}{}_d$: \begin{eqnarray} \label{st} [T^a,T^b,T^c] = f^{abc}{}_d T^d . \end{eqnarray} An element $T^a \in {\cal A}$ is called central if $[T^a,T^b,T^c]=0, \forall\, T^b,T^c\in{\cal A}$; in this case $f^{abc}{}_d = 0$. To construct the action, we will also need an inner product on the Lie 3-algebra. We assume the structure ${\cal V} = {\cal V}_{tr} \oplus {\cal V}_{ntr}$, where elements in ${\cal V}_{tr}$ have an inner product and elements in ${\cal V}_{ntr}$ do not. We will refer to the elements in ${\cal V}_{tr}$ as trace elements, and to elements in ${\cal V}_{ntr}$ as non-trace elements. By definition, elements $T^a, T^b \in {\cal V}_{tr}$ have an inner product $\langle *, *\rangle$: ${\cal V}_{tr} \otimes {\cal V}_{tr} \rightarrow {\mathbb{C}}$: \begin{eqnarray} \langle T^a,T^b \rangle = h^{ab} . \end{eqnarray} We will call $h^{ab}$ the metric of the Lie 3-algebra. We require the following invariance of the inner product, which is important for the gauge invariance of the Bagger-Lambert action: \begin{eqnarray} \label{invm} \langle [T^a, T^b, T^c], T^d \rangle + \langle T^c, [T^a, T^b, T^d] \rangle = 0. \end{eqnarray} Together with the skew-symmetry property (\ref{skew}), the invariance of the metric (\ref{invm}) requires the indices of the structure constants $f^{abcd} \equiv f^{abc}{}_{e} h^{ed}$ to be totally anti-symmetric: \begin{eqnarray} \label{tantis} f^{abcd} = \frac{1}{4!}f^{[abcd]} . \end{eqnarray} Remember that (\ref{tantis}) is guaranteed only for trace elements with an invariant metric.
Inner product and metric are not defined for non-trace elements. Nevertheless, the 3-bracket can map non-trace elements to a trace element. These non-trace elements will play an important role in this paper. For more about Lie 3-algebras in the BLG model, see e.g. \cite{Ho:2008bn,Papadopoulos:2008sk,Gauntlett:2008uf,Gomis:2008uv,Benvenuti:2008bt,Ho:2008ei,Lin:2008qp,FigueroaO'Farrill:2008zm,deMedeiros:2008bf}. The Bagger-Lambert action is given by \cite{Bagger:2007jr} \begin{eqnarray} S = \int d^3 x \; {\cal L}, \end{eqnarray} where the Lagrangian density ${\cal L}$ is given by \begin{eqnarray} \label{BLaction} &&{\cal L} = -\frac{1}{2} \langle D^{\mu}X^I, D_{\mu} X^I\rangle + \frac{i}{2} \langle\bar\Psi, \Gamma^{\mu}D_{\mu}\Psi\rangle +\frac{i}{4} \langle\bar\Psi, \Gamma_{IJ} [X^I, X^J, \Psi]\rangle \nonumber \\ &&\qquad \quad -V(X) + {\cal L}_{CS}. \end{eqnarray} $X^I \in {\cal V}_{tr}$\footnote{Later we will relax this condition slightly and allow constant backgrounds $X^I$ to take values in non-trace elements.} is a scalar field on the worldvolume and $I$ is an $SO(8)$ vector index. $\Psi \in {\cal V}_{tr}$ are Majorana spinors on the $(1+2)$-dimensional worldvolume, but can be combined into a single Majorana spinor in eleven dimensions subject to the chirality condition $\Gamma \Psi = - \Psi$, $\Gamma \equiv \Gamma_{012}$. Notations for gamma matrices are summarized in the appendix. $D_{\mu}$ is the covariant derivative \begin{equation} \label{Dmu} (D_\mu X^I(x))_a = \partial _{\mu} X^I_a(x) -\tilde{A}_\mu{}^b{}_a(x) X^I_b(x), \quad \tilde{A}_{\mu}{}^b{}_a \equiv A_{\mu cd} f^{cdb}{}_a , \end{equation} where $A_\mu$ is a worldvolume gauge field. $V(X)$ is the potential \begin{equation} V(X) = \frac{1}{12}\langle [X^I, X^J, X^K], [X^I, X^J, X^K]\rangle . 
\end{equation} The Chern-Simons term for the gauge potential is given by \begin{eqnarray} \label{CS} {\cal L}_{CS} = \frac{1}{2}\varepsilon^{\mu\nu\lambda} \left(f^{abcd}A_{\mu ab}\partial_{\nu}A_{\lambda cd} + \frac{2}{3} f^{cda}{}_g f^{efgb} A_{\mu ab} A_{\nu cd} A_{\lambda ef} \right). \end{eqnarray} The Bagger-Lambert action is invariant under the following gauge transformation: \begin{eqnarray} \label{gauge} \delta_\Lambda X^I_a &=& \Lambda_{cd}[T^c,T^d,X^I]_a =\Lambda_{cd} f^{cde}{}_a X^I_e =\tilde{\Lambda}^e{}_a X^I_e, \nonumber\\ \delta_\Lambda \Psi_a &=& \Lambda_{cd}[T^c,T^d,\Psi]_a = \Lambda_{cd} f^{cde}{}_a \Psi_e = \tilde{\Lambda}^e{}_a \Psi_e, \nonumber\\ \delta_\Lambda \tilde{A}_{\mu}{}^b{}_a &=& \partial_\mu \tilde{\Lambda}^b{}_a -\tilde{\Lambda}^b{}_{c} \tilde{A}_{\mu}{}^c{}_a + \tilde{A}_{\mu}{}^b{}_c \tilde{\Lambda}^c{}_{a}, \quad \tilde{\Lambda}^b{}_{a} \equiv f^{cdb}{}_a \Lambda_{cd}. \end{eqnarray} The fundamental identity (\ref{FI}) leads to \begin{eqnarray} \label{tr3bra} \delta_\Lambda [\Phi(1),\Phi(2),\Phi(3)] = \Lambda_{cd}[T^c,T^d,[\Phi(1),\Phi(2),\Phi(3)]], \end{eqnarray} where the $\Phi$'s collectively represent $X^I$ and $\Psi$. The metric is not involved in reaching (\ref{tr3bra}), and this formula applies to both trace elements and non-trace elements. On the other hand, the invariance of the metric (\ref{invm}) leads to \begin{eqnarray} \label{invm2} \delta_{\Lambda} \langle Y , Z \rangle = \Lambda_{ab} \left( \langle [T^a,T^b,Y],Z \rangle + \langle Y, [T^a,T^b,Z] \rangle \right) =0 \end{eqnarray} for any trace elements $Y,Z$ which transform as $\delta_\Lambda Y = \Lambda_{cd}[T^c,T^d,Y]$, $\delta_\Lambda Z = \Lambda_{cd}[T^c,T^d,Z]$. Equations (\ref{tr3bra}) and (\ref{invm2}) can be used to show the gauge invariance of the Bagger-Lambert action. 
\subsection{Worldvolume supersymmetry of the BLG model} The Bagger-Lambert action is invariant under the following supersymmetry transformations:\footnote{See \cite{Mauri:2008ai} for an ${\cal N}=1$ superfield formalism.} \begin{eqnarray} \label{WVSUSY} \delta_\epsilon X^I_a &=& i\bar{\epsilon}\Gamma^I \Psi_a, \nonumber \\ \delta_\epsilon \Psi_a &=& D_{\mu}X^I_a \Gamma^\mu\Gamma^I \epsilon - \frac{1}{6} X^I_b X^J_c X^K_d f^{bcd}{}_a \Gamma^{IJK}\epsilon, \nonumber \\ \delta_\epsilon \tilde{A}_{\mu}{}^b{}_a &=& i\bar{\epsilon}\Gamma_{\mu}\Gamma_I X^I_c \Psi_d f^{cdb}{}_a, \end{eqnarray} where the supersymmetry parameter satisfies $\Gamma \epsilon = \epsilon$. The charge density, i.e. the temporal component of the Noether current associated with the supersymmetry transformation (\ref{WVSUSY}), is found to be \begin{eqnarray} \label{ql} q^L = - \Gamma^\mu\Gamma^I\Gamma^0 \langle D_\mu X^I, \Psi \rangle - \frac{1}{6} \Gamma^{IJK} \Gamma^0 \langle [X^I,X^J,X^K], \Psi \rangle , \end{eqnarray} and the Noether charge is \begin{eqnarray} \label{QL} Q^L = \int d^2x \, q^L . \end{eqnarray} The suffix $L$ indicates that it is identified with the linearly realized part of the space-time supersymmetry. In this paper we will often be interested in central charges which are proportional to the volume of the membranes, which can be infinite for infinitely extended membranes. A standard way to avoid the infinities arising from such infinite volume in the (anti-)commutation relations of Noether charges is to work with the charge density. In the following, it is understood that the fermions $\Psi$ are set to zero after calculating the Dirac bracket, since we are interested in bosonic backgrounds. 
The Dirac bracket of $q^L$ and $Q^L$ is calculated to be \begin{eqnarray} \label{QLQL} i \{q^L, Q^L \}_D &=& 2 p_\mu \Gamma_+ \Gamma^\mu C \nonumber \\ &+& z_{IJ} \Gamma^{IJ} C + z_{0ijIJ} \Gamma^{0ijIJ} C \nonumber \\ &+& z_{0iIJKL} \Gamma^{0iIJKL} C + z_{jIJKL} \Gamma^{jIJKL} C \nonumber \\ &+& z_{0IJKL} \Gamma^{0IJKL} C + z_{ijIJKL} \Gamma^{ijIJKL} C, \end{eqnarray} where \begin{eqnarray} \label{zIJ} z_{IJ} = \frac{1}{2} \left( - \varepsilon^{0ij} \langle D_i X^I,D_j X^J \rangle + \langle D_0 X^K , [X^K,X^I,X^J] \rangle \right) , \end{eqnarray} \begin{eqnarray} \label{z0ijIJ} z_{0ijIJ} = \frac{1}{2} \left( \langle D_i X^I,D_j X^J \rangle - \frac{1}{2}\varepsilon^{0}{}_{ij} \langle D_0 X^K , [X^K,X^I,X^J] \rangle \right) , \end{eqnarray} \begin{eqnarray} \label{z0iIJKL} z_{0iIJKL} = \frac{1}{6} \langle D_i X^I , [X^J,X^K,X^L] \rangle , \end{eqnarray} \begin{eqnarray} \label{ziIJKL} z_{iIJKL} = - \frac{1}{6} \varepsilon^{0j}{}_{i} \langle D_j X^I,[X^J,X^K,X^L] \rangle , \end{eqnarray} \begin{eqnarray} \label{z0IJKL} z_{0IJKL} = -\frac{1}{8} \langle [X^M,X^I,X^J],[X^M,X^K,X^L] \rangle , \end{eqnarray} \begin{eqnarray} \label{zijIJKL} z_{ijIJKL} = -\frac{1}{16} \varepsilon^{0}{}_{ij} \langle [X^M,X^I,X^J],[X^M,X^K,X^L] \rangle . \end{eqnarray} In the above, anti-symmetrization of the space-time indices is understood, and $\Gamma_\pm \equiv (1 \pm \Gamma)/2$. This projection arises from the chirality of the supercharges: $\Gamma Q^L = Q^L$. In the second, third and fourth lines of (\ref{QLQL}), the two terms in each line arise from the two terms in the projection $\Gamma_+ = (1+\Gamma)/2$. The bosonic part of the Hamiltonian density is given by \begin{eqnarray} {\cal H} = p_0 = \frac{1}{2} \langle D_0 X^I , D_0 X^I \rangle + \frac{1}{2} \langle D_i X^I , D_i X^I \rangle + V(X) , \end{eqnarray} and the momentum density is given by \begin{eqnarray} p_i = \langle D_0 X^I, D_i X^I \rangle . \end{eqnarray} We refer to the appendix for details. 
These central charges have been discussed in \cite{Passerini:2008qt};\footnote{The expressions for the central charges look different only because we have not used the equations of motion.} the combination of the central charges (\ref{zIJ}) and (\ref{z0ijIJ}) was found to be the charge of vortices, and identified with M2-branes intersecting the multiple M2-branes. The combination of the central charges (\ref{z0iIJKL}) and (\ref{ziIJKL}) was found to be the charge of the Basu-Harvey solution \cite{Basu:2004ed}, which had been identified with M2-branes ending on M5-branes. Readers interested in further discussions are advised to consult \cite{Passerini:2008qt}. The central charges (\ref{z0IJKL}) and (\ref{zijIJKL}) vanish if we only consider trace elements in the Lie 3-algebra, due to the total anti-symmetry of the indices $I,J,K,L$ and the fundamental identity (\ref{FI}). However, we would like to take into account constant background configurations of the $X^I$'s which take values in non-trace elements. As long as they give trace elements after being put into the 3-brackets in the Bagger-Lambert action, the action is still well-defined and gauge invariant, provided the fluctuations around the background are still restricted to trace elements.\footnote{Recall (\ref{tr3bra}) and (\ref{invm2}). The configuration is gauge covariant, but the value of the action is invariant under gauge transformations with parameters taking values in trace elements.} Such configurations give rise to BPS brane charges. 
For example, in the case when the Bagger-Lambert action reduces to the D2-brane action by giving an expectation value to the field $X^{10}_{0}$ in the notation of \cite{Ho:2008ei}, (\ref{z0IJKL}) and (\ref{zijIJKL}) reduce to the form $ \sim \mbox{tr} [X^I,X^J][X^K,X^L] $, where $[*,*]$ is the commutator of matrices and tr is the matrix trace, with the matrix size taken to infinity.\footnote{In this case the commutators of the $X^I$'s are actually still non-trace elements, and the central charge diverges. This is attributed to the infinite volume of the indefinitely extended D6-branes discussed below. The charge density per D6-brane worldvolume is still finite.} This term is analogous to the D4-brane charge (as well as the charges of the D0-branes within the D4-branes) in the matrix model for M-theory found in \cite{Banks:1996nn}, and one should keep this kind of term in order to obtain all the BPS brane charges in the model. In the current example, the action reduces to that of D2-branes instead of the D0-branes of the matrix model, so the charge should be interpreted as a D6-brane charge. This type of configuration is also crucial for the construction of the five-brane from the BLG model in \cite{Ho:2008nn,Ho:2008ve}, and we will discuss this in section \ref{M5}. \subsection{Space-time superalgebra from the BLG model} It has been noticed that the choice of Lie 3-algebra in the BLG model already contains the choice of the space-time in which the membranes are embedded \cite{VanRaamsdonk:2008ft,Lambert:2008et,Distler:2008mk}. This is not surprising if we recall the analogous situation in multiple D-brane worldvolume theory, where the gauge group contains information about the space-time, e.g. an orientifold for gauge group $SO$. 
In the BLG model, when there is a central element in the Lie 3-algebra there is a bosonic shift symmetry in this direction: \begin{eqnarray} \label{trans} &&\quad \delta_{\vec{a}} X^I_\odot = a^I \quad (a^I : \mbox{constant}), \nonumber \\ &&\delta_{\vec{a}} \Psi_a = \delta_{\vec{a}} \tilde{A}_\mu{}^b{}_a=0, \end{eqnarray} as well as the fermionic shift symmetry\footnote{The fermionic shift symmetry has been used in \cite{Ho:2008ve} to obtain the worldvolume supersymmetry of the five-brane action constructed from the BLG model.} \begin{eqnarray} \label{shift} &&\quad {\delta}_\eta \Psi_a = \delta_{a\odot} \eta, \nonumber \\ &&{\delta}_\eta X^I_a = \delta_\eta \tilde{A}_\mu{}^b{}_a=0 . \end{eqnarray} We use the index $\odot$ to denote the generator corresponding to the central element. In the following, we restrict ourselves to the case where the metric for this central element takes the following form: \begin{eqnarray} \label{cmet} h^{\odot a} &=& \delta^{\odot a}. \end{eqnarray} With this metric, it is natural to identify $X^I_\odot$ as the center of mass coordinate in the directions transverse to the membranes, up to normalization, and (\ref{trans}) is nothing but the translational symmetry in these directions. 
We further assume that there is only one such central element with metric of the form (\ref{cmet}) in the Lie 3-algebra,\footnote{We allow other central elements with non-positive-definite metric \cite{Gomis:2008uv,Benvenuti:2008bt,Ho:2008ei}.} because it would be strange if there were two sets of center of mass coordinates.\footnote{Though it may work with some kind of gauging.} In our setting, the Noether charge density associated with the transformation (\ref{shift}) is given by \begin{eqnarray} \label{tq} {q}^{NL} = - \Gamma^0 {\Psi}_\odot , \end{eqnarray} and the Noether charge is \begin{eqnarray} \label{tQ} {Q}^{NL} = \int d^2x \, {q}^{NL} , \end{eqnarray} where the suffix $NL$ indicates that it is identified with the non-linearly realized part of the space-time supersymmetry. Note that ${Q}^{NL}$ has the same chirality as the worldvolume fermions $\Psi$, i.e. $\Gamma {Q}^{NL} = - {Q}^{NL}$, as opposed to $Q^L$. The BLG model is not space-time super-Poincar\'{e} symmetric, at least not manifestly. However, if we want to regard the model as a description of multiple M2-branes, it is natural to expect that it is a gauge-fixed form of some space-time supersymmetric and worldvolume reparametrization invariant formulation. In the case of a single supermembrane, it has been shown that the space-time supersymmetry reduces to the worldvolume supersymmetry by static gauge fixing \cite{Achucarro:1987nc,Bergshoeff:1987qx,Achucarro:1988qb}. After the gauge fixing, the linearly realized part of the space-time supersymmetry becomes the global supersymmetry of the worldvolume theory, whereas the Nambu-Goldstone modes for the non-linearly realized part of the space-time supersymmetry become fermion fields on the worldvolume \cite{Hughes:1986dn}. In our case, the fields $\Psi$ can be thought of as the Nambu-Goldstone fermions for the non-linearly realized space-time supersymmetry. 
In the following we will show that the charge ${Q}^{NL}$ associated with the fermionic shift symmetry (\ref{shift}) almost provides the non-linearly realized part of the space-time supersymmetry, though there is a missing piece as we will see shortly. The Dirac brackets of ${q}^{NL}$ with $Q^L$ and of $q^L$ with ${Q}^{NL}$ are given by \begin{eqnarray} \label{QNLQL} &&i \{ {q}^{NL}, Q^L \}_D + i \{ q^L, {Q}^{NL} \}_D \nonumber \\ &=& p_I \Gamma^I C + \frac{1}{2} z_{iI} \Gamma^{iI} C + \frac{1}{2} z_{ijIJK} \Gamma^{ijIJK} C , \end{eqnarray} where \begin{eqnarray} p_I \equiv \partial_0 X^I_\odot \end{eqnarray} is the momentum density in the directions transverse to the membranes. The central charge densities are found to be \begin{eqnarray} \label{tilt} z_{iI} = 2 \partial_j X^I_\odot \varepsilon_i{}^{j0} \, , \end{eqnarray} \begin{eqnarray} \label{5charge} z_{ijIJK} = - \frac{1}{6} \varepsilon^0{}_{ij} \langle [X^I,X^J,X^K ],T_\odot \rangle . \end{eqnarray} The central charge density (\ref{tilt}) describes tilted membranes. For example, let us consider the situation where $X^I_\odot$ is compactified on a circle with radius $R^I$, and $x^j$ is compactified on a circle with radius $r^j$. Then \begin{eqnarray} X^I_\odot = \frac{n}{m} \frac{R^I}{r^j} x^j \end{eqnarray} is a configuration of membranes which winds around the $I$-th direction $n$ times and around the $j$-th direction $m$ times. This configuration gives topological winding numbers through the central charge density (\ref{tilt}): indeed, for this configuration the density (\ref{tilt}) is the constant $z_{iI} = 2 (n/m) (R^I/r^j) \, \varepsilon_i{}^{j0}$, and its integral over the compact worldvolume direction is proportional to the winding numbers. The central charge density (\ref{5charge}) vanishes when all the $X^{I}$'s in the 3-bracket take values in trace elements of the Lie 3-algebra, due to the definition of the central element and the invariance of the metric (\ref{invm}). This is not necessarily the case if we consider constant configurations where the $X^I$'s take values in non-trace elements. As long as we obtain a trace element after putting such $X^I$'s into the 3-bracket, the inner product in (\ref{5charge}) is well-defined and gives a finite number. 
The Bagger-Lambert action is also well-defined for such a configuration, provided it is regarded as a background;\footnote{Note that the covariant derivative (\ref{Dmu}) can be rewritten as $D_\mu X^I = \partial_\mu X^I - A_{\mu\,cd} [T^c,T^d,X^I]$.} fluctuations around the background should still be in the space of trace elements. To describe a five-brane in the BLG model based on the Nambu-Poisson bracket \cite{Ho:2008nn,Ho:2008ve}, the background configuration is indeed given by such $X^I$'s taking values in non-trace elements, and (\ref{5charge}) gives the charge of the five-brane. We will come back to this point in section \ref{M5}. The Dirac bracket of ${q}^{NL}$ and ${Q}^{NL}$ is given by \begin{eqnarray} \label{tQtQ} i \{ {q}^{NL}, {Q}^{NL} \}_D = \Gamma_- \Gamma^0 C = \frac{1-\Gamma}{2} \Gamma^0 C . \end{eqnarray} The last expression in (\ref{tQtQ}) can be interpreted as a sum of the mass density and the charge density of the static multiple membranes. (The absence of such a term in (\ref{QLQL}) can be regarded as the cancellation of mass and charge for the BPS configuration of membranes \cite{Witten:1978mh}.) However, it does not contain the contributions of worldvolume excitations to the energy, nor the momentum in the worldvolume directions, which are required for making up the eleven dimensional super-Poincar\'e algebra. Setting this issue aside, we can construct the full space-time supercharge density $q$ and charge $Q$ as follows: \begin{eqnarray} q = q^L + 2 \sqrt{N} {q}^{NL}, \end{eqnarray} \begin{eqnarray} Q = Q^L + 2 \sqrt{N} {Q}^{NL} . \end{eqnarray} Here, we have introduced a constant $N$ which can be interpreted as the ``number" of membranes. The reason for this factor is as follows: up to normalization, $X^I_\odot$ is interpreted as the center of mass coordinate in the transverse directions. To define the center of mass, we need to know the number of membranes. 
However, there is no definite rule relating the dimension of a Lie 3-algebra to the number of membranes. In the case of the Lie 3-algebra constructed from an ordinary Lie algebra in order to derive the D2-brane action from the Bagger-Lambert action \cite{Gomis:2008uv,Benvenuti:2008bt,Ho:2008ei}, the number of membranes should be equal to the number of D2-branes and is determined by the rank of the Lie group, e.g. $N$ for $U(N)$. We will discuss the case where the Nambu-Poisson bracket is chosen as the Lie 3-algebra in the next subsection. Since the number of membranes is decided case by case depending on the choice of Lie 3-algebra, we simply denote this number by $N$. Finally, we obtain the eleven dimensional super-Poincar\'e algebra with central extensions: \begin{eqnarray} \label{QQ} i \{q,Q\}_D &=& 2 (\Gamma^0 - \Gamma^{12}) C N + 2 p_\mu \Gamma_+ \Gamma^\mu C + 2 p_I \Gamma^I C \sqrt{N} \nonumber \\ &+& z_{iI} \Gamma^{iI} C \sqrt{N} + z_{ijIJK} \Gamma^{ijIJK} C \sqrt{N} \nonumber \\ &+& z_{IJ} \Gamma^{IJ} C + z_{0ijIJ} \Gamma^{0ijIJ} C \nonumber \\ &+& z_{0iIJKL} \Gamma^{0iIJKL} C + z_{jIJKL} \Gamma^{jIJKL} C \nonumber \\ &+& z_{0IJKL} \Gamma^{0IJKL} C + z_{ijIJKL} \Gamma^{ijIJKL} C. \end{eqnarray} As mentioned above, the first term of (\ref{QQ}) is interpreted as coming from the tension and charge per volume of $N$ membranes. From the kinetic term, the relative normalization between $X^I_\odot$ and the center of mass coordinate is read off as $X^I_\odot = \sqrt{N} X^I_{C.M.}$, where $X^I_{C.M.}$ is the center of mass coordinate. Then $p_I \sqrt{N} = p_{I}^{C.M.} N$ is the total momentum in the directions transverse to the membranes, and $N$ appears in the way appropriate for the number of membranes. Eq. (\ref{QQ}) is almost the eleven dimensional super-Poincar\'e algebra, except that the piece $2p_\mu \Gamma_- \Gamma^\mu C$ is missing from the right-hand side of (\ref{QQ}). 
It is important that the piece $2p_I \Gamma^I C$ of the eleven dimensional super-Poincar\'e algebra has been obtained. If we had a space-time supersymmetric formulation with worldvolume reparametrization invariance for the multiple membrane action which reduces to the Bagger-Lambert action after gauge fixing, this would be understood as due to the static gauge and kappa symmetry gauge fixing. We hope to clarify this point in the future. Further speculations will be given in the final discussion section. \subsection{On M5-brane charges in the BLG model} \label{M5} An example of a Lie 3-algebra is given by the Nambu-Poisson bracket on an ``internal" three-manifold. For simplicity, we take $T^3$ as the internal three-manifold. For more about the use of the Nambu-Poisson bracket in the BLG model, see \cite{Ho:2008bn,Ho:2008nn,Ho:2008ve}. We consider the Nambu-Poisson bracket on $T^3$ given by \begin{eqnarray} \label{Nambu} \left\{f,g,h\right\}_{\rm NP} = \sum_{\dot\mu\dot\nu\dot\lambda} \, \Omega \varepsilon^{\dot\mu\dot\nu\dot\lambda}\partial_{\dot\mu} f(y) \partial_{\dot\nu} g(y) \partial_{\dot\lambda} h(y) . \end{eqnarray} Here $y^{\dot\mu}$ ($\dot\mu=\dot 1,\dot2,\dot3$) are flat coordinates on $T^3$ with the identification $y^{\dot\mu} \sim y^{\dot\mu}+ 2\pi$, and $\Omega$ is a constant. The invariant inner product can be defined by the integral over $T^3$: \begin{equation} \langle f,g\rangle \equiv \int_{T^3} d^3y \, f(y) g(y). \end{equation} The trace elements of the Lie 3-algebra are given by square-integrable periodic functions on $T^3$. If we denote the basis of such functions on $T^3$ as $\chi^a(y)$ ($a=1,2,3,\cdots$), the Nambu-Poisson bracket can be written with structure constants: \begin{eqnarray} \{ \chi^a, \chi^b, \chi^c\}_{\rm NP}=\sum_d {f^{abc}}_d \chi^d\, . \end{eqnarray} Using the definition of the Nambu-Poisson bracket (\ref{Nambu}), it is easy to check that the fundamental identity (\ref{FI}) holds. 
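As a simple evaluation, which follows directly from the definition (\ref{Nambu}), the bracket of the coordinate functions $y^{\dot\mu}$ themselves (which are not globally defined periodic functions, and will be discussed further below) is \begin{eqnarray} \{ y^{\dot\mu} , y^{\dot\nu} , y^{\dot\lambda} \}_{\rm NP} = \Omega\, \varepsilon^{\dot\mu\dot\nu\dot\lambda} \, , \end{eqnarray} i.e. a constant function, and hence an element proportional to the central element. 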
We normalize the basis as $\langle \chi^a ,\chi^b \rangle = \delta^{ab}$; then the normalized central element is given by $T^\odot = 1/\sqrt{(2\pi)^3}$. We would like to consider the case where the target space is also compactified on a $T^3$. By this we mean the identification in the central element direction: \begin{eqnarray} \label{ttorus} X^I(y) \sim X^I(y) + 2\pi R^I , \end{eqnarray} for, say, $I=3,4,5$, where $R^I$ is the compactification radius in the $I$-th direction. Now let us consider a background configuration \begin{equation} \label{wrap} X^{I}= R^I m_{I} y^{\dot{\mu}},\quad \dot\mu = I-2\quad (I = 3,4,5). \end{equation} The functions $y^{\dot\mu}$ ($\dot\mu = \dot{1}, \dot{2}, \dot{3}$) are not periodic functions on $T^3$: they have a jump at $y^{\dot\mu} = 2\pi$. However, when the target space is also compactified as in (\ref{ttorus}), such a jump can be removed for the configuration (\ref{wrap}) owing to the identification in the target space. In this case, it is natural to include these elements in the Lie 3-algebra. However, there is no natural way to define an invariant inner product for these elements. For example, \begin{eqnarray} &&\int_{T^3} d^3y \, \{ y^{\dot{1}} , y^{\dot{2}} , 1 \}_{\rm NP} \cdot y^{\dot{3}} =0 \nonumber \\ &\ne& - \int_{T^3} d^3y \, 1 \cdot \{ y^{\dot{1}} , y^{\dot{2}} , y^{\dot{3}} \}_{\rm NP} = -\Omega (2\pi)^3 . \end{eqnarray} This means that the integration over $T^3$ does not provide an invariant metric for these new elements. Therefore, these elements should be included as non-trace elements. In the Bagger-Lambert action, these $X^I$'s in non-trace elements always appear inside the Nambu-Poisson brackets, and the Nambu-Poisson brackets with such non-trace elements give trace elements, since the derivatives inside the Nambu-Poisson bracket acting on $y^{\dot\mu}$ give a constant, which is a trace element. 
As long as such a configuration is regarded as a non-dynamical background independent of the worldvolume coordinates, the Bagger-Lambert action is still well-defined and gauge invariant. Now we come back to the issue of the ``number of membranes" discussed in the previous subsection. In the current case, where there is a natural notion of the identity ``1" among the elements and the metric is positive definite, it is natural to interpret the number we get when we put ``1" into the inner product as the number of membranes. This is nothing but the volume of the internal manifold $T^3$. Therefore we set $N = (2\pi)^3$. The background configuration (\ref{wrap}) contributes to the five-brane charge (\ref{5charge}) as \begin{eqnarray} \label{ex5} z_{ijIJK} \sqrt{N} = - \frac{1}{3!} \varepsilon_{IJK} {(2\pi)^3} \varepsilon^0{}_{ij} \Omega R^3 R^4 R^5 m_{3}m_{4}m_{5} . \end{eqnarray} Here we have used $[X^3,X^4,X^5] = R^3R^4R^5 m_3 m_4 m_5 \{ y^{\dot{1}}, y^{\dot{2}}, y^{\dot{3}} \}_{\rm NP} = \Omega R^3R^4R^5 m_3m_4m_5$, together with $\langle 1, T^\odot \rangle = \sqrt{(2\pi)^3}$ and $\sqrt{N} = \sqrt{(2\pi)^3}$. (\ref{ex5}) is interpreted as the charge of a five-brane wrapping the $I$-th direction $m_I$ times. Note that the potential term in the Bagger-Lambert action can be rewritten as \begin{eqnarray} &&V(X) \nonumber \\ &=& \frac{1}{12} \biggl( \langle [X^I,X^J,X^K] - W^{IJK} T^\odot, [X^I,X^J,X^K] - W^{IJK} T^\odot \rangle \nonumber \\ &&\qquad + 2 W^{IJK} \langle [X^I,X^J,X^K], T^\odot \rangle - W^{IJK}W^{IJK} \biggr)\nonumber \\ &=& \frac{1}{12} \langle [X^I,X^J,X^K] - W^{IJK} T^\odot, [X^I,X^J,X^K] - W^{IJK} T^\odot \rangle \nonumber \\ &&\qquad - \frac{1}{2} W^{IJK} \varepsilon^{0ij} z_{ijIJK} - \frac{1}{12} W^{IJK} W^{IJK} , \end{eqnarray} where $W^{IJK}$ is a constant totally anti-symmetric tensor \begin{eqnarray} W^{IJK} = \varepsilon^{IJK}\Omega R^3R^4R^5m_3m_4m_5 . \end{eqnarray} Therefore the static configuration (\ref{wrap}) saturates the minimal energy bound for the given winding numbers. Some time ago a matrix model was proposed as a description of M-theory \cite{Banks:1996vh}, and BPS branes in this model were analyzed from the central extension of the superalgebra \cite{Banks:1996nn}. 
It was found that the charge of transverse five-branes, i.e. five-branes transverse to the M-theory circle which relates M-theory to type IIA string theory, is absent in this model. This can be a problem if the model is the fundamental definition of M-theory, though the model may better be regarded as M-theory in a particular frame, in which some information of the full M-theory has been dropped. From our results for the M-theory superalgebra (\ref{QQ}), we can draw a scenario for how such a thing can happen in the BLG model: the action of the matrix model for M-theory is basically that of a large number of D0-branes in type IIA string theory. From the Bagger-Lambert action, such an action may be obtained by first reducing it to the multiple D2-brane action \cite{Mukhi:2008ux,Gomis:2008uv,Benvenuti:2008bt,Ho:2008ei}, then wrapping the D2-branes on $T^2$, and then performing T-duality transformations in the $T^2$ directions. To obtain the multiple D2-brane action from the Bagger-Lambert action, it is necessary to reduce the Lie 3-algebra to an ordinary Lie algebra. This should be achieved by some background configuration in the BLG model which describes a compactification on the M-theory circle. However, in this background configuration the five-brane charges expressed using the Lie 3-algebra in (\ref{z0iIJKL}), (\ref{ziIJKL}) or (\ref{5charge}) must also reduce to expressions using the ordinary Lie algebra. This should be interpreted as the five-branes also wrapping the circle direction. Thus, when one obtains the matrix model for M-theory from the Bagger-Lambert action, the transverse five-brane charges which use the Lie 3-algebra structure in an essential way, i.e. those which do not reduce to a form written with an ordinary Lie algebra, necessarily drop out of the model. 
\section{Summary and discussions} In this paper we studied the space-time supersymmetry of the BLG model when there is a central element in the Lie 3-algebra, and obtained the eleven dimensional super-Poincar\'e algebra with central extensions, except for the piece $2 p_\mu \Gamma_- \Gamma^\mu C$. The first crucial ingredient in the construction of the space-time superalgebra was to include the fermionic shift symmetry associated with the central element in the Lie 3-algebra. This fermionic shift symmetry was identified with the non-linearly realized part of the space-time supersymmetry. Together with the linearly realized worldvolume supersymmetry, it makes up the eleven dimensional super-Poincar\'e algebra. The second important ingredient was to take into account the non-trace elements for constant background configurations. The central charges constructed from non-trace elements provide important pieces of the M-theory superalgebra. For example, the charge of the five-brane constructed in \cite{Ho:2008nn,Ho:2008ve} can only be obtained by taking into account such non-trace elements. Compared with the matrix model for M-theory, which can be regarded as a regularization of the supermembrane action in the light-cone gauge \cite{deWit:1988ig}, the BLG model at present lacks a relation to a manifestly space-time supersymmetric formulation. Nevertheless, in this paper we could obtain almost the full eleven dimensional super-Poincar\'{e} algebra. This suggests the existence of a manifestly space-time supersymmetric formulation with worldvolume reparametrization invariance which reduces to the BLG model after gauge fixing. It will be very interesting to construct such a manifestly space-time supersymmetric formulation for the BLG model, and to understand why the piece $2 p_\mu \Gamma_- \Gamma^\mu C$ is missing in our algebra. 
In the case where the Lie 3-algebra is the Nambu-Poisson bracket, it is likely that such a manifestly space-time supersymmetric formulation is some covariant formulation of the single M5-brane worldvolume action in a three-form field background rather than a multiple M2-brane action: if we can find a way to relate such a formulation to the five-brane action constructed from the Bagger-Lambert action in \cite{Ho:2008nn,Ho:2008ve}, we will be able to understand the origin of our super-Poincar\'e algebra. An interesting worldvolume reparametrization invariant formulation of the single M5-brane action which might be related to the BLG model was constructed in \cite{Park:2008qe}, though only the bosonic part has been worked out. A worldvolume supergravity action which in a limit reduces to the Bagger-Lambert action was constructed in \cite{Bergshoeff:2008ix}. When the Lie 3-algebra does not have a central element, the fermionic shift symmetry is absent. In this case the space-time supersymmetry should be reduced compared with the flat-space case. This may be regarded as a supersymmetric counterpart of the absence of space-time translational symmetry in the orbifold interpretation of the model based on the so-called ${\cal A}_4$ algebra \cite{VanRaamsdonk:2008ft,Lambert:2008et,Distler:2008mk}. The BLG model is superconformal at the classical level, and is expected to be so at the quantum level. The superconformal symmetry should correspond to the near-horizon super-isometry in the AdS-CFT correspondence, and this is one of the strongest motivations for studying this model. It will be interesting to construct the central extension of the superconformal algebra explicitly in the BLG model. \section*{Acknowledgments} We would like to express special thanks to Pei-Ming Ho for many helpful explanations and discussions. We would also like to thank Yosuke Imamura and Wen-Yu Wen for discussions, and Takayuki Hirayama and Dan Tomino for reading the manuscript and for useful comments. 
This work is supported in part by the National Science Council of Taiwan under grant No. NSC 97-2119-M-002-001.
\section{Introduction} Explaining the fermion mass hierarchy and mixing pattern is an outstanding challenge of particle physics \cite{Froggatt:1978nt}\cite{attempts}\cite{Nandi:2008zw}. The fermion masses are parameterized by the Standard Model Yukawa interactions of chiral fermions with a single Higgs doublet. It is technically natural for the dimensionless Yukawa couplings to take small values, since global chiral flavor symmetries are restored (at tree level) in the limit that these couplings vanish, but it is a total mystery why these values are spread over more than five orders of magnitude, in a suggestive pattern of inter-generational and intra-generational hierarchies. Although the gauge sector of the SM is well established, little is yet known about the Higgs sector. Higgs physics may be much richer than the minimal SM formulation, presenting new dynamics at the TeV scale that will be accessible to experiments at the LHC. Most work on extended Higgs sectors has been motivated by frameworks for understanding the naturalness and hierarchy problem of the SM Higgs boson, but not by the hierarchy problems of the SM flavor sector. One reason is that models that attempt to generate the flavor-breaking patterns of the SM Yukawas from new TeV scale dynamics are strongly constrained by experimental searches for flavor-changing neutral currents (FCNCs) and charged lepton flavor violation (CLFV). The top quark Yukawa coupling has a value close to one, suggesting that a SM Yukawa coupling is the correct explanation for the top mass. The smallness of the other Yukawas suggests that some or all of the other quarks and the charged leptons do not couple directly to the electroweak symmetry breaking order parameter, which in the SM is represented by the vacuum expectation value (vev) of the Higgs scalar. 
Thus a good starting point to construct theories of flavor is to specify a field or mechanism to act as the messenger of electroweak symmetry breaking to the other quarks and leptons. One simple choice for a messenger is a TeV mass scalar leptoquark, postulated to have a renormalizable coupling between the top quark and the SM leptons \cite{Dobrescu:2008sz,Balakrishna:1987qd}. Radiative corrections can then generate a natural hierarchy of fermion masses related to powers of a loop factor. An even simpler choice for a messenger is an electroweak mass scalar that transforms as a SM singlet and extends the Higgs sector of the SM. In this work, we explore this idea of an extended Higgs sector related to the generation of the fermion mass hierarchy. We present a simple framework where the Higgs doublet $H$ couples directly to a complex scalar $S$ that is a SM singlet and is charged under a new local $U(1)_S$ symmetry carried by a vector boson $Z'$. All of the SM fermions are singlets under this new $U(1)_S$ (apart from small effects from $Z-Z'$ mixing), which is broken spontaneously at the electroweak scale by the vacuum expectation value of $S$. In our framework the singlet scalar $S$ is the messenger to SM fermions of both flavor breaking and electroweak symmetry breaking. All SM fermions apart from the third generation quark doublet $q_{3L}$ and right-handed top $u_{3R}$ are assumed to carry a nonzero charge under a gauged chiral flavor symmetry forbidding all SM dimension 4 Yukawa couplings except that of the top quark. We assume that the flavor symmetry is spontaneously broken at a scale $\mathrel{\raisebox{-.6ex}{$\stackrel{\textstyle>}{\sim}$}} 1$ TeV by the vacuum expectation values of one or more complex scalar ``flavon'' fields $F_i$. The flavor charges of the SM fermions forbid any dimension 4 couplings to either $F_i$ or to the Higgs field $H$. 
We introduce new fermions that are vectorlike under both the SM gauge symmetries and $U(1)_S$; these fermions naturally acquire masses $\mathrel{\raisebox{-.6ex}{$\stackrel{\textstyle>}{\sim}$}}$ TeV that we will generically denote as $M$, and have dimension 4 couplings to both $F_i$ and to $H$. Integrating out these heavy fermions gives higher dimension effective couplings of the SM fermions to $H$ that replace the role of Yukawa couplings in the SM. These couplings contain explicit flavor breaking in the form of $\langle F_i \rangle /M$, which we take to be of order 1, as well as being suppressed by powers of $S^\dagger S / M^2$, whose vev we take to be of order 1/50. In our framework all of the observed SM fermion mass hierarchies are generated from powers of $\langle S \rangle /M \sim 1/7$, which is essentially the ratio of the electroweak scale to the TeV scale, often called the ``little hierarchy''. We can be agnostic about the source of the little hierarchy itself, since many possibilities have been proposed. The additional challenge of our framework is to achieve simultaneously the appropriate flavon physics at the TeV scale. Models in our framework have, in addition to the SM particle content, a light singlet scalar $s$ that mixes with the Higgs boson $h$. Exchanges of $s$ between SM fermions are a new source of FCNC. There is an extra $Z'$ at the electroweak (EW) scale, but apart from small $Z-Z'$ mixing effects it does not couple to SM fermions. There may be other $Z'$s and one or more flavon scalars at the TeV scale. We predict a host of new heavy fermions around the TeV scale; these are also a source of new FCNC and CLFV effects. We show that flavon charge patterns that reproduce the observed SM fermion masses and mixings also supply enough extra suppression of FCNC and CLFV effects to satisfy current experimental bounds. 
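The two small numbers just quoted are mutually consistent by construction, since the $S^\dagger S/M^2$ suppression is simply the square of the little-hierarchy ratio. A quick arithmetic sketch (only the orders of magnitude are claimed in the text):

```python
# Order-of-magnitude consistency of the two small parameters quoted above:
# <S>/M ~ 1/7 (the "little hierarchy") and S^dagger S / M^2 ~ 1/50.
eps = 1.0 / 7.0            # <S>/M
suppression = eps ** 2     # each power of S^dagger S / M^2 at the vev
print(suppression)         # 0.0204..., i.e. approximately 1/50
```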
In addition to explaining the hierarchy of fermion masses and mixings, models in our framework have many interesting phenomenological implications. Mixing of the singlet $s$ with the Higgs boson $h$ can cause large deviations from the SM predictions for the Higgs decay branching fractions, potentially observable at the Tevatron or LHC. The $s$ mass eigenstate itself will also be produced at the LHC, and could be confused with $h$ if it turns out to be the lightest mass eigenstate. While new FCNC effects are suppressed, we predict contributions to $D^0$$-$$\bar{D}^0$ mixing and $B_s\rightarrow{\mu^+ \mu^-}$ that are close to the current measured value or limit. The exotic top quark decays $t\rightarrow{ch}$ and $t\rightarrow{cs}$ can have branching fractions on the order of $10^{-3}$. Our paper is organized as follows. In section 2, we present the basic outline of our framework. In section 3, we discuss the constraints on the model parameters from the low energy phenomenology. Section 4 contains the phenomenological implications and predictions of the model, especially for the new top decays and Higgs signals at the Tevatron and LHC. In section 5, we outline a possible ultraviolet completion realizing our proposal. Section 6 contains our conclusions and further discussion. \section{Model and formalism} We extend the gauge symmetry of the SM by a $U(1)_S$ local symmetry and an additional local flavon symmetry which in the simplest case would be a $U(1)_F$. All of the SM fermions are neutral with respect to $U(1)_S$, while all of the SM fermions apart from the third generation quark doublet $q_{3L}$ and right-handed top $u_{3R}$ are charged under the chiral $U(1)_F$. We introduce a complex scalar field $S$ which has charge 1 under $U(1)_S$, is neutral under the flavon symmetry, and is a SM singlet. We also introduce one or more complex scalar fields $F_i$, the ``flavon'' scalars. 
In the simplest case there would be a single flavon scalar $F$ that has charge 1 under $U(1)_F$, is neutral under $U(1)_S$, and is a SM singlet. The Higgs field $H$ is taken as neutral under $U(1)_S \times U(1)_F$. We assume that the flavon charges of the SM fermions are such that only the top quark has an allowed dimension 4 Yukawa interaction. The $S$ field is assumed to develop a vev that spontaneously breaks the $U(1)_S$ symmetry. In frameworks where the little hierarchy between the electroweak scale and the TeV scale is generated, this could occur naturally by extending the Higgs sector to include $S$, with a mixed potential. The pseudoscalar component of $S$ is then ``eaten'' to give mass to the $U(1)_S$ $Z'$ gauge boson. Notice that the vev of $S$ does not in itself break any of the global flavor symmetries of the Yukawa-less SM; $S$ is only a messenger of flavor breaking, just as it is also a messenger of electroweak breaking. This is the fundamental distinction that allows $S$ to exist at the electroweak scale without inducing unacceptably large flavor violating effects. The flavon scalars $F_i$ are assumed to develop vevs that spontaneously break the local flavon symmetry at the TeV scale, with the pseudoscalar components of the $F_i$ eaten to give the flavon gauge bosons mass. To preserve the little hierarchy, we assume that the direct mixing between the $F_i$ and the extended Higgs sector is negligible. In this framework the Yukawa interactions of the lighter quarks and leptons are replaced by higher dimension operators that couple these fermions to $H$, $S$, and the $F_i$. As we will show later in an explicit example, these can be generated as effective couplings by integrating out new heavy fermions at the TeV scale. These effective couplings should respect all of the SM gauge symmetries, as well as $U(1)_S$ and the flavon symmetries. 
In particular, the $U(1)_S$ charged field $S$ can only appear as powers of $S^\dagger S / M^2$, where $M$ denotes a generic TeV scale parameter. Powers of $F_i/M$ and $F_i^\dagger /M$ can also appear, but the exact form depends on the flavon charge assignments of the SM fermions. Since we will assume that the vevs of the $F_i$ are of order $M$, we can absorb the $F_i/M$ dependence into the dimensionless complex couplings $h_{ij}$, where $i$, $j$ are generation labels; all these couplings we will then take to be of order 1. The observed SM fermion mass hierarchy is generated from the following low energy effective interactions: \begin{eqnarray} {\cal L}^{\rm Yuk} &=& h_{33}^u \overline{q}_{3L} u_{3R} \bar{H} + \left({S^\dagger S \over M^2}\right) \left(h_{33}^d \overline{q}_{3L} d_{3R} H + h_{22}^u \overline{q}_{2L} u_{2R} \bar{H}+h_{23}^u \overline{q}_{2L} u_{3R} \bar{H} + h_{32}^u \overline{q}_{3L} u_{2R} \bar{H}\right) \nonumber \\ &&+ \left({S^\dagger S \over M^2}\right)^2 \left(h_{22}^d\overline{q}_{2L} d_{2R} H + h_{23}^d \overline{q}_{2L} d_{3R} H + h_{32}^d \overline{q}_{3L} d_{2R} H + h_{12}^u\overline{q}_{1L}u_{2R}\bar{H} + h_{21}^u\overline{q}_{2L}u_{1R} \bar{H} \right. \nonumber \\ &&+ \left. h_{13}^u\overline{q}_{1L} u_{3R} \bar{H} + h_{31}^u \overline{q}_{3L} u_{1R} \bar{H} \right) + \left({S^\dagger S \over M^2}\right)^3 \left(h_{11}^u \overline{q}_{1L} u_{1R} \bar{H} + h_{11}^d\overline{q}_{1L} d_{1R} H \right. \nonumber \\ &&+ \left. h_{12}^d\overline{q}_{1L} d_{2R} H + h_{21}^d \overline{q}_{2L} d_{1R} H + h_{13}^d \overline{q}_{1L} d_{3R} H + h_{31}^d \overline{q}_{3L} d_{1R} H \right) + h.c. \label{ONE} \end{eqnarray} Note that the above interactions are very similar to those proposed in reference \cite{bn}, except our interactions involve suppression by powers of $\left({S^\dagger S \over M^2}\right)$, instead of $\left({H^\dagger H \over M^2}\right)$. We will refer to this as the Babu-Nandi texture. 
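The $\epsilon$ power counting implicit in eq. (\ref{ONE}) can be made explicit by a one-line expansion around the vevs (a sketch, using the unitary-gauge parameterization $S = s^0/\sqrt{2} + v_s$ and the small parameters $\epsilon \equiv v_s/M$ and $\beta \equiv v/M$ defined below):

```latex
% Expanding S = s^0/sqrt(2) + v_s, with eps = v_s/M and beta = v/M:
\begin{align}
\left(\frac{S^\dagger S}{M^2}\right)^{\!n}
  &= \epsilon^{2n}\left(1 + \sqrt{2}\,n\,\frac{s^0}{v_s}
     + {\cal O}\!\big((s^0)^2\big)\right), \\
h \left(\frac{S^\dagger S}{M^2}\right)^{\!n} \overline{q}_{L}\, q_{R}\, H
  + {\rm h.c.}
  &\;\supset\; h\,\epsilon^{2n} v\;\overline{q}_{L} q_{R}
   \;+\; \frac{h\,\epsilon^{2n}}{\sqrt{2}}\; h^0\,\overline{q}_{L} q_{R}
   \;+\; \frac{2n\,h\,\epsilon^{2n-1}\beta}{\sqrt{2}}\; s^0\,\overline{q}_{L} q_{R}\,.
\end{align}
```

The first term reproduces the $\epsilon^{2n}$ pattern of the mass matrices, the second the SM-like $h^0$ Yukawa couplings, and the third the characteristic $2n\,\epsilon^{2n-1}\beta$ prefactors of the $s^0$ Yukawa matrices given in eq. (\ref{SEVEN}) below.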
The hierarchies among the fermion masses and mixings are obtained from a single small dimensionless parameter, \begin{equation} \epsilon \equiv { v_s \over M}, \label{TWO} \end{equation} where $v_s$ is the vev of $S$. As was shown in \cite{bn}, a good fit to the observed fermion masses and mixings is obtained with $\epsilon \sim 0.15$. The couplings $h_{ij}$ are all of order one; the largest coupling needed is $h^u_{23} =1.4$, while the smallest coupling needed is $h^u_{22}=0.14$. The Babu-Nandi texture is not unique, and it does not predict any precise fermion mass relations, since there are slightly more unspecified order 1 parameters than there are Yukawa parameters in the SM. \subsection{Fermion masses and CKM mixing} The gauge symmetry of our model is the usual $SU(3)_c\times SU(2)_L \times U(1)_Y$ of the SM, plus two additional local symmetries: $U(1)_S$ and the flavon symmetry. The SM symmetry is broken spontaneously by the usual Higgs doublet $H$ at the electroweak scale. We assume that the extra $U(1)_S$ symmetry is also broken spontaneously at the electroweak scale by a SM singlet complex scalar field $S$. The flavon symmetry, $U(1)_F$ in the simplest case, is broken spontaneously above a TeV by a SM singlet scalar flavon field $F$. The pseudoscalar part of the complex scalar field $S$ is absorbed by the $U(1)_S$ gauge boson $Z'$, which thereby acquires its mass. Thus after symmetry breaking the remaining scalars at the electroweak scale are the neutral bosons $h$ and $s$. Parameterizing the Higgs doublet and singlet in the unitary gauge as \begin{equation} H = \left(\matrix{0 \cr \frac{h}{\sqrt{2}}+v}\right), ~~S = \left(\frac{s}{\sqrt{2}}+v_s\right), \label{THREE} \end{equation} with $v \simeq 174$ GeV, and defining an additional small parameter \begin{equation} \beta \equiv { v \over M}, \label{FOUR} \end{equation} we obtain, from eqs. 
(\ref{ONE}-\ref{FOUR}) the following mass matrices for the up and down quark sector: \begin{eqnarray} M_u = \left(\matrix{h_{11}^u \epsilon^6 & h_{12}^u \epsilon^4 & h_{13}^u \epsilon^4 \cr h_{21}^u\epsilon^4 & h_{22}^u \epsilon^2 & h_{23}^u \epsilon^2 \cr h_{31}^u \epsilon^4 & h_{32}^u \epsilon^2 & h_{33}^u}\right)v, ~~~~~ M_d = \left(\matrix{h_{11}^d \epsilon^6 & h_{12}^d \epsilon^6 & h_{13}^d \epsilon^6 \cr h_{21}^d\epsilon^6 & h_{22}^d \epsilon^4 & h_{23}^d \epsilon^4 \cr h_{31}^d \epsilon^6 & h_{32}^d \epsilon^4 & h_{33}^d \epsilon^2}\right)v~. \label{FIVE} \end{eqnarray} The charged lepton mass matrix is obtained from $M_d$ by replacing the couplings $h_{ij}$ appropriately. Note that these mass matrices are the same as in \cite{bn}, and as was shown there, good fits to the quark and charged lepton masses, as well as the CKM mixing angles are obtained by choosing $\epsilon\sim 0.15$, and all the couplings $h_{ij}$ of order one. To leading order in $\epsilon$, the fermion masses are given by \begin{eqnarray}\label{eqn:diagmasses} (m_t,\;m_c\;,m_u) &\simeq& (\vert h_{33}^u\vert,\; \vert h_{22}^u\vert\epsilon^2,\; \vert h_{11}^u - h_{12}^uh_{21}^u/h_{22}^u\vert \epsilon^6)\,v\;,\nonumber\\ (m_b,\; m_s,\; m_d) &\simeq& (\vert h_{33}^d\vert\epsilon^2,\; \vert h_{22}^d\vert\epsilon^4,\; \vert h_{11}^d\vert\epsilon^6)\,v\;,\\ (m_{\tau},\;m_{\mu},\;m_e) &\simeq& (\vert h_{33}^\ell\vert\epsilon^2,\; \vert h_{22}^\ell\vert\epsilon^4,\; \vert h_{11}^\ell\vert\epsilon^6)\,v \; ,\nonumber \end{eqnarray} while the quark mixing angles are \begin{eqnarray}\label{eqn:mixings} \vert V_{us}\vert &\simeq& \left\vert \frac{h_{12}^d}{h_{22}^d} - \frac{h_{12}^u}{h_{22}^u} \right\vert\epsilon^2 \; ,\nonumber\\ \vert V_{cb}\vert &\simeq& \left\vert \frac{h_{23}^d}{h_{33}^d} - \frac{h_{23}^u}{h_{33}^u} \right\vert\epsilon^2 \; , \\ \vert V_{ub}\vert &\simeq& \left\vert \frac{h_{13}^d}{h_{33}^d} - \frac{h_{12}^uh_{23}^d}{h_{22}^uh_{33}^d} - \frac{h_{13}^u}{h_{33}^u} 
\right\vert\epsilon^4 \; . \nonumber \end{eqnarray} Generically all of the $h_{ij}$ can be nonvanishing, but in a particular ultraviolet (UV) completion flavon charge conservation may push some of them to higher order in $\epsilon$ or to vanish altogether. However, from (\ref{eqn:diagmasses}) and (\ref{eqn:mixings}) we see that the Babu-Nandi texture is rather robust: the only flavor off-diagonal couplings needed to reproduce the observed mixings are one or more of $h_{12}^d$, $h_{12}^u$, one or more of $h_{23}^d$, $h_{23}^u$, and one or more of $h_{13}^d$, $h_{13}^u$; the rest can either vanish or appear at higher order in $\epsilon$. \subsection{Yukawa interactions and FCNC} Our model has flavor changing neutral current interactions in the Yukawa sector. Using eqs. (\ref{ONE}-\ref{FOUR}), the Yukawa interaction matrices $Y^{h}_u$, $Y^{h}_d$, $Y^{s}_u$, $Y^{s}_d$ for the up and down sectors, for the $h^0$ and $s^0$ fields, are obtained to be \begin{eqnarray} \sqrt{2} Y^{h}_u = \left(\matrix{h_{11}^u \epsilon^6 & h_{12}^u \epsilon^4 & h_{13}^u \epsilon^4 \cr h_{21}^u\epsilon^4 & h_{22}^u \epsilon^2 & h_{23}^u \epsilon^2 \cr h_{31}^u \epsilon^4 & h_{32}^u \epsilon^2 & h_{33}^u}\right), ~~~~~ \sqrt{2} Y^{h}_d = \left(\matrix{h_{11}^d \epsilon^6 & h_{12}^d \epsilon^6 & h_{13}^d \epsilon^6 \cr h_{21}^d\epsilon^6 & h_{22}^d \epsilon^4 & h_{23}^d \epsilon^4 \cr h_{31}^d \epsilon^6 & h_{32}^d \epsilon^4 & h_{33}^d \epsilon^2}\right), \label{SIX} \end{eqnarray} with the charged lepton Yukawa coupling matrix $Y^{h}_\ell$ obtained from $Y^{h}_d$ by replacing $h_{ij}^d \rightarrow h_{ij}^\ell$. 
\begin{eqnarray} \sqrt{2} Y^{s}_u = \left(\matrix{6h_{11}^u \epsilon^5\beta & 4h_{12}^u \epsilon^3\beta & 4h_{13}^u \epsilon^3\beta \cr 4h_{21}^u\epsilon^3\beta & 2h_{22}^u \epsilon\beta & 2h_{23}^u \epsilon\beta \cr 4h_{31}^u \epsilon^3\beta & 2h_{32}^u \epsilon\beta & 0}\right), ~~~~~ \sqrt{2} Y^{s}_d = \left(\matrix{6h_{11}^d \epsilon^5\beta & 6h_{12}^d \epsilon^5\beta & 6h_{13}^d \epsilon^5\beta \cr 6h_{21}^d\epsilon^5\beta & 4h_{22}^d \epsilon^3\beta & 4h_{23}^d \epsilon^3\beta \cr 6h_{31}^d \epsilon^5\beta & 4h_{32}^d \epsilon^3\beta & 2h_{33}^d \epsilon\beta}\right), \label{SEVEN} \end{eqnarray} with the charged lepton Yukawa coupling matrix $Y^{s}_\ell$ obtained from $Y^{s}_d$ by replacing $h_{ij}^d \rightarrow h_{ij}^\ell$. There are several important features that distinguish our model from the proposals in \cite{bn,gl,Dorsner:2002wi}: i) Note from eqs. (\ref{FIVE}) and (\ref{SIX}) that in our model the Yukawa couplings of $h$ to the SM fermions are exactly the same as in the SM. This is because the fermion mass hierarchy in our model arises from $\left({S^\dagger S \over M^2}\right)$. This is a distinguishing feature of our model from that proposed in \cite{bn,gl}, where the Yukawa couplings of $h$ are flavor dependent, because the hierarchy there arises from $\left({H^\dagger H \over M^2}\right)$. ii) In our model, we have an additional singlet Higgs boson whose couplings to the SM fermions are flavor dependent, as given in eq. (\ref{SEVEN}). Again, this is because the hierarchy in our model arises from $\left({S^\dagger S \over M^2}\right)$. In particular, $s^0$ does not couple to the top quark, and its dominant fermionic coupling is to the bottom quark. This will have interesting phenomenological implications for the Higgs searches at the LHC. iii) We note from eqs. (\ref{FIVE}) and (\ref{SIX}) that the mass matrices and the corresponding Yukawa coupling matrices for $h$ are proportional, as in the SM. 
Thus there are no flavor changing Yukawa interactions mediated by $h$. However, this is not true for the Yukawa interactions of the singlet Higgs, as can be seen from eqs. (\ref{FIVE}) and (\ref{SEVEN}). Thus $s$ exchange will lead to flavor violation in the neutral Higgs interactions. \subsection{Higgs sector and the $Z'$} The Higgs potential of our model, consistent with the SM and the extra $U(1)_S$ symmetry, can be written as \begin{eqnarray} V(H,S) = -\mu^{2}_H (H^{\dag} H) - \mu^{2}_S (S^{\dag} S) + \lambda_H (H^{\dag}H)^2 + \lambda_S (S^{\dag} S)^2 +\lambda_{HS}(H^{\dag}H)(S^{\dag} S). \label{EIGHT} \end{eqnarray} Note that after absorbing the three would-be Goldstone components of $H$ in $W^{\pm}$ and $Z$, and the pseudoscalar component of $S$ in $Z'$, we are left with only two scalar Higgs bosons, $h^0$ and $s^0$. The squared mass matrix in the $(h^0, s^0)$ basis is given by \begin{equation} {\cal M}^2 = 2 v^2\left(\matrix{ 2\lambda_H & \lambda_{HS} \alpha \cr \lambda_{HS} \alpha & 2\lambda_S \alpha^2 \cr }\right), \label{NINE} \end{equation} where $\alpha=v_s/v$. The interaction eigenstates can be written in terms of the mass eigenstates $h$ and $s$ as \begin{eqnarray} h^0 &=& h \cos\theta + s \sin\theta, \nonumber \\ s^0 &=& - h \sin\theta + s \cos\theta, \label{TEN} \end{eqnarray} where $\theta$ is the mixing angle in the Higgs sector. In the Yukawa interactions discussed above, as well as in the gauge interactions involving the Higgs fields, the fields appearing are $h^0$ and $s^0$, and these can be expressed in terms of $h$ and $s$ using eq. (\ref{TEN}). The mass of the $Z'$ gauge boson is given by \begin{equation} m^{2}_{Z'} = 2 g^{2}_E v^{2}_s \; . \label{ELEVEN} \end{equation} Note that the $Z'$ does not couple to any SM particles directly. The $Z'$ coupling to the SM particles will be only via dimension six or higher operators. Such couplings will be generated by the vectorlike fermions in the model to be discussed in section 5. 
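Returning to the scalar sector, it is useful to record the result of diagonalizing eq. (\ref{NINE}) explicitly (a standard two-state diagonalization, sketched here; the sign conventions follow eq. (\ref{TEN}), and the $\mp$ assignment depends on which eigenstate is lighter):

```latex
\begin{equation}
\tan 2\theta \;=\; \frac{2\lambda_{HS}\,\alpha}{2\lambda_S\alpha^2 - 2\lambda_H}\,,
\qquad
m^2_{h,s} \;=\; v^2\left[\,2\lambda_H + 2\lambda_S\alpha^2
 \mp \sqrt{\big(2\lambda_H - 2\lambda_S\alpha^2\big)^2
 + 4\lambda_{HS}^2\,\alpha^2}\,\right].
\end{equation}
```

In the limit $\lambda_{HS}\rightarrow 0$ the mixing vanishes and the eigenvalues reduce to $m_h^2 \rightarrow 4\lambda_H v^2$ and $m_s^2 \rightarrow 4\lambda_S v_s^2$, as expected from eq. (\ref{EIGHT}).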
\section{Phenomenological Implications: Constraints from existing data} In this section, we discuss the constraints on our model from the existing experimental results. As can be seen from eq. (\ref{SEVEN}), the exchange of $s$ gives rise to tree level FCNC processes. This will cause $K^0$$-$$\bar{K}^0$ mass splitting, $D^0$$-$$\bar{D}^0$ mixing, $K_L \rightarrow \mu^+\mu^-$, $B^{0}_{s} \rightarrow \mu^+\mu^-$, as well as contributions to the electric dipole moment (EDM) of the neutron and electron, and other rare processes that we discuss below. \subsection{$\mathbf{K^0-\bar{K}^0}$ mixing} In our model, this arises from the tree level $s$ exchange between $d\bar{s}$ and $\bar{s}d$, and is proportional to $\beta^2 \epsilon^{10}$. Taking $\beta\sim \epsilon \sim 0.15$, and the values of the couplings $h^{d}_{12}$ and $h^{d}_{21}$ to be of order 1, the contribution is $\Delta m_K^{\rm Higgs} \simeq 10^{-17}$ to $10^{-16}$ GeV for an $s$ mass of 100 GeV. The experimental value of $\Delta m_K$ is $3.5 \times 10^{-15}$ GeV \cite{pdg}. Thus, since the contribution goes like $m_s^{-4}$, $s$ can be lighter than 100 GeV. Note that $\epsilon = v_s/M$ is fixed to be $\sim 0.15$ to explain the fermion mass hierarchy and the CKM mixing. However, $\beta = v/M$ is a parameter in our model. Although the $\Delta m_K$ constraint allows a somewhat larger value of $\beta$, we shall see that $D^0-\bar{D}^0$ mixing constrains $\beta \sim \epsilon$. \subsection{$\mathbf{D^0-\bar{D}^0}$ mixing}\label{sec:3.2} This contribution is again due to the tree level $s$ exchange between $u\bar{c}$ and $\bar{u}c$, and is proportional to $\beta^2\epsilon^6$, and hence is enhanced compared to the contribution to $\Delta m_K$. Again, taking the couplings $h^{u}_{12}$ and $h^u_{21}$ to be of order one and $\beta\sim \epsilon$, we get $\Delta m_D \sim 10^{-14}$ GeV for $m_s = 100$ GeV. This is to be compared with the current experimental value of $1.6\times 10^{-14}$ GeV \cite{pdg,cdf}. 
Thus $\Delta m_D$ gives a much stronger restriction on the model parameters. $\beta$ cannot be much larger than $\epsilon$, and $s$ cannot be much lighter than 100 GeV. If our proposal is correct, an electroweak singlet scalar should be observed at the LHC. \subsection{Other rare processes} In our model, tree level $s$ exchange between $d\bar{s}$ and $\mu^+ \mu^-$ will contribute to $K_L \rightarrow \mu^+\mu^-$. This contribution is proportional to $\beta^2\epsilon^{10}$, and leads to a contribution to this branching ratio $\sim 10^{-14}$ for $\beta \sim \epsilon$ and $m_s\sim 100$ GeV. This is very small compared to the current experimental value of $\sim 6.9 \times 10^{-9}$ \cite{pdg}. Similarly, the contributions to other rare processes such as $K_L \rightarrow \mu e$, $K \rightarrow \pi \bar{\nu} \nu$, $\mu \rightarrow e \gamma$, $\mu \rightarrow 3e$, $B_d-\bar{B}_d$ mixing, etc.\ are several orders of magnitude below the corresponding experimental limits. \subsection{Constraint on the mass of $s$} Experiments at LEP2 have set a lower limit of $114.4$ GeV on the mass of the SM Higgs boson, from nonobservation of the associated production $e^+ e^- \rightarrow Zh$. In our model, since the singlet Higgs can mix with the doublet $h$, there will be a limit on $m_s$ depending on the value of the mixing angle, $\theta$. For $\sin^2\theta\ge 0.25$, the bound of $114.4$ GeV applies also to $m_s$ \cite{lpew}. However, $s$ can be lighter if the mixing is small. \subsection{Constraint on the mass of the $Z'$} We have assumed that the extra $U(1)$ symmetry in our model is spontaneously broken at the EW scale. But the corresponding gauge coupling $g_E$ is arbitrary, and hence the mass of the $Z'$ is not determined in our model. However, the very accurately measured $Z$ properties at LEP1 constrain the $Z-Z'$ mixing to be $\sim 10^{-3}$ or smaller \cite{pdg,Langacker:2008yv}. In our model, the $Z'$ does not couple to any SM particle directly. 
$Z-Z'$ mixing can take place at the one loop level with the new vectorlike fermions in the loop. The mixing angle is \begin{equation} \theta_{ZZ'}\sim \frac{g_Z g_E}{16 \pi^2} \left(\frac{m_Z}{M}\right)^2, \label{TWELVE} \end{equation} where $M$ is the TeV-scale mass of the vectorlike fermions. Even with $g_E \sim 1$, we get $\theta_{ZZ'} \sim 10^{-4}$ or less. Thus there is no significant bound on the mass of this $Z'$ from LEP1. This $Z'$ can couple to the SM particles via dimension six operators, with interactions of the form \begin{equation} {\cal L} = \frac{\bar{\psi}_L\sigma^{\mu\nu} \psi_R H Z'^{\mu\nu}}{M^2} \; . \label{THIRTEEN} \end{equation} As was shown in \cite{dob}, no significant bound on $m_{Z'}$ emerges from these interactions. \section{Phenomenological Implications: New physics signals} Motivated to explain the observed mass hierarchy in the fermion sector, we have constructed a model which has a complex singlet Higgs (in addition to the usual doublet), a new $U(1)_S$ gauge symmetry at the EW scale, and a new set of vectorlike fermions at the TeV scale. Thus our model has new particles such as a scalar Higgs and a new $Z'$ boson at the EW scale, and heavy vectorlike quarks and leptons. The model has many phenomenological implications for the production and decays of the Higgs bosons, top quark physics, a new scenario for $Z'$ physics, and the production and decays of the vectorlike fermions. \subsection{Higgs signals} \subsubsection{Higgs coupling to the SM fermions} As can be seen from (\ref{SIX}), the couplings of the doublet Higgs $h$ to the SM fermions are identical to those in the SM, whereas the couplings of the singlet Higgs have a different flavor dependence. In particular, the singlet Higgs $s$ does not couple to the top quark, whereas its couplings to $(b,\tau; c,s,\mu; u,d,e)$ involve the flavor dependent factors $(2,2; 2,4,4; 6,6,6)$ respectively, in the limit of zero mixing between $h$ and $s$. 
Including the mixing, these factors will be modified. Thus our model will be distinguished from the SM by the fact that the Higgs has nonstandard couplings to fermions predicted in terms of two model parameters: the ratio of vevs $\alpha$ and the mixing angle $\theta$. \subsubsection{Higgs decays} The couplings of the Higgs bosons $h$ and $s$ to the fermions and the gauge bosons can be obtained from eqns. (\ref{SIX}) and (\ref{SEVEN}), and are given in Table 1. \begin{table} \begin{center} \begin{tabular}{|l|c||l|c|} \hline \bf{Interaction} & \bf{Coupling} & \bf{Interaction} & \bf{Coupling}\\ $s\rightarrow u\overline{u}$ & $\frac{m_u}{v\sqrt{2}}\left(\sin\theta+\frac{6\cos\theta}{\alpha}\right)$ & $h\rightarrow u\overline{u}$ & $\frac{m_u}{v\sqrt{2}}\left(\cos\theta-\frac{6\sin\theta}{\alpha}\right)$\\ $s\rightarrow d\overline{d}$ & $\frac{m_d}{v\sqrt{2}}\left(\sin\theta+\frac{6\cos\theta}{\alpha}\right)$ & $h\rightarrow d\overline{d}$ & $\frac{m_d}{v\sqrt{2}}\left(\cos\theta-\frac{6\sin\theta}{\alpha}\right)$\\ $s\rightarrow \mu^+\mu^-$ & $\frac{m_\mu}{v\sqrt{2}}\left(\sin\theta+\frac{4\cos\theta}{\alpha}\right)$ & $h\rightarrow \mu^+\mu^-$ & $\frac{m_\mu}{v\sqrt{2}}\left(\cos\theta-\frac{4\sin\theta}{\alpha}\right)$\\ $s\rightarrow s\overline{s}$ & $\frac{m_s}{v\sqrt{2}}\left(\sin\theta+\frac{4\cos\theta}{\alpha}\right)$ & $h\rightarrow s\overline{s}$ & $\frac{m_s}{v\sqrt{2}}\left(\cos\theta-\frac{4\sin\theta}{\alpha}\right)$\\ $s\rightarrow \tau^+\tau^-$ & $\frac{m_\tau}{v\sqrt{2}}\left(\sin\theta+\frac{2\cos\theta}{\alpha}\right)$ & $h\rightarrow \tau^+\tau^-$ & $\frac{m_\tau}{v\sqrt{2}}\left(\cos\theta-\frac{2\sin\theta}{\alpha}\right)$\\ $s\rightarrow c\overline{c}$ & $\frac{m_c}{v\sqrt{2}}\left(\sin\theta+\frac{2\cos\theta}{\alpha}\right)$ & $h\rightarrow c\overline{c}$ & $\frac{m_c}{v\sqrt{2}}\left(\cos\theta-\frac{2\sin\theta}{\alpha}\right)$\\ $s\rightarrow b\overline{b}$ & $\frac{m_b}{v\sqrt{2}}\left(\sin\theta+\frac{2\cos\theta}{\alpha}\right)$ & 
$h\rightarrow b\overline{b}$ & $\frac{m_b}{v\sqrt{2}}\left(\cos\theta-\frac{2\sin\theta}{\alpha}\right)$\\ $s\rightarrow t\overline{t}$ & $\frac{m_t}{v\sqrt{2}}\sin\theta$ & $h\rightarrow t\overline{t}$ & $\frac{m_t}{v\sqrt{2}}\cos\theta$\\ $s\rightarrow ZZ$ & $\frac{2 m_Z^2}{v\sqrt{2}}\sin\theta$ & $h\rightarrow ZZ$ & $\frac{2 m_Z^2}{v\sqrt{2}}\cos\theta$\\ $s\rightarrow Z'Z'$ & $\frac{m_{Z'}^2}{v\alpha\sqrt{2}}\cos\theta$ & $h\rightarrow Z'Z'$ & $\frac{m_{Z'}^2}{v\alpha\sqrt{2}}\sin\theta$\\ $s\rightarrow W^+W^-$ & $\frac{2 m_W^2}{v\sqrt{2}}\sin\theta$ & $h\rightarrow W^+W^-$ & $\frac{2 m_W^2}{v\sqrt{2}}\cos\theta$\\ & & $h\rightarrow ss$ & $\lambda_{\mathrm{hss}}$\\ \hline \end{tabular} \caption{Yukawa and gauge couplings of $h$ and $s$.} \end{center} \label{table1} \end{table} The trilinear coupling of $h$ to two $s$ bosons is given by: \begin{eqnarray*} \lambda_{\mathrm{hss}} &=&\frac{m_h^2}{4 v}\left\{(1-\mu)\sin 2\theta\left[\cos^3\theta-\alpha\sin^3\theta+\sin 2\theta(\alpha\cos\theta-\sin\theta)\right]+\right.\\ &&\left. 3\sin 2\theta\left[\sin\theta\left(1+\mu-(1-\mu)\cos 2\theta\right)-\cos\theta\left(1+\mu-(1-\mu)\cos 2\theta\right)/\alpha\right]\right\} \; , \end{eqnarray*} where $\mu=m_s^2/m_h^2$. Because of the flavor dependence of the couplings, the branching ratios (BR) for $h$ to various final states are altered substantially from those in the SM. These branching ratios are shown in Figs. \ref{h-2x_theta_0}-\ref{h-2x_theta_40} for mixing angle values $\theta = 0,~20^\circ,~26^\circ,$ and $40^\circ$, respectively. 
\begin{figure} \begin{center} \includegraphics[scale=1.3]{h-2x_theta_0.eps} \caption{Branching ratio of $h\rightarrow2x$, for $\theta$$=$$0$ and $\alpha$$=$$1$ \cite{hdecay}.} \label{h-2x_theta_0} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=1.3]{h-2x_theta_20.eps} \caption{Branching ratio of $h\rightarrow2x$, for $\theta$$=$$20^\circ$ and $\alpha$$=$$1$.} \label{h-2x_theta_20} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=1.3]{h-2x_theta_26.eps} \caption{Branching ratio of $h\rightarrow2x$, for $\theta$$=$$26^\circ$ and $\alpha$$=$$1$.} \label{h-2x_theta_26} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=1.3]{h-2x_theta_40.eps} \caption{Branching ratio of $h\rightarrow2x$, for $\theta$$=$$40^\circ$ and $\alpha$$=$$1$.} \label{h-2x_theta_40} \end{center} \end{figure} For $\theta = 0$, i.e. no mixing, these BR's are the same as for the SM Higgs. Note that for both $\theta = 20^\circ$ and $26^\circ$, the $gg$ and the $\gamma\gamma$ BR's are enhanced substantially compared to the SM. This is due to a drastic reduction of the $b \bar{b}$ mode from an approximate cancellation in the corresponding coupling, as can be seen from Table 1. In particular, for $\theta = 26^\circ$, the effect is quite dramatic. For a light Higgs ($m_h$ around $115$ GeV), the usually dominant $b \bar{b}$ mode is highly suppressed and the $\gamma\gamma$ mode is enhanced by a factor of almost $10$ compared to the SM. This is to be contrasted with the proposal of Refs. \cite{bn,gl} in which the $h \rightarrow \gamma\gamma$ mode is reduced by about a factor of $10$. Thus the Higgs signal in this mode for a Higgs mass of $\sim{114 - 140}$ GeV gets a big enhancement, making its potential discovery via this mode much more favorable at the LHC. 
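The near-cancellation driving this enhancement can be checked directly from the $h\rightarrow b\overline{b}$ entry of Table 1 (a numerical sketch with $\alpha = 1$, the value used in the figures): the factor $\cos\theta - 2\sin\theta/\alpha$ vanishes exactly at $\tan\theta = \alpha/2$, i.e.\ $\theta \simeq 26.6^\circ$.

```python
import math

# h -> b bbar coupling factor from Table 1: cos(theta) - 2 sin(theta)/alpha
alpha = 1.0

def bb_factor(theta_deg):
    t = math.radians(theta_deg)
    return math.cos(t) - 2.0 * math.sin(t) / alpha

# Exact zero of the factor occurs at tan(theta) = alpha/2
theta_zero = math.degrees(math.atan(alpha / 2.0))
print(theta_zero)        # ~26.57 degrees
print(bb_factor(26.0))   # ~0.02: b bbar mode strongly suppressed
print(bb_factor(0.0))    # 1.0: SM-like limit at zero mixing
```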
Such a signal may be observable at the Tevatron for a Higgs mass $\sim{114}$ GeV as the luminosity accumulates, but would require about 10 fb$^{-1}$ or more of data \cite{tevatron_search}. Another interesting effect is the Higgs signal via the $WW^*$ for the light Higgs. In the SM, this mode becomes important for the Tevatron search for Higgs masses greater than about 135 GeV, where the BR to $WW^*$ is approximately equal to that of $b\bar{b}$. Currently Tevatron experiments have excluded a SM Higgs with mass around $170$ GeV (where the BR to $WW^*$ is around $100$ percent) in this mode \cite{teva170}. In our framework, for $\theta = 20^\circ$ for example, the crossover between the $WW^*$ mode and the $b\bar{b}$ mode takes place below $135$ GeV. Thus the Tevatron experiments will be more sensitive to the lower mass range than for a SM Higgs, and should be able to exclude masses much smaller than $160$ GeV. For a heavy Higgs, $m_h > 200$ GeV, the Higgs will be accessible via the golden mode $h \rightarrow{ZZ}$. However, in this case, both $h$ and $s$ decay via this mode with comparable BR's (see Figs. \ref{h-2x_theta_20} and \ref{h-2x_theta_40} for $\theta = 20^\circ$ and $40^\circ$). So initially it will be hard to tell whether we are seeing $h$ or $s$, a case of Higgs look-alikes. With an accurate measurement of this cross section times the BR, and of the mass of the observed Higgs, we will be able to distinguish a heavy $h$ from a heavy $s$, since the production cross sections depend on the mixing angle. \subsection{Top quark physics}\label{sec:4.2} In the SM, the $t\rightarrow{c h}$ mode is severely suppressed with a BR $\sim10^{-14}$ \cite{tchsm}. In our model, as can be seen from eqs. (\ref{SIX}) and (\ref{SEVEN}), although $t\rightarrow{c h}$ is zero at tree level, we have a large coupling for $t\rightarrow{c s} \sim {2 \epsilon \beta}$ (note $s$ here denotes the singlet Higgs, not the strange quark). 
This gives rise to a significant BR for the $t\rightarrow{c s}$ mode for a Higgs mass of up to about $150$ GeV. If the mixing between $h$ and $s$ is substantial, both decay modes, $t\rightarrow{c s}$ and $t\rightarrow{c h}$, will have BR $\sim{10^{-3}}$. With a very large $t \bar{t}$ cross section, $\sigma_{t\bar{t}}\sim{10^3}$ pb, at the LHC, this could be an observable production mode for Higgs bosons at the LHC. \subsection{$\mathbf{Z^\prime}$ physics} Our model has a $Z'$ boson near the EW scale from the spontaneous breaking of the extra $U(1)$ symmetry. As discussed before, since the $Z-Z'$ mixing is very small, $\sim{10^{-4}}$ or less, its mass is not constrained by the very accurately measured $Z$ properties at LEP. Its mass can be as low as a few GeV from the existing constraints. This $Z'$ does not couple to SM particles via dimension 4 operators. It does couple to $s$ at tree level via the $sZ' Z'$ interaction. Thus it can be produced via the decay of $s$ (or $h$ if there is a substantial mixing between $h$ and $s$). This gives an interesting signal for the Higgs decays: $s\rightarrow{Z' Z'}$, $h\rightarrow{Z' Z'}$ if allowed kinematically. In Figs. \ref{h-2x_ZP_theta_20} and \ref{s-2x_ZP_theta_20}, we give the BR's for $h$ and $s$ decays for a $Z'$ mass of $40$ GeV. The $Z'$ will decay to the SM particles via the $Z-Z'$ mixing with the same branching ratios as the $Z$. Thus the clear final state signal will be $l^+ l^- l^+ l^-$ pairs $(l = e, \mu)$ with each pair having the invariant mass of the $Z'$. Such a signal will be easily detectable at the LHC. If the $Z'$ happens to be very light (say a few GeV), and the mixing angle is extremely tiny, there is a possibility that the $Z'$s may produce displaced vertices in the detector. Both of these will be very unconventional signals for Higgs bosons at the LHC. 
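For orientation, the benchmark $m_{Z'} = 40$ GeV used in Figs. \ref{h-2x_ZP_theta_20} and \ref{s-2x_ZP_theta_20} corresponds, via eq. (\ref{ELEVEN}), to a fairly small gauge coupling (a rough sketch assuming $\alpha = 1$, i.e.\ $v_s = v \simeq 174$ GeV):

```python
import math

# From eq. (11): m_Z'^2 = 2 g_E^2 v_s^2  =>  g_E = m_Zp / (sqrt(2) v_s)
v_s = 174.0          # GeV, taking alpha = v_s/v = 1
m_Zp = 40.0          # GeV, benchmark used in the figures
g_E = m_Zp / (math.sqrt(2.0) * v_s)
print(g_E)           # ~0.16, comfortably perturbative
```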
\begin{figure} \begin{center} \includegraphics[scale=1.3]{h-2x_ZPandSIG_20.eps} \caption{Branching ratio of $h\rightarrow2x$ including $h\rightarrow ss$ and $h\rightarrow Z'Z'$ where $m_{Z'}=40$ GeV and $m_{s}=100$ GeV. Here $\theta$$=$$20^\circ$ and $\alpha$$=$$1$.} \label{h-2x_ZP_theta_20} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=1.3]{s-2x_ZP_theta_20.eps} \caption{Branching ratio of $s\rightarrow2x$ including $s\rightarrow Z'Z'$ where $m_{Z'}=40$ GeV. Here $\theta$$=$$20^\circ$ and $\alpha$$=$$1$.} \label{s-2x_ZP_theta_20} \end{center} \end{figure} \subsection{$\mathbf{B_s^0\rightarrow\mu^+\mu^-}$} In our framework this decay gets a contribution from an FCNC interaction mediated by $s$ exchange. The amplitude for this decay is $A \sim 4 h_{22}^d h_{22}^\ell \epsilon^6 \beta^2$. Taking $\beta\sim\epsilon$, $A \sim 4 h_{22}^d h_{22}^\ell \epsilon^8$, and with the couplings $h_{22}^d, h_{22}^\ell \sim 1$, we obtain the branching ratio $BR(B_s^0\rightarrow\mu^+\mu^-)\sim 10^{-9}$. The current experimental limit for this BR is $4.7\times 10^{-8}$ \cite{pdg}, and thus there is a possibility that this decay could be observed at the Tevatron. \subsection{Production and decay of heavy fermions} Our framework requires vectorlike quarks and leptons, both $SU(2)$ doublets and weak singlets, with masses around the TeV scale. The heavy quarks can be pair produced at high energy hadron colliders via the strong interaction. For example, for a 1 TeV vectorlike quark, the production cross section at the LHC is $\sim{60}$ fb \cite{mangano}. We need several such vectorlike quarks for our model, so the total production cross section could be as large as a few hundred fb. These will decay to the light quarks of the same electric charge and Higgs bosons ($h$ or $s$); thus the signal will be two high $p_T$ jets together with the final states arising from the Higgs decays.
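The $\epsilon^8$ suppression of the amplitude quoted above is easy to evaluate; the snippet below computes only the dimensionless flavor-suppression factor, not the full branching ratio (which requires hadronic inputs).

```python
# Flavor suppression of the s-mediated B_s -> mu+ mu- amplitude:
# A ~ 4 h22^d h22^l eps^8, with beta ~ eps and order-one couplings (text values).
eps = 0.15
h22d = h22l = 1.0  # order-one couplings assumed in the text
amp_factor = 4 * h22d * h22l * eps ** 8
print(f"amplitude suppression factor: {amp_factor:.2e}")  # ~1e-6
```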
For a heavy Higgs, in the golden mode ($h\rightarrow{ZZ}$ or $s\rightarrow{ZZ}$), this will give rise to two high $p_T$ jets plus four $Z$ bosons. In the case of a light $Z'$, the final state signal will be two high $p_T$ jets plus up to 8 charged leptons (with each lepton pair having the invariant mass of the $Z'$). \section{UV Completion} We present two concrete examples of models from which an effective action like eq. (\ref{ONE}) can be derived. The first example only reproduces the second and third generation quark couplings, but its simplicity serves to introduce the basic issues and mechanisms. The second example is a complete three generation TeV scale model of quark flavor. The correct lepton couplings can be obtained from a copy of the same structure used for the down-type quarks. We assume that neutrino masses arise from some additional see-saw mechanism, although it is not obvious that they could not be obtained by refining the TeV scale flavon model. \subsection{Two generation model} For this pedagogical example we will employ two important simplifications: \begin{itemize} \item We only reproduce the second and third generation quark couplings. In the next subsection we extend this to include the first generation. \item We will choose charge assignments such that the couplings $h_{32}^u$, $h_{32}^d$, and $h_{23}^d$ are higher order in $\epsilon$. As already mentioned, nonzero values of these couplings are not needed to reproduce the observed SM quark masses and mixings.
\end{itemize} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|||c|c|c|c|} \hline \bf{Field} & $\mathbf{U(1)_Y}$ & $\mathbf{U(1)_S}$ & $\mathbf{U(1)_F}$ &\bf{Field} & $\mathbf{U(1)_Y}$ & $\mathbf{U(1)_S}$ & $\mathbf{U(1)_F}$ \\ \hline $H$ & 1/2 & 0 & 0 & $U_{1L}$ & 2/3 & 1 & 0 \\ $S$ & 0 & 1 & 0 & $U_{1R}$ & 2/3 & 1 & 1 \\ $F$ & 0 & 0 & 1 & $U_{2L}$ & 2/3 & -1 & 3 \\ $q_{3L}$ & 1/6 & 0 & 0 & $U_{2R}$ & 2/3 & -1 & 3 \\ $q_{2L}$ & 1/6 & 0 & 2 & $D_{1L}$ & -1/3 & -1 & -1 \\ $u_{3R}$ & 2/3 & 0 & 0 & $D_{1R}$ & -1/3 & -1 & -1 \\ $u_{2R}$ & 2/3 & 0 & 3 & $D_{2L}$ & -1/3 & 2 & 3 \\ $d_{3R}$ & -1/3 & 0 & -1 & $D_{2R}$ & -1/3 & 2 & 2 \\ $d_{2R}$ & -1/3 & 0 & 3 & $D_{3L}$ & -1/3 & 1 & 3 \\ $Q_{1L}$ & 1/6 & -1 & -1 & $D_{3R}$ & -1/3 & 1 & 3 \\ $Q_{1R}$ & 1/6 & -1 & 0 & & & & \\ $Q_{2L}$ & 1/6 & 1 & 1 & & & & \\ $Q_{2R}$ & 1/6 & 1 & 2 & & & & \\ $Q_{3L}$ & 1/6 & -1 & 3 & & & & \\ $Q_{3R}$ & 1/6 & -1 & 2 & & & & \\ $Q_{4L}$ & 1/6 & 2 & 2 & & & & \\ $Q_{4R}$ & 1/6 & 2 & 1 & & & & \\ \hline \end{tabular} \end{center} \caption{\label{table:smallcharge} Charge assignments in the two generation model for the scalar fields $H$, $S$, $F$, and the SM quark fields $q_{3L}$, $q_{2L}$, $u_{3R}$, $u_{2R}$, $d_{3R}$, and $d_{2R}$. Also listed are the color triplet weak doublet heavy quark pairs $Q_{iL}$, $Q_{iR}$ and the color triplet weak singlet heavy quark pairs $U_{iL}$, $U_{iR}$, $D_{iL}$, $D_{iR}$.} \end{table} With these simplifications we postulate a TeV scale model with the field content shown in Table \ref{table:smallcharge}, where the hypercharges are listed along with the charge assignments under $U(1)_S$ and $U(1)_F$. The Higgs doublet $H$ is the only scalar that carries hypercharge, while the SM singlet $S$ is the only scalar carrying $U(1)_S$ charge. The SM singlet flavon $F$ is the only scalar carrying $U(1)_F$ charge. The SM quarks are neutral under $U(1)_S$. 
The third generation up-type quark fields also carry no $U(1)_F$ charge, while the other quark fields have flavor-dependent nonzero $U(1)_F$ charges. We introduce four pairs of new color triplet weak doublet fermion fields $Q_{iL}$, $Q_{iR}$, two pairs of color triplet up-type weak singlets $U_{iL}$, $U_{iR}$, and three pairs of color triplet down-type weak singlets $D_{iL}$, $D_{iR}$. Each pair is vectorlike with respect to the SM gauge group and $U(1)_S$, thus no anomalies are introduced with respect to these gauge groups, and each vectorlike pair naturally acquires a Dirac mass of order $M$ (when they have the same $U(1)_F$ charge) or of order the vev of $F$ (when their $U(1)_F$ charges differ by one). We assume that both the vev of $F$ and $M$ are greater than, but of order of, a TeV. Any residual anomaly in $U(1)_F$ can be handled either by introducing more heavy fermions or using the Green-Schwarz mechanism above the TeV scale. With these charge assignments the only dimension 4 couplings involving the second and third generation SM quarks are: \begin{eqnarray} &&\hspace{-10pt} f_1\overline{q}_{3L}u_{3R}\bar{H} + f_2\overline{q}_{3L} Q_{1R} S + f_3\overline{D}_{1L} d_{3R} S^\dagger + f_4\overline{q}_{2L} Q_{2R} S^\dagger \nonumber\\ &&\hspace{-10pt} + f_5\overline{U}_{1L} u_{3R} S + f_6\overline{q}_{2L} Q_{3R} S + f_7\overline{U}_{2L} u_{2R} S^\dagger + f_8\overline{D}_{3L} d_{2R} S + h.c. \quad , \label{FOURTEEN} \end{eqnarray} where the $f_i$ are dimensionless coupling constants. Thus the top quark receives the correct mass from electroweak symmetry breaking for $\vert f_1 \vert \simeq 1$. The other couplings involve the $S$ scalar, but not the Higgs $H$ or the flavon $F$. 
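As a mechanical cross-check of Table \ref{table:smallcharge}, one can verify that every term of eq. (\ref{FOURTEEN}) is neutral under $U(1)_Y\times U(1)_S\times U(1)_F$; a minimal script, where the field names are shorthand for the table entries and a leading minus marks a barred or daggered field:

```python
# Verify that each term of the dimension-4 couplings is neutral under
# U(1)_Y x U(1)_S x U(1)_F, using the charges of the two generation model table.
from fractions import Fraction as Fr

# (Y, S, F) charges copied from the table.
q = {
    'H':   (Fr(1, 2), 0, 0),   'S':   (0, 1, 0),
    'q3L': (Fr(1, 6), 0, 0),   'q2L': (Fr(1, 6), 0, 2),
    'u3R': (Fr(2, 3), 0, 0),   'u2R': (Fr(2, 3), 0, 3),
    'd3R': (Fr(-1, 3), 0, -1), 'd2R': (Fr(-1, 3), 0, 3),
    'Q1R': (Fr(1, 6), -1, 0),  'Q2R': (Fr(1, 6), 1, 2),
    'Q3R': (Fr(1, 6), -1, 2),  'U1L': (Fr(2, 3), 1, 0),
    'U2L': (Fr(2, 3), -1, 3),  'D1L': (Fr(-1, 3), -1, -1),
    'D3L': (Fr(-1, 3), 1, 3),
}

def total(*fields):
    """Sum the three charges; a leading '-' marks a conjugated field."""
    tot = [Fr(0), Fr(0), Fr(0)]
    for f in fields:
        sign = -1 if f.startswith('-') else 1
        for i, c in enumerate(q[f.lstrip('-')]):
            tot[i] += sign * c
    return tuple(tot)

# One entry per term of the coupling list: bars and daggers carry the minus sign.
terms = {
    'f1': ['-q3L', 'u3R', '-H'], 'f2': ['-q3L', 'Q1R', 'S'],
    'f3': ['-D1L', 'd3R', '-S'], 'f4': ['-q2L', 'Q2R', '-S'],
    'f5': ['-U1L', 'u3R', 'S'],  'f6': ['-q2L', 'Q3R', 'S'],
    'f7': ['-U2L', 'u2R', '-S'], 'f8': ['-D3L', 'd2R', 'S'],
}
for name, fields in terms.items():
    assert total(*fields) == (0, 0, 0), name
print("all eight terms are neutral under U(1)_Y x U(1)_S x U(1)_F")
```

The same bookkeeping extends straightforwardly to the heavy-fermion couplings and to the three generation model.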
Both electroweak symmetry breaking and flavor symmetry breaking are communicated to the rest of the SM quark sector via a Froggatt-Nielsen type mechanism, integrating out the heavy TeV scale fermions from tree level diagrams that connect SM quark left doublets to SM quark right singlets and to $H$ or $\bar{H}$. The renormalizable couplings involving just the heavy fermions are: \begin{eqnarray} &&\hspace{-10pt} f_9 \overline{Q}_{1R} Q_{1L} F + f_{10}\overline{Q}_{1L} D_{1R} H + M\overline{D}_{1R} D_{1L} \nonumber\\ &&\hspace{-10pt} + f_{11}\overline{Q}_{2R} Q_{2L} F + f_{12}\overline{Q}_{2L} U_{1R} \bar{H} + f_{13}\overline{U}_{1R} U_{1L} F \\ &&\hspace{-10pt} + f_{14}\overline{Q}_{3R} Q_{3L} F^\dagger + f_{15}\overline{Q}_{3L} U_{2R} \bar{H} + M\overline{U}_{2R} U_{2L} \nonumber\\ &&\hspace{-10pt} + f_{16}\overline{Q}_{2L} Q_{4R} S^\dagger + f_{17}\overline{Q}_{4L} Q_{2R} S + f_{18}\overline{Q}_{4R} Q_{4L} F^\dagger + f_{19}\overline{Q}_{4L} D_{2R} H \nonumber\\ &&\hspace{-10pt} + f_{20}\overline{D}_{2R} D_{2L} F^\dagger + f_{21}\overline{D}_{2L} D_{3R} S + M\overline{D}_{3L} D_{3R} + h.c. \quad . \nonumber \end{eqnarray} Thus, integrating out the heavy fermions in the tree level diagram composed from the couplings \begin{eqnarray} f_2\overline{q}_{3L} Q_{1R} S + f_9 \overline{Q}_{1R} Q_{1L} F + f_{10}\overline{Q}_{1L} D_{1R} H + M\overline{D}_{1R} D_{1L} + f_3\overline{D}_{1L} d_{3R} S^\dagger \label{twenty} \end{eqnarray} produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_2f_3f_9f_{10}\frac{F}{M}\frac{S^\dagger S}{M^2}\overline{q}_{3L}d_{3R}H + h.c. \; .
\end{eqnarray} Integrating out the heavy fermions in the tree level diagram composed from the couplings \begin{eqnarray} f_4\overline{q}_{2L} Q_{2R} S^\dagger + f_{11}\overline{Q}_{2R} Q_{2L} F + f_{12}\overline{Q}_{2L} U_{1R} \bar{H} + f_{13}\overline{U}_{1R} U_{1L} F + f_5\overline{U}_{1L} u_{3R} S \label{twentytwo} \end{eqnarray} produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_4f_5f_{11}f_{12}f_{13}\frac{F^2}{M^2}\frac{S^\dagger S}{M^2}\overline{q}_{2L}u_{3R}\bar{H} + h.c. \; . \end{eqnarray} Integrating out the heavy fermions in the tree level diagram composed from the couplings \begin{eqnarray} f_6\overline{q}_{2L} Q_{3R} S + f_{14}\overline{Q}_{3R} Q_{3L} F^\dagger + f_{15}\overline{Q}_{3L} U_{2R} \bar{H} + M\overline{U}_{2R} U_{2L} + f_7\overline{U}_{2L} u_{2R} S^\dagger \label{twentyfour} \end{eqnarray} produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_6f_7f_{14}f_{15}\frac{F^\dagger}{M}\frac{S^\dagger S}{M^2}\overline{q}_{2L}u_{2R}\bar{H} + h.c.\; . \end{eqnarray} Finally, integrating out the heavy fermions in the tree level diagram composed from the couplings \begin{eqnarray} &&\hspace{-10pt} f_4\overline{q}_{2L} Q_{2R} S^\dagger + f_{17}^*\overline{Q}_{2R} Q_{4L} S^\dagger + f_{19}\overline{Q}_{4L} D_{2R} H \nonumber\\ &&\hspace{-10pt} + f_{20}\overline{D}_{2R} D_{2L} F^\dagger + f_{21}\overline{D}_{2L} D_{3R} S + M\overline{D}_{3R} D_{3L} + f_8\overline{D}_{3L} d_{2R} S \label{twentysixfirst} \end{eqnarray} produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_4f_8f_{17}^*f_{19}f_{20}f_{21}\frac{F^\dagger}{M}\frac{(S^\dagger S)^2}{M^4} \overline{q}_{2L}d_{2R}H + h.c. \; . 
\end{eqnarray} There is an additional very similar tree level diagram contributing to $h_{22}^d$ composed from the couplings \begin{eqnarray} &&\hspace{-10pt} f_4\overline{q}_{2L} Q_{2R} S^\dagger + f_{11}\overline{Q}_{2R} Q_{2L} F + f_{16}\overline{Q}_{2L} Q_{4R} S^\dagger + f_{18}\overline{Q}_{4R} Q_{4L} F^\dagger + f_{19}\overline{Q}_{4L} D_{2R} H \nonumber\\ &&\hspace{-10pt} + f_{20}\overline{D}_{2R} D_{2L} F^\dagger + f_{21}\overline{D}_{2L} D_{3R} S + M\overline{D}_{3R} D_{3L} + f_8\overline{D}_{3L} d_{2R} S \label{twentysixsecond} \end{eqnarray} which produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_4f_8f_{11}f_{16}f_{18}f_{19}f_{20}f_{21}\frac{F(F^\dagger)^2}{M^3}\frac{(S^\dagger S)^2}{M^4} \overline{q}_{2L}d_{2R}H + h.c. \; . \end{eqnarray} \begin{figure} \centering \includegraphics[scale=0.7]{diag20.eps} \caption{The Feynman diagram associated with eq. (\ref{twenty})} \label{fig:diag20} \end{figure} \begin{figure} \centering \includegraphics[scale=0.7]{diag22.eps} \caption{The Feynman diagram associated with eq. (\ref{twentytwo})} \label{fig:diag22} \end{figure} \begin{figure} \centering \includegraphics[scale=0.7]{diag24.eps} \caption{The Feynman diagram associated with eq. (\ref{twentyfour})} \label{fig:diag24} \end{figure} \begin{figure} \centering \includegraphics[scale=0.7]{diag26.1.eps} \caption{The Feynman diagram associated with eq. (\ref{twentysixfirst})} \label{fig:diag26first} \end{figure} \begin{figure} \centering \includegraphics[scale=0.7]{diag26.2.eps} \caption{The Feynman diagram associated with eq. (\ref{twentysixsecond})} \label{fig:diag26} \end{figure} \subsection{Three generation model} Here we present a concrete example of a full three generation TeV scale model that reproduces an effective action like eq. (\ref{ONE}) at the electroweak scale. This model uses a single electroweak messenger scalar $S$, but employs three TeV scale flavon scalars $F_1$, $F_2$, and $F_3$, each corresponding to a different broken $U(1)_{F_i}$ flavon symmetry. As before, the SM quarks are neutral under $U(1)_S$.
The third generation up-type quark fields also carry no $U(1)_{F_i}$ charges, while the other quark fields have flavor-dependent nonzero $U(1)_{F_i}$ charges. The model has a rather large number of new heavy fermions: seven pairs of new color triplet weak doublet fermion fields $Q_{iL}$, $Q_{iR}$, six pairs of color triplet up-type weak singlets $U_{iL}$, $U_{iR}$, and eight pairs of color triplet down-type weak singlets $D_{iL}$, $D_{iR}$. Each pair is vectorlike with respect to the SM gauge group and $U(1)_S$, thus no anomalies are introduced with respect to these gauge groups, and each vectorlike pair naturally acquires a Dirac mass of order $M$ (when they have the same $U(1)_{F_i}$ charges) or of order the vev of some $F_i$ (when one of their $U(1)_{F_i}$ charges differs by one). We assume that both the $F_i$ vevs and $M$ are of order a TeV. Any residual anomaly in the $U(1)_{F_i}$ symmetries can be handled either by introducing more heavy fermions or using the Green-Schwarz mechanism at the TeV scale. We do not suggest that this model is the most efficient one implementing the basic concepts of our proposal. We have made an explicit trade-off, allowing a larger number of new heavy fermions in order to minimize the complexity of the messenger sector and the charge assignments.
\begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|||c|c|c|c|c|c|} \hline \bf{Field} & $\mathbf{U(1)_Y}$ & $\mathbf{U(1)_S}$ & $\mathbf{U(1)_{F1}}$ & $\mathbf{U(1)_{F2}}$ & $\mathbf{U(1)_{F3}}$ &\bf{Field} & $\mathbf{U(1)_Y}$ & $\mathbf{U(1)_S}$ & $\mathbf{U(1)_{F1}}$ & $\mathbf{U(1)_{F2}}$ & $\mathbf{U(1)_{F3}}$ \\ \hline $q_{1L}$ & 1/6 & 0 & 1 & 2 & 1 & $U_{1L}$ & 2/3 & 1 & 0 & 1 & 1 \\ $q_{2L}$ & 1/6 & 0 & 0 & 1 & 0 & $U_{1R}$ & 2/3 & 1 & 0 & 1 & 0 \\ $q_{3L}$ & 1/6 & 0 & 0 & 0 & 0 & $U_{2L}$ & 2/3 & 1 & 1 & 0 & 0 \\ $u_{1R}$ & 2/3 & 0 & 0 & 1 & 1 & $U_{2R}$ & 2/3 & 1 & 1 & 1 & 0 \\ $u_{2R}$ & 2/3 & 0 & 1 & 0 & 0 & $U_{3L}$ & 2/3 & 2 & 2 & 1 & -1 \\ $u_{3R}$ & 2/3 & 0 & 0 & 0 & 0 & $U_{3R}$ & 2/3 & 2 & 2 & 1 & -1 \\ $d_{1R}$ & -1/3 & 0 & 1 & 2 & 0 & $U_{4L}$ & 2/3 & 2 & 1 & 1 & 0 \\ $d_{2R}$ & -1/3 & 0 & 0 & 1 & 1 & $U_{4R}$ & 2/3 & 2 & 2 & 1 & 0 \\ $d_{3R}$ & -1/3 & 0 & 1 & 0 & 0 & $U_{5L}$ & 2/3 & 2 & 0 & 1 & 0 \\ $Q_{1L}$ & 1/6 & 1 & 1 & 2 & 0 & $U_{5R}$ & 2/3 & 2 & 0 & 2 & 0 \\ $Q_{1R}$ & 1/6 & 1 & 1 & 2 & 1 & $U_{6L}$ & 2/3 & 3 & 0 & 2 & 0 \\ $Q_{2L}$ & 1/6 & 1 & 1 & 1 & 0 & $U_{6R}$ & 2/3 & 3 & 1 & 2 & 0 \\ $Q_{2R}$ & 1/6 & 1 & 0 & 1 & 0 & $D_{1L}$ & -1/3 & 1 & 1 & 2 & 0 \\ $Q_{3L}$ & 1/6 & 1 & 1 & 0 & 0 & $D_{1R}$ & -1/3 & 1 & 1 & 2 & 1 \\ $Q_{3R}$ & 1/6 & 1 & 0 & 0 & 0 & $D_{2L}$ & -1/3 & 1 & 0 & 1 & 1 \\ $Q_{4L}$ & 1/6 & 2 & 1 & 0 & 0 & $D_{2R}$ & -1/3 & 1 & 1 & 1 & 1 \\ $Q_{4R}$ & 1/6 & 2 & 1 & 1 & 0 & $D_{3L}$ & -1/3 & 1 & 1 & 0 & 0 \\ $Q_{5L}$ & 1/6 & 2 & 2 & 2 & 0 & $D_{3R}$ & -1/3 & 1 & 1 & 0 & 0 \\ $Q_{5R}$ & 1/6 & 2 & 1 & 2 & 0 & $D_{4L}$ & -1/3 & 2 & 1 & 0 & 0 \\ $Q_{6L}$ & 1/6 & 2 & 2 & 1 & -1 & $D_{4R}$ & -1/3 & 2 & 1 & 0 & 0 \\ $Q_{6R}$ & 1/6 & 2 & 2 & 1 & 0 & $D_{5L}$ & -1/3 & 2 & 1 & 1 & 1 \\ $Q_{7L}$ & 1/6 & 3 & 1 & 2 & 0 & $D_{5R}$ & -1/3 & 2 & 1 & 1 & 0 \\ $Q_{7R}$ & 1/6 & 3 & 2 & 2 & 0 & $D_{6L}$ & -1/3 & 2 & 1 & 2 & 1 \\ $H$ & 1/2 & 0 & 0 & 0 & 0 & $D_{6R}$ & -1/3 & 2 & 1 & 2 & 0 \\ $S$ & 0 & 1 & 0 & 0 & 0 & $D_{7L}$ & -1/3 & 3 
& 1 & 1 & 0 \\ $F_1$ & 0 & 0 & 1 & 0 & 0 & $D_{7R}$ & -1/3 & 3 & 1 & 2 & 0 \\ $F_2$ & 0 & 0 & 0 & 1 & 0 & $D_{8L}$ & -1/3 & 3 & 1 & 2 & 0 \\ $F_3$ & 0 & 0 & 0 & 0 & 1 & $D_{8R}$ & -1/3 & 3 & 1 & 2 & 1 \\ \hline \end{tabular} \end{center} \caption{\label{table:bigcharge} Charge assignments in the three generation model for the scalar fields $H$, $S$, $F_i$, the SM quark fields $q_{iL}$, $u_{iR}$, $d_{iR}$, and the heavy quark pairs $Q_{iL}$, $Q_{iR}$, $U_{iL}$, $U_{iR}$, $D_{iL}$, $D_{iR}$.} \end{table} With the charge assignments listed in Table \ref{table:bigcharge} the only dimension 4 couplings of fermions to the Higgs scalar are \begin{eqnarray} &&\hspace*{-20pt} f_1\overline{q}_{3L}u_{3R}\bar{H} +f_2\overline{Q}_{2L}U_{2R}\bar{H} +f_3\overline{Q}_{4R}U_{4L}\bar{H} +f_4\overline{Q}_{6L}U_{3R}\bar{H} \nonumber\\ &&\hspace*{-20pt} +f_5\overline{Q}_{7L}U_{6R}\bar{H} +f_6\overline{Q}_{3L}D_{3R}H +f_7\overline{Q}_{4L}D_{4R}H +f_8\overline{Q}_{7L}D_{7R}H + h.c. \; . \end{eqnarray} The only dimension 4 couplings of fermions to the the $S$ messenger scalar are \begin{eqnarray} &&\hspace*{-20pt} f_9\overline{q}_{1L}Q_{1R} S^\dagger +f_{10}\overline{q}_{2L}Q_{2R} S^\dagger +f_{11}\overline{q}_{3L}Q_{3R} S^\dagger +f_{12}\overline{U}_{1L}u_{1R} S \nonumber\\ &&\hspace*{-20pt} +f_{13}\overline{U}_{2L}u_{2R} S +f_{14}\overline{D}_{1L}d_{1R} S +f_{15}\overline{D}_{2L}d_{2R} S +f_{16}\overline{D}_{3L}d_{3R} S \nonumber\\ &&\hspace*{-20pt} +f_{17}\overline{Q}_{2L}Q_{4R} S^\dagger +f_{18}\overline{Q}_{1L}Q_{5R} S^\dagger +f_{19}\overline{Q}_{7L}Q_{5R} S^\dagger +f_{20}\overline{Q}_{5L}Q_{7R} S^\dagger \nonumber\\ &&\hspace*{-20pt} +f_{21}\overline{U}_{4L}U_{2R} S +f_{22}\overline{U}_{5L}U_{1R} S +f_{23}\overline{U}_{6L}U_{5R} S +f_{24}\overline{D}_{4L}D_{3R} S \\ &&\hspace*{-20pt} +f_{25}\overline{D}_{3L}D_{4R} S^\dagger +f_{26}\overline{D}_{5L}D_{2R} S +f_{27}\overline{D}_{6L}D_{1R} S +f_{28}\overline{D}_{1L}D_{6R} S^\dagger \nonumber\\ &&\hspace*{-20pt} 
+f_{29}\overline{D}_{7L}D_{5R} S +f_{30}\overline{D}_{8L}D_{6R} S +f_{31}\overline{D}_{6L}D_{8R} S^\dagger + h.c. \;. \nonumber \end{eqnarray} The direct fermion mass terms and mixings consistent with the flavon symmetries and SM gauge symmetries generated by operators of dimension 4 or less are \begin{eqnarray} &&\hspace*{-20pt} f_{32}\overline{Q}_{1L} Q_{1R} F_3^\dagger +f_{33}\overline{Q}_{2L} Q_{2R} F_1 +f_{34}\overline{Q}_{3L} Q_{3R} F_1 +f_{35}\overline{Q}_{3L} Q_{3R} F_2^\dagger \nonumber\\ &&\hspace*{-20pt} +f_{36}\overline{Q}_{4L} Q_{4R} F_2^\dagger +f_{37}\overline{Q}_{5L} Q_{5R} F_1 +f_{38}\overline{Q}_{5L} Q_{6R} F_2 +f_{39}\overline{Q}_{6L} Q_{6R} F_3^\dagger \nonumber\\ &&\hspace*{-20pt} +f_{40}\overline{Q}_{7L} Q_{7R} F_1^\dagger +f_{41}\overline{U}_{1L} U_{1R} F_3 +f_{42}\overline{U}_{2L} U_{2R} F_2^\dagger +M\overline{U}_{3L} U_{3R} +f_{43}\overline{U}_{3L} U_{4R} F_3^\dagger \nonumber\\ &&\hspace*{-20pt} +f_{44}\overline{U}_{4L} U_{4R} F_1^\dagger +f_{45}\overline{U}_{5L} U_{5R} F_2^\dagger +f_{46}\overline{U}_{6L} U_{6R} F_1^\dagger +f_{47}\overline{D}_{1L} D_{1R} F_3^\dagger \\ &&\hspace*{-20pt} +f_{48}\overline{D}_{2L} D_{2R} F_1^\dagger +M\overline{D}_{3L} D_{3R} +M\overline{D}_{4L} D_{4R} +f_{49}\overline{D}_{5L} D_{5R} F_3 \nonumber\\ &&\hspace*{-20pt} +f_{50}\overline{D}_{4L} D_{5R} F_2^\dagger +f_{51}\overline{D}_{6L} D_{6R} F_3 +f_{52}\overline{D}_{7L} D_{7R} F_2^\dagger +M\overline{D}_{8L} D_{7R} +f_{53}\overline{D}_{8L} D_{8R} F_3^\dagger + h.c. \;, \nonumber \end{eqnarray} where for simplicity of notation we have used $M$ to denote all the TeV scale mass parameters. 
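Before turning to the individual diagrams, it is useful to note the generic power counting these chains realize (a schematic estimate, with each scalar replaced by its vev and $\epsilon \equiv v_s/M$ as defined below): every $S^\dagger S$ pair in a chain costs a factor $S^\dagger S/M^2 \rightarrow \epsilon^2$, while each flavon insertion costs an order-one factor $\langle F_i \rangle/M$. Schematically,
\begin{eqnarray}
h^{u,d}_{ij} \sim \Big(\prod_k f_k\Big)\,
\Big(\frac{\langle F \rangle}{M}\Big)^{n_F}\,\epsilon^{\,2 n_S}\;, \qquad
\epsilon \equiv \frac{v_s}{M}\;,
\end{eqnarray}
where $n_S$ counts the $S^\dagger S$ pairs and $n_F$ the flavon insertions in a given chain.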
Thus, integrating out the heavy fermions in the tree level diagram composed from the couplings \begin{eqnarray} f_{11}\overline{q}_{3L} Q_{3R} S^\dagger + f_{34}^*\overline{Q}_{3R} Q_{3L} F_1^\dagger + f_{10}\overline{Q}_{3L} D_{3R} H + M\overline{D}_{3R} D_{3L} + f_3\overline{D}_{3L} d_{3R} S \label{twentyb} \end{eqnarray} produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_{11}f_{34}^*f_{10}f_{3}\frac{F_1^\dagger}{M}\frac{S^\dagger S}{M^2}\overline{q}_{3L}d_{3R}H + h.c. \; . \end{eqnarray} Integrating out the heavy fermions in the tree level diagram composed from the couplings \begin{eqnarray} f_{10}\overline{q}_{2L} Q_{2R} S^\dagger + f_{33}^*\overline{Q}_{2R} Q_{2L} F_1^\dagger + f_{2}\overline{Q}_{2L} U_{2R} \bar{H} + f_{42}^*\overline{U}_{2R} U_{2L} F_2 + f_{13}\overline{U}_{2L} u_{2R} S \label{twentyoneb} \end{eqnarray} produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_{10}f_{33}^*f_{2}f_{42}^*f_{13}\frac{F_1^\dagger F_2}{M^2}\frac{S^\dagger S}{M^2}\overline{q}_{2L}u_{2R}\bar{H} + h.c. \; . \end{eqnarray} Integrating out the heavy fermions in the tree level diagram composed from the couplings \begin{eqnarray} &&\hspace*{-20pt} f_{10}\overline{q}_{2L} Q_{2R} S^\dagger + f_{33}^*\overline{Q}_{2R} Q_{2L} F_1^\dagger +f_{17}\overline{Q}_{2L} Q_{4R} S^\dagger +f_{36}^*\overline{Q}_{4R} Q_{4L} F_2 + f_{7}\overline{Q}_{4L} D_{4R} H \nonumber\\ &&\hspace*{-20pt} + M\overline{D}_{4R} D_{4L} + f_{24}^*\overline{D}_{4L} D_{3R} S + M\overline{D}_{3R} D_{3L} + f_3\overline{D}_{3L} d_{3R} S \label{twentytwob} \end{eqnarray} produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_{10}f_{33}^*f_{17}f_{36}^*f_{7}f_{24}^*f_{3}\frac{F_1^\dagger F_2}{M^2}\frac{(S^\dagger S)^2}{M^4}\overline{q}_{2L}d_{3R}H + h.c.\; .
\end{eqnarray} Integrating out the heavy fermions in the tree level diagram composed from the couplings \begin{eqnarray} &&\hspace*{-20pt} f_{11}\overline{q}_{3L} Q_{3R} S^\dagger + f_{34}^*\overline{Q}_{3R} Q_{3L} F_1^\dagger + f_{10}\overline{Q}_{3L} D_{3R} H + f_{24}^*\overline{D}_{3R} D_{4L} S^\dagger + f_{50}\overline{D}_{4L} D_{5R} F_2^\dagger \nonumber\\ &&\hspace*{-20pt} + f_{49}^*\overline{D}_{5R} D_{5L} F_3^\dagger + f_{26}\overline{D}_{5L} D_{2R} S + f_{48}^*\overline{D}_{2R} D_{2L} F_1 + f_{15}\overline{D}_{2L} d_{2R} S \label{twentytwoc} \end{eqnarray} produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_{11}f_{34}^*f_{10}f_{24}^*f_{50}f_{49}^*f_{26}f_{48}^*f_{15}\frac{F_1^\dagger F_1 F_2^\dagger F_3^\dagger}{M^4} \frac{(S^\dagger S)^2}{M^4}\overline{q}_{3L}d_{2R}H + h.c.\; . \end{eqnarray} There is also another tree level contribution to $h_{32}^d$, proportional to \begin{eqnarray} f_{11}f_{34}^*f_{10}f_{24}f_{50}f_{49}^*f_{26}f_{48}^*f_{15}\frac{F_1^\dagger F_1 F_2^\dagger F_3^\dagger}{M^4} \frac{(S^\dagger S)^2}{M^4}\overline{q}_{3L}d_{2R}H + h.c.\; . 
\end{eqnarray} Integrating out the heavy fermions in the tree level diagram composed from the couplings \begin{eqnarray} &&\hspace*{-20pt} f_{10}\overline{q}_{2L} Q_{2R} S^\dagger + f_{33}^*\overline{Q}_{2R} Q_{2L} F_1^\dagger +f_{17}\overline{Q}_{2L} Q_{4R} S^\dagger +f_{36}^*\overline{Q}_{4R} Q_{4L} F_2 + f_{7}\overline{Q}_{4L} D_{4R} H + M\overline{D}_{4R} D_{4L} \nonumber\\ &&\hspace*{-20pt} + f_{50}\overline{D}_{4L} D_{5R} F_2^\dagger + f_{49}^*\overline{D}_{5R} D_{5L} F_3^\dagger + f_{26}\overline{D}_{5L} D_{2R} S + f_{48}^*\overline{D}_{2R} D_{2L} F_1 + f_{15}\overline{D}_{2L} d_{2R} S \label{twentytwod} \end{eqnarray} produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_{10}f_{33}^*f_{17}f_{36}^*f_{7}f_{50}f_{49}^*f_{26}f_{48}^*f_{15}\frac{F_1^\dagger F_1 F_2^\dagger F_2 F_3^\dagger}{M^5} \frac{(S^\dagger S)^2}{M^4}\overline{q}_{3L}d_{2R}H + h.c.\; . \end{eqnarray} Integrating out the heavy fermions in the tree level diagram composed from the couplings \begin{eqnarray} &&\hspace*{-20pt} f_{9}\overline{q}_{1L} Q_{1R} S^\dagger +f_{32}^*\overline{Q}_{1R} Q_{1L} F_3 +f_{18}^*\overline{Q}_{1L} Q_{5R} S^\dagger +f_{19}^*\overline{Q}_{5R} Q_{7L} S^\dagger + f_{8}\overline{Q}_{7L} D_{7R} H \nonumber\\ &&\hspace*{-20pt} + M\overline{D}_{7R} D_{8L} + f_{30}\overline{D}_{8L} D_{6R} S + f_{28}\overline{D}_{6R} D_{1L} S + f_{14}\overline{D}_{1L} d_{1R} S \label{twentytwoe} \end{eqnarray} produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_{9}f_{32}^*f_{18}^*f_{19}^*f_{8}f_{30}f_{28}f_{14}\frac{F_3}{M} \frac{(S^\dagger S)^3}{M^6}\overline{q}_{1L}d_{1R}H + h.c.\; . \end{eqnarray} There are four other very similar tree level contributions to $h_{11}^d$.
Integrating out the heavy fermions in the tree level diagram composed from the couplings \begin{eqnarray} &&\hspace*{-20pt} f_{9}\overline{q}_{1L} Q_{1R} S^\dagger +f_{32}^*\overline{Q}_{1R} Q_{1L} F_3 +f_{18}^*\overline{Q}_{1L} Q_{5R} S^\dagger +f_{19}^*\overline{Q}_{5R} Q_{7L} S^\dagger + f_{5}\overline{Q}_{7L} U_{6R} \bar{H} + f_{46}^*\overline{U}_{6R} U_{6L} F_1 \nonumber\\ &&\hspace*{-20pt} + f_{23}\overline{U}_{6L} U_{5R} S + f_{45}^*\overline{U}_{5R} U_{5L} F_2 + f_{22}\overline{U}_{5L} U_{1R} S + f_{41}^*\overline{U}_{1R} U_{1L} F_3^\dagger + f_{12}\overline{U}_{1L} u_{1R} S \label{twentytwof} \end{eqnarray} produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_{9}f_{32}^*f_{18}^*f_{19}^*f_{5}f_{46}^*f_{23}f_{45}^*f_{22}f_{41}^*f_{12}\frac{F_1 F_2 F_3^\dagger F_3}{M^4} \frac{(S^\dagger S)^3}{M^6}\overline{q}_{1L}u_{1R} \bar{H} + h.c.\; . \end{eqnarray} Integrating out the heavy fermions in the tree level diagram composed from the couplings \begin{eqnarray} &&\hspace*{-20pt} f_{9}\overline{q}_{1L} Q_{1R} S^\dagger +f_{32}^*\overline{Q}_{1R} Q_{1L} F_3 +f_{18}^*\overline{Q}_{1L} Q_{5R} S^\dagger +f_{19}^*\overline{Q}_{5R} Q_{7L} S^\dagger + f_{8}\overline{Q}_{7L} D_{7R} H + f_{52}^*\overline{D}_{7R} D_{7L} F_2 \nonumber\\ &&\hspace*{-20pt} + f_{29}\overline{D}_{7L} D_{5R} S + f_{49}^*\overline{D}_{5R} D_{5L} F_3^\dagger + f_{26}\overline{D}_{5L} D_{2R} S + f_{48}^*\overline{D}_{2R} D_{2L} F_1 + f_{15}\overline{D}_{2L} d_{2R} S \label{twentytwog} \end{eqnarray} produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_{9}f_{32}^*f_{18}^*f_{19}^*f_{8}f_{52}^*f_{29}f_{49}^*f_{26}f_{48}^*f_{15}\frac{F_1 F_2 F_3^\dagger F_3}{M^4} \frac{(S^\dagger S)^3}{M^6}\overline{q}_{1L}d_{2R}H + h.c.\; . 
\end{eqnarray} Integrating out the heavy fermions in the tree level diagram composed from the couplings \begin{eqnarray} &&\hspace*{-20pt} f_{9}\overline{q}_{1L} Q_{1R} S^\dagger +f_{32}^*\overline{Q}_{1R} Q_{1L} F_3 +f_{18}^*\overline{Q}_{1L} Q_{5R} S^\dagger +f_{19}^*\overline{Q}_{5R} Q_{7L} S^\dagger + f_{8}\overline{Q}_{7L} D_{7R} H + f_{52}^*\overline{D}_{7R} D_{7L} F_2 \nonumber\\ &&\hspace*{-20pt} + f_{29}\overline{D}_{7L} D_{5R} S + f_{50}^*\overline{D}_{5R} D_{4L} F_2 + f_{24}^*\overline{D}_{4L} D_{3R} S + M\overline{D}_{3R} D_{3L} + f_3\overline{D}_{3L} d_{3R} S \label{twentytwoh} \end{eqnarray} produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_{9}f_{32}^*f_{18}^*f_{19}^*f_{8}f_{52}^*f_{29}f_{50}^*f_{24}^*f_{3}\frac{(F_2)^2 F_3}{M^3} \frac{(S^\dagger S)^3}{M^6}\overline{q}_{1L}d_{3R}H + h.c.\; . \end{eqnarray} There is one other very similar tree level contribution to $h_{13}^d$. Integrating out the heavy fermions in the tree level diagram composed from the couplings \begin{eqnarray} &&\hspace*{-20pt} f_{11}\overline{q}_{3L} Q_{3R} S^\dagger + f_{34}^*\overline{Q}_{3R} Q_{3L} F_1^\dagger + f_{10}\overline{Q}_{3L} D_{3R} H + f_{24}^*\overline{D}_{3R} D_{4L} S^\dagger + f_{50}\overline{D}_{4L} D_{5R} F_2^\dagger + f_{29}^*\overline{D}_{5R} D_{7L} S^\dagger \nonumber\\ &&\hspace*{-20pt} + f_{52}^*\overline{D}_{7L} D_{7R} F_2^\dagger + M\overline{D}_{7R} D_{8L} + f_{30}\overline{D}_{8L} D_{6R} S + f_{28}\overline{D}_{6R} D_{1L} S + f_{14}\overline{D}_{1L} d_{1R} S \label{twentytwoi} \end{eqnarray} produces an effective coupling below the TeV scale proportional to \begin{eqnarray} f_{11}f_{34}^*f_{10}f_{24}^*f_{50}f_{29}^*f_{52}^*f_{30}f_{28}f_{14}\frac{F_1^\dagger (F_2^\dagger )^2}{M^3} \frac{(S^\dagger S)^3}{M^6}\overline{q}_{3L}d_{1R}H + h.c.\; . 
\end{eqnarray} The following effective couplings are not generated or are generated at higher order in $\epsilon$ and/or $\beta$: $h_{23}^u$, $h_{32}^u$, $h_{12}^u$, $h_{21}^u$, $h_{13}^u$, $h_{31}^u$, and $h_{21}^d$. As already indicated these couplings are not needed to reproduce the observed SM quark masses and mixings. For illustration, $h_{32}^u$ arises from the effective coupling \begin{eqnarray} f_{11}f_{34}^*f_6f_{25}f_7^*f_{36}f_{17}^*f_2f_{42}^*f_{13}\frac{F_1^\dagger F_2 F_2^\dagger}{M^3}\frac{(S^\dagger S)^2}{M^4} \frac{H^\dagger H}{M^2}\overline{q}_{3L}u_{2R}\bar{H} + h.c. \;, \end{eqnarray} so the extra suppression relative to eq. (\ref{ONE}) is by an additional factor of $\beta$ as well as an additional factor of $\epsilon$. Since $h_{12}^u$ and $h_{21}^u$ have extra suppression in this model, $D^0 - \overline{D^0}$ mixing also has extra suppression. This weakens the lower bound on $m_s$ derived in Section \ref{sec:3.2}. Similarly since $h_{23}^u$ and $h_{32}^u$ have extra suppression the relatively large BR for $t\rightarrow{c s}$ discussed in Section \ref{sec:4.2} will not occur for this particular realization. \section{Conclusion} We have presented a framework in which only the top quark obtains its mass from the Yukawa interaction with the SM Higgs boson via dimension four operators. All the other quarks receive their masses from operators of dimension six or higher involving a complex scalar $S$ that is part of an extended Higgs sector and whose vev is at the electroweak scale. The successive hierarchy of light quark masses is generated via the expansion parameter $\left(\frac{S^\dagger S}{M^2}\right)\sim \epsilon^2$, where $\epsilon \equiv \frac{v_s}{M} \sim 0.15$. All the couplings of the higher dimensional operators are of order one. We are able to generate the appropriate hierarchy of fermion masses with this small parameter $\epsilon$. Since $v_s$ is at the EW scale, the physics of the new scale $M$ is not far above a TeV. 
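The numerical size of the successive suppressions generated by the expansion parameter is worth making explicit; an illustrative evaluation with all order-one couplings set to one:

```python
# Successive suppression factors (S^dagger S / M^2)^n ~ eps^(2n), eps = v_s/M ~ 0.15.
eps = 0.15
for n in range(4):
    print(f"(S^dag S / M^2)^{n} ~ eps^{2 * n} = {eps ** (2 * n):.2e}")
```

Each additional pair of $S$ insertions thus suppresses a coupling by roughly two orders of magnitude, which is the origin of the inter-generational hierarchy.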
We predict a neutral scalar $s$, which gives rise to signals that could be detected at the LHC or at the Tevatron. We make new predictions for Higgs decays and for top decays. The model has a light $Z'$ that has very weak couplings to SM fermions, but could be light enough to be produced via mixing in Higgs decays at the LHC; this could give rise to invisible Higgs decays, displaced vertices from the $Z'$ decays, or multilepton final states, depending on the mass and lifetime of the $Z'$. We have presented an explicit model in which the effective interaction given in (\ref{ONE}) is realized. This involves extending the SM gauge symmetry by an abelian gauge symmetry $U(1)_S$ and a local flavon symmetry group $U(1)_{F_1} \times U(1)_{F_2} \times U(1)_{F_3}$. The flavon symmetry is spontaneously broken at the TeV scale by the complex flavon scalars $F_1$, $F_2$, $F_3$, whereas the $U(1)_S$ symmetry is broken at the electroweak scale by the complex scalar $S$, which is a SM singlet extension of the SM Higgs sector. $S$ acts as the messenger of both flavor and electroweak symmetry breaking. The model requires the existence of vectorlike quarks and leptons, both EW doublets and singlets, at the TeV scale. These can be probed at the LHC. Their decays will be a new source for Higgs production and give rise to final states with four $Z$'s or four $Z^\prime$'s and other interesting new physics signals at the LHC. We have restricted ourselves to models where all of the hierarchies of the SM quark and charged lepton masses and mixings arise from powers of the vev of a single messenger field. In \cite{bn}, a framework was suggested in which all of these hierarchies arise from powers of $\beta = \left(\frac{H^\dagger H}{M^2}\right)$. As we saw in the previous section, in explicit models it is natural to generate powers of both $\epsilon$ and $\beta$. Thus the model presented here and the framework of \cite{bn} are two extremes of a more general class of models.
Obviously one could also generalize by introducing a more complicated messenger sector, i.e. further extending the Higgs sector. A truly viable model should have fewer species of heavy fermions than were required in our example, ameliorating what is otherwise a dramatic worsening of the little hierarchy problem of the Standard Model. This could be achieved by a more efficient construction of the messenger sector and its interplay with the flavon sector. Another interesting direction is to attempt to generate some of the higher order effective couplings from the top quark Yukawa, as was done successfully with leptoquark-generated loop diagrams in \cite{Dobrescu:2008sz}. \subsection*{Acknowledgements} We are grateful to Bogdan Dobrescu for several useful discussions. SN and ZM would like to thank the Fermilab Theoretical Physics Department for warm hospitality and support during the completion of this work. This research was supported in part by grant numbers DOE-FG02-04ER41306 and DOE-FG02-ER46140. Fermilab is operated by the Fermi Research Alliance LLC under contract DE-AC02-07CH11359 with the U.S. Dept. of Energy.
\section{Introduction} The compact (scale sizes of $\sim 0.1-1$ pc) and ultracompact (UC; scale sizes $\;\lower4pt\hbox{${\buildrel\displaystyle <\over\sim}$}\; 0.15$ pc) H \Rmnum{2} regions are associated with massive OB stars \citep[e.g.,][]{Habing}. The UC H \Rmnum{2} region stage ($\;\lower4pt\hbox{${\buildrel\displaystyle <\over\sim}$}\; 10^5$ yr) represents a substantial fraction of the relatively short main-sequence lifetimes of OB stars. Several radio surveys (e.g., Wood \& Churchwell 1989; Fish 1993; Kurtz, Churchwell \& Wood 1994) observed expansions of luminous H \Rmnum{2} regions with shock signatures. ``Champagne flow" models \citep[e.g.,][]{Tenorio1979, TT, York} successfully explain the expansion of H \Rmnum{2} regions by considering a protostar formed in a cloud core, which photoionizes and heats the cloud and drives a shock that accelerates the ionized gas to expand rapidly. Observations tend to support ``champagne flow" models, such as Lumsden \& Hoare (1999) for the UC H \Rmnum{2} region G 29.96-0.02 and \citet{Barriault} for the compact H \Rmnum{2} region Sh 2-158. Champagne flows in clouds of larger scales have also been identified, such as \citet{Foster} for the dense Galactic H \Rmnum{2} region G84.9+0.5 and Maheswar et al. (2007) for the classical H \Rmnum{2} region S236 in the cluster of OB stars NGC 1893. \citet{Tenorio1979} classified champagne flows into cases R and D. For case R, the ionization front (IF) created by the emergence of the central massive protostar rapidly breaks out from the dense cloud and leaves the gas behind it fully ionized. For case D, the ionization front is `trapped' inside a cloud, and produces an expanding H \Rmnum{2} region within the cloud. In the formation phase of H \Rmnum{2} regions, whether the IF is R-type or D-type depends on the initial grain opacity, the ionizing flux, and the initial density and size of a cloud \citep{FTB}. 
In the expansion phase, H \Rmnum{2} regions with an initial mass density profile $\rho\propto r^{-l}$ and $l>3/2$ are `density bounded', where $\rho$ is the mass density and $r$ is the radius (see, e.g., Osterbrock 1989; Franco et al. 1990). \footnote{In Franco et al. (1990), the power-law index of the mass density profile is denoted by $w$ instead of $l$ as adopted here to avoid notational confusions.} It is also possible that the D-type IF changes to a weak R-type IF when $l>3/2$. In such cases, the fully ionized cloud begins to expand and an outgoing shock forms. This is referred to as the ``champagne phase" (Bodenheimer et al. 1979). Notably, if a shock front encounters a steep negative density gradient, for example, the edge of a cloud, asymmetric ``champagne flows" may occur, as observed by Lumsden \& Hoare (1999). According to the VLA survey by Wood \& Churchwell (1989), 16\% of H \Rmnum{2} regions bear a cometary appearance. \citet{Arthur} numerically simulated ``cometary champagne flows". If $l<3/2$, the H \Rmnum{2} region is `ionization bounded', i.e., the ultraviolet radiation is trapped within a finite radius \citep{Oster}. In such cases, the ionized region should expand as $t^{4/(7-2l)}$, driving a shock that would accelerate the ambient medium into a thin shell (Franco et al. 1990). In this paper, we focus on the champagne phase of a cloud assumed to be `density bounded', implying $l>3/2$. We also assume that the cloud is fully ionized shortly after the onset of nuclear burning of a central massive protostar. \citet{Shu} (also Tsai \& Hsu 1995) investigated isothermal ``champagne flows" under spherical symmetry in the self-similar framework. 
By neglecting the gravity of the central massive protostar and assuming that a cloud initially stays at rest and gets heated by the luminous massive protostar to a uniform high temperature, one may obtain self-similar expansion solutions connected with isothermal outflows with shocks; this corresponds to a ``champagne flow" in a highly idealized setting. The initial isothermal mass density profile scales as $\rho\propto r^{-2}$. In addition, for molecular clouds with other possible initial mass density profiles, \citet{Shu} also ignored the self-gravity of a cloud completely and proposed another self-similar transformation, referred to as the `invariant form', and obtained solutions for cases with an initial mass density profile $\rho\propto r^{-l}$, where the power-law index $l$ is not necessarily equal to 2. In general, it is not realistic to suppose molecular clouds to be isothermal in many astrophysical situations. One specific temperature measurement of the UC H \Rmnum{2} region NGC 6334F, which is undergoing a ``champagne flow", reveals a conspicuous temperature gradient from the centre to the edge \citep[e.g.,][]{DeBuizer}. Energy sources and plasma coolings in molecular clouds are not completely known. We therefore approximate the energy equation by a general polytropic equation of state $p=\kappa(r,\ t)\ \rho^{\gamma}$, where $p$ is the thermal gas pressure, $\gamma$ is the polytropic index and the proportional coefficient $\kappa(r,\ t)$ (related to specific entropy) depends on radius $r$ and time $t$ in general. Setting $\kappa$ as a global constant, the equation of state simply becomes a conventional polytropic one. By adjusting $\gamma$, we may model various situations of H \Rmnum{2} regions in molecular clouds. For example, for $\gamma=1$ and a constant $\kappa$, our solutions reduce to isothermal ones. 
Since ``champagne flows" in a polytropic molecular cloud have not been studied, we generalize the isothermal analyses of \citet{Shu} and of Tsai \& Hsu (1995) to a polytropic description of self-similar ``champagne flows". We shall provide the basic formulation with the most general polytropic equation of state (i.e., specific entropy conservation along streamlines; Wang \& Lou 2008) and present global ``champagne flow" solutions with shocks for a conventional polytropic gas. \citet{Shu} introduced the Bondi-Parker radius as a measure for the effective distance of the central gravity. The Bondi-Parker radius is defined by $r_{\rm BP}=GM_*/(2a^2)\ ,$ where $M_*$ is the mass of the central gravity source and $a$ is the sound speed of the surrounding medium, which is a constant for an isothermal cloud. The mass originally residing within a radius $r_0$ is dumped into the star during the star formation. After the star formation, as the surrounding cloud becomes much hotter with a higher sound speed, the Bondi-Parker radius becomes much less than $r_0$. Therefore, for the gas at $r>r_0$, the gravity of the central massive star may be neglected. This reasoning naturally leads to the possible existence of a cavity around the centre of a molecular cloud, which we refer to as a `void' or `bubble'. At $t=0$, the void boundary is at $r=r_0$. For an expanding cloud, the central void also expands. Indeed, a stellar wind also drives a principal shock and is capable of sweeping the surrounding ionized gas into an expanding shell. Wood \& Churchwell (1989) identified central cavities in the shell or cometary UC H \Rmnum{2} regions, which are thought to be supported by stellar wind and radiation pressures in their survey. Lumsden \& Hoare (1999) suggested a ``champagne flow" surrounding a hot stellar wind bubble to interpret observations of G 29.96-0.02. 
\citet{Comeron} numerically simulated the dynamic evolution of wind-driven H \Rmnum{2} regions with strong density gradients and found that the features of the classical champagne model are not substantially changed, except that the compression of the swept-up matter would, rapidly and particularly in the densest cases, lead to the trapping of the IF and inhibit the champagne phase. Therefore, the dynamic evolution of void expansion represents an important physical aspect of ``champagne flows". Recently, Lou \& Hu (2008, in preparation) explored self-similar solutions for voids in a more general context. In this paper, we construct self-similar solutions for ``champagne flows" with central voids in self-similar expansion. The inclusion of a central void not only makes our model more realistic, but also allows us to take into account stellar wind bubbles. We outline the model formulation of a general polytropic gas and present self-similar asymptotic solutions in section 2 and construct global solutions of ``champagne flows" in section 3. Section 4 provides solutions of self-similar ``champagne flows" with an expanding central void. In section 5, we discuss behaviours and astrophysical applications of our novel solutions, and suggest other plausible forms of H \Rmnum{2} regions. Details of an invariant form of self-similar solutions in a conventional polytropic gas with the self-gravity ignored are summarized in Appendix A. 
\section[]{Self-Similar Polytropic Flows} \subsection[]{General Polytropic Formulation} Dynamic evolution of a quasi-spherical general polytropic gas under self-gravity can be described by nonlinear hydrodynamic equations in spherical polar coordinates $(r,\ \theta,\ \phi)$, \begin{equation} \frac{\partial \rho}{\partial t}+\frac{1}{r^2}\frac{\partial}{\partial r}(r^2\rho u)=0\ , \label{equ1} \end{equation} \begin{equation} \frac{\partial M}{\partial t}+u \frac{\partial M}{\partial r}=0\ , \label{equ2} \end{equation} \begin{equation} \frac{\partial M}{\partial r}=4\pi r^2\rho\ ,\label{equ3} \end{equation} \begin{equation} \rho \bigg(\frac{\partial u}{\partial t}+u \frac{\partial u}{\partial r}\bigg)=-\frac{\partial p}{\partial r}-\frac{GM \rho}{r^2}\ ,\label{equ4} \end{equation} \begin{equation} p=\kappa(r,\ t)\rho^{\gamma}\ ,\label{equ5} \end{equation} where $\rho(r,\ t)$ is the mass density, $u(r,\ t)$ is the bulk gas radial flow velocity, $M(r,\ t)$ is the enclosed mass within $r$ at time $t$, $p$ is the thermal pressure, $G=6.67\times 10^{-8}$ dyne cm$^2$ g$^{-2}$ is the gravity constant. Equations (\ref{equ1}), (\ref{equ2}), (\ref{equ3}) describe the mass conservation, equation (\ref{equ4}) is the radial momentum equation and equation (\ref{equ5}) is the general polytropic equation of state, in which $\gamma$ is the polytropic index and the coefficient $\kappa$ directly related to the `specific entropy' depends on $r$ and $t$. For a conventional polytropic gas, $\kappa$ is a global constant in space and time. More generally, we require the conservation of `specific entropy' along streamlines, namely \begin{equation} \bigg(\frac{\partial}{\partial t}+u\frac{\partial}{\partial r}\bigg)\bigg(\ln\frac{p}{\rho^{\gamma}}\bigg)=0\ . \label{equ5a} \end{equation} This set of equations is the same as those of \citet{WangLou08} but without a completely random magnetic field. 
To reduce the nonlinear partial differential equations (PDEs) to ordinary differential equations (ODEs) for self-similar flows, we introduce the following transformation \begin{eqnarray} r=k^{1/2} t^n x\ ,\qquad u=k^{1/2} t^{n-1} v\ ,\qquad \rho=\frac{\alpha}{4\pi G t^2}\ ,\nonumber\\ p=\frac{k t^{2n-4}}{4\pi G}\beta\ ,\quad\qquad M=\frac{k^{3/2} t^{3n-2} m}{(3n-2)G}\ ,\label{equ6} \end{eqnarray} where $x$ is a dimensionless independent self-similar variable, $k$ is a dimensional parameter related to the polytropic sound speed making $x$ dimensionless, $v(x)$, $\alpha(x)$, $\beta(x)$, $m(x)$ are dimensionless reduced dependent variables of $x$ only, and $n$ is a key scaling index which controls the relation between $r$ and $x$ as well as various scalings of reduced dependent variables. We refer to $v(x)$, $ \alpha(x)$, $\beta(x)$, and $m(x)$ as the reduced radial flow speed, mass density, thermal pressure, and enclosed mass, respectively. Transformation (\ref{equ6}) is identical with that of Lou \& Wang (2006). By performing self-similar transformation (\ref{equ6}) in equations (\ref{equ1})$-$(\ref{equ5a}) and introducing parameter $q\equiv 2(n+\gamma-2)/(3n-2)$, we obtain two integral relations \begin{equation} m=\alpha x^2 (nx-v)\ ,\label{equ7} \end{equation} \begin{equation} \beta=\alpha^\gamma m^q\ ,\label{equ8} \end{equation} where there is no loss of generality to set the proportional coefficient (i.e., an integration constant) equal to unity in integral (\ref{equ8}) for $q\neq 2/3$ or $\gamma\neq 4/3$~\citep{WangLou08}. The special case of $\gamma=4/3$ corresponds to a relativistically hot gas as studied by Goldreich \& Weber (1980) and Lou \& Cao (2008). By setting $q=0$, the general polytropic formulation reduces to the conventional polytropic case of a global constant $\kappa$ (e.g., Suto \& Silk 1988; Lou \& Gao 2006; Lou \& Wang 2006; Lou, Jiang \& Jin 2008) with $n+\gamma=2$. 
According to expression $M(r,t)$ for the enclosed mass in transformation (\ref{equ6}), we require $3n-2>0$ and $nx-v>0$ to ensure a positive enclosed mass. The inequality $3n-2>0$ will reappear later for a class of asymptotic solutions at large $x$. By equation (\ref{equ7}), we emphasize that for $nx-v=0$ at a certain $x^*$, the enclosed mass within $x^*$ becomes zero; we refer to this as a central void and $x^*$ is the independent similarity variable marking the void boundary which expands with time $t$ in a self-similar manner. Combining all reduced equations above, we readily derive two coupled nonlinear ODEs for $\alpha'$ and $v'$ as \begin{equation} {\cal X}(x,\alpha,v)\alpha'={\cal A}(x,\alpha,v)\ ,\ \ {\cal X}(x, \alpha, v)v'={\cal V}(x,\alpha, v)\ ,\label{equ9a} \end{equation} where functionals ${\cal X}$, ${\cal A}$ and ${\cal V}$ are defined by \begin{eqnarray} \!\!\!\!\! & \!\!\!\!\! {\cal X}(x,\alpha,v)\equiv\big[2-n+(3n-2)q/2\big]\alpha^{1-n+3nq/2} \nonumber\\& \qquad\times x^{2q} (nx-v)^q-(nx-v)^2\ ,\nonumber\\ &{\cal A}(x,\alpha,v)\equiv 2\frac{x-v}{x}\alpha \bigg[q\alpha^{1-n+3nq/2} x^{2q} (nx-v)^{q-1} \nonumber\\&+(nx-v)\bigg]-\alpha\bigg[(n-1)v +\frac{nx-v}{3n-2}\alpha\nonumber\\&+ q\alpha^{1-n+3nq/2}x^{2q-1} (nx-v)^{q-1}(3nx-2v)\bigg]\ ,\nonumber\\& {\cal V}(x,\alpha, v)\equiv 2 \frac{x-v}{x}\alpha\bigg(2-n+\frac{3n}{2}q\bigg) \alpha^{-n+3nq/2} x^{2q}\nonumber\\& \times(nx-v)^q -(nx-v)\bigg[(n-1)v+\frac{nx-v}{3n-2}\alpha \nonumber\\& +q\alpha^{1-n+3nq/2} x^{2q-1} (nx-v)^{q-1}(3nx-2v)\bigg]\ . \label{equ10a} \end{eqnarray} For a conventional polytropic gas of constant $\kappa$ with $n+\gamma=2$, we simply set $q=0$ in equation (\ref{equ10a}) to derive \begin{eqnarray} \!\!\!\!\!\!\!\!\!\! 
\frac{\alpha'}{\alpha^2}=\frac{(n-1)v+\frac{(nx-v)\alpha}{(3n-2)} -2(x-v)(nx-v)/x}{\alpha(nx-v)^2-\gamma\alpha^{\gamma}}\ ,\label{equ9}\\ v'=\frac{(n-1)\alpha v(nx-v)+\frac{(nx-v)^2}{(3n-2)}\alpha^2-2\gamma\alpha^{\gamma} (x-v)/x}{\alpha(nx-v)^2-\gamma\alpha^{\gamma}}\ .\label{equ10} \end{eqnarray} Up to this point, our basic self-similar hydrodynamic formulation is the same as that of Lou \& Wang (2006) and of \citet{WangLou08} without a random magnetic field. For energy conservation, we define the energy density $\epsilon$ and the energy flux density ${\cal J}$ as follows, \begin{equation} \epsilon=\frac{\rho u^2}{2}-\frac{GM\rho}{r}+\frac{i}{2}p\ , \label{Eden} \end{equation} \begin{equation} {\cal J}=\rho u\bigg(\frac{u^2}{2}-\frac{GM}{r}+\frac{i}{2}\frac{\gamma p}{\rho}\bigg)\ ,\label{Eflux} \end{equation} where $\epsilon$ is the energy density, ${\cal J}$ is the energy flux density and $i$ is the degree of freedom of an individual gas particle. The three terms in expressions (\ref{Eden}) and (\ref{Eflux}) correspond to densities of the kinetic energy, the gravitational energy and the internal energy, respectively. With equations $(\ref{equ1})-(\ref{equ5})$ and a globally constant $\kappa$, we derive \begin{equation} \frac{\partial\epsilon}{\partial t}+\frac{1}{r^2}\frac{\partial}{\partial r}(r^2{\cal J})={\cal P}\equiv u\frac{\partial p}{\partial r}\bigg[\frac{i}{2}(\gamma-1)-1\bigg] \label{Econ} \end{equation} for energy conservation, where ${\cal P}$ represents the net energy input. If the gas expands adiabatically or $\gamma=(i+2)/i$, then ${\cal P}=0$. Whether the gas locally gains or loses energy depends not only on the difference between $\gamma$ and $(i+2)/i$, but also on the signs of $\partial p/\partial r$ and $u$. 
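As a concrete illustration of the sign of the net energy input (a worked example of ours, assuming a monatomic gas with $i=3$, rather than a case computed in the original analysis), consider the conventional polytropic case $\gamma=1.1$ (i.e., $n=0.9$) used later in this paper:

```latex
% Worked example (illustrative assumption: monatomic gas, i = 3):
% evaluate the bracket in the net energy input of equation (\ref{Econ})
% for gamma = 1.1, the n = 0.9 conventional polytropic case.
\[
  {\cal P}
  = u\,\frac{\partial p}{\partial r}
    \left[\frac{i}{2}(\gamma-1)-1\right]
  = u\,\frac{\partial p}{\partial r}
    \left[\frac{3}{2}(0.1)-1\right]
  = -0.85\, u\,\frac{\partial p}{\partial r}\ .
\]
```

For an expanding flow with $u>0$ and an outwardly decreasing pressure $\partial p/\partial r<0$, this gives ${\cal P}>0$, i.e., a net heating of the gas, qualitatively consistent with the ultraviolet heating by the central star invoked in the ``champagne flow" picture.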
\subsection[]{Self-Similar Solutions} An exact globally static solution, known as the singular polytropic sphere (SPS), takes the form of \begin{equation} v=0\ , \quad \alpha=\bigg[\frac{n^{2-q}}{2(2-n) (3n-2)}\bigg]^{-1/(n-3nq/2)}x^{-2/n}\ .\label{SPS} \end{equation} This is a straightforward generalization of the singular isothermal sphere (SIS; e.g., Shu 1977) and of the SPS for a conventional polytropic gas with $q=0$ (Lou \& Wang 2006; Lou \& Gao 2006). For a general SPS here, the mass density profile scales as $\rho\propto r^{-2/n}$, independent of the $q$ parameter. For large $x$, the asymptotic flow behaviour is \begin{eqnarray} \!\!\!\!\!\! v=\bigg[-\frac{nA}{(3n-2)}+2(2-n) n^{q-1}A^{1-n+3nq/2}\bigg]x^{1-2/n}\quad\nonumber\\ +Bx^{1-1/n}+\cdots\ ,\ \label{equ15}\\ \alpha=Ax^{-2/n}+\cdots\ ,\qquad\qquad\qquad\qquad\qquad\qquad\qquad\ \ \ \ \label{equ16} \end{eqnarray} where $A>0$ and $B$ are two constant parameters. As $x\rightarrow +\infty$, the mass density profile scales the same way as SPS (\ref{SPS}) above, independent of the $q$ parameter to the leading order. Since $x\rightarrow +\infty$ means $t\rightarrow 0^{+}$, the asymptotic solution at large $x$ is thus equivalent to initial conditions of the system at a finite $r$. Coefficients $A$ and $B$ are referred to as the mass and velocity parameters, respectively. A global solution with asymptotic behaviour (\ref{equ15}) and (\ref{equ16}) represents a fluid whose density profile scales initially similar to that of a SPS. Thus, by varying the scaling index $n$ or the polytropic index $\gamma$, we are able to model the initial density profile with an index $l=2/n$. Furthermore, as a physical requirement for plausible similarity solutions of our polytropic flow, both $v(x)$ and $\alpha(x)$ should remain finite or tend to zero at large $x$. 
Hence, in cases of $2/3<n<1$, corresponding to $2<l<3$, $A$ and $B$ are fairly arbitrary, while in cases of $1\leqslant n<2$ (general polytropic), corresponding to $1<l\leqslant2$, $B$ should vanish for a finite radial flow velocity at large $x$. In the regime of $x\rightarrow0^+$, there exists a free-fall asymptotic solution for which the gravity is virtually the only force in action, and the radial velocity and the mass density profile both diverge in the limit of $x\rightarrow0^+$. To the leading order, the free-fall asymptotic solution takes the form of \begin{equation} \alpha(x)=\bigg[\frac{(3n-2)m(0)}{2x^3}\bigg]^{1/2}\ , \label{freefall1} \end{equation} \begin{equation} v(x)=-\bigg[\frac{2m(0)}{(3n-2)x}\bigg]^{1/2}, \label{freefall2} \end{equation} where constant $m(0)$ represents an increasing central point mass. Such solutions were first found by \citet{b1} in the isothermal case, and were generalized to the conventional polytropic case~\citep[Cheng 1978;][]{b33, b11} and to the general polytropic case with a random magnetic field by \citet{WangLou08}. The asymptotic form does not depend on $q$. The validity of such solutions requires $n>2/3$ and $\gamma<5/3$; the last inequality appears as a result of comparing various terms in series expansions. Another exact global solution, known as the Einstein-de Sitter (EdS) solution, exists in two cases of $q=0$ and $q=2/3$. The EdS solution for a conventional polytropic gas of $q=0$ reads as \begin{equation} v=\frac{2}{3}x\ ,\qquad \alpha=\frac{2}{3}\ ,\qquad m=\frac{2(n-2/3)}{3}x^3\ ,\label{equ11} \end{equation} (Lou \& Wang 2006); for $q=2/3$ and thus $\gamma=4/3$, this global EdS solution without and with a random magnetic field takes a slightly different form (Lou \& Cao 2008). With shocks, EdS solutions can be used to construct polytropic ``champagne flows" with various upstream dynamics. From now on, we focus on the conventional polytropic case of $q=0$ with $n+\gamma=2$. 
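As a quick consistency check (our own numerical sketch, not part of the original analysis), one can verify that the SPS solution (\ref{SPS}) with $q=0$ indeed satisfies the conventional polytropic ODEs (\ref{equ9}) and (\ref{equ10}):

```python
# Consistency check (illustrative): the singular polytropic sphere (SPS),
# v = 0 and alpha = A_s * x**(-2/n), solves ODEs (equ9)-(equ10) for q = 0
# with n + gamma = 2.  Here n = 0.9 (so gamma = 1.1) as in the text.
n = 0.9
gamma = 2.0 - n
A_s = (n**2 / (2.0*(2.0 - n)*(3.0*n - 2.0)))**(-1.0/n)   # SPS coefficient

def rhs(x, alpha, v):
    """Right-hand sides alpha'(x), v'(x) of ODEs (equ9)-(equ10)."""
    D = alpha*(n*x - v)**2 - gamma*alpha**gamma          # common denominator
    da = alpha**2*((n - 1)*v + (n*x - v)*alpha/(3*n - 2)
                   - 2*(x - v)*(n*x - v)/x)/D
    dv = ((n - 1)*alpha*v*(n*x - v) + (n*x - v)**2*alpha**2/(3*n - 2)
          - 2*gamma*alpha**gamma*(x - v)/x)/D
    return da, dv

for x in (0.5, 1.0, 2.0, 5.0):
    alpha = A_s * x**(-2.0/n)
    da, dv = rhs(x, alpha, 0.0)
    da_sps = -(2.0/n)*A_s*x**(-2.0/n - 1.0)   # exact derivative of the SPS profile
    assert abs(da - da_sps) <= 1e-9*abs(da_sps)   # alpha' reproduces the SPS slope
    assert abs(dv) <= 1e-9                        # v stays identically zero
```

The check confirms that both numerators vanish appropriately on the SPS, i.e., the static profile is an exact solution rather than merely an asymptote.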
To simulate ``champagne flows" in H \Rmnum{2} regions in star-forming clouds, we solve coupled nonlinear ODEs (\ref{equ9}) and (\ref{equ10}) subject to the inner boundary conditions at $x=0$, namely, \begin{equation} \alpha=\alpha_0\ ,\qquad\qquad v=0\ ,\qquad\ \label{equ12} \end{equation} where $\alpha_0$ is a constant. A series expansion yields the LP type of asymptotic solutions at small $x$ in the form of \begin{eqnarray} v=\frac{2}{3}x-\frac{\alpha_0^{(1-\gamma)}} {15\gamma}\bigg(\alpha_0-\frac{2}{3}\bigg) \bigg(n-\frac{2}{3}\bigg)x^3+\cdots\ , \label{equ13}\\ \alpha=\alpha_0-\frac{\alpha_0^{(2-\gamma)}} {6\gamma}\bigg(\alpha_0-\frac{2}{3}\bigg)x^2+\cdots\ .\ \qquad\qquad \label{equ14} \end{eqnarray} The isothermal counterpart of this polytropic series solution was obtained earlier \citep[][Hunter 1977; Shu et al. 2002]{Lar1,Lar2,Pen1,Pen2} with $n=1$ and $\gamma=1$. Such LP type solutions may be utilized to construct champagne flows, if we ignore the central protostar as an approximation and assume the surrounding gas to be initially static \citep[e.g.,][]{Shu}. Physically, as a result of the gravity of the central protostar, the gas infall towards the very central region may not stop, even when the protostar starts to shine and the `champagne phase expansion' has occurred in the outer cloud envelope due to the photoionization and ultraviolet (UV) heating. Thus with central free falls (\ref{freefall1}) and (\ref{freefall2}) as the downstream side of a shock, we can also construct global solutions for the dynamics of H \Rmnum{2} regions surrounding a nascent central massive protostar which involves free-fall materials. We can take LP type or EdS solutions as the downstream side of a shock and model classical champagne flows for a conventional polytropic gas. We shall come to the possible scenario of inner free-fall solutions with an outer champagne flow. 
With the restriction $n+\gamma=2$ for a conventional polytropic gas, the initial density profile with the scaling index $n$ is directly linked to the polytropic index $\gamma$, depending on the energy exchange process in the gas (see eq. \ref{Econ}). For a general polytropic gas with $q\neq 0$ in contrast, the scaling index $n$ and polytropic index $\gamma$ can be independently specified; in particular, we can have $\gamma>1$ and $n\geq 1$ (this is impossible for a conventional polytropic gas). On the other hand, the initial mass density profile is affected by the star formation or other energetic processes before $t=0$. Hence for $l=2/n$, we postulate that the energy exchange process remains largely unchanged after the protostar formation at $t=0$. This is plausible because the ionization front travels relatively fast to large distances in a cloud during the ``champagne flow" phase. Our general polytropic model allows the inequality $2/3<n<2$, corresponding to $1<l<3$, which covers so far the entire range of initial mass density profiles of H \Rmnum{2} regions. This range of mass density distribution has been obtained from radio observations of cloud fragments and isolated dark clouds \citep[e.g.,][]{Arquilla, Myers}. Franco et al. (1990) reveal that an initial mass density profile index $3/2<l<3$ (i.e., $2/3<n<4/3$ in a polytropic model) leads to ``champagne flows" in clouds with weak shocks, while $l>3$ corresponds to a ``champagne flow" with strong and accelerating shocks. We then require parameter $n$ within the range $2/3<n<4/3$, as we assume the cloud is `density bounded'. Within this parameter range, we provide solutions of ``champagne flows" with shocks. We emphasize in particular that there is one more degree of freedom to choose the velocity parameter $B$ in asymptotic solutions (\ref{equ15}) and (\ref{equ16}) in polytropic cases with $2/3<n<1$ (i.e. $2<l<3$) than in the isothermal case. 
This leads to major differences between the polytropic champagne flow solutions we find and the isothermal solutions of Shu et al. (2002) and Tsai \& Hsu (1995). \citet{Franco} have recently argued from the radio continuum spectra that UC H \Rmnum{2} regions have initial density gradients with $2\leqslant l\leqslant3$; so we consider primarily the $n$ range of $2/3<n<1$ or $2<l<3$. In summary, molecular clouds with $3/2<l<3$ have ``champagne flows" in a self-similar manner, while those clouds with $l\geqslant3$ have ``champagne flows" without self-similarity. For clouds with $2<l<3$, there is more than one parameter to specify initial dynamic flows. Finally, we need to include shocks at proper places in LP type solutions, EdS solutions or free-fall solutions to match with appropriate asymptotic solutions at large $x$ to determine relevant coefficients. The shock jump conditions between the downstream and upstream variables are determined by the mass conservation, the radial momentum conservation and the energy conservation. With these three equations for shock conditions, one can determine upstream self-similar variables $(x_{s{\rm u}},\ \alpha_{\rm u},\ v_{\rm u})$ uniquely from downstream self-similar variables $(x_{s{\rm d}},\ \alpha_{\rm d},\ v_{\rm d})$ or vice versa. The two subscripts ${\rm d}$ and ${\rm u}$ here denote the downstream and upstream variables, respectively. Detailed formulation and procedure of self-similar shocks can be found in Section 5 of Lou \& Wang (2006). All solutions in this paper are obtained by solving coupled nonlinear ODEs (\ref{equ9}) and (\ref{equ10}) for a conventional polytropic gas with $n+\gamma=2$. \section[]{Polytropic Champagne Flows} In cases of $n<1$, there is a range of shock positions (or speeds) for a specified downstream solution with a fixed density at the centre $\alpha_0$ at $x=0$, corresponding to different asymptotic flow behaviours at large $x$ on the upstream side. 
We now discuss two situations: first, for cases with a fixed value of $\alpha_0$, we adjust shock positions for a specified LP type solution and observe the relation between shock positions and asymptotic flow behaviours at large $x$. Secondly, for cases with a fixed shock position, we alter the value of $\alpha_0$ and examine the variation of upstream conditions. Certain limits on relevant parameters are thus expected for polytropic ``champagne flows" to exist. As a series of examples, we choose the scaling index $n=0.9$. Numerical explorations have also been performed for cases of $n=0.7$ and $n=0.8$ and the results are qualitatively similar. \subsection[]{Cases with a Fixed $\alpha_0$ Value} With a fixed value of $\alpha_0$ for $x\rightarrow 0^{+}$, one can uniquely determine an LP type solution by a standard numerical integration. Such an LP type solution will encounter the sonic critical curve at a certain $x_{\rm max}$ uniquely corresponding to $\alpha_0$. It is natural to consider possible hydrodynamic shock positions $x_{s{\rm d}}<x_{\rm max}$ on the downstream side of the shock front. One such example of $n=0.9$ and $\alpha_0=1$ is shown in Figure \ref{Fig1} with relevant parameters summarized in Table \ref{tab1}. To model ``champagne flows" with outward expansions at large radii in molecular clouds, we provide the following analysis. In principle, there are two other conditions giving rise to two minima of $x_{s{\rm d}}$ in order to obtain an outflow on the upstream side of a shock. First, our extensive numerical explorations reveal that there exists an $x_{\rm min1}$ such that for $x_{s{\rm d}}<x_{\rm min1}$, the upstream shock position $x_{s{\rm u}}$ becomes complex by the shock conditions. This only happens when we attempt to obtain upstream variables from downstream variables. 
For a real $x_{s{\rm u}}$, downstream variables $(x_{s{\rm d}},\ \alpha_{\rm d},\ v_{\rm d})$ should satisfy \begin{equation} (1-\gamma)\alpha_{\rm d}^{\gamma-1}+2(nx_{s{\rm d}}-v_{\rm d})^2>0\ .\label{xmin} \end{equation} For $\gamma< 1$ (unphysical) and $\gamma=1$, inequality (\ref{xmin}) is readily satisfied; for $\gamma>1$ or $n<1$, this condition does not always hold. Algebraic manipulations give the downstream Mach number in the shock reference frame ${\cal M}_{\rm d}$ as \begin{equation} {\cal M}_{\rm d}^2=\frac{(nx_{s{\rm d}}-v_{\rm d})^2}{\gamma \alpha_{\rm d}^{\gamma-1}}\ .\label{Math} \end{equation} Inequality (\ref{xmin}) imposed on a subsonic downstream Mach number is $1>{\cal M}_{\rm d}^2>(\gamma-1)/(2\gamma)$. The downstream Mach number and the upstream Mach number are related by \begin{equation} {\cal M}_{\rm d}^2=\frac{2+(\gamma-1){\cal M}_{\rm u}^2}{2\gamma{\cal M}_{\rm u}^2-(\gamma-1)}\ . \end{equation} This relation was provided by \citet{LouCao} for a relativistically hot gas and remains valid in our model. With the possible range of upstream Mach number $1<{\cal M}_{\rm u}^2<+\infty$, we then have the limit on ${\cal M}_{\rm d}$ shown above. As we integrate LP or EdS solutions from $x=0$ with $\alpha=\alpha_0$ and $v=0$, solutions do not satisfy inequality (\ref{xmin}) when $x$ remains sufficiently small and $x_{\rm min1}$ is the minimum value of $x$ satisfying (\ref{xmin}). This value of $x_{\rm min1}$ is uniquely determined by the $\alpha_0$ value. Therefore, for a fixed LP type solution around small $x$, polytropic shocks can be constructed with a downstream shock in the range of $x_{\rm min1}<x_{s{\rm d}}<x_{\rm max}$, and across such a shock, the LP type solution at small $x$ can be matched with different asymptotic flows at large $x$. 
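The bounds on ${\cal M}_{\rm d}$ follow directly from the Mach number relation above; a short numerical check (our own sketch, for the $\gamma=1.1$ case) makes this explicit:

```python
# Quick check (illustrative) of the downstream-upstream Mach number relation
#   M_d^2 = [2 + (gamma - 1) M_u^2] / [2 gamma M_u^2 - (gamma - 1)],
# and of the resulting bounds (gamma - 1)/(2 gamma) < M_d^2 < 1.
def mach_down_sq(mach_up_sq, gamma):
    """Downstream Mach number squared across the shock."""
    return (2.0 + (gamma - 1.0)*mach_up_sq) / (2.0*gamma*mach_up_sq - (gamma - 1.0))

gamma = 1.1                                   # the n = 0.9 conventional polytropic case
assert abs(mach_down_sq(1.0, gamma) - 1.0) < 1e-12   # M_u = 1 maps to M_d = 1
limit = (gamma - 1.0) / (2.0*gamma)                  # strong-shock limit M_u -> infinity
assert abs(mach_down_sq(1e12, gamma) - limit) < 1e-9
# any supersonic upstream maps to a subsonic downstream within the quoted bounds
assert all(limit < mach_down_sq(m2, gamma) < 1.0 for m2 in (1.5, 4.0, 100.0))
```

In particular, as ${\cal M}_{\rm u}^2\rightarrow+\infty$ the downstream value approaches $(\gamma-1)/(2\gamma)$ from above, which is exactly the lower bound imposed by inequality (\ref{xmin}).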
Systematic numerical explorations reveal that the upstream velocity increases monotonically with the increase of $x_{s{\rm d}}$, as shown by the variation trend of the $B$ parameter (see Table \ref{tab1}). There is thus another critical value imposed on $x_{s{\rm d}}$, denoted by $x_{\rm min2}$. For $x_{s{\rm d}}>x_{\rm min2}$, the upstream solution matches to an asymptotic solution at large $x$ in the form of (\ref{equ15}) and (\ref{equ16}) with $B>0$, referred to as an outflow. Conversely, with $x_{s{\rm d}}<x_{\rm min2}$, the upstream solution matches to an asymptotic solution with $B<0$, referred to as an inflow. As $B$ varies continuously and monotonically with $x_{s{\rm d}}$, for $x_{s{\rm d}}=x_{\rm min2}$ the upstream solution corresponds to an asymptotic solution with $B=0$, which describes a breeze or a contraction in association with ``champagne flows". According to asymptotic expressions (\ref{equ15}) and (\ref{equ16}) with $q=0$, the breeze or contraction corresponds to slow outward or inward flows. To obtain a breeze, we need a mass parameter \begin{equation} A<A_s\equiv \bigg[\frac{n^2}{2\gamma(3n-2)}\bigg]^{-1/n}\ . \end{equation} For the specific case of $A=A_s$, either a breeze or a contraction reduces to SPS solution (\ref{SPS}). With $n<1$, there are three possibilities in general. First, if $x_{\rm min1}>x_{\rm min2}$ in the allowed range to construct shocks, the upstream solutions always correspond to outflows. Secondly, if $x_{\rm min1}<x_{\rm min2}$ in the allowed range to construct shocks, it is possible to obtain outflows, inflows and breezes or contractions for the upstream side. Thirdly, if either $x_{\rm min1}$ or $x_{\rm min2}$ exceeds $x_{\rm max}$, a global champagne flow is not allowed. For the isothermal case of $n=1$, we may set $B$ equal to zero; asymptotic breezes or contractions on the upstream side are allowed, and the only allowed value of the downstream shock position is $x_{s{\rm d}}=x_{\rm min2}$. 
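The breeze threshold $A_s$ is easily evaluated; the following sketch (our own, using the conventional polytropic constraint $\gamma=2-n$) reproduces the value $A_s=2.042$ quoted later in the text for $n=0.9$:

```python
# Illustrative evaluation of the breeze threshold
#   A_s = [n^2 / (2 gamma (3n - 2))]^(-1/n)
# for a conventional polytropic gas, where gamma = 2 - n.
def breeze_threshold(n):
    gamma = 2.0 - n                      # conventional polytropic constraint
    return (n**2 / (2.0*gamma*(3.0*n - 2.0)))**(-1.0/n)

assert round(breeze_threshold(0.9), 3) == 2.042   # value quoted for n = 0.9
assert breeze_threshold(1.0) == 2.0               # isothermal limit: SIS coefficient
```

Note that for $n=1$ (isothermal) the threshold reduces to $A_s=2$, the familiar coefficient of the singular isothermal sphere, and that $A_s$ coincides with the SPS coefficient in (\ref{SPS}) for $q=0$.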
\citet{Shu} indicated that the isothermal shock position is uniquely determined by the value of $\alpha_0$, which is consistent with our more general analysis here. The unique shock position found by \citet{Shu} corresponds to the $x_{\rm min2}$ above. The conventional scenario for ``champagne flows" would require the entire fluid to expand outward. Numerical simulations of ``champagne flows" \citep[e.g.,][]{TT1} assume that at $t=0^{+}$ the central star is formed and the surrounding cloud is initially at rest. For $t>0^+$, the fluid is photoionized and heated by the ultraviolet radiation from the central star and expands. In this scenario, we would require an expanding upstream flow in order to model ``champagne flows". However, since solutions with an asymptotic inflow or contraction as the upstream part may also exist, the outer part of H \Rmnum{2} regions can also have inward velocities. In fact, during, and even some time after, the star formation, the surrounding cloud may continue to collapse towards the centre \citep[e.g.,][]{b32}. With the core nuclear burning of the central protostar, the surrounding gas is ionized and heated, and the inner part of the fluid starts to expand, while the outer part continues to fall inwards. Those solutions with the LP type as the downstream side of a shock and an asymptotic inflow or contraction on the upstream side correspond to the scenario just described; such global solutions are referred to as Inner Shock Expansions in a Collapsing Envelope (ISECE). For the case of $n=0.9$ and $\alpha_0=1$, we have determined $x_{\rm min1}=0.95$, $x_{\rm min2}=2.287$ and $x_{\rm max}=3$; the shock range $2.287<x_{s{\rm d}}<3$ gives sensible classical polytropic ``champagne flow" solutions and $0.95<x_{s{\rm d}}<2.287$ gives the ISECE solutions as shown in Figure \ref{Fig1}. 
For $n=0.9$, we have $A_s=2.042$; with $x_{s{\rm d}}=x_{\rm min2}$ and $A=1.536<A_s$, the asymptotic solution is a breeze and should be considered as a classical champagne flow. With the other parameters held the same, the case $A>A_s$ would give rise to an asymptotic contraction. The shock location and shock speed can be determined once $x_{s{\rm d}}$ is specified. The dimensionless shock position in the self-similar variable determines the shock strength and velocity in dimensional form. The shock velocity reads $dr_s/dt=nk_{\rm d}^{1/2}x_{s{\rm d}}t^{n-1}$. The outgoing shock decelerates slightly (for $n$ slightly less than 1) and the shock velocity is proportional to $x_{s{\rm d}}$. \begin{figure} \includegraphics[width=0.5\textwidth]{Figure1a.eps} \caption{The reduced mass density $\alpha(x)$ (top) and the reduced radial flow velocity $v(x)$ (bottom) for global ``champagne flow" solutions in cases with $n=0.9$ (thus $\gamma=1.1$) and $\alpha_0=1$. In both panels the dashed curve represents the sonic critical curve, and in the bottom panel the dotted line is $v=0$. The downstream solution is connected with various upstream solutions by light solid curves. The downstream solution is integrated numerically from $x\rightarrow 0^{+}$ with a LP type solution. In both panels, the upstream solutions from top to bottom correspond to $x_{s{\rm d}}$=3, 2.923, 2.773, 2.623, 2.473, 2.287, 2.2 and 2, respectively. We note that the upstream solution with $x_{s{\rm d}}=2.287$ corresponds to a breeze. Relevant parameters are summarized in Table \ref{tab1}.} \label{Fig1} \end{figure} As shown in Figure \ref{Fig1} and Table \ref{tab1}, with increasing $x_{s{\rm d}}$, the upstream variables at the shock front, $v_{\rm u}$ and $\alpha_{\rm u}$, increase, and the two parameters $A$ and $B$ of asymptotic solutions (\ref{equ15}) and (\ref{equ16}) also increase. This is a fairly common feature, observed in all other cases that we have studied numerically.
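The shock kinematics $dr_s/dt=nk_{\rm d}^{1/2}x_{s{\rm d}}t^{n-1}$ can be illustrated with a minimal numerical sketch; the value of $k_{\rm d}$ below is an arbitrary placeholder (not taken from the paper), so only the qualitative behaviour is meaningful: slow deceleration for $n<1$ and a shock speed scaling linearly with $x_{s{\rm d}}$.

```python
# Shock speed dr_s/dt = n * k_d**0.5 * x_sd * t**(n-1) from the self-similar form.
# k_d here is a placeholder (cgs) value for illustration only.
def shock_speed(t, x_sd, n=0.9, k_d=1.0e15):
    return n * k_d ** 0.5 * x_sd * t ** (n - 1.0)

v_early = shock_speed(1.0e12, x_sd=2.287)
v_late = shock_speed(1.0e13, x_sd=2.287)
print(v_late < v_early)                         # slight deceleration for n = 0.9 < 1
print(shock_speed(1.0e12, x_sd=3.0) / v_early)  # ratio equals 3.0 / 2.287
```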
Different shock positions match with different asymptotic solutions at large $x$. Once $A$ and $B$ are specified, the shock position $x_{s{\rm d}}$ is uniquely determined. As $A$ is the mass parameter and $B$ is the velocity parameter, $x_{s{\rm d}}$, proportional to the shock velocity and strength, is determined not only by the initial mass density but also by the initial motion. This differs from the isothermal case, where $B=0$ and the mass parameter $A$ alone determines the shock behaviour. We expect a faster and stronger shock with a higher initial speed. We further identify two sub-types of such ISECE solutions: (i) the upstream side has an outward velocity near the shock and an asymptotic inward velocity far from the centre (e.g., the solution with $x_{s{\rm d}}=2.2$ in Figure \ref{Fig1}); (ii) the upstream side has an inward velocity everywhere (e.g., the solution with $x_{s{\rm d}}=2$ in Figure \ref{Fig1}). For type (i) solutions, there is a stagnation point $x_{\rm stg}$ where the radial flow velocity vanishes; for the solution with $x_{s{\rm d}}=2.2$, $x_{\rm stg}\sim3$. With self-similar transformation (\ref{equ6}), this stagnation point $r_{\rm stg}=k^{1/2}t^{n}x_{\rm stg}$ travels outward with time in a self-similar manner. For ISECE solutions, we envision that such solutions correspond to the situation where a star starts to burn, ionizing and heating the surrounding medium as the gas infall continues. The gas infall and collapse are indispensable in star formation. If the nascent star ionizes the whole residual gas sufficiently fast, the outer gas may still possess an inward momentum. A champagne shock runs into the infalling gas, deposits outward momentum and accelerates the outer gas. If a shock is sufficiently strong, we expect type (i) solutions, i.e., the gas immediately outside the shock flows outward, and the stagnation point travels outward in proportion to the self-similar expansion of the shock.
If a shock is sufficiently weak, we expect type (ii) solutions, i.e., the gas flows inward outside the shock front. This ISECE scenario is expected to occur in certain H \Rmnum{2} regions \citep[e.g., Shen \& Lou 2004;][]{b32}. \begin{table*} \begin{center} \caption{Data Parameters of Global Polytropic ``Champagne Flow" Solutions in Cases with $n=0.9$ and $\alpha_0=1$} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \hline $A$ & $B$ & $x_{s{\rm d}}$ & $\alpha_{\rm d}$ & $v_{\rm d}$ & $x_{s{\rm u}}$ & $\alpha_{\rm u}$ & $v_{\rm u}$\\ \hline 0.9902 & $-0.7699$ & 2 & 0.8416 & 1.3019 & 2.0693 & 0.2005 & $-0.3012$\\ 1.3501 & $-0.2267$ & 2.2 & 0.8172 & 1.4266 & 2.2446 & 0.2404 & 0.1004\\ 1.5357 & 0& 2.2869& 0.8068& 1.4805 & 2.3234& 0.2586& 0.2599\\ 2.0109 & 0.4767 &2.4732 & 0.7848 &1.5956 &2.4964 &0.2993 &0.5789\\ 2.474 &0.8577 & 2.6232 &0.7678 &1.688 &2.6387 &0.3336 &0.8172\\ 3.0224 &1.2416 &2.7731 &0.7516 &1.7803 &2.7832 &0.369 &1.0425\\ 3.6696& 1.6323 &2.9231 &0.7363 &1.8728 &2.9292 &0.4054 &1.2568 \\4.0455 &1.8367 &3 &0.729 &1.9204 &3.0047 &0.4243 &1.3629\\ \hline \end{tabular}\label{tab1} \end{center} \end{table*} \subsection[]{Cases of a Fixed Dimensionless Shock Position} The variation of LP type solutions with different $\alpha_0$ and the same $x_{s{\rm d}}$ is shown in Figure \ref{Fig2}. For $\alpha_0>2/3$, $\alpha$ decreases with increasing $x$, while for $\alpha_0<2/3$, $\alpha$ increases with increasing $x$. With a larger $\alpha_0$, the LP type solution encounters the sonic critical curve earlier; therefore $\alpha_0$ cannot be too large, otherwise the LP type solution encounters the sonic critical curve before reaching the preset shock position $x_{s{\rm d}}$. One case with $n=0.9$ and $x_{s{\rm d}}=3$ is shown in Figure \ref{Fig2} with parameters given in Table \ref{tab2}. Here, we have $\alpha_0\leqslant2.5$.
\begin{figure} \includegraphics[width=0.5\textwidth]{Figure2.eps} \caption{The reduced mass density $\alpha(x)$ (top) and the reduced radial flow velocity $v(x)$ (bottom) for global polytropic ``champagne flow" solutions in cases $n=0.9$ (thus $\gamma=1.1$) and $x_{s{\rm d}}=3$. In the top panel, the horizontal dotted line stands for $\alpha=0$, separating the top panel into two parts; the vertical scales in these parts are different. In the upper part $\alpha(x)$ is presented linearly, while in the lower part $\log[\alpha(x)]$ is presented. In the bottom panel, the dotted line is the sonic critical curve. Shocks at $x\sim 3$ appear as discontinuities in the solutions. The solid curves on the downstream side of shocks are LP type solutions with $\alpha_0$=0.5, 1, 1.5, 2 and 2.5 (from bottom to top in the top panel and from top to bottom in the bottom panel). The solid curves on the right show respectively the corresponding upstream solutions approaching different asymptotic solutions at large $x$ (from bottom to top in the top panel and from bottom to top in the bottom panel). The dashed curves on the downstream side of shocks are LP type solutions with $\alpha_0=10^{-5}$, $10^{-4}$, $10^{-3}$, and $10^{-2}$ (from bottom to top in the top panel and from top to bottom in the bottom panel). The dashed curves on the right show respectively the corresponding upstream solutions approaching different asymptotic solutions at large $x$ (from bottom to top in the top panel and from top to bottom in the bottom panel). The thick solid curve in both panels represents the solution with $\alpha_0=0.04$, which has the lowest upstream speed and the minimum $B$ at large $x$.
Relevant parameters are summarized in Table \ref{tab2}.} \label{Fig2} \end{figure} As shown in Figure \ref{Fig2} and Table \ref{tab2}, with the increase of $\alpha_0$, the upstream condition at the shock front $\alpha_{\rm u}$ and the mass parameter $A$ of the asymptotic solution increase, but $v_{\rm u}$ and $B$ do not follow a steady trend. As $\alpha_0$ increases from $10^{-5}$ to $0.04$, $v_{\rm u}$ and $B$ decrease, while as $\alpha_0$ increases from 0.04 to 2.5, $v_{\rm u}$ and $B$ increase (see Table \ref{tab2}). This shows that $v_{\rm u}$ and $B$ are correlated. The minimum values of $v_{\rm u}$ and $B$ are associated with $\alpha_0=0.04$. Here we show that with a prefixed downstream shock position $x_{s{\rm d}}$, it is possible that neither breeze nor contraction solutions are allowed. With $x_{s{\rm d}}$ as large as 3, all the asymptotic upstream solutions correspond to outflows. In the isothermal analysis of \citet{Shu}, the mass parameter $A$ tends to zero with $\alpha_0\rightarrow 0^{+}$. In polytropic cases, we obtain similar results. According to series expansions (\ref{equ13}) and (\ref{equ14}), if we set $\alpha_0=0$ exactly, the integration gives the trivial solution $\alpha=0$. Based on numerical explorations for $\alpha_0\rightarrow 0^{+}$ in isothermal cases, \citet{Shu} argued that the self-gravity may be neglected for cases with small central density, and developed another self-similar transformation, viz., the so-called invariant form, in order to model initial mass density profiles other than that of a SIS (i.e., $\rho\propto r^{-2}$). The initial mass density profile $\rho\propto r^{-l}$, where the index $l$ does not necessarily equal 2, can be described by the invariant form when the self-gravity is ignored.
We perform a similar reduction with the invariant self-similar transformation for a conventional polytropic gas without the self-gravity (see Appendix A), and show that with $n\neq 1$ (i.e., non-isothermal cases), the power index $l$ must be equal to $2/n$ for a self-similar form. This mass density profile with a scaling index $l=2/n$ is the same as that in a SPS and in asymptotic solutions at large $x$. In summary, the freedom of choosing $l$ in the invariant form disappears for non-isothermal cases. From another perspective, since the index $n$ lies in the range $2/3<n<4/3$, self-similar polytropic champagne flows can model initial mass density profiles with $3/2<l<3$. In other words, the objective of modelling initial density profiles other than $l=2$ is naturally fulfilled, without the necessity of dropping the self-gravity. The clear advantage of our polytropic approach is that the self-gravity is included in the model consideration. Therefore, to apply our polytropic solutions, $\alpha_0\rightarrow 0^{+}$ is no longer required. From transformation (\ref{equ6}), parameter $\alpha_0$ is tightly linked with the central mass density $\rho_0$ and timescale $t$, and should take different values in different situations. Therefore polytropic champagne flow solutions are adaptable to a much wider range of astrophysical cloud systems. Moreover, the $\alpha_0\rightarrow 0^{+}$ cases in our polytropic framework can be approximated by a central ``void" as discussed in the following section. With a central ``void", we are able to neglect the gravity of the central region where the density is sufficiently low and still consider the self-gravity of the outer more dense gas medium. \begin{table*} \begin{center} \caption{Data Parameters of Global Polytropic ``Champagne Flow" Solutions in Cases with $n=0.9$ and $x_{s{\rm d}}=3$.
} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline \hline $\alpha_0$ & $A$ & $B$ & $x_{s{\rm d}}$ & $\alpha_{\rm d}$ & $v_{\rm d}$ & $x_{s{\rm u}}$ & $\alpha_{\rm u}$ & $v_{\rm u}$\\ \hline 0.00001& 0.0002532& 1.145 & 3 & 0.0001 & 2.3469 & 3.0609 & 0.000030184& 1.5294\\ 0.0001 &0.001735 &0.9499 & 3& 0.0007& 2.3152 & 3.064 &0.0001993& 1.3959\\ 0.001 &0.01243 &0.7205 & 3 & 0.0049 &2.2821 &3.0693& 0.001372 &1.2353\\ 0.01& 0.0912 &0.4713 &3 &0.0358 & 2.2449 &3.0746 &0.009752 &1.0536\\ 0.04 &0.3045 & 0.3899 & 3& 0.1154 & 2.2135 &3.0715 &0.03192& 0.9641\\ 0.5& 2.4053 &1.1453 &3& 0.6048 &2.0444 &3.0197 &0.2556 &1.1567\\ 1 &4.0455& 1.8367 & 3 & 0.729 & 1.9204 &3.0047 &0.4244& 1.3629 \\1.5& 5.5009& 2.406& 3 &0.7512 &1.8166 & 3.008 & 0.5556& 1.5059 \\2 &6.9587 &2.9308 & 3 &0.7414 & 1.7274& 3 &0.6606& 1.6083\\ 2.5 &8.5684& 3.4633& 3& 0.7588 &1.562& 2.8444 & 0.7086 &1.4913 \\ \hline \end{tabular}\label{tab2} \end{center} \end{table*} In the isothermal case of \citet{Shu}, the solution in which the outer part is a static SIS represents a limiting solution defining the maximum value of $\alpha_0$. In the polytropic cases with $n=0.9$, we can also identify such a limit by requiring $B=0$ for the upstream asymptotic solutions, such that for a preset $\alpha_0$ the downstream shock position $x_{s{\rm d}}$ is uniquely determined. A family of such ``champagne flow" solutions with asymptotic upstream breezes or contractions is shown by solid curves in Figure \ref{Breeze} with parameters summarized in Table \ref{tabb}. With a gradual increase of $\alpha_0$, the upstream side varies from outward breezes, to a SPS, and to an inward contraction; this trend implies a maximum $\alpha_0$ if classical ``champagne flows" are required to have an outward breeze on the upstream side. Here, the critical value $\alpha_0=3.13$ for an upstream SPS corresponds to the upstream SIS limit in \citet{Shu} (i.e., $\alpha_0=7.9$ in the isothermal case). Naturally, this critical value depends on the choice of $n$.
For $\alpha_0>3.13$, an upstream asymptotic contraction or an ISECE solution appears (dashed curve in Fig. \ref{Breeze}). Thus, isothermal champagne flows form a special family with $B=0$ and $n=1$, referred to as breeze champagne flows. With $2/3<n<1$ or $2<l<3$, there are many more physically possible champagne flows, for which the upstream side corresponds to asymptotic outflows at large $x$. The physical meaning of the special solution whose upstream side is the outer part of a static SPS is clear: the outer envelope of gas is initially in a hydrostatic equilibrium, and the expanding shock created by the UV photoionization travels into the static envelope; on the downstream side of this expanding shock, the gas is heated to high temperatures. As the static SPS relies on the scaling parameter $n$ or the polytropic index $\gamma$, we expect one single solution for a preset $n$. From Table \ref{tabb} we see clearly that with the increase of $\alpha_0$, the downstream shock position required to obtain an upstream breeze, namely $x_{\rm min2}$, decreases. Meanwhile, numerical explorations suggest that $x_{\rm min1}$ increases with increasing $\alpha_0$. Hence we expect that for a sufficiently large $\alpha_0$, $x_{\rm min2}$ would become less than $x_{\rm min1}$, forbidding ISECE solutions. \begin{figure} \includegraphics[width=0.5\textwidth]{Figure3.eps} \caption{``Champagne flow" solutions with a LP type downstream part and the upstream part as a breeze or contraction for $n=0.9$ (i.e., $\gamma=1.1$). The top panel shows the density and the bottom panel shows the velocity. In both panels the dotted curve is the sonic critical curve. The solutions are integrated with $\alpha_0=0.2209$, $2/3$, $1$, $2$, $3.13$ and $5$ (from bottom to top in the top panel and from top to bottom in the bottom panel), and the downstream shock positions $x_{s{\rm d}}$ are carefully chosen so that the upstream solutions correspond to the asymptotic solutions with $B=0$.
Relevant parameters are summarized in Table \ref{tabb}. For the solution with $\alpha_0=3.13$, the corresponding upstream is a SPS. For $\alpha_0>3.13$, the upstream contracts (dashed curve), while for $\alpha_0<3.13$, the upstream is a breeze. For a breeze ``champagne flow", we require $\alpha_0\leqslant3.13$ (solid curves).} \label{Breeze} \end{figure} \begin{table*} \begin{center} \caption{Parameters of Global Polytropic ``Champagne Flow" Solutions with $n=0.9$ and Upstream Breeze or Contraction ($B=0$). } \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline \hline $\alpha_0$ & $A$ & $B$ & $x_{s{\rm d}}$ & $\alpha_{\rm d}$ & $v_{\rm d}$ & $x_{s{\rm u}}$ & $\alpha_{\rm u}$ & $v_{\rm u}$\\ \hline 0.2209&0.7550&0&2.6250&0.3571&1.8473&2.6873&0.0990&0.5153\\ 2/3 &1.3240&0&2.3986&0.6667&1.5991&2.4419&0.2045&0.3407\\ 1 &1.5357&0&2.2869&0.8068&1.4805&2.3234&0.2586&0.2599\\ 2 &1.8643&0&2.0733&1.0783&1.2563&2.0996&0.3734&0.1069\\ 3.13 &2.0420&0&1.9264&1.2773&1.1036&1.9475&0.4643&0\\ 5 &2.1924&0&1.7706&1.5091&0.9429&1.7873&0.5746&$-0.1163$\\ \hline \end{tabular}\label{tabb} \end{center} \end{table*} \section[]{Similarity Polytropic ``Champagne Flows" with Central Voids} We now establish and analyse a new class of ``champagne flow" solutions with voids surrounding the centre. We extend solutions from $x=0$ to the dimensionless self-similar expanding boundary $x^{\ast}$ of a void, inside of which there is no mass, i.e., $m^*=m(x^{\ast})=0$. In our notation, the superscript $^{\ast}$ attached to variables indicates values on the void boundary $x^{\ast}$. By expression (\ref{equ7}), we have $v^{\ast}=nx^{\ast}$. The void boundary conditions are \begin{equation} \alpha=\alpha^*\ ,\qquad v=nx^*\ ,\qquad\hbox{at}\qquad x=x^*\ .
\label{equ20} \end{equation} A Taylor series expansion to first order around the void boundary $x=x^*$ of ODEs (\ref{equ9}) and (\ref{equ10}) yields \begin{eqnarray} &&v(x)=nx^{\ast}+2(1-n)(x-x^{\ast})+\cdots\ ,\qquad \label{equ23}\\ &&\alpha(x)=\alpha^*+\frac{n(1-n)}{\gamma} (\alpha^*) ^n x^{\ast}(x-x^{\ast})+\cdots\ . \label{equ24} \end{eqnarray} Series expansions (\ref{equ23}) and (\ref{equ24}) are conspicuously different from series expansions (\ref{equ13}) and (\ref{equ14}). The proportionality coefficient of the leading velocity term is no longer $2/3$ but depends on the scaling index $n$. With the inner boundary shifted away from the origin, the local flow behaves differently. For $n>2/3$, solutions given by expression (\ref{equ23}) are locally above the line $nx-v=0$, indicating a positive enclosed mass. Numerical integrations reveal that the solutions remain above the line $nx-v=0$ thereafter. Here, we model a gas flow with a central void in the presence of self-gravity and thermal pressure, directly relevant to galactic subsystems such as H \Rmnum{2} regions. One can readily obtain the downstream portion of ``champagne flow" solutions by numerically integrating coupled nonlinear ODEs (\ref{equ9}) and (\ref{equ10}) from the void boundary $x^{\ast}$ with asymptotic expansions (\ref{equ23}) and (\ref{equ24}). All such void solutions encounter the sonic critical curve, and the lower the value of $\alpha^{\ast}$, the later (i.e., at larger $x$) the void solution encounters the sonic critical curve. In order to match with asymptotic solutions of finite density and velocity at large $x$, void solutions must either cross the critical curve smoothly or connect to another branch of solutions via shocks. To model ``champagne flows", we need to construct shocks to obtain global solutions. Global void solutions that cross the sonic critical curve smoothly will be discussed in Lou \& Hu (2008, in preparation). We now present the solutions for large and small $\alpha^{\ast}$, respectively.
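The first-order behaviour near the void boundary can be verified directly from series expansions (\ref{equ23}) and (\ref{equ24}); the Python sketch below uses only the truncated series (the full ODEs are not reproduced here), with the illustrative parameters $n=0.9$, $x^{\ast}=1$ and $\alpha^{\ast}=5$ and the relation $\gamma=2-n$.

```python
# Truncated series near the void boundary x*:
#   v(x)     = n x* + 2(1-n)(x - x*)
#   alpha(x) = a*   + [n(1-n)/gamma] a*^n x* (x - x*),   with gamma = 2 - n
def v_series(x, x_star, n):
    return n * x_star + 2.0 * (1.0 - n) * (x - x_star)

def alpha_series(x, x_star, n, a_star):
    gamma = 2.0 - n
    return a_star + (n * (1.0 - n) / gamma) * a_star ** n * x_star * (x - x_star)

n, x_star, a_star = 0.9, 1.0, 5.0
x = x_star + 1.0e-3
print(v_series(x_star, x_star, n))      # boundary condition v(x*) = n x*
print(n * x - v_series(x, x_star, n))   # = (3n-2)(x - x*), positive for n > 2/3
print(alpha_series(x, x_star, n, a_star) > a_star)  # density grows off the boundary
```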
A family of semi-complete ``champagne flow" solutions with $n=0.9$, void boundary $x^*=1$ and $\alpha^*=5$ is constructed by varying the self-similar shock position, as shown in Figure \ref{Fig4}. Complementarily, another family of solutions with $\alpha^*=10^{-4}$ is also constructed and shown in Figure \ref{Fig5}. Outflows, inflows, and breezes or contractions on the upstream side are all presented. Relevant parameters of these void ``champagne flow" solutions are summarized in Table \ref{tab3}. \begin{figure} \includegraphics[width=0.5\textwidth]{Figure5.eps} \caption{The reduced mass density $\alpha(x)$ (top) and the reduced radial flow velocity $v(x)$ (bottom) for semi-complete ``champagne flow" solutions with a central void inside $x^{\ast}=1$ in the case of $n=0.9$ and density on the void boundary $\alpha^{\ast}=5$. In both panels, the dashed curve stands for the sonic critical curve. In the bottom panel the dotted line stands for the line $v=0$. The solid curve on the upper left of the sonic critical curve is the downstream void solution and the solid curves on the lower right of the sonic critical curve are the corresponding upstream solutions with the downstream shock position $x_{s{\rm d}}=$ 1.7 (inflow), 1.818 (contraction) and 2 (outflow). In this case, $x_{\rm min2}\approx 1.818$ and $x_{\rm min1}=1.408$. Numerical data about these solutions are tabulated in Table \ref{tab3}.} \label{Fig4} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{Figure6.eps} \caption{The reduced mass density $\alpha(x)$ (top) and the reduced radial velocity $v(x)$ (bottom) for semi-complete ``champagne flow" solutions with a central void inside $x^{\ast}=1$ in cases with $n=0.9$ (thus $\gamma=1.1$) and a density on the void boundary $\alpha^{\ast}= 10^{-4}$. In both panels, the dashed curve represents the sonic critical curve. In the bottom panel, the dotted line stands for the line $v=0$.
The solid curve on the left side is the downstream void solution and the solid curves on the right side are the corresponding upstream solutions with the downstream shock position $x_{s{\rm d}}=1.8$ (inflow), 2.068 (breeze), 3.088 (outflow) and 5.701 (outflow). Here, $x_{s{\rm d}}=2.068$ is the limit to ensure an asymptotic outflow, i.e., $x_{\rm min2}\approx 2.068$. In this case, $x_{\rm min1}=1.261$. Numerical data for these solutions are tabulated in Table \ref{tab3}.} \label{Fig5} \end{figure} \begin{table*} \begin{center} \caption{Polytropic ``Champagne Flow" Solutions with a Central Void inside $x^*=1$ in the Case of $n=0.9$} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline \hline $\alpha^*$ & $A$ & $B$ & $x_{s{\rm d}}$ & $\alpha_{\rm d}$ & $v_{\rm d}$ & $x_{s{\rm u}}$ & $\alpha_{\rm u}$ & $v_{\rm u}$\\ \hline $10^{-4}$ &0.000106& $-0.3813$& 1.8&$1.4734\times10^{-4}$& 1.3322 & 1.9058 & $2.7895\times10^{-5}$ & 0.1058\\$10^{-4}$ &0.000191& 0& 2.068 & $1.8377\times10^{-4}$& 1.5386 & 2.1442 & $4.2673\times10^{-5}$ & 0.4893\\ $10^{-4}$& 0.0014& 1.0833 & 3.088 & 0.000517& 2.398& 3.152& 0.00015& 1.501\\ $10^{-4}$& 0.0991 & 3.5315& 5.701& 0.0114& 4.701& 5.839& 0.0032 & 3.669\\ 5& 2.2512& $-1.0512$ & 1.7& 2.8906& 1.0956& 1.8424& 0.4646& $-1.2708$\\ 5& 2.9388& 0& 1.818& 2.4535& 1.1047& 1.8777& 0.5978& $-0.5632$\\ 5&5.2894&1.7273&2&1.8447&1.0816&2.0101&0.8364&0.2166\\ \hline \end{tabular}\label{tab3} \end{center} \end{table*} Similar to the cases without a central void, $\alpha$ decreases with increasing $x$ for large $\alpha^{\ast}$, so the density on the void boundary is a local maximum (see Fig. \ref{Fig4}); for small $\alpha^{\ast}$, $\alpha$ increases with increasing $x$, so the density maximum lies on the downstream side of the shock. The latter corresponds to a shell-like structure in self-similar expansion (see Figure \ref{Fig5}).
With downstream void solutions and upstream outflow and breeze solutions connected by shocks, we establish semi-complete polytropic ``champagne flow" solutions with central voids. Similar to LP type solutions with shocks, there are one maximum limit $x_{\rm max}$ and two minimum limits $x_{\rm min1}$ and $x_{\rm min2}$ imposed on the downstream shock position $x_{s{\rm d}}$ in order to obtain ``champagne flow" solutions. Systematic numerical explorations for cases of $n=0.7$, $n=0.8$ and $n=0.9$ show that in general $x_{\rm min2}>x_{\rm min1}$. For $x_{\rm min1}<x_{s{\rm d}}<x_{\rm min2}$, a void solution can be matched with an asymptotic inflow to produce ISECE solutions, while for $x_{s{\rm d}}>x_{\rm min2}$, a central void solution can be matched with an asymptotic outflow to produce ``champagne flow" solutions. For $x_{s{\rm d}}=x_{\rm min2}$, the upstream side corresponds to a breeze or a contraction with $B=0$. The analysis here parallels that in the previous section for cases without central voids; in particular, the parameters $x_{\rm min1}$ and $x_{\rm min2}$ are determined not only by $n$ and $\alpha^{\ast}$, but also by the expanding void boundary $x^{\ast}$. Numerical explorations suggest that for a given $n$, with the increase of $\alpha^{\ast}$, $x_{\rm min1}$ increases and $x_{\rm min2}$ decreases. Hence for a sufficiently large $\alpha^{\ast}$, we expect $x_{\rm min2}<x_{\rm min1}$, for which ISECE solutions are not allowed. This is consistent with the polytropic cases without central voids. \section[]{Analysis and Discussion} \subsection[]{Comparison with Numerical Simulations} To adapt our self-similar solutions for modelling an astrophysical cloud system, we need first to specify the parameter $k$ related to the sound speed squared. By varying $k$, one can model clouds of different scales using a single self-similar solution.
Parameter $k$ is determined by the thermodynamic parameters, including thermal pressure $p$, mass density $\rho$ and temperature $T$. A useful relation for a conventional polytropic gas derived from transformation (\ref{equ6}) is \begin{equation} k=\frac{p}{\rho^{\gamma}(4\pi G)^{\gamma-1}}=\frac{k_B T}{\mu\rho^{\gamma-1}(4\pi G)^{\gamma-1}}\ ,\label{equk} \end{equation} where $\mu$ is the mean molecular mass of gas particles. According to the classification of \citet{Habing}, the UC H \Rmnum{2} regions have an electron number density $n_e>3000$ cm$^{-3}$ (corresponding to a mass density $\rho>5\times10^{-21}$ g cm$^{-3}$ for a fully ionized hydrogen gas), and the Compact H \Rmnum{2} regions have $1000<n_e<3000$ cm$^{-3}$ (corresponding to $1.7\times10^{-21}<\rho<5\times10^{-21}$ g cm$^{-3}$ for mass density $\rho$). Typically, the temperature of H \Rmnum{2} regions is of order $\sim10^4$ K. For a fully ionized hydrogen gas, we assume $\mu=m_p/2$, where $m_p$ is the proton mass. The value of the polytropic index $\gamma$ strongly influences the resulting $k$, and thus $k$ should be evaluated specifically. For nearly isothermal cases, with relation (\ref{equk}), we estimate $k$ for UC and Compact H \Rmnum{2} regions to be $k\sim 10^{11}$--$10^{12}$ in cgs units. One should be aware that $\kappa$ and thus $k$ vary with the gas temperature and density in a cloud. Here, we presume a constant $k$ to convert self-similar variables to real space variables as a first approximation. We now compare our self-similar solutions of quasi-spherical symmetry with previous numerical simulations. \citet{TT1} performed a numerical study for a scenario similar to ours, i.e., a nascent central massive protostar ionizes and heats the ambient neutral gas and then leads to ``champagne flows". In their simulation, the radiative cooling rate is assumed to be low and thus our polytropic approach may be applicable. Franco et al.
(1990) gave an analytical model for the formation and expansion of H \Rmnum{2} regions and compared their solutions with simulations. We intend to demonstrate that our self-similar polytropic analysis of the problem gives qualitatively similar results. In the simulation of \citet{TT1}, computations were carried out following the progressive ionization of a diffuse gas and its subsequent dynamical evolution, in a globular cluster soon after star formation has been initiated. The residual gas is initially in a hydrostatic equilibrium in the gravitational field at a uniform temperature, and all the ionizing UV radiation comes from stars at the very centre of the cluster. We emphasize that only the cases in which the gas is fully ionized correspond to our conventional ``polytropic champagne flow" model. In the simulation, the initial mass density at the centre $\rho_0$, the initial temperature $T_0$, and the UV photon flux $F$ from the central star completely determine the evolution in spherical symmetry. To translate these parameters into our self-similar form of solutions, we first explore the time evolution of a ``champagne flow" shock. In our self-similar model, the shock radius $r_s$ obeys $r_{s}=k^{1/2}x_st^n$. With such a self-similar formula, we can fit the scaling index $n$ and $k^{1/2}x_s$ to the shock positions obtained by the numerical simulation for case F of \citet{TT1}; the comparison is shown in Figure \ref{Fig6} with relevant parameters in the caption. We see an almost perfect fit, suggesting that the dynamical evolution of ``champagne flows" approaches a self-similar form. This fitting gives $n=1.0583$ and $\log(k^{1/2}x_s)=6.0356$ in cgs units (strictly speaking, we need a general polytropic gas for $\gamma>1$). The value of $n$ is fairly close to the isothermal case of $n=1$, consistent with the conclusion from the numerical simulation that the gas evolution is nearly isothermal \citep{TT1}.
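As a rough numerical illustration of relation (\ref{equk}) above, the sketch below estimates $k$ for a fully ionized hydrogen gas with $T\sim10^4$ K and $\mu=m_p/2$, as quoted in the text; the density is taken at the UC/Compact boundary, and all numbers are order-of-magnitude only.

```python
import math

# k = k_B T / [mu rho^(gamma-1) (4 pi G)^(gamma-1)] in cgs units (relation equk).
k_B, m_p, G = 1.3807e-16, 1.6726e-24, 6.674e-8
T, mu, rho = 1.0e4, 1.6726e-24 / 2.0, 5.0e-21   # fully ionized hydrogen

def k_param(gamma):
    return k_B * T / (mu * rho ** (gamma - 1.0) * (4.0 * math.pi * G) ** (gamma - 1.0))

print(k_param(1.0))   # isothermal limit: of order 1e12 cgs, the quoted range
print(k_param(1.1))   # gamma = 1.1: of order 1e14--1e15 cgs
```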
\begin{figure} \includegraphics{ChampFlowShock.eps} \caption{Shock position evolution with time $t$ in a ``champagne flow" for our self-similar model (solid line) and the numerical simulation of \citet{TT1} (asterisks). For the simulation, the adopted parameters are the central initial density $\rho_0=2\times10^{-21}$ g cm$^{-3}$, the initial temperature $T_0=3000$ K, and the ionizing UV flux $F=2\times 10^{51}$ photons s$^{-1}$. For the self-similar model, the best-fit parameters are $n=1.0583$ and $\log(k^{1/2}x_s)=6.0356$ in cgs units. } \label{Fig6} \end{figure} We now generate a global ``champagne flow" solution grossly comparable to figure 2 of \citet{TT1} by fitting parameters. We first choose a central reduced density $\alpha_0\sim1\times10^{-5}$ such that the initial central density yields the value used in the simulation. We expediently choose $n=0.9$ (hence the parameter $\gamma=2-n=1.1$) to model the initial density profile with $l=2/n\approx2.22$. Note that the parameter $n$ for the ``champagne flow" solution is slightly different from the value we obtain from the fitting. With $\rho_0$ and $T_0$ specified in the simulation, we estimate $k_{\rm d}=3.6\times 10^{15}$ in cgs units with expression (\ref{equk}). We still have the freedom to require the shock to travel to $r_s=2.51\times10^{19}$ cm at $t=1.3\times10^5$ yr, giving $x_{s{\rm d}}=1.86$. With these parameters, we model ``champagne flows" in diffuse H \Rmnum{2} regions with radius $r$ up to $10^{21}$ cm (i.e., $\sim$ 300 pc). The full solution is shown in Figure \ref{Fig7}. The timescale of $1.3\times10^5$ yr is regarded as the duration of the formation phase and the initial time for a ``champagne flow", and the timescale $5.1\times10^6$ yr is regarded as the lifetime of a ``champagne flow".
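The quoted value $x_{s{\rm d}}=1.86$ can be recovered arithmetically from $r_s=k^{1/2}x_st^n$ with the numbers stated above; the short sketch below is a consistency check only.

```python
# Back out the dimensionless shock position from r_s = k^(1/2) * x_s * t^n.
yr = 3.156e7                  # seconds per year
k_d, n = 3.6e15, 0.9          # downstream sound parameter (cgs) and scaling index
t = 1.3e5 * yr                # shock arrival time
r_s = 2.51e19                 # shock radius in cm
x_sd = r_s / (k_d ** 0.5 * t ** n)
print(x_sd)                   # close to the quoted value 1.86
```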
\begin{figure} \includegraphics[width=0.5\textwidth]{FigureT.eps} \caption{ Self-similar ``champagne flow" solutions for radius up to $10^{21}$ cm ($\sim 300$ pc) at time $t=1.3\times 10^5$ yr (solid curves) and $t=5.1\times10^6$ yr (dashed curves). From top to bottom, the panels show the number density, velocity, pressure, enclosed mass and temperature of the gas, respectively. The self-similar shock solution is obtained with $n=0.9$, $\gamma=1.1$, $\alpha_0=1\times10^{-5}$ and downstream shock position $x_{s{\rm d}}=1.86$. The downstream sound scaling factor is $k_{\rm d}=3.6\times10^{15}$ in cgs units, and the upstream sound scaling factor is $k_{\rm u}=3.38\times10^{15}$ in cgs units. The self-similar variables on the downstream side of the shock are $(x_{s{\rm d}},\ \alpha_{\rm d},\ v_{\rm d})=(1.86,\ 2.80\times10^{-5}, \ 1.37)$, and the corresponding upstream variables are $(x_{s{\rm u}},\ \alpha_{\rm u},\ v_{\rm u})=(1.92,\ 6.88\times10^{-6},\ 0.46)$. At large $x$, the numerical solution matches asymptotic solutions (\ref{equ15}) and (\ref{equ16}) with $A=31.942$ and $B=1.006$.}\label{Fig7} \end{figure} The orders of magnitude of all variables are consistent with typical values; e.g., the expansion velocity is several tens of km s$^{-1}$ and the temperature is about $\sim 10^4$ K. The enclosed mass at $r=10^{21}$ cm ($\sim 300$ pc) is about $850 M_{\odot}$, consistent with the value of $\sim 800 M_{\odot}$ given by the numerical simulation and with the typical value for diffuse H \Rmnum{2} regions. The enclosed mass does not vary with time $t$, confirming the cut-off radius chosen at $r=10^{21}$ cm. As time evolves, the central number density decreases from $10^{-0.5}$ to $10^{-3.5}$ cm$^{-3}$ and the central thermal pressure decreases accordingly from $10^{-12}$ to $10^{-15}$ dyne cm$^{-2}$. We can also compare the variable profiles with case F of \citet{TT1}. The velocity profiles are very similar and we clearly see an expanding shock.
As time evolves, the shock strength becomes weaker. In both the numerical simulation and our model analysis, we observe a density peak on the downstream side of the shock and a significant temperature gradient on the upstream side, which cannot be accounted for by previous isothermal solutions. The upstream density and pressure profiles are also similar; however, the downstream density, pressure and temperature profiles (near centre) are somewhat different. In \citet{TT1}, temperature, pressure and density are initially very uniform behind the champagne shock but at the end of the calculations show large inward gradients. In Figure \ref{Fig7}, our model also produces quasi-uniform temperature, pressure and density behind the shock at the beginning ($t=1.3\times10^5$ yr), but we do not observe large inward gradients at the end. These differences are primarily due to the different physical assumptions adopted in the simulation and in our self-similar solutions. \citet{TT1} treated the gas dynamics in protoglobular clusters and neglected the gas self-gravity as the gas mass is only about 0.1 per cent that of the stars. As shown above, our self-similar solutions neglect the gravity of the central massive protostar but include the self-gravity effect. Another important factor that introduces such differences is that the simulation takes into account both the forward champagne shock and the reverse rarefaction wave. Our self-similar model can accommodate only a forward-moving shock, so we only have the principal outgoing champagne shock. We calculate the total energy $E_{\rm total}$ defined as the energy of the gas under consideration in an infinite space. The total energy at time $t$ can be expressed as \begin{eqnarray} \!\!\! 
&E_{\rm total}=E_{\rm K}+E_{\rm G}+E_{\rm I} \qquad\qquad\qquad\qquad\nonumber\\ &=\int_{r_{\rm in}}^{r_{\rm out}}\bigg(\frac{1}{2}\rho u^2-\frac{GM\rho}{r}+\frac{i}{2}p\bigg)4\pi r^2dr\qquad\nonumber\\ &=\frac{k^{5/2}t^{5n-4}}{2G}\int_{x_{\rm in}}^{x_{\rm out}}\bigg[\alpha v^2x^2-\frac{2}{(3n-2)}\alpha^2x^3(nx-v)\nonumber\\ &+i\alpha^{\gamma}x^2\bigg]dx\ , \end{eqnarray} where $E_{\rm K}$, $E_{\rm G}$ and $E_{\rm I}$ are the kinetic, gravitational and internal energies of the gas, respectively, $r_{\rm in}$, $r_{\rm out}$, $x_{\rm in}$, $x_{\rm out}$ are the inner and outer boundaries of the gas under consideration, and $i$ is the number of degrees of freedom of a gas particle, presumed to be 3. Note that one fixed $r_{\rm out}$ at different times corresponds to different values of $x_{\rm out}$. In this solution, $E_{\rm total}=5.5\times10^{48}$ erg at $t=1.3\times10^5$ yr, and $E_{\rm total}=8.1\times10^{48}$ erg at $t=5.1\times10^6$ yr, so there is net energy input. In particular, the kinetic energy is $E_{\rm K}=6.4\times10^{47}$ erg at $t=1.3\times10^5$ yr, a small fraction of the total energy, and $E_{\rm K}=3.9\times10^{48}$ erg at $t=5.1\times10^6$ yr, a fairly large fraction of the total energy. The increase of kinetic energy shows clearly the development of a champagne flow. The gravitational energy is of order $10^{44}$ erg, much less than the kinetic energy throughout this duration. This confirms that the gas is not gravitationally bound and must have an outflow. We are also able to consider qualitatively the local energy exchange throughout the self-gravitating gas with relation (\ref{Econ}). In Figure \ref{Fig7}, $\partial p/\partial r$ is positive on the downstream side and negative on the upstream side. For $\gamma=1.1$, the downstream and upstream sides locally lose and gain energy, respectively. 
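The quoted energies translate into the following kinetic-to-total energy fractions; the short Python sketch below simply tabulates the numbers given above.

```python
# Kinetic-to-total energy fractions at the two quoted epochs
# (values in erg, taken directly from the text; dictionary keys in yr).
E_total = {1.3e5: 5.5e48, 5.1e6: 8.1e48}
E_K = {1.3e5: 6.4e47, 5.1e6: 3.9e48}
fractions = {t: E_K[t] / E_total[t] for t in E_total}
print(fractions)   # ~0.12 early on versus ~0.48 at the end: a developing outflow
```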
In summary, the profiles, orders of magnitude, and time evolution of our self-similar solution in modelling this case are grossly consistent with the numerical simulation results of \citet{TT1}, which lends support to our polytropic self-similar ``champagne flow" solution as a gross description of the dynamics of H \Rmnum{2} regions. Recent studies further suggest that the inclusion of stellar winds is also important and sometimes even necessary in understanding the large-scale dynamics of H \Rmnum{2} regions. \citet{Comeron} found that a shocked stellar wind in the central region produces important morphological differences as compared to windless cases. Moreover, \citet{Comeron} suggested that the spatial scale of an H \Rmnum{2} region undergoing ``champagne flow" is systematically larger and the gas flow is generally faster as driven by a central stellar wind. \citet{Arthur} provided two-dimensional cylindrical radiative-hydrodynamic simulations of cometary H \Rmnum{2} regions using champagne flow models, taking into account strong stellar winds from the central ionizing star. In these simulations, the hydrodynamics and radiative transfer are coupled through an energy equation whose source term depends on the photoionization heating and radiative cooling rates; in our polytropic approach, by contrast, complicated energetic processes are relegated to the choice of $\gamma$. \citet{Arthur} studied the hydrodynamics of a compact H \Rmnum{2} region with a radius of 0.13 pc and at a time $\sim 200$ yr after the triggering of UV ionizing photons and powerful stellar winds; a stellar wind bubble around the centre with a radius up to 0.03 pc is formed. Inside such a stellar wind bubble, the mass density is about 3 orders of magnitude lower than that of the surrounding medium, and the density of the flow does not vary much with radius in the vicinity of the bubble boundary. 
Because the central wind bubble is effectively depleted of mass and the gravity force of the central massive star may be neglected, given a typical Bondi-Parker radius of $\sim 10^2$ AU, we approximate such a stellar wind bubble as a central `void' and model it using our polytropic self-similar void ``champagne flow" model. In the scenario as outlined by \citet{Arthur}, the central star has an effective temperature $T_{\rm eff}=3\times10^4$ K, a stellar wind mass-loss rate $\dot{M}=10^{-6}$ M$_{\odot}$ yr$^{-1}$ and a terminal wind speed $V_{\rm w}=2000$ km s$^{-1}$. The initial ambient medium has a number density of $n_0=6000$ cm$^{-3}$ and a temperature of $T_0=300$ K. In our self-similar model for ``champagne flows", the radius of a void boundary is $r^*=k^{1/2}x^*t^n$. By taking $n=0.8$, we have $k^{1/2}x^*=1.34\times10^9$ cgs units to obtain an $r^*=0.03$ pc central void at a time of $t=200$ yr. The value of $n$ depends on the energetic processes of the flow, including plasma cooling and radiative heating. From relation (\ref{equk}), we further estimate a downstream sound parameter $k_{\rm d}=2.5\times10^{17}$ cgs units, and hence a self-similar void boundary $x^{\ast}=2.68$. Another parameter that needs to be specified is the mass density on the expanding void boundary, denoted as $\alpha^{\ast}$ here. The simulation of \citet{Arthur} gives an electron number density on the void boundary as $n_e^{\ast}=10^4$ cm$^{-3}$ at $t=200$ yr. With relation $\rho^{\ast}=\alpha^{\ast}/(4\pi G t^2)$, we estimate $\alpha^{\ast}=5.5\times10^{-7}$. We emphasize that the length and time scales in this case are quite different from those in the previous case of \citet{TT1}. As an important advantage, this suggests that self-similar models are suitable to give a unified description for cloud systems on quite different scales. As $n<1$ in this case, we have one more degree of freedom to specify the shock position. 
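A minimal Python sketch (assuming pure ionized hydrogen, so that $\rho^{\ast}\approx n_e^{\ast}m_{\rm H}$, with standard cgs constants) recovers both the void radius and the reduced boundary density quoted above.

```python
import math

# Void boundary r* = (k^{1/2} x*) t^n with n = 0.8, and reduced density
# alpha* = 4*pi*G*t^2*rho*; rho* ~ n_e* m_H assumes pure ionized hydrogen.
G, M_H = 6.674e-8, 1.67e-24     # gravitational constant, hydrogen mass (cgs)
PC, YR = 3.086e18, 3.156e7      # cm per parsec, seconds per year
t = 200.0 * YR                  # epoch in seconds
r_star = 1.34e9 * t**0.8 / PC                        # -> ~0.03 pc
alpha_star = 4 * math.pi * G * t**2 * (1e4 * M_H)    # -> ~5.5e-7
print(f"r* = {r_star:.3f} pc, alpha* = {alpha_star:.1e}")
```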
In principle, the shock position is determined by both the initial density (mass parameter $A$) and the initial gas motion (velocity parameter $B$). We find that the lower limit of the downstream shock position $x_{s{\rm d}}$ is $8.8$, according to condition (\ref{xmin}); thus, the minimum shock position at $t=200$ yr is $\sim 3.04\times 10^{17}$ cm (i.e., about $0.1$ pc). The numerical simulation of \citet{Arthur} studied the gas dynamics up to a radius of $0.13$ pc. As an illustrative example, we assume $x_{s{\rm d}}=9$ and show the resulting solution in Figure \ref{Fig8}. This solution clearly shows that as time evolves, the void boundary expands, while the density and pressure in the vicinity of the void boundary decrease by several orders of magnitude, consistent with the simulation. However, we see a very high density near the ``champagne flow" shock on the downstream side, and as time evolves, the density profile becomes increasingly smooth. For our self-similar ``champagne flow" model with central expanding voids, the velocity can rise up to several hundred km s$^{-1}$. In Figure \ref{Fig8}, we also see clearly that the case is non-isothermal, and on the downstream side of the shock the temperature is the highest as expected. We note that at $t=800$ yr, the shock is at $\sim 0.3$ pc, beyond the scale of UC or compact H \Rmnum{2} regions. In reality, a champagne shock is so fast that even at a short timescale of $\sim 800$ yr the shock is well into the surrounding diffuse interstellar medium (ISM). \begin{figure*} \includegraphics[width=\textwidth]{ChampFlowVoid1.eps} \caption{Self-similar ``champagne flow" solution with an initial central void radius of $10^{17}$ cm at time $200$ yr (solid curve), $500$ yr (dashed curve) and $800$ yr (dotted curve). The four panels show mass density $\rho$, flow velocity $u$, thermal pressure $p$, and temperature $T$ of the gas, respectively. 
The central void has a radius of $x^{\ast}=2.68$, corresponding to $r^{\ast}=0.03,\ 0.06,\ 0.09$ pc with increasing time $t$. The self-similar solution is obtained with parameters: $n=0.8$, $\gamma=1.2$, $\alpha^{\ast}=5.5\times10^{-7}$ and downstream shock position $x_{s{\rm d}}=9$. The downstream sound parameter $k_{\rm d}$ is $2.5\times10^{17}$ cgs units, and the upstream sound parameter $k_{\rm u}$ is $4.9\times10^{16}$ cgs units. The self-similar variables on the downstream side of the shock are $(x_{s{\rm d}},\ \alpha_{\rm d},\ v_{\rm d}) =(9,\ 1.07,\ 6.86)$, and the corresponding upstream variables are $(x_{s{\rm u}},\ \alpha_{\rm u},\ v_{\rm u})=(20.4,\ 0.111,\ 8.88)$. At large $x$, the solution matches the asymptotic solutions (\ref{equ15}) and (\ref{equ16}) with $A=135$ and $B=22$. The enclosed mass within a radius of $10^{18}$ cm is $1.38\times10^6$, $1.23\times10^6$ and $1.06\times10^6$ M$_{\odot}$ as time evolves. The total energy of the gas is $5.86\times10^{53}$, $5.94\times10^{53}$ and $5.98\times10^{53}$ erg, respectively. } \label{Fig8} \end{figure*} Compared with numerical simulations, the advantage of our semi-analytical self-similar approach is clear. We can generate self-similar shock solutions to model different H \Rmnum{2} regions by varying a few parameters. The self-similar processes and shock solutions of this paper describe the basic hydrodynamics of polytropic ``champagne flows" and serve as test cases for benchmarking numerical simulations. \subsection[]{Asymptotic Free-Fall Solutions around a Central Protostar} So far we have constructed ``champagne flow" solutions with LP type asymptotic solutions on the downstream side as $x\rightarrow 0^+$, because such an asymptotic solution satisfies boundary condition (\ref{equ12}). 
Complementarily, free-fall asymptotic solutions (\ref{freefall1}) and (\ref{freefall2}) at $x\rightarrow 0^+$ represent gas infall and collapse during the protostar formation phase; the surrounding gas and the infall momentum associated with the star formation process may be sustained for a while during the evolution after the onset of stellar nuclear burning and UV photoionization of the surrounding gas. \citet{Cochran} have investigated consequences of the birth of a massive star within a dense cloud with a free-fall density profile, and found that the radiation pressure from the star sweeps up grains from the infalling gas to form a dust shell which bounds the H \Rmnum{2} region. Here, we utilize such free-fall solutions as the downstream side and construct global solutions with shocks to model possible dynamic evolutions of H \Rmnum{2} regions surrounding a nascent protostar in nuclear burning. We present such solutions in Figures \ref{Free1} and \ref{Free2}, in which the parameter $m(0)$ of the free-fall solution differs. In dimensionless form, $m(0)$ stands for a central mass point, and with dimensions restored by self-similar transformation (\ref{equ6}), $M(0,t)\propto t^{3n-2}m(0)$; therefore $m(0)$ scales as the central mass accretion rate. For $m(0)=0.546$ (Figure \ref{Free1}), the free-fall solution crosses the sonic critical curve smoothly at $x=0.3237$, and can also be connected to the upstream solutions via shocks at various locations. The free-fall solution crosses the line $v=0$ at $x_{\rm stg}=0.74$. This stagnation radius expands with time in a self-similar manner; inside the stagnation radius the gas falls inwards, while outside the stagnation radius the gas expands outwards. Therefore if $x_{s{\rm d}}<x_{\rm stg}$, the entire global solution corresponds to an inflow (solution 4 of Figure \ref{Free1}). This situation describes an accretion shock during a protostar formation phase. 
If $x_{s{\rm d}}>x_{\rm stg}$, the outer part of the downstream side is an outflow. This describes the scenario in which the shock sweeps up the gas and turns the gas from infall to expansion on the downstream side near the shock front. Similar to the situation with downstream LP type solutions, there exists one specific $x_{s{\rm d}}$, for which the upstream solution is a breeze with $B=0$. In this case, $x_{s{\rm d}}=1.7747$ gives an upstream breeze (solution 2 of Figure \ref{Free1}). Thus for $x_{s{\rm d}}<1.7747$, the upstream side corresponds to an asymptotic inflow far from the centre (solution 3 of Figure \ref{Free1}) and for $x_{s{\rm d}}>1.7747$, the upstream side corresponds to an asymptotic outflow (wind) far from the centre (solution 1 of Figure \ref{Free1}). Another example shown in Figure \ref{Free2} has $m(0)=4.638$. This free-fall solution does not cross the sonic critical curve smoothly and can be connected with upstream solutions via shocks. In an analogous manner, we show the possibility to obtain an outflow (solution 1 of Figure \ref{Free2}), contraction (solution 2 of Figure \ref{Free2}) and inflow (solution 3 of Figure \ref{Free2}) for the upstream side. Solutions 1 and 2 of Figure \ref{Free1} and solution 1 of Figure \ref{Free2} have asymptotic outflow or breeze on the upstream side and in the outer part of the downstream side, which is very similar to the champagne flow solutions obtained with a downstream LP type solution, though with different behaviours in the central regions. With a free-fall asymptotic solution, the gravity of the central massive star is not neglected and the gas immediately surrounding the massive protostar still undergoes infall when the outer envelope starts to expand. Hence, solutions with a free-fall centre are plausibly suitable to describe the early stage of ``champagne flows". 
In general, with central free falls on the downstream side, we can also obtain asymptotic outflow, inflow, breeze and contraction for the upstream side, by varying the downstream shock position $x_{s{\rm d}}$ in a proper range. \begin{figure} \includegraphics[width=0.5\textwidth]{ChampFree1.eps} \caption{Reduced mass density $\alpha(x)$ (top) and reduced radial flow velocity $v(x)$ (bottom) for global solutions in cases with $n=0.9$ (thus $\gamma=1.1$) whose downstream side is a free-fall solution and whose upstream side corresponds to either outflow, inflow, breeze or contraction. In both panels, the dashed curve represents the sonic critical curve; in the bottom panel the dotted line is $v=0$. The downstream solution is connected with the upstream solutions (solid curves) via shocks. The downstream solution is integrated from a sonic critical point $(x,\ \alpha,\ v)=(0.3237,\ 5.0050,\ -0.8455)$ towards $x\rightarrow 0^{+}$ with a central free-fall asymptotic solution of $m(0)=0.546$, and outwards to the downstream shock positions. In the innermost part the downstream solution corresponds to an inflow, and the outer part of the downstream side is an outflow. The stagnation point in this case is at $x_{\rm stg}\sim 0.74$. In both panels, the upstream solutions from top to bottom correspond to $x_{s{\rm d}}=2.5$ (labeled 1), 1.7747 (labeled 2), 1 (labeled 3) and 0.7 (labeled 4). Solution 1 has inner inflow and outer outflow on the downstream side and upstream outflow with $A=4.368$ and $B=1.76$. The shock parameters are $(x_{s{\rm d}},\ \alpha_{\rm d},\ v_{\rm d})=(2.5,\ 0.8036,\ 1.3249)$, and $(x_{s{\rm u}},\ \alpha_{\rm u},\ v_{\rm u})=(2.5003,\ 0.6453,\ 1.0981)$. Solution 2 has inner inflow and outer outflow on the downstream side and an upstream breeze with $A=1.8635$ and $B=0$ at large $x$. The shock parameters are $(x_{s{\rm d}},\ \alpha_{\rm d},\ v_{\rm d})=(1.7747,\ 1.0715,\ 0.8470)$ and $(x_{s{\rm u}},\ \alpha_{\rm u},\ v_{\rm u})=(1.7796,\ 0.5575,\ 0.1558)$. 
Solution 3 has an inner inflow and outer outflow on the downstream side and upstream inflow with $A=0.6930$ and $B=-1.6963$ at large $x$. The shock parameters are $(x_{s{\rm d}},\ \alpha_{\rm d},\ v_{\rm d})=(1,\ 1.7883,\ 0.2668)$ and $(x_{s{\rm u}},\ \alpha_{\rm u},\ v_{\rm u})=(1.0118,\ 0.6348,\ -0.8942)$. Solution 4 has a downstream inflow and upstream inflow, with $A=0.4899$ and $B=-2.1564$ at large $x$. The shock parameters are $(x_{s{\rm d}},\ \alpha_{\rm d},\ v_{\rm d})=(0.7,\ 2.4355,\ -0.0567)$ and $(x_{s{\rm u}},\ \alpha_{\rm u},\ v_{\rm u})=(0.7054,\ 0.9837,\ -1.0785)$.} \label{Free1} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{ChampFree2.eps} \caption{Reduced mass density $\alpha(x)$ (top) and reduced velocity $v(x)$ (bottom) for global solutions in cases of $n=0.9$ (thus $\gamma=1.1$) whose downstream side is a free fall and whose upstream side corresponds to either outflow, inflow, breeze or contraction. In both panels the dashed curve represents the sonic critical curve. The downstream solution is connected with the upstream solutions (solid curves) via shocks. The downstream solution is integrated from a sonic critical point $(x,\ \alpha,\ v)= (1.7727,\ 1.0050,\ 0.5463)$ towards $x\rightarrow 0^{+}$ for a free-fall asymptotic solution of $m(0)=4.638$. Most of the downstream side is an outflow, while the innermost part is a free fall. In both panels, the upstream solutions from top to bottom correspond to $x_{s{\rm d}}=1.6$ (labeled 1), 1.4269 (labeled 2) and 1 (labeled 3). The entire upstream solution labeled 1 has an outflow with $A=2.8032$ and $B=0.6058$. Shock parameters are $(x_{s{\rm d}},\ \alpha_{\rm d},\ v_{\rm d})=(1.6,\ 1.2326, \ 0.5110)$, and $(x_{s{\rm u}},\ \alpha_{\rm u},\ v_{\rm u})=(1.6002,\ 0.9575,\ 0.2443)$. Solution labeled 2 has an upstream contraction with $A=2.115$ and $B=0$ at large $x$. 
Shock parameters are $(x_{s{\rm d}},\ \alpha_{\rm d},\ v_{\rm d})=(1.4269,\ 1.5391,\ 0.4705)$, and $(x_{s{\rm u}},\ \alpha_{\rm u},\ v_{\rm u})=(1.4290,\ 0.9057,\ -0.0988)$. Solution labeled 3 has an inflow for the entire upstream portion with $A=1.0063$ and $B=-1.6727$. Shock parameters are $(x_{s{\rm d}},\ \alpha_{\rm d},\ v_{\rm d})=(1,\ 2.9346,\ 0.3423)$, and $(x_{s{\rm u}},\ \alpha_{\rm u},\ v_{\rm u})=(1.0271,\ 0.7726,\ -1.2514)$.} \label{Free2} \end{figure} Central free-fall solutions describe the core collapse phase in star formation. For this purpose, \citet{WangLou08} explored such solutions with a free-fall inner core and an inflow or outflow in the outer envelope, for general polytropic cases in their Figure 2. The inner and outer portions are connected by magnetohydrodynamic (MHD) shocks. Such shocks are interpreted as accretion shocks, typically found in a star formation process, or around accreting black holes. Here we specifically emphasize that such shocks may also arise from the UV photoionization of the ambient medium surrounding a nascent protostar. In certain situations, the UV flux from the burning star might not be intense and rapid enough to turn the surrounding gas from infall to expansion by ionization and heating. Meanwhile the ionization front (IF) creates a weak shock traveling outwards, and the upstream side may have an outward velocity. Another possibility is that the gravity of the central star is so strong that the gas immediately surrounding the star keeps falling towards the protostar, but the outer part of the downstream side and the corresponding upstream side expand. 
In summary, with different initial conditions of the gas and different physical conditions of a burning protostar, the radiative influence of the nascent protostar on the dynamic evolution of the surrounding gas may give rise to various self-similar solutions, including the classical champagne flow solutions, the ISECE solutions and the inner free-fall with outer inflow/outflow or contraction/breeze solutions. The solutions constructed in this paper (classical champagne flows) are suitable for situations in which the gas is initially static and the protostar ionizes the entire gas immediately, and then the gas begins to expand in a ``champagne phase" with an outgoing shock. \section[]{Conclusions} We present newly established self-similar polytropic shock solutions with and without central voids to model ``champagne flows" in H \Rmnum{2} regions featuring various asymptotic dynamic behaviours. As a substantial generalization of the isothermal model of Tsai \& Hsu (1995) and \citet{Shu}, we find similarities and differences in self-similar polytropic processes. Our general polytropic ``champagne flow" model allows much greater freedom in choosing the polytropic index $\gamma\geq 1$ for $2/3<n<2$; for a conventional polytropic gas as a subclass of examples, the power-law index $l$ of the initial mass density profile $\rho\propto r^{-l}$ is linked to $\gamma$ by $l=2/n=2/(2-\gamma)$. Together, our model is adaptable to a wide range of initial mass density profiles with $1<l<3$ for H \Rmnum{2} regions. For conventional polytropic cases of $1<\gamma<4/3$ (i.e., $2/3<n<1$ and $2<l<3$), we have more freedom for convergent initial conditions. In this fashion, our conventional polytropic shock flow solutions are determined not only by the initial mass density profile (i.e., mass parameter $A$), but also by the motion at the very early stage (i.e., velocity parameter $B$). 
The dimensionless shock positions or the dimensional shock speed and strength are determined by the initial conditions related to $A$ and $B$ and the central density $\alpha_0$. Our self-similar shock flow solutions give a plausible description of the ``champagne flow" phase of the dynamics of H \Rmnum{2} regions. We conclude that general polytropic ``champagne flows" with the initial density power-law index $1<l<3$ may evolve in a self-similar manner. We have established novel ``champagne flow" shock solutions with an expanding void surrounding the centre to model a certain cloud core whose inner part has fallen into a nascent protostar. We observe that the evolution of the central void boundary plays an important role in determining which asymptotic solution is approached, as well as the general behaviour of the solutions. With even one more free parameter, the ``champagne flow" shock solutions with central voids can model the dynamics of H \Rmnum{2} regions more realistically, including the effect of central stellar wind bubbles. We have further explored possibilities of asymptotic inflows or contractions far from the cloud centre. In addition, we also establish global shock solutions with the asymptotic free-fall solution approaching the centre. In general, by varying the dimensionless shock position, we connect the downstream side, with either LP type solutions, EdS solutions or free-fall solutions, to upstream solutions which eventually merge into asymptotic outflow, breeze, contraction or inflow. Within the theoretical framework of the self-similar polytropic fluid, global shock solutions with different behaviours correspond to different forms of hydrodynamic evolution of H \Rmnum{2} regions after the nascence of a central massive protostar. Apparently, even within the framework of self-similarity, dynamic evolution of polytropic H \Rmnum{2} regions depends on the initial and boundary conditions of molecular clouds. 
Numerical simulations are needed to probe and connect various self-similar evolution phases. \section*{Acknowledgments} This research has been supported in part by the Tsinghua Centre for Astrophysics (THCA), by the NSFC grants 10373009 and 10533020 at Tsinghua University, and by the SRFDP 20050003088 and the Yangtze Endowment from the Ministry of Education at Tsinghua University.
\subsubsection*{Achievability} \label{achievability} We now propose an online scheduling policy $\pi^{\textsf{MMW}}$ which approximately minimizes the average AoI \eqref{objective} for mobile UEs (the abbreviation \textsf{MMW} stands for ``Multi-cell Max-Weight"). Our policy is a multi-cell generalization of the $4$-approximate single-BS scheduling policy proposed in \cite{kadota2018scheduling}. Moreover, using a tighter analysis, we give an improved $2$-factor approximation guarantee for $\pi^{\textsf{MMW}}$. \paragraph*{The policy $\pi^{\textsf{MMW}}$} At every slot, each BS schedules the UE under its coverage with the highest index. The index $I_i(t)$ of $\textrm{UE}_i$ is defined as $I_i(t) \equiv p_ih_i^2(t).$ \begin{framed} \begin{theorem}[Achievability]\label{achievability_thm} $\pi^{\textsf{MMW}}$ is a $2$-approximation scheduling policy for statistically identical UEs with i.i.d. uniform mobility (\emph{i.e.,} $p_i=p, \forall i$ and $\psi_{ij}=\frac{1}{M}, \forall i,j$). \end{theorem} \end{framed} For a proof of Theorem \ref{achievability_thm}, please refer to Appendix \ref{achievability_thm_proof}. When the BSs employ power-control, all UEs experience the same SINR, and they become statistically identical. It can be easily seen that the policy $\pi^{\textsf{MMW}}$ is fully distributed and may be implemented with local information only. \subsubsection*{Effect of mobility on AoI} Recall that a BS can schedule a transmission to only one UE in its cell at every slot. Hence, if all of the $N$ UEs remain stationary in a single cell, they all have to contend with each other for scheduling. This naturally increases the average AoI of the UEs. On the other hand, if the UEs are mobile, they can take advantage of multiple downlink transmission opportunities from multiple BSs. This form of \emph{multi-user diversity} drastically reduces the overall AoI by improving the network resource utilization. 
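One slot of $\pi^{\textsf{MMW}}$ can be sketched in a few lines of Python; the cell assignments, success probabilities and age values below are illustrative choices of ours, not taken from the analysis.

```python
# One slot of the MMW policy: each BS schedules the UE in its cell with
# the largest index I_i = p_i * h_i^2 (the first UE wins ties).
def mmw_schedule(cells, p, h):
    """cells[i] = BS covering UE i; returns {bs: scheduled UE index}."""
    best = {}
    for i, j in enumerate(cells):
        index = p[i] * h[i] ** 2
        if j not in best or index > best[j][0]:
            best[j] = (index, i)
    return {j: i for j, (index, i) in best.items()}

cells = [0, 0, 1, 1, 1]            # 5 UEs spread over M = 2 BSs
p = [0.5] * 5                      # statistically identical UEs
h = [3, 7, 2, 9, 4]                # current AoI of each UE
print(mmw_schedule(cells, p, h))   # {0: 1, 1: 3} -- the oldest UE per cell
```

With identical $p_i$ the index is monotone in the age, so each BS simply serves the oldest UE currently in its cell.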
Next, we quantify the effect of mobility on the average AoI.\\ Define the \emph{Mobility Advantage on AoI} ($\alpha$) to be the ratio of the optimal AoI when all UEs are stationary at a single BS (\emph{i.e.}, $M=1$) vs. the optimal AoI when the UEs are mobile. As noted above, for a single BS, we have $g(\bm \psi)=1.$ From our achievability result in Theorem \ref{achievability_thm}, we know that the lower bound in Eqn. \eqref{lb_expr} is achievable within a factor of $2$. This implies that $\alpha = \Theta(g(\bm{\psi})).$ From the equation \eqref{g_unif}, we have \begin{eqnarray} \label{mobility_advantage} g(\bm{\psi^{\textsf{unif}}}) = M\bigg( 1 - e^{-c \frac{N}{M}}\bigg), \end{eqnarray} for some constant $1 \leq c \leq 1.387$. Consider the following three scaling regimes: \begin{itemize} \item \textbf{Constant Density:} If $N$ and $M$ scale in such a way that the \emph{density} of the UEs remains constant, \emph{i.e.}, $\frac{N}{M}=\rho, $ we see that the average AoI diminishes linearly with the number of BSs, \emph{i.e.,} $\alpha = M(1-\exp(-c\rho))$. \item \textbf{Under-Loaded BS:} If $N/M \ll 1$, we have $\alpha \approx M\big(1-1+c\frac{N}{M}\big) = \Theta(N).$ \item \textbf{Over-Loaded BS:} If $N/M \gg 1$, we have $\alpha = \Theta(M)$. \end{itemize} \section{Appendix} \label{appendix} \subsection{Proof of Theorem \ref{lb}} \label{lb_proof} \begin{IEEEproof} In the proof below, we first follow a sample-path-based argument to obtain an almost sure lower bound to AoI. Finally, we use Fatou's lemma \cite{williams1991probability} to convert the almost sure bound to a bound on the expected AoI, as defined in Eqn.\ \eqref{objective}. \\ \begin{figure} \centering \begin{overpic}[width=0.35\textwidth]{AoI_fig_new} \end{overpic} \caption{Time-evolution of the Age-of-Information of a UE} \label{AoI_fig} \end{figure} Consider a sample path under the action of any arbitrary scheduling policy $\pi$ up to time $T$. See Figure \ref{AoI_fig}. For $\textrm{UE}_i$, let the r.v. 
$N_i(T)$ denote the number of packets received up to time $T$, the r.v. $T_{ij}$ denote the time interval between receiving the $(j-1)$\textsuperscript{th} packet and the $j$\textsuperscript{th} packet, and the r.v. $D_i$ denote the time interval between receiving the last ($N_i(T)$\textsuperscript{th}) packet and the time-horizon $T$. Hence, we have \begin{eqnarray} \label{sum_val} T= \sum_{j=1}^{N_i(T)} T_{ij} + D_i. \end{eqnarray} Since the AoI of any $\textrm{UE}$ increases in steps of one at each slot until a new packet is received (and then it drops to one again), the average AoI up to time $T$ may be lower bounded as: \begin{eqnarray} \label{AoI_lb_der} \overline{\textsf{AoI}_T}&\equiv&\frac{1}{NT}\sum_{i=1}^{N} \sum_{t=1}^{T} h_i(t) \nonumber \\ &= & \frac{1}{NT}\sum_{i=1}^{N}\bigg(\sum_{j=1}^{N_i(T)} \frac{1}{2}T_{ij}(T_{ij}+1)+ \frac{1}{2}D_i(D_i+1)\bigg) \nonumber \\ &\stackrel{(a)}{=}&\frac{1}{2NT}\sum_{i=1}^{N}\bigg(N_i(T) \big(\frac{1}{N_i(T)} \sum_{j=1}^{N_i(T)}T_{ij}^2 \big)+D_i^2\bigg)+ \frac{1}{2}\nonumber\\ &\stackrel{(b)}{\geq} & \frac{1}{2NT}\sum_{i=1}^{N}\bigg( N_i(T)\bar{T_i}^2+D_i^2\bigg)+ \frac{1}{2}, \end{eqnarray} where in (a) we have used Eqn.\ \eqref{sum_val}, and in (b) we have defined $\bar{T}_i= \frac{1}{N_i(T)} \sum_{j=1}^{N_i(T)} T_{ij}$ and used Jensen's inequality afterwards. Rearranging Eqn.\ \eqref{sum_val}, we can express the random variable $\bar{T}_i$ as: \begin{eqnarray*} \bar{T}_i= \frac{T-D_i}{N_i(T)}. 
\end{eqnarray*} \begin{figure} \centering \begin{overpic}[width=0.35\textwidth]{AoI_mobility_fig} \end{overpic} \caption{Movement of $N=3$ UEs in an area with $M=3$ cells} \label{AoI_mobility_fig} \end{figure} With this substitution, the term within the bracket in Equation \eqref{AoI_lb_der} evaluates to \begin{eqnarray} \label{AoI_lb_der2} N_i(T)\bar{T}_i^2+ D_i^2 = \frac{(T-D_i)^2}{N_i(T)} + D_i^2 \geq \frac{T^2}{N_i(T)+1}, \end{eqnarray} where the last inequality is obtained by minimizing the resulting expression, viewed as a quadratic in the variable $D_i$. \\ Hence, from Eqns.\ \eqref{AoI_lb_der} and \eqref{AoI_lb_der2}, we obtain the following lower bound on the average AoI under the action of any admissible scheduling policy: \begin{eqnarray} \label{val} \overline{\textsf{AoI}_T} \geq \frac{T}{2N} \sum_{i=1}^{N} \frac{1}{N_i(T)+1} + \frac{1}{2}. \end{eqnarray} Next, we analyze the resource constraints of the system to further lower bound the RHS of the inequality \eqref{val}. Let the r.v. $A_i(T)$ denote the total number of transmission attempts made to $\textrm{UE}_i$ by all BSs up to time $T$. Also, let the r.v. $g_j(T)$ denote the fraction of time that $\textrm{BS}_j$ contained \emph{at least one UE} in its coverage area. Since a BS can attempt a downlink transmission only when there is at least one UE in its coverage area, the total number of transmission attempts to all UEs by the BSs is upper bounded by the following \emph{global balance condition}: \begin{eqnarray} \label{attmpt_constr} \sum_{i=1}^{N} A_i(T) \leq T \sum_{j=1}^{M} g_j(T)\equiv T g(T), \end{eqnarray} where $g(T) \equiv \sum_j g_j(T)$. Plugging in Eqn.\ \eqref{attmpt_constr}, we can further lower bound the inequality \eqref{val} as: \begin{eqnarray*} \overline{\textsf{AoI}_T} \geq \frac{1}{2Ng(T)} \big(\sum_{i=1}^{N} A_i(T)\big)\big(\sum_{i=1}^{N} \frac{1}{N_i(T)+1}\big) + \frac{1}{2}. 
\end{eqnarray*} An application of the Cauchy-Schwarz inequality on the RHS yields: \begin{eqnarray} \label{AoI_lb_2} \overline{\textsf{AoI}_T} \geq\frac{1}{2N g(T)}\bigg(\sum_{i=1}^{N} \sqrt{\frac{A_i(T)}{N_i(T)+1}}\bigg)^2 + \frac{1}{2}. \end{eqnarray} Note that $\textrm{UE}_i$ successfully received $N_i(T)$ packets out of a total of $A_i(T)$ packet transmission-attempts made by the BSs via the erasure channel with success probability $p_i$. Without any loss of generality, we may restrict our attention to scheduling policies for which $\lim_{T \to \infty} A_i(T)= \infty, \forall i$. Otherwise, at least one of the UEs receives a finite number of packets, resulting in infinite average AoI. Hence, using the strong law of large numbers \cite{williams1991probability}, we obtain: \begin{eqnarray} \label{SLLN2} \lim_{T \to \infty} \frac{N_i(T)}{A_i(T)} = p_i, ~~~\forall i \hspace{5pt} \textrm{w.p.} ~ 1. \end{eqnarray} Moreover, using the ergodicity property of the UE mobility, we conclude that almost surely: \begin{eqnarray*} \lim_{T \to \infty} g_j(T) = \mathbb{P}_{\bm{\psi}}\big( \textrm{BS}_j \textrm{ contains at least one UE}\big), \end{eqnarray*} where we recall that $\bm{\psi}$ denotes the stationary cell occupancy distribution defined earlier. Thus, we have almost surely \begin{eqnarray} \label{g_lim} \lim_{T \to \infty} g(T) &=& \lim_{T\to \infty} \sum_j g_j(T)\nonumber\\ &=& \sum_{j=1}^{M} \mathbb{P}_{\bm \psi} \big( \textrm{BS}_j \textrm{ contains at least one UE}\big) \nonumber \\ &\equiv& g(\bm{\psi}), \end{eqnarray} where the function $g(\bm{\psi})$ denotes the expected number of non-empty cells, where the expectation is evaluated w.r.t. the stationary occupancy distribution $\bm \psi$. 
Hence, putting Eqns.\ \eqref{SLLN2} and \eqref{g_lim} together with the lower bound in \eqref{AoI_lb_2}, we have almost surely: \begin{eqnarray} \label{AoI_LB2} \liminf_{T \to \infty} \overline{\textsf{AoI}_T} \geq \frac{1}{2N g(\bm{\psi})} \bigg(\sum_i \sqrt{\frac{1}{p_i}}\bigg)^2 + \frac{1}{2}. \end{eqnarray} Finally, \begin{eqnarray*} \textsf{AoI}^* &\geq& \liminf_{T \to \infty} \mathbb{E}(\textsf{AoI}_T) \\ &\stackrel{(a)}{\geq}& \mathbb{E}(\liminf_{T \to \infty} \textsf{AoI}_T) \\ &\geq& \frac{1}{2N g(\bm{\psi})} \bigg(\sum_i \sqrt{\frac{1}{p_i}}\bigg)^2 + \frac{1}{2}, \end{eqnarray*} where the inequality (a) follows from Fatou's lemma. This concludes the proof of Theorem \ref{lb}. Note that the proof continues to hold even when the mobility processes of the UEs are not independent of each other. \end{IEEEproof} \subsection{Derivation of the bounds in Eqn.\ \eqref{g_unif}} \label{g_unif_proof} For $M \geq 2$, we have the following bounds: \begin{eqnarray} \label{g_ineq1} e^{-\frac{\beta}{M}} \stackrel{(a)}{\leq} (1-\frac{1}{M}) \stackrel{(b)}{\leq} e^{-\frac{1}{M}}, \end{eqnarray} where $\beta \equiv \ln(4) \approx 1.386.$ The inequality (b) is standard. To prove the inequality (a), consider the concave function \[f(x) = 1-x-e^{-\beta x}, \quad 0\leq x \leq \frac{1}{2},\] for some $\beta > 0$. Since a concave function of a real variable defined on a closed interval attains its minimum at one of the end points of the interval, and since $f(0)=0$, we have $f(x) \geq 0, \forall x \in [0, \frac{1}{2}],$ if $f(1/2) \geq 0$, i.e., $ e^{\beta /2} \geq 2$, i.e., $\beta \geq \ln(4)$. Thus, the inequality (a) holds for $M\geq 2$ with $\beta = \ln(4)$. The inequality \eqref{g_ineq1} directly leads to the bounds in Eqn.\ \eqref{g_unif}.
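As a quick numerical companion (not part of the derivation), the sandwich bound above, together with the closed-form expected number of non-empty cells $g(\bm{\psi}) = M\big(1-(1-1/M)^N\big)$ under i.i.d. uniform mobility used in the analysis, can be checked as follows; the parameter values are illustrative:

```python
import math
import random

random.seed(1)
beta = math.log(4)

# The sandwich  e^{-beta/M} <= 1 - 1/M <= e^{-1/M}  for M >= 2.
# (A small tolerance absorbs floating-point rounding at M = 2,
# where the left inequality holds with equality.)
sandwich_ok = all(
    math.exp(-beta / M) <= 1 - 1 / M + 1e-12 and 1 - 1 / M <= math.exp(-1 / M)
    for M in range(2, 1000)
)

# Monte-Carlo check of the expected number of non-empty cells when
# N UEs are placed i.i.d. uniformly at random among M cells.
N, M, trials = 5, 3, 200_000
total = 0
for _ in range(trials):
    total += len({random.randrange(M) for _ in range(N)})
g_hat = total / trials
g_closed = M * (1 - (1 - 1 / M) ** N)   # about 2.605 for N = 5, M = 3
mc_ok = abs(g_hat - g_closed) < 0.01
```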
\subsection{Proof of Theorem \ref{achievability_thm}} \label{achievability_thm_proof} \begin{IEEEproof} Let the scheduling decisions at slot $t$ be denoted by the binary control vector $\bm{\mu}(t) \in \{0,1\}^N$, where $\mu_i(t)=1$ if and only if the following two conditions hold simultaneously: (1) $C_i(t)=j$, \emph{i.e.,} $\textrm{UE}_i$ is within the coverage area of $\textrm{BS}_j$ at slot $t$, for some $1 \leq j \leq M$, and (2) $\textrm{BS}_j$ schedules a transmission to $\textrm{UE}_i$ at time $t$.\footnote{Recall that the random variable $C_i(t)$ denotes the index of the BS with which $\textrm{UE}_i$ is associated at time $t$.} Since a BS can schedule only one transmission per slot to a UE in its coverage area, the control vector must satisfy the following constraint: \begin{eqnarray*} \sum_{i: C_i(t)=j}\mu_i(t) \leq 1, ~~ \forall j, t. \end{eqnarray*} For performance analysis, we consider the following Lyapunov function, which is linear in the ages of the UEs: \begin{eqnarray} \label{lyap_linear} L(\bm{h}(t))= \sum_{i=1}^N \frac{h_i(t)}{\sqrt{p_i}}. \end{eqnarray} The conditional transition probabilities for the age of $\textrm{UE}_i$ may be written as follows: \begin{eqnarray*} \mathbb{P}\big(h_i(t+1)=1|\bm{h}(t), \bm{\mu}(t), \bm{C}(t)\big) &=& \mu_i(t)p_i\\ \mathbb{P}\big(h_i(t+1)=h_i(t)+1|\bm{h}(t), \bm{\mu}(t), \bm{C}(t)\big) &=& 1 - \mu_i(t)p_i, \end{eqnarray*} where the first equation corresponds to the event in which $\textrm{UE}_i$ was scheduled and the packet transmission was successful, and the second equation corresponds to its complement. Hence, for each UE $i$, we can compute: \begin{eqnarray} \label{one_step_expectation} \mathbb{E}\big(h_i(t+1)|\bm{h}(t), \bm{\mu}(t), \bm{C}(t)\big)= h_i(t) -\mu_i(t)p_i h_i(t)+1.
\end{eqnarray} From the equation above, we can evaluate the one-step conditional drift as: \begin{eqnarray} \label{drift_ineq_1} && \mathbb{E}\big(L(\bm{h}(t+1))-L(\bm{h}(t)) | \bm{h}(t), \bm{\mu}(t), \bm{C}(t)\big) \nonumber \\ &=& -\sum_{i=1}^N \mu_i(t) \sqrt{p_i}h_i(t) + \sum_{i=1}^N \frac{1}{\sqrt{p_i}}. \end{eqnarray} Finally, consider the drift-minimizing policy \textsf{Multi-Cell MW} (MMW), under which each base station $\textrm{BS}_{j}$ schedules the user $\textrm{UE}_i$ with the highest weight $\sqrt{p_i}h_i(t)$ in its cell. For the purpose of the proof, we now define a stationary randomized scheduling policy \textsf{RAND}, under which every BS randomly schedules a UE in its cell with probability $ \mu^{\textrm{RAND}}_i(t) \propto 1/\sqrt{p_i}$ \footnote{We use the usual convention that a summation over an empty set is zero.}. Comparing \textsf{MMW} with \textsf{RAND}, we have: \begin{eqnarray*} \mathbb{E}\bigg(\sum_{i=1}^N \mu^{\textsf{MMW}}_i(t) \sqrt{p_i}h_i(t)| \bm{h}(t), \bm{\mu}(t), \bm{C}(t)\bigg)\\ \geq \sum_{j=1}^M \frac{\sum_{i: C_i(t)=j} h_i(t)}{\sum_{i: C_i(t)=j} \frac{1}{\sqrt{p_i}}}. \end{eqnarray*} Thus, we have the following upper bound on the drift \eqref{drift_ineq_1} under the \textsf{MMW} policy: \begin{framed} \begin{eqnarray*} \mathbb{E}^{\textsf{MMW}}\big(L(\bm{h}(t+1))-L(\bm{h}(t)) | \bm{h}(t), \bm{C}(t)\big) \\ \leq -\sum_{j=1}^M \frac{\sum_{i: C_i(t)=j} h_i(t)}{\sum_{i: C_i(t)=j} \frac{1}{\sqrt{p_i}}}+ \sum_{i=1}^N \frac{1}{\sqrt{p_i}}. \end{eqnarray*} \end{framed} Taking expectation of the above drift-inequality w.r.t.
the random cell-occupancy vector $\bm{C}(t)$, we have \begin{eqnarray} \label{drift_mob} \mathbb{E}^{\textsf{MMW}}\big(L(\bm{h}(t+1))-L(\bm{h}(t))|\bm{h}(t)\big) \nonumber \\ \leq -\sum_{j=1}^M \mathbb{E}(Z_j(t)|\bm{h}(t)) + \sum_{i=1}^N \frac{1}{\sqrt{p_i}}, \end{eqnarray} where $Z_j(t) \equiv \frac{\sum_{i: C_i(t)=j} h_i(t)}{\sum_{i: C_i(t)=j} \frac{1}{\sqrt{p_i}}}.$ Our next task is to evaluate this expectation. Note that we can alternatively express the random variable $\sum_{j=1}^M Z_j(t)$ as \begin{eqnarray*} \sum_{j=1}^M Z_j(t) = \sum_{i=1}^N h_i(t) Y_i(t), \end{eqnarray*} where $Y_i(t) = \big(\frac{1}{\sqrt{p_i}}+\sum_{k\neq i} \frac{1}{\sqrt{p_k}}\mathds{1}(C_i(t)=C_k(t))\big)^{-1}. $ We can evaluate this expectation exactly for the i.i.d. uniform mobility model. Recall that $\bm{C}(t) \perp \bm{h}(t)$. Hence, \begin{eqnarray} \label{yi} \mathbb{E}(Y_i(t)) = \sum_{n=0}^{N-1}\sum_{S: i\notin S, |S|=n} \bigg(\frac{1}{\sqrt{p_i}}+\sum_{k\in S} \frac{1}{\sqrt{p_k}}\bigg)^{-1} \times \nonumber \\ \frac{1}{M^n}\bigg(1-\frac{1}{M}\bigg)^{N-n-1}. \end{eqnarray} In the special case when all UEs are identical, \emph{i.e.,} $p_i=p, \forall i$, the summation \eqref{yi} has a closed-form expression. Clearly, for all $ 0 \leq n \leq N-1$, we have: \begin{eqnarray*} Y_i(t)= \frac{\sqrt{p}}{n+1},~~~ \textrm{w.p.}~ \binom{N-1}{n} \frac{1}{M^n}\bigg(1-\frac{1}{M}\bigg)^{N-n-1}, \end{eqnarray*} where $n$ denotes the number of other UEs occupying the same cell as $\textrm{UE}_i$. To evaluate the expectation of $Y_i(t)$, we integrate the binomial expansion of $(1+x)^{N-1}$ over the interval $[0,\beta]$ to obtain the identity: \begin{eqnarray*} \frac{1}{N}\bigg( (1+\beta)^N-1 \bigg) = \beta \sum_{n=0}^{N-1} \frac{1}{n+1}\binom{N-1}{n} \beta^n. \end{eqnarray*} Substituting $\beta = \frac{1}{M-1}$ in the above, we obtain \begin{eqnarray}\label{iid_yi} \mathbb{E}(Y_i(t))= \sqrt{p}\frac{M}{N}\bigg(1-\big(1-\frac{1}{M}\big)^{N} \bigg) \equiv Y^* ~(\textrm{say}). \end{eqnarray} From Eqn.
\eqref{drift_mob} and \eqref{iid_yi}, we have \begin{eqnarray*} \mathbb{E}^{\textsf{MMW}}\big(L(\bm{h}(t+1))-L(\bm{h}(t))|\bm{h}(t)\big) \leq -Y^*\sum_i h_i(t) + \frac{N}{\sqrt{p}}. \end{eqnarray*} Taking expectation of both sides, we have \begin{eqnarray*} \mathbb{E}^{\textsf{MMW}}\big(L(\bm{h}(t+1))-L(\bm{h}(t))\big) \leq -Y^*\sum_i \mathbb{E}h_i(t) + \frac{N}{\sqrt{p}}. \end{eqnarray*} Summing the above inequalities over the first $T$ slots, dividing by $NT$, and letting $T \to \infty$, we obtain \begin{eqnarray}\label{ub_pf_1} \textsf{AoI}^{\textsf{MMW}}&=&\limsup_{T \to \infty} \frac{1}{NT}\sum_{t=1}^{T}\sum_i \mathbb{E}h_i(t) \nonumber \\ &\leq& \frac{1}{Y^*\sqrt{p}}= \frac{N}{Mp\bigg( 1- (1-\frac{1}{M})^N\bigg)}. \end{eqnarray} On the other hand, the lower bound from Theorem \ref{lb}, specialized to this case, yields: \begin{eqnarray} \label{lb_pf_1} \textsf{AoI}^* \geq \frac{N}{2Mp\bigg( 1- (1-\frac{1}{M})^N\bigg)}. \end{eqnarray} Combining Eqns.\ \eqref{ub_pf_1} and \eqref{lb_pf_1}, we have \[\textsf{AoI}^{\textsf{MMW}} \leq 2\textsf{AoI}^*. \] The above inequality shows that the policy $\textsf{MMW}$ is $2$-optimal in the case of statistically identical UEs with uniform mobility. \end{IEEEproof} \begin{figure} \centering \begin{overpic}[width=0.5\textwidth]{intervals_fig_new} \end{overpic} \put(-221,184){\footnotesize{$\Delta_1$}} \put(-74,184){\footnotesize{$\Delta_4$}} \caption{\small {Illustrating the \emph{intervals} for $\textrm{UE}_i$}} \label{intervals_fig} \end{figure} \subsection{Proof of Theorem \ref{comp_ratio_ub}} \label{comp_ratio_ub_proof} \begin{IEEEproof} Let us assume that the \textsf{MA} policy had $K \geq 0$ successful transmissions during the entire time-horizon of length $T$. We divide the time horizon into $K$ successive \emph{intervals}, defined naturally as follows. Let $T_i$ be the time index at which the \textsf{MA} policy had its $i$\textsuperscript{th} successful transmission, $1\leq i \leq K$, and set $T_{K+1}=T$.
Let $\Delta_i \equiv T_i- T_{i-1}$ denote the length of the $i$\textsuperscript{th} interval, \emph{i.e.,} the time between the $(i-1)$\textsuperscript{th} and the $i$\textsuperscript{th} successful transmissions of the \textsf{MA} policy. For notational consistency, we define $T_0 \equiv 0, \Delta_0 \equiv 0.$ See Figure \ref{intervals_fig}. We start our analysis with two simple observations. First, whenever a successful transmission is made by the \textsf{MA} policy, the optimal policy \textsf{OPT} also transmits successfully at that slot. Second, the \textsf{MA} policy is a \emph{persistent round-robin} policy, which keeps scheduling the same user (the one with the highest age) until the transmission is successful. In the immediately following time slot, the \textsf{MA} policy switches to the next user and continues the round-robin scheduling cycle. See Figure \ref{OPT_MW_fig} for a typical run. \begin{figure} \centering \begin{overpic}[width=0.5\textwidth]{OPT_MW_new} \end{overpic} \caption{Illustrating the scheduling decisions of \textsf{MA} and \textsf{OPT} with $N=3$ UEs. The user scheduled by the \textsf{MA} policy at each slot is marked MA, the user scheduled by the offline optimal policy \textsf{OPT} at each slot is marked OPT, and a user scheduled by both \textsf{MA} and \textsf{OPT} at the same instant carries both marks. The figure shows that the \textsf{MA} policy sticks to one user till it gets served and then switches over to another user in a round-robin fashion.
It also shows how the optimal algorithm takes advantage of the known channel states.} \label{OPT_MW_fig} \end{figure} Hence, under the \textsf{MA} policy, the ages of the users (in sorted order) at the beginning of the $i$\textsuperscript{th} interval are \[\{1,\ 1+\Delta_{i-1},\ 1 + \Delta_{i-1}+ \Delta_{i-2},\ \ldots,\ 1+ \sum_{j=1}^{N-1} \Delta_{i-j}\}.\] Since the \textsf{MA} policy continues scheduling the UE having the highest age, at the end of the $k$\textsuperscript{th} slot of the $i$\textsuperscript{th} interval, the ages of the UEs (in sorted order) are given by: \[\{k,\ k+\Delta_{i-1},\ k + \Delta_{i-1}+ \Delta_{i-2},\ \ldots,\ k+ \sum_{j=1}^{N-1} \Delta_{i-j}\}, \quad 1\leq k \leq \Delta_i.\] Hence, the cost $C_i^{\textsf{MA}}$ incurred by the \textsf{MA} policy during the $i$\textsuperscript{th} interval is computed as: \begin{eqnarray} \label{CMA} C_i^{\textsf{MA}}&=& \sum_{k=1}^{\Delta_i} k + \sum_{k=1}^{\Delta_i}\sum_{m=1}^{N-1}\bigg(k+\big(\sum_{j=1}^{m} \Delta_{i-j}\big)\bigg)\nonumber \\ &=&N \sum_{k=1}^{\Delta_i}k + \Delta_i \sum_{j=1}^{N-1}(N-j)\Delta_{i-j} \nonumber\\ &\leq & N \bigg( \frac{\Delta_i(\Delta_i+1)}{2} + \sum_{j=1}^{N-1} \Delta_i \Delta_{i-j} \bigg) \nonumber\\ &\leq & \frac{N}{2}\bigg( N \Delta_i^2 + \Delta_i+ \sum_{j=1}^{N-1} \Delta_{i-j}^2\bigg), \end{eqnarray} where, in the last step, we have used the AM--GM inequality to conclude $ \Delta_i \Delta_{i-j} \leq \frac{1}{2}\big(\Delta_i^2 + \Delta_{i-j}^2\big), 1\leq j \leq N-1.$\\ Hence, the total AoI cost incurred by the \textsf{MA} scheduling policy over the entire time horizon is upper bounded as: \begin{eqnarray*} \textrm{AoI}^{\textsf{MA}}(T)&=& \sum_{i=1}^{K}C_i^{\textsf{MA}} \\ &\leq & \frac{N}{2}\sum_{i=1}^{K}\bigg( N \Delta_i^2 + \Delta_i+ \sum_{j=1}^{N-1} \Delta_{i-j}^2\bigg)\\ &\leq & \frac{N}{2}\sum_{i=1}^{K} \bigg(2N \Delta_i^2 + \Delta_i\bigg).
\end{eqnarray*} On the other hand, the cost incurred by \textsf{OPT} during the $i$\textsuperscript{th} interval is lower bounded as: \begin{eqnarray} \label{COPT} C_i^{\textsf{OPT}}&\geq & (N-1)\sum_{k=1}^{\Delta_i}1 + \sum_{k=1}^{\Delta_i} (1+k) \nonumber\\ &\geq & \frac{1}{2} \Delta_i^2 + N\Delta_i, \end{eqnarray} where we have separately lower bounded the cost incurred by the UE being scheduled by \textsf{MA} (which was consistently seeing \textsf{Bad} channels) and the cost incurred by the other UEs. Finally, the cost of the entire horizon may be obtained by summing up the costs incurred in the constituent intervals. Hence, noting that $\Delta_0=0$, from Eqns.\ \eqref{CMA} and \eqref{COPT}, the competitive ratio $\eta^{\textsf{MA}}$ of the \textsf{MA} policy may be upper bounded as follows: \begin{eqnarray*} \eta^{\textsf{MA}} &=& \frac{\sum_{i=1}^K C_i^{\textsf{MA}}}{\sum_{i=1}^K C_i^{\textsf{OPT}}} \\ &\leq& \frac{\frac{N}{2}\sum_{i=1}^{K} \bigg(2N \Delta_i^2 + \Delta_i\bigg)}{\sum_{i=1}^K \big(\frac{1}{2}\Delta_i^2 + N\Delta_i\big)} \\ & \leq & 2N^2, \end{eqnarray*} where the last inequality follows from the term-wise bound $\frac{N}{2}\big(2N\Delta_i^2 + \Delta_i\big) = N^2\Delta_i^2 + \frac{N}{2}\Delta_i \leq 2N^2\big(\frac{1}{2}\Delta_i^2 + N\Delta_i\big)$. \end{IEEEproof} \subsection{Proof of Theorem \ref{comp_ratio_lb}} \label{comp_ratio_lb_proof} \begin{IEEEproof} To apply Yao's principle, we need to compute the expectations appearing in the numerator and the denominator of Eqn.\ \eqref{Yao_lb}. \subsubsection{Upper bound on \textsf{OPT}'s expected cost} Let the random variable $C_i(T)$ denote the total AoI-cost incurred by the $i$\textsuperscript{th} UE up to time $T$. In other words, \begin{eqnarray*} C_i(T) = \sum_{t=1}^{T} h_i(t).
\end{eqnarray*} Hence, the limiting time-averaged total expected cost incurred by \textsf{OPT} may be expressed as \begin{eqnarray} \label{opt_ub_1} \bar{\mathcal{C}}(\textsf{OPT}) \equiv \lim_{T \to \infty} \frac{1}{T} \sum_{i=1}^{N} \mathbb{E}\big(C_i(T)\big) = \sum_{i=1}^{N} \lim_{T \to \infty} \frac{\mathbb{E}(C_i(T))}{T}. \end{eqnarray} In the following, we will show that all of the above limits exist with the assumed choice of the underlying probability space. We now use the renewal reward theorem \cite{gallager2012discrete} in order to evaluate the RHS of Eqn.\ \eqref{opt_ub_1}. Since, under the assumed channel state distribution $\bm p$, only one channel is in the \textsf{Good} state, the optimal policy \textsf{OPT} is easy to characterize: at any slot, \textsf{OPT} schedules the user having the \textsf{Good} channel. Under this probability space, it can be verified that, for each user $i$, the sequence of random variables $\{h_i(t)\}_{t\geq 1}$ constitutes a renewal process, with the slots at which the scheduling of the $i$\textsuperscript{th} user commences serving as the renewal instants. A generic renewal interval of length $\tau$ for the $i$\textsuperscript{th} user consists of two parts: (1) a consecutive sequence of \textsf{Good} channels of length $\tau_\textsf{G}$, and (2) a consecutive sequence of \textsf{Bad} channels of length $\tau_{\textsf{B}}$. The AoI cost $c_i(\tau)$ incurred by the user $i$ in any generic renewal cycle may be written as the sum of the costs incurred in the two parts: \begin{eqnarray*} c_i(\tau)&=& c_i(\tau_{\textsf{G}})+ c_i(\tau_{\textsf{B}})\\ &=& \sum_{t=1}^{\tau_{\textsf{G}}}1 + \sum_{t=1}^{\tau_{\textsf{B}}}(1+t)\\ &=& \tau_{\textsf{G}} + \frac{3}{2}\tau_{\textsf{B}} + \frac{1}{2}\tau_{\textsf{B}}^2. \end{eqnarray*} Let $q\equiv \frac{1}{N}$ be the probability that the channel is \textsf{Good} for the $i$\textsuperscript{th} user at any slot.
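Both the per-cycle cost decomposition above and the geometric-moment computations that follow admit an exact arithmetic check; the script below (a sanity check only, not part of the argument) verifies them for a range of $N$:

```python
from fractions import Fraction

# Brute-force check of  c(tau) = tau_G + (3/2) tau_B + (1/2) tau_B^2:
# cost 1 in each Good slot, cost (1 + t) in the t-th Bad slot.
for tg in range(1, 25):
    for tb in range(1, 25):
        direct = tg + sum(1 + t for t in range(1, tb + 1))
        assert Fraction(direct) == tg + Fraction(3, 2) * tb + Fraction(1, 2) * tb**2

# Exact check of the expected cycle cost and length via the standard
# geometric moments: E[tau_G] = 1/(1-q), E[tau_B] = 1/q, E[tau_B^2] = (2-q)/q^2.
for N in range(2, 30):
    q = Fraction(1, N)
    E_cost = 1 / (1 - q) + Fraction(3, 2) / q + (2 - q) / (2 * q**2)
    E_len = 1 / (1 - q) + 1 / q
    assert E_cost == 1 / (q**2 * (1 - q))
    assert E_len == 1 / (q * (1 - q))
    assert E_cost / E_len == N    # the per-user limit is 1/q = N

renewal_checks_ok = True
```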
Hence, from our construction, the random variables $\tau_{\textsf{G}}$ and $\tau_{\textsf{B}}$ follow geometric distributions with the following p.m.f.s: \begin{eqnarray*} \mathbb{P}(\tau_{\textsf{G}}=k) &=& q^{k-1} (1-q), ~~ k \geq 1, \\ \mathbb{P}(\tau_{\textsf{B}}=k) &=& q(1-q)^{k-1}, ~~ k \geq 1. \end{eqnarray*} Hence, the expected cost incurred by the $i$\textsuperscript{th} user in any renewal cycle is given by \begin{eqnarray}\label{cycle_cost} \mathbb{E}(c_i(\tau))= \frac{1}{1-q}+ \frac{3}{2q}+ \frac{2-q}{2q^2}= \frac{1}{q^2(1-q)}. \end{eqnarray} Moreover, the expected length of any renewal cycle is given by \begin{eqnarray}\label{cycle_length} \mathbb{E}(\tau)= \mathbb{E}(\tau_{\textsf{G}})+ \mathbb{E}(\tau_{\textsf{B}})= \frac{1}{q(1-q)}. \end{eqnarray} Using the renewal reward theorem \cite{gallager2012discrete}, we have \begin{eqnarray*} \lim_{T \to \infty} \frac{\mathbb{E}(C_i(T))}{T}= \frac{\mathbb{E}(c_i(\tau))}{\mathbb{E}(\tau)}=\frac{1}{q}=N, ~~~\forall i. \end{eqnarray*} Hence, from Eqn.\ \eqref{opt_ub_1}, we conclude that the time-averaged total expected cost incurred by \textsf{OPT} is given by \begin{eqnarray}\label{opt_ub} \bar{\mathcal{C}}(\textsf{OPT})= N^2. \end{eqnarray} \subsubsection{Lower bound on the AoI for $N$ users} By directly appealing to the general lower bound in Theorem \ref{lb}, with $p_i=\frac{1}{N}, ~\forall i$, and $M=1$, we conclude that, under the assumed channel state distribution, the time-averaged expected cost of any online scheduling policy $\pi$ is lower bounded as \begin{eqnarray} \label{lb_gnl} \bar{\mathcal{C}}(\pi) = \limsup_{T\to \infty} \frac{1}{T}\sum_{i=1}^{N}\mathbb{E}(C_i(T)) \geq \frac{N^3+N}{2}. \end{eqnarray} We should point out that the lower bound in \eqref{lb_gnl} is not numerically tight. In particular, the following Proposition \ref{improved_LB} shows that, using a more careful analysis, the AoI lower bound for $N=2$ users may be improved to $6$.
\begin{framed} \begin{proposition} \label{improved_LB} In the above setup, for any online policy, the time-averaged total AoI (summed over the UEs) for $N=2$ users with the probability of successful transmission $p_1=p_2=\frac{1}{2}$ is lower bounded by $6$.\end{proposition} \end{framed} For a proof of the above proposition, please refer to Appendix \ref{improved_LB_proof} below.\\ Nevertheless, the achievability result in Theorem \ref{achievability_thm} shows that the bound in Eqn.\ \eqref{lb_gnl} is tight within a factor of $2$. In particular, Eqn.\ \eqref{lb_gnl} has the order-optimal dependence on $N$. Finally, using Yao's minimax principle in conjunction with Eqns.\ \eqref{opt_ub} and \eqref{lb_gnl}, we conclude that the competitive ratio $\eta(N)$ of any online policy is lower bounded as \begin{eqnarray*} \eta(N) \geq \sup_{T}\frac{C_T(\pi)}{C_T(\textsf{OPT})} \geq \frac{\bar{\mathcal{C}}(\pi)}{\bar{\mathcal{C}}(\textsf{OPT})} \geq \frac{N}{2} + \frac{1}{2N}. \end{eqnarray*} In the case $N=2$, using the result of Appendix \ref{improved_LB_proof}, the competitive ratio is lower bounded by \[ \eta(2) \geq \frac{6}{2^2}=1.5.\] \end{IEEEproof} \subsection{Proof of Proposition \ref{improved_LB}} \label{improved_LB_proof} \begin{IEEEproof} Define $\mathcal{F}_{t-1}\equiv \sigma(\vec{h}(k), \vec{\mu}(k), 1\leq k \leq t-1)$ to be the sigma-algebra generated by the age and control vectors observed up to time $t-1$. Since the policy is online, the scheduling decision $\vec{\mu}(t)$ at time $t$ must be measurable in $\mathcal{F}_{t-1}$ for all $t\geq 1$. Let $H_{\textrm{sum}}(t) \equiv \mathbb{E}^\pi(h_1(t))+ \mathbb{E}^\pi(h_2(t))$ be the expected sum of the ages of the UEs at time $t$. Let $B_t \in \mathcal{F}_t$ be the event that $\textrm{UE}_1$ is scheduled under the policy $\pi$ at slot $t$.
Then, we can write \begin{eqnarray} \label{cond_ex1} &&\mathbb{E}^\pi\big(h_1(t+1)|\mathcal{F}_t\big) \nonumber \\ &=& \big(1+\frac{1}{2}h_1(t)\big)\mathds{1}(B_t) + \big(1+h_1(t)\big) \mathds{1}(B_t^c) \nonumber \\ &=& 1+ \frac{1}{2}h_1(t) + \frac{1}{2}h_1(t)\mathds{1}(B_t^c)\nonumber \\ &\stackrel{(a)}{\geq} & 1+ \frac{1}{2}h_1(t) + \frac{1}{2}\min \{h_1(t), h_2(t)\}\mathds{1}(B_t^c), \end{eqnarray} where (a) follows since $h_1(t) \geq \min\{h_1(t), h_2(t)\}$. Similarly, we can also write \begin{eqnarray}\label{cond_ex2} \mathbb{E}^\pi\big(h_2(t+1)|\mathcal{F}_t\big) \geq 1+ \frac{1}{2}h_2(t) + \frac{1}{2}\min\{h_1(t), h_2(t) \}\mathds{1}(B_t). \end{eqnarray} Since $\mathds{1}(B_t)+\mathds{1}(B_t^c)=1$, from Eqns.\ \eqref{cond_ex1} and \eqref{cond_ex2}, we have \begin{eqnarray*} &&\mathbb{E}^\pi\big(h_1(t+1)+h_2(t+1)|\mathcal{F}_t\big) \geq\\ && 2+ \frac{1}{2}(h_1(t)+h_2(t))+ \frac{1}{2} \min\{h_1(t), h_2(t)\}. \end{eqnarray*} Taking expectations of both sides of the above inequality, we get \begin{eqnarray} \label{key_eqn1} H_{\textrm{sum}}(t+1) \geq 2 + \frac{1}{2} H_{\textrm{sum}}(t) + \frac{1}{2} \mathbb{E}\bigg(\min\{h_1(t), h_2(t)\}\bigg). \end{eqnarray} Let the random variable $S(t)$ denote the time elapsed since the last successful transmission (by any UE) before time $t$. Clearly, \[ \min \{h_1(t), h_2(t)\} \geq S(t)\] (the above inequality holds with equality in the two-user case). Hence, the above inequality implies \[ H_{\textrm{sum}}(t+1) \geq 2 + \frac{1}{2} H_{\textrm{sum}}(t) + \frac{1}{2} \mathbb{E}\big(S(t)\big).\] Summing the above inequalities for $t=1,2, \ldots, T$, rearranging the telescoping terms, and normalizing by $T/2$, we obtain \begin{eqnarray} \label{cesaro_mean_lt} 2\frac{H_{\textrm{sum}}(T+1)}{T}+\frac{1}{T}\sum_{t=1}^{T}H_{\textrm{sum}}(t) \geq 4 + \frac{1}{T}\sum_{t=1}^{T}\mathbb{E}(S(t)). \end{eqnarray} It is to be noted that $\{S(t)\}_{t\geq 1}$ is a renewal process, with the time-stamps of the successful transmissions constituting the renewal instants. Let the random variable $\tau$ denote the length of any generic renewal cycle.
Hence, using the renewal reward theorem \cite{gallager2012discrete, gallager2013stochastic}, it follows that \begin{eqnarray*} \lim_{T \to \infty} \frac{1}{T}\sum_{t=1}^{T}\mathbb{E}(S(t))&=& \frac{\mathbb{E}(1+2+\ldots+\tau)}{\mathbb{E}(\tau)}\\ &=& \frac{\mathbb{E}(\tau^2)+\mathbb{E}(\tau)}{2\mathbb{E}(\tau)}\\ &=& 2, \end{eqnarray*} where the last equality follows from the fact that the renewal cycle lengths $\tau$ are geometrically distributed with parameter $p=1/2$, so that $\mathbb{E}(\tau)=2$ and $\mathbb{E}(\tau^2)=6$. Thus, the limit of the RHS of Eqn.\ \eqref{cesaro_mean_lt} exists and the limiting value is equal to $6$. Next, we consider two possible cases. \\ \textbf{Case I: $\liminf_{T\to \infty} \frac{H_{\textrm{sum}}(T+1)}{T}=0$: } In this case, consider a subsequence $\{T_k\}_{k\geq 1}$ along which $\lim_{k\to \infty} \frac{H_{\textrm{sum}}(T_k+1)}{T_k}=0$. For this subsequence, we have from Eqn.\ \eqref{cesaro_mean_lt}: \begin{eqnarray*} 2\frac{H_{\textrm{sum}}(T_k+1)}{T_k}+\frac{1}{T_k}\sum_{t=1}^{T_k}H_{\textrm{sum}}(t) \geq 4 + \frac{1}{T_k}\sum_{t=1}^{T_k}\mathbb{E}(S(t)). \end{eqnarray*} Taking $k \to \infty$, we conclude that \begin{eqnarray}\label{ces_lim2} \limsup_{T\to \infty} \frac{1}{T}\sum_{t=1}^{T}H_{\textrm{sum}}(t) \geq 6. \end{eqnarray} \textbf{Case II: $\liminf_{T\to \infty} \frac{H_{\textrm{sum}}(T+1)}{T}=\alpha >0$: } From the definition of $\liminf$, it follows that there exists a finite $T_0$ such that, for all $T \geq T_0$, we have \begin{eqnarray} \label{liminf_eqn} \frac{H_{\textrm{sum}}(T+1)}{T} \geq \frac{\alpha}{2}. \end{eqnarray} Thus, for any $T \geq T_0$, we can write \begin{eqnarray*} \frac{1}{T}\sum_{t=1}^{T} H_{\textrm{sum}}(t) \geq \frac{1}{T}\sum_{t=T_0+1}^{T} H_{\textrm{sum}}(t) \stackrel{(a)}{\geq} \frac{\alpha}{2T}\sum_{t=T_0}^{T-1}t = \Omega(T).
\end{eqnarray*} Hence, in this case, we have \begin{eqnarray}\label{lim_cost1} \limsup_{T \to \infty} \frac{1}{T}\sum_{t=1}^{T} H_{\textrm{sum}}(t) = \infty. \end{eqnarray} Hence, from Eqns.\ \eqref{ces_lim2} and \eqref{lim_cost1}, we conclude that, in either case, we have \begin{eqnarray} \label{lim_cost2} \limsup_{T \to \infty} \frac{1}{T}\sum_{t=1}^{T} H_{\textrm{sum}}(t) \geq 6. \end{eqnarray} \end{IEEEproof} \section{Conclusion and Future Work} \label{conclusion} This paper investigates the fundamental limits of Age-of-Information in stationary and non-stationary environments from an online scheduling point of view. In the stochastic setting, a $2$-optimal scheduling policy has been proposed for mobile UEs. For the non-stationary regime, a new adversarial channel model has been introduced. Upper and lower bounds on the competitive ratio have been derived for the adversarial model. As an immediate extension of this work, the effect of mobility in the non-stationary environment may be considered. The gap between the upper and lower bounds on the competitive ratio may be tightened. Also, it will be interesting to obtain the competitive ratio of $w$-step lookahead policies as a function of the prediction window $w$. \section{A tight universal lower bound} In this section, we derive an improved universal lower bound for a single BS which may schedule only one among its $N$ users in a slot. For simplicity, we assume that the UEs have statistically identical channels, \emph{i.e.,} $p_i=p, \forall 1\leq i \leq N$. \\ Let $\mathcal{F}_{t}\equiv \sigma(\vec{h}(k), \vec{\mu}(k), 1\leq k \leq t)$ be the sigma-algebra generated by the random age $\vec{h}(t)$ and scheduling decisions $\vec{\mu}(t)$ up to time $t$. For any online policy, the scheduling decision $\vec{\mu}(t+1)$ at the beginning of the slot $t+1$ must be measurable in the sigma-algebra $\mathcal{F}_{t}$.
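The one-step age dynamics used throughout this argument (a scheduled UE's age resets to one with probability $p$ and increments otherwise, while an unscheduled UE's age always increments) can be verified by direct enumeration; a small illustrative check:

```python
# Enumerate the two-outcome age transition and compare with the closed form
#   E[h(t+1) | h, scheduled] = 1 + h - p * h * scheduled.
for pct in range(1, 10):
    p = pct / 10
    for h in range(1, 100):
        for scheduled in (0, 1):
            if scheduled:
                mean_next = p * 1 + (1 - p) * (h + 1)   # success vs. erasure
            else:
                mean_next = h + 1                        # age always grows
            assert abs(mean_next - (1 + h - p * h * scheduled)) < 1e-9

one_step_ok = True
```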
Let the random variable $H_{\textrm{sum}}(t) \equiv (\sum_i h_i(t))$ denote the sum of the ages of all UEs at time $t$. Let $B_i(t) \in \mathcal{F}_t$ be the event under which the policy $\pi$ schedules $\textrm{UE}_i, \forall i$. Clearly, $B_i(t) \cap B_j(t)=\emptyset, \forall i\neq j$ and $\sqcup_i B_{i}(t)=\Omega.$ Hence, we can write \begin{eqnarray*} \mathbb{E}\big(h_i(t+1)| \mathcal{F}_{t}\big) &=& \mathds{1}(B_i(t))\big(p + (1-p)(1+h_i(t))\big)\\ &&+ \mathds{1}(B_i^c(t))\big(1+h_i(t)\big)\\ &=& 1+ h_i(t) - p h_i(t) \mathds{1}(B_i(t)). \end{eqnarray*} Hence, \begin{eqnarray*} \mathbb{E}(H_{\textrm{sum}}(t+1)| \mathcal{F}_t)= N+ H_{\textrm{sum}}(t) - p \sum_{i}h_i(t)\mathds{1}(B_i(t)). \end{eqnarray*} Next, we define the set of exhaustive and mutually exclusive events $\{B_i'(t)\}_{i}$ as follows: \begin{eqnarray*} B_i'(t) = \{ \omega \in \Omega: h_i(t, \omega) \geq h_j(t, \omega), \forall j \neq i\}, \end{eqnarray*} where, in the above, we break ties uniformly at random. Then, it follows that \begin{eqnarray*} \mathbb{E}(H_{\textrm{sum}}(t+1)| \mathcal{F}_t) &\geq& N+ H_{\textrm{sum}}(t) - p \sum_{i}h_i(t)\mathds{1}(B_i'(t))\\ &=& N+ H_{\textrm{sum}}(t) - p h_{\max}(t), \end{eqnarray*} where $h_{\max}(t) \equiv \max_{1\leq i \leq N} h_i(t)$. It is clear that $h_{\max}(t) \geq \frac{1}{N}H_{\textrm{sum}}(t)$. \section{Introduction and Related work} \lettrine[]{\textbf{T}}{he} Quality-of-Service (QoS) offered by any wireless network has traditionally been measured along three dimensions, namely, \emph{throughput}, \emph{packet delay}, and \emph{energy efficiency}. There exists an extensive body of literature devoted to optimizing cross-layer resource allocation to improve the QoS along these axes \cite{tassiulas, mandelbaum2004scheduling, sinha_umw, neely2010stochastic, kozat2004framework}. However, it has been argued that the standard QoS metrics are primarily geared towards quantifying the degree of utilization of the system resources, and less towards measuring the actual user experience \cite{new_QoS}.
With the explosive growth of hand-held mobile devices, Internet of Things (IoT), real-time AR and VR systems powered by the emerging 5G technology, the Quality of Experience (QoE) for the users plays a major role in today's network design \cite{QoE}. In order to integrate QoE with the design criteria, a new metric, called \emph{Age-of-Information} (AoI), has been proposed recently for measuring the \emph{freshness} of information available to the end-users \cite{kaul2012real, kosta2017age}. Designing efficient schedulers to minimize the AoI is currently an active area of research. The papers \cite{kadota2018scheduling} and \cite{kadota2019scheduling} study the \emph{average} AoI minimization problem for static User Equipments (UEs) associated with a single Base Station (BS). In these papers, the authors propose a $4$-optimal Max-Weight-type scheduling policy (Theorem 12 of \cite{kadota2018scheduling}). The paper \cite{srivastava2019minimizing} proposes an optimal scheduling policy for the same setup, where the objective is to minimize the \emph{maximum} AoI of all UEs. All of these papers consider a single-hop network model with static UEs only. The problem of AoI minimization in a multi-hop network with static UEs has been studied in \cite{talak2017minimizing}. The paper \cite{tripathi2019age} considers the problem of designing an AoI-optimal trajectory for a mobile agent which facilitates information dissemination from a central station to a set of ground terminals. The effect of mobility on the capacity of wireless networks has been investigated in the classic work of \cite{grossglauser2002mobility}. It has been shown that mobility, in general, increases the capacity of ad hoc networks. However, to the best of our knowledge, the effect of UE-mobility on the Age-of-Information has not been studied before. One of the main objectives of this paper is to study the AoI-optimal scheduling with mobile UEs. 
Most of the existing works on wireless networks assume a stationary channel model for analytical tractability. In rapidly varying environments, such as high-speed trains and vehicle-to-vehicle communication, the standard stationary channel model assumption no longer holds in practice. This is particularly true for the 5G mmWave regime ($\geq 28$ GHz), which suffers from severe attenuation loss \cite{non_stationary, wu2017general}. On the other hand, designing an accurate and analytically tractable non-stationary wireless channel model remains an overarching challenge to the research community \cite{nonstat1, nonstat2}. To overcome this difficulty, in the second part of this paper, we propose a simple adversarial channel model for non-stationary environments and study the scheduling problem in this model. In addition to the emerging 5G technology, the adversarial channel model is also useful for ensuring reliable communication in the presence of tactical jammers, where the interferers, in reality, behave adversarially \cite{poisel2011modern, mpitziopoulos2009survey}. \subsection*{Our contributions:} We make the following contributions in this paper. \begin{itemize} \item We study the multi-user scheduling problem in stationary and non-stationary environments. The stationary environment is modelled stochastically, and the non-stationary environment is modelled using an adversarial framework. To the best of our knowledge, this is the first paper that considers the AoI-optimal scheduling problem in an adversarial setting. \item In the stationary setting described in Section \ref{stochastic}, we design a $2$-optimal scheduling policy for mobile UEs. Our result improves upon the $4$-optimality bound known for static UEs \cite{kadota2018scheduling, kadota2019scheduling}. \item Our analytical result enables us to precisely characterize the effect of mobility on the overall AoI as a function of the long-term user mobility statistics. 
The results may also be effectively used for small-cell network planning \cite{balazinska2003characterizing}. \item In the non-stationary setting of Section \ref{online}, we show that a simple online scheduling policy achieves an $O(N^2)$ competitive ratio. Using Yao's minimax principle, we show that no online policy can have a competitive ratio better than $\Omega(N)$. \item We propose a heuristic scheduling policy in Section \ref{prediction} for the scenario where the future channel states can be accurately estimated for the next $w$ slots. We validate the efficacy of the proposed policy through numerical simulations. \end{itemize} The rest of the paper is organized as follows. In Section \ref{sys_model_stochastic}, we describe the stochastic model and formulate the problem in the stationary regime. Sections \ref{stochastic} and \ref{online} study the problem in the stationary and non-stationary environments, respectively. In Section \ref{simulation}, we compare the performance of the proposed scheduling policies via numerical simulations. Section \ref{conclusion} concludes the paper with some pointers to open problems. \section{AoI Minimization in Stationary Environments} \label{sys_model_stochastic} In this section, we first describe the stochastic system model and then formulate the optimal scheduling problem. In the rest of the paper, the abbreviation UE will refer to any generic user equipment, and the term BS will refer to a Base Station. The area covered by a BS will be referred to as a Cell. \paragraph{Channel model} We consider a cellular system in which a set of $N$ UEs travel around in an area served by $M$ BSs. Time is slotted, and at every slot, each BS can beam-form and schedule a packet transmission to one of the UEs in its coverage area. The wireless link to $\textrm{UE}_i$ from the BS in its current cell is assumed to be a stationary erasure channel with the probability of successful reception of a transmitted packet being $p_i, 1\leq i \leq N$.
Hence, when a BS schedules a downlink packet transmission to $\textrm{UE}_i$ in its cell, the packet is either successfully received with probability $p_i$ or lost otherwise. \paragraph{Mobility model} We assume that the UE mobility is modelled by a stationary ergodic process. Formally, let the random variable $C_i(t) \in \{1,2,\ldots, M\}$ denote the index of the cell with which $\textrm{UE}_i$ is associated at time $t$ \footnote{We make the standard assumption that the coverage areas of the cells are mutually disjoint. Hence a UE is associated with only one BS at any time.}. Then, according to our assumption, the stochastic process $\{C_i(t)\}_{t \geq 1}$ is a stationary ergodic process with the probability that $\textrm{UE}_i$ is associated with $\textrm{BS}_j$ at any time $t$ given by $\mathbb{P}(C_i(t)=j)= \psi_{ij}, \forall i,j, t.$ The probability measure $\bm{\psi}$ denotes the stationary occupancy distribution of the cells by the UEs. The mobilities of different UEs are assumed to be mutually independent. Many different mobility models proposed in the literature fall under the above general scheme, including the i.i.d. mobility model, the random walk model, and the random waypoint model \cite{ge2016user, akyildiz2000new, johnson1996dynamic, bai2004survey}. See Figure \ref{AoI_mobility_fig} in Appendix \ref{lb_proof} for a schematic. \paragraph{Packet arrival model to BS} We consider a \emph{saturated} traffic model, where at the beginning of any slot, each BS receives a fresh update packet from a common external source (\emph{e.g.,} a high-speed optical backbone network). Since the UEs are interested in the latest updates only, the BS then deletes any old packet from its buffer and schedules the fresh packet for transmission to some UE following a scheduling policy.
The saturated traffic model is standard in applications relying on continuous status updates \cite{costa2016age}, such as monitoring and surveillance with sensor networks \cite{javani2019age}, velocity and position updates for autonomous vehicles \cite{kaul2011minimizing}, command and control information exchange in mission-critical systems, and disseminating stock-index updates and live game scores. \paragraph{System states} For slot $t$, let $t_i(t) < t$ denote the last time before $t$ at which $\textrm{UE}_i$ received a packet successfully from any BS. The Age-of-Information $h_i(t)$ of $\textrm{UE}_i$ at time $t$ is defined as \[ h_i(t) \equiv t-t_i(t). \] In other words, the random variable $h_i(t)$ denotes the length of time elapsed since $\textrm{UE}_i$ received its last update before time $t$. Hence, the r.v. $h_i(t)$ quantifies the \emph{staleness} of the information available to $\textrm{UE}_i$. See Figure \ref{AoI_fig} in the Appendix for a typical evolution of $h_i(t)$. The state of the UEs at time $t$ is completely specified by the Age-of-Information of all UEs, given by the random vector $\bm{h}(t)\equiv \big(h_1(t), h_2(t), \ldots, h_N(t)\big)$, and the association of the UEs with the cells, represented by the cell-occupancy vector $\bm{C}(t)$. \paragraph{Policy space and performance metric} A scheduling policy $\pi$ first selects a UE in each cell (if the cell contains any UE), and then schedules the transmission of the latest packet from the BSs to the UEs over the wireless erasure channel described earlier. The scheduling decisions are required to be causal so that the policy is implementable in real time. The set of all admissible scheduling policies is denoted by $\Pi$. Our goal in this paper is to design a distributed scheduling policy which minimizes the long-term average AoI of all users.
In view of this, we consider the following average-cost problem: \begin{eqnarray} \label{objective} \textsf{AoI}^*=\inf_{\bm{\pi} \in \Pi } \limsup_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T}\frac{1}{N}\bigg(\sum_{i=1}^{N} \mathbb{E}^\pi(h_i(t))\bigg). \end{eqnarray} \subsection{Converse and Achievability} \label{stochastic} The AoI minimization problem given by \eqref{objective} is an example of an average-cost MDP with countably infinite state-space \cite{bertsekas1995dynamic}. Excepting a few cases with special structures (\emph{cf.} \cite{srivastava2019minimizing}), such problems are notoriously difficult to solve exactly. Moreover, the standard numerical approximation schemes for infinite-state MDPs typically do not provide theoretical performance guarantees. In this paper, we take a different approach to approximately solve the problem \eqref{objective}. In the following Theorem, we obtain a fundamental lower bound to the optimal AoI. Finally, in Theorem \ref{achievability_thm}, we show that a simple online scheduling policy $\pi^{\textsf{MMW}}$ achieves the lower bound within a factor of $2$. \begin{framed} \begin{theorem}[Converse] \label{lb} In the stationary setup, the optimal \textsf{AoI} in \eqref{objective} is lower bounded as: \begin{eqnarray} \label{lb_expr} \textsf{AoI}^* \geq \frac{1}{2N g(\bm{\psi})} \bigg(\sum_{i=1}^{N} \sqrt{\frac{1}{p_i}}\bigg)^2+ \frac{1}{2}, \end{eqnarray} where the quantity $g(\bm{\psi})$ denotes the expected number of cells with \emph{at least one UE}, where the expectation is taken with respect to the stationary occupancy distribution $\bm{\psi}$. In particular, since $g(\bm \psi) \leq \min \{M, N\},$ we also have the following (loose) lower bound which is agnostic of the UE mobility statistics: \begin{eqnarray*} \textsf{AoI}^* \geq \frac{1}{2N \min \{M, N\}} \bigg(\sum_{i=1}^N \sqrt{\frac{1}{p_i}}\bigg)^2 + \frac{1}{2}. 
\end{eqnarray*} \end{theorem} \end{framed} Please refer to Appendix \ref{lb_proof} for a proof of this theorem. \paragraph*{Discussion} Theorem \ref{lb} gives a universal lower bound for the minimum AoI achievable by \emph{any} admissible scheduling policy $\pi \in \Pi$. Interestingly, it reveals that the lower bound depends on the mobility of the UEs only through their stationary cell-occupancy distribution $\bm{\psi}$. Hence, given the stationary distribution $\bm \psi$, the lower bound \eqref{lb_expr} is agnostic of the details of the mobility model. The appearance of the quantity $g(\bm \psi)$ in the lower bound should not be surprising, as it denotes the \emph{typical} number of non-empty cells at a slot in the long run. Since a BS can transmit a packet only if at least one UE is present in its coverage area, the quantity $g(\bm \psi)$, in some sense, represents the \emph{multi-user diversity} of the system. \subsubsection*{Expression for $g(\bm \psi)$} To get a sense of the lower bound \eqref{lb_expr}, we now work out a closed-form expression for $g(\bm \psi)$ and then specialize it to the uniform UE mobility pattern. Using linearity of expectation, \begin{eqnarray}\label{g_psi_eq} g(\bm{\psi})&=& \mathbb{E}_{\bm \psi} \sum_{j=1}^{M} \mathds{1}\big(\textrm{BS}_j \textrm{ contains at least one UE}\big)\nonumber \\ &=&\sum_{j=1}^{M} \mathbb{P}_{\bm \psi} \big( \textrm{BS}_j \textrm{ contains at least one UE}\big). \end{eqnarray} Since the cells are disjoint, we readily conclude from \eqref{g_psi_eq} that $g(\bm{\psi}) \leq \min\{M, N\}$. Recall that $\psi_{ij}$ denotes the marginal probability that $\textrm{UE}_i$ is in the cell of $\textrm{BS}_j$. Since the mobilities of the UEs are independent of each other, the expected number of non-empty cells $g(\bm \psi)$ in Eqn.\ \eqref{g_psi_eq} simplifies to: \begin{eqnarray} \label{g_eqn3} g(\bm{\psi})= \sum_{j=1}^{M} \big(1-\prod_{i=1}^N(1-\psi_{ij})\big).
\end{eqnarray} We now evaluate the above expression for the case when the limiting occupancy distribution of each UE is \emph{uniform} across all BSs, \emph{i.e.,} $\psi_{ij}=\frac{1}{M}, \forall i,j $. The uniform stationary distribution arises, for example, when the UE mobility can be modelled as a random walk on a regular graph \cite{lovasz1993random}. In this case, Eqn.\ \eqref{g_eqn3} simplifies to \begin{eqnarray} \label{g_spl} g(\bm{\psi^{\textsf{unif}}}) = M \bigg(1-\big(1-\frac{1}{M}\big)^N\bigg). \end{eqnarray} For $M=1$, we have $g(\bm{\psi})=1$. For $M\geq 2$, we have the following bounds, which are easier to work with: \begin{eqnarray} \label{g_unif} M\bigg(1-e^{-\frac{N}{M}}\bigg)\leq g(\bm{\psi^{\textsf{unif}}}) \leq M\bigg(1-e^{-1.387 \frac{N}{M}}\bigg). \end{eqnarray} For a derivation of the bounds in \eqref{g_unif}, please refer to Appendix \ref{g_unif_proof}. \section{AoI Minimization in Non-Stationary Environments} \label{online} In this section, we consider the problem of AoI-optimal scheduling with $N$ static users in a non-stationary environment. Since non-stationary channels are difficult to model and analyze, we propose a new adversarial channel model in this setting. Besides being analytically tractable, the model has the appealing property that all positive results in it (\emph{e.g.,} Theorem \ref{comp_ratio_ub}) carry over to less adversarial environments. \paragraph*{Channel model} A set of $N$ UEs are under the coverage of a single BS (\emph{i.e.,} $M=1$). The BS can transmit to any one UE at a slot. The channel state $\textsf{Ch}_i(t)$ of any $\textrm{UE}_i$ at any time slot $t$ can be either \textsf{Good} ($1$) or \textsf{Bad} ($0$). If the BS schedules a packet to a UE having a \textsf{Good} channel at that slot, the UE decodes the packet successfully. Otherwise, the packet is lost.
We assume that the states of the $N$ channels (corresponding to the $N$ different UEs) are selected by an \emph{omniscient adversary} from the set of all possible $2^N$ states at every slot. The scheduling policy is \emph{online} and has no information on the channel states for the current or future slots. We will partially relax this assumption in Section \ref{prediction} by considering a more general class of adversarial channel models with future channel estimations. The cost function over a horizon of $T$ slots is given by: \begin{eqnarray}\label{cost_fn} \textsf{AoI}(T) = \sum_{t=1}^{T}\bigg(\sum_{i=1}^N h_i(t)\bigg). \end{eqnarray} The packet arrival model to the BS remains the same as in the stationary environment in Section \ref{sys_model_stochastic}. \paragraph*{Performance Metric} As is standard in the literature on online algorithms \cite{fiat1998online, albers1996competitive}, we gauge the performance of an online scheduling policy $\mathcal A$ using the \emph{competitive ratio} ($\eta^{\mathcal A}$), which compares the cost of $\mathcal A$ with that of an optimal \emph{offline} policy \textsf{OPT} equipped with hindsight knowledge. More precisely, let $\bm{\sigma} \in \{\{0,1\}^N\}^T$ be a sequence of length $T$ representing the vector of channel states chosen by the adversary for the entire horizon. Then, the competitive ratio of the policy $\mathcal A$ is defined as \cite{albers1996competitive}: \begin{eqnarray}\label{comp_rat_def} \eta^{\mathcal{A}} = \sup_{\bm \sigma}\bigg(\frac{\textrm{Cost of the online policy } \mathcal A \textrm{ on } \bm{\sigma}}{\textrm{Cost of OPT on } \bm{\sigma}}\bigg), \end{eqnarray} where the supremum is taken over all finite-length input sequences $\bm \sigma$, and the cost function is given by \eqref{cost_fn}.
In the definition \eqref{comp_rat_def}, while the online policy $\mathcal A$ has only causal information, the policy \textsf{OPT} is assumed to be equipped with full knowledge of the entire channel-state sequence $\bm \sigma.$ \subsection*{Characterization of the optimal offline (\textsf{OPT}) policy} For a given sequence of channel states $\bm \sigma$ of length $T$, the optimal offline policy \textsf{OPT} may be obtained by using Dynamic Programming. Let the variable $C_t^*(h_1(t), h_2(t), \ldots, h_N(t))$ denote the optimal cost-to-go from time $t$ when the AoIs of the $N$ UEs are given by the vector $\bm{h}(t)\equiv (h_1(t), h_2(t), \ldots, h_N(t)).$ Using standard notation, we have the following backward DP recursion \begin{eqnarray}\label{opt_dp} C^*_{t}(\bm{h}(t))&=& \underbrace{\sum_{i=1}^N h_i(t)}_{\textrm{cost for slot } t} + \underbrace{\min_{i: \textsf{Ch}_i(t+1)=1} C^*_{t+1}(\bm{h}_{-i}(t)+\bm 1, 1)}_{\textrm{optimal future cost}}, \nonumber \\ C^*_{T+1}(\bm{h})&=&0 \hspace{10pt} \forall \bm{h}, \end{eqnarray} where the minimization in Eqn.\ \eqref{opt_dp} is over all UEs $i$ having a \textsf{Good} channel at slot $t+1$. When there is no UE with a \textsf{Good} channel at slot $t+1$ (\emph{i.e.,} $\textsf{Ch}_i(t+1)=0, \forall i$), the second term denoting the future cost is replaced with $C^*_{t+1}(\bm{h}(t)+\bm{1})$. \paragraph*{Comparison with the throughput maximization problem} It is interesting to note that the competitive ratio for the sum-throughput maximization problem in this adversarial model can be arbitrarily bad (\emph{i.e.}, unbounded). This can be understood from the following example. Consider a system with two users. If an online scheduler $\mathcal{A}$ schedules $\textrm{UE}_1$ at any slot, the adversary can set the channel corresponding to $\textrm{UE}_1$ to \textsf{Bad} and set $\textrm{UE}_2$'s channel to \textsf{Good}, and vice versa. At any slot, the optimal policy schedules the user with the \textsf{Good} channel state.
Hence, any online scheduler $\mathcal{A}$ receives zero throughput, but \textsf{OPT} achieves the full throughput of unity. \\ Surprisingly enough, Theorem \ref{comp_ratio_ub} shows that the \textsf{Max Age} (MA) scheduling policy, which schedules a user having the \emph{highest age} (\emph{i.e.,} Scheduled UE at time $t$ $\in \arg\max_i h_i(t)$), is $O(N^2)$-competitive for minimizing the AoI. \begin{framed} \begin{theorem}[Achievability] \label{comp_ratio_ub} In the adversarial setting with $N$ users, the \textsf{MA} policy is $O(N^2)$-competitive for minimizing the average AoI. \end{theorem} \end{framed} For a proof of Theorem \ref{comp_ratio_ub}, please refer to Appendix \ref{comp_ratio_ub_proof}. On a related note, in our recent work \cite{srivastava2019minimizing}, we showed that the \textsf{MA} policy is exactly optimal for minimizing the \emph{maximum} AoI of all UEs in the stochastic setting. \subsection{A Lower bound to the competitive ratio} \label{competitive_ratio_lb} In this section, we use Yao's minimax principle to obtain a universal lower bound to the competitive ratio \eqref{comp_rat_def} in the adversarial setting. In connection with online problems, Yao's minimax principle may be stated as follows: \begin{framed} \begin{theorem}[Yao's Minimax principle \cite{albers1996competitive}] Given any online problem, the competitive ratio of the best randomized online algorithm against any oblivious adversary is equal to the competitive ratio of the best deterministic online algorithm under a worst-case input distribution.
\end{theorem} \end{framed} Using the above principle, it is clear that a lower bound to the competitive ratio of \emph{all} deterministic online algorithms under \emph{any} input channel-state distribution $\bm p$ yields a lower bound to the competitive ratio in the adversarial setting, \emph{i.e.,} \begin{eqnarray}\label{Yao_lb} \eta \geq \frac{\mathbb{E}_{\bm{\sigma} \sim \bm{p}}(\textrm{Cost of the Best Deterministic Online Policy})}{\mathbb{E}_{\bm \sigma \sim \bm p}\textrm{(Cost of OPT)}}. \end{eqnarray} To apply Yao's principle in our setting, we construct the following distribution $\bm{p}$ of the channel states: at every slot $t$, a UE is chosen independently and uniformly at random and assigned a \textsf{Good} channel. The rest of the UEs are assigned \textsf{Bad} channels. The rationale behind the above choice of the channel-state distribution will become clear when we compute \textsf{OPT}'s expected cost in Appendix \ref{comp_ratio_lb_proof}. In general, the cost of the optimal offline policy is obtained by solving the Dynamic Program \eqref{opt_dp}, which is difficult to analyze. However, with our chosen channel distribution $\bm{p}$, we see that only one UE's channel is in the \textsf{Good} state at any slot. This greatly simplifies the evaluation of \textsf{OPT}'s expected cost. The following Theorem gives the universal lower bound: \begin{framed} \begin{theorem}[Converse] \label{comp_ratio_lb} In the adversarial setup, the competitive ratio $\eta$ of any online policy with $N$ UEs is lower bounded by $\frac{N}{2}+ \frac{1}{2N}.$ Further, for $N=2$ UEs, the lower bound can be improved to $1.5.$ \end{theorem} \end{framed} Please refer to Appendix \ref{comp_ratio_lb_proof} for a proof of this Theorem.
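As an informal check on this construction (not part of the proof), one can simulate the \textsf{MA} policy against \textsf{OPT} under the distribution $\bm p$ above. Since exactly one UE has a \textsf{Good} channel per slot, any transmission to another UE fails, so serving the \textsf{Good} UE weakly dominates every other action and \textsf{OPT} can be simulated greedily without solving the DP \eqref{opt_dp}. The sketch below is our own illustration; the function name and parameters are hypothetical.

```python
import random

def avg_aoi_ma_vs_opt(n_users, horizon, seed=0):
    """Empirical cost ratio of the Max-Age policy over the clairvoyant
    optimal policy under the randomized input used in the Yao bound:
    at every slot exactly one uniformly chosen UE has a Good channel.
    Under this input, serving the unique Good UE is offline-optimal,
    so OPT needs no dynamic programming here."""
    rng = random.Random(seed)
    h_ma = [1] * n_users    # AoIs under the Max-Age policy
    h_opt = [1] * n_users   # AoIs under the clairvoyant policy
    cost_ma = cost_opt = 0
    for _ in range(horizon):
        cost_ma += sum(h_ma)
        cost_opt += sum(h_opt)
        good = rng.randrange(n_users)                        # the one Good channel
        sched = max(range(n_users), key=lambda i: h_ma[i])   # MA decision
        h_ma = [1 if (i == sched and i == good) else a + 1
                for i, a in enumerate(h_ma)]
        h_opt = [1 if i == good else a + 1 for i, a in enumerate(h_opt)]
    return cost_ma / cost_opt

ratio = avg_aoi_ma_vs_opt(n_users=4, horizon=20000)
```

For $N=4$ and a long horizon, the empirical ratio stays well above $1$, in line with the linear-in-$N$ scaling asserted by Theorem \ref{comp_ratio_lb}.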
\subsection{AoI minimization with Channel Predictions} \label{prediction} The converse result in Theorem \ref{comp_ratio_lb} states that under the adversarial channel model, \emph{any} online scheduling policy has a worst-case competitive ratio $\eta$ which grows at least linearly with the number of UEs ($N$). This is quite a disappointing result when the number of UEs is large. On the flip side, the fully adversarial channel model may also be too pessimistic in practice. To circumvent this situation, we now exploit the physical fact that wireless channels with block-fading may often be estimated quite accurately for a few subsequent future slots \cite{prediction_RHC}. We consider a relaxed adversarial model, where at any slot $t$, the BS can estimate the channels perfectly for a window of the next $w \geq 0$ slots. Here, $w$ is an adjustable system parameter that can be adaptively tuned by the policy in accordance with the time scale of variation of the channels (\emph{e.g.,} the fading block length). Similar to the adversarial model in Section \ref{online}, we continue to assume that the channel states are binary-valued and chosen by an omniscient adversary. Thus, the adversarial model discussed in Section \ref{online} is a special case of this model with window size $w=0$. We now propose the following policy, which exploits the $w$-step look-ahead information:\\ \underline{Receding Horizon Control} (\textsf{RHC}): The UE scheduled at each time $t$ is chosen by minimizing the total cost for the next $w$ time steps. Hence, the scheduling decision at time $t$ is obtained by solving the DP \eqref{opt_dp} with the boundary condition $C^*_{t+w+1}(\bm{h})=0, \forall \bm h$. \\ The \textsf{RHC} policy was considered in \cite{geo_load} in the context of load balancing in data centers. It was shown that the \textsf{RHC} policy has a competitive ratio of $1+O(\frac{1}{w})$, which approaches $1$ as the prediction window size $w$ is increased.
Since the result of \cite{geo_load} is not directly applicable to our problem, we examine the gain in AoI due to channel prediction capabilities via numerical simulations in the next section. Unsurprisingly, \textsf{RHC} reduces to the \textsf{MA} policy when the prediction window $w=0.$ \section{Numerical Simulations}\label{simulation} In this section, we perform numerical simulations to compare the performance of the \textsf{RHC} and \textsf{MA} policies in the adversarial setting. Figure \ref{comp_fig} shows the variation of the time-averaged AoI with the number of UEs for $T=500$. A Monte-Carlo simulation with $k=50$ iterations was performed with randomly generated channels, and we plotted the worst-case AoI in Figure \ref{comp_fig}(a). For each of these iterations, at every time step, the number of \textsf{Good} channels is selected uniformly at random between $1$ and $N-1$. From the plots, we see that \textsf{RHC} outperforms \textsf{MA} by a large margin even with just a small prediction window of $w=3$. Figure \ref{comp_fig}(b) shows the variation of the AoI with the window size $(w)$ for the \textsf{RHC} policy. The number of UEs is $N=5$, and the simulation is performed for $T=500$ slots. The window size is varied from $1$ to $10$. Each simulation is repeated $50$ times, and we plotted the maximum AoI value at the end of these iterations. We see that increasing the prediction window further does not significantly decrease the average AoI. \begin{figure} \hspace{-10pt} \begin{overpic}[width=0.5\textwidth]{./combined_plot2} \put(26,-2){\footnotesize{$(a)$}} \put(79,-2){\footnotesize{$(b)$}} \end{overpic} \caption{Performance comparison between the \textsf{MA} and \textsf{RHC} scheduling policies in a single BS. Figure 1(a) shows the reduction in the average AoI with as few as $w=3$ slots of channel estimates. Figure 1(b) shows the reduction in AoI achieved with $N=5$ UEs as the prediction window $w$ is increased.} \label{comp_fig} \end{figure}
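For small $N$ and $w$, one \textsf{RHC} decision can be computed by brute force directly from the finite-horizon version of the DP \eqref{opt_dp}, as in the simulations above. The following sketch is our own illustration (names and the tuple-based state encoding are ours); it returns the UE to serve at the current slot given perfectly predicted channel states over the window.

```python
def rhc_schedule(ages, window):
    """One decision of the RHC policy, by brute force in O(N^w) time.

    ages[i] is the current AoI of UE i; window[t][i] is the perfectly
    estimated channel state of UE i at look-ahead slot t (1 = Good,
    t = 0 is the current slot). The UE served now is the one minimizing
    the total AoI accumulated over the prediction window, i.e. the DP
    with the boundary condition C*_{t+w+1} = 0."""
    n = len(ages)

    def step(h, i, t):
        # serve UE i at look-ahead slot t: its age resets to 1 on success
        return tuple(1 if (j == i and window[t][j] == 1) else h[j] + 1
                     for j in range(n))

    def cost(h, t):
        # AoI accumulated from look-ahead slot t to the end of the window
        if t == len(window):
            return sum(h)
        return sum(h) + min(cost(step(h, i, t), t + 1) for i in range(n))

    return min(range(n), key=lambda i: cost(step(ages, i, 0), 1))
```

For instance, with two UEs the rule favors resetting the larger age whenever that UE's channel is predicted \textsf{Good}; the exponential cost in $w$ is acceptable only for small instances like those simulated here.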
\section{Introduction} \label{Sec_1} Let us consider the one-dimensional stochastic differential equation (SDE) \begin{align}\label{SDE_1} X_t= x_0 +\int_0^t b(X_s)ds +\int_0^t \sigma(X_s)dW_s, ~x_0 \in \mathbb{R}, ~t \in [0,T], \end{align} where $W:=(W_t)_{0\leq t \leq T}$ is a standard Brownian motion defined on a probability space $(\Omega, \mathcal{F},\mathbb{P})$ with a filtration $(\mathcal{F}_t)_{0\leq t \leq T}$ satisfying the usual conditions. Since the solution of (\ref{SDE_1}) is rarely analytically tractable, one often approximates $X=(X_t)_{0 \leq t \leq T}$ by using the Euler-Maruyama (EM) scheme given by \begin{align*} X_t^{(n)} &= x_0 +\int_0^tb\left(X_{\eta _n(s)}^{(n)}\right)ds +\int_0^t \sigma\left(X_{\eta _n(s)}^{(n)}\right) dW_s,~t \in [0,T], \end{align*} where $\eta _n(s) = kT/n=:t_k^{(n)}$ if $ s \in \left[kT/n, (k+1)T/n \right)$. It is well-known that if $b$ and $\sigma$ are Lipschitz continuous, the EM approximation for \eqref{SDE_1} converges at the strong rate of order $1/2$ (see \cite{KP}). On the other hand, when $b$ and $\sigma$ are not Lipschitz continuous, the strong rate is less well understood, and it has been a subject of extensive study. In the recent articles \cite{JMY} and \cite{HHJ}, it has been shown that for every arbitrarily slow convergence speed there exist SDEs with infinitely often differentiable and globally bounded coefficients such that neither the EM approximation nor any approximation method based on finitely many observations of the driving Brownian motion can converge in absolute mean to the solution faster than the given speed of convergence. The approximation for SDEs with possibly discontinuous drift coefficients was first studied in \cite{G98}. It is shown that if the drift satisfies the monotonicity condition and the diffusion coefficient is Lipschitz continuous, then the EM scheme converges at the rate of $1/4$ in the pathwise sense.
In \cite{HK}, the strong convergence of the EM scheme is shown for SDEs with discontinuous monotone drift coefficients. If $\sigma$ is uniformly elliptic and $(\alpha + 1/2)$-H\"older continuous, and $b$ is of locally bounded variation, it has been shown that the strong rate of the EM scheme in $L^1$-norm is $n^{-\alpha}$ for $\alpha \in (0,1/2]$ and $(\log n)^{-1}$ for $\alpha=0$ (see \cite{NT_MCOM, NT_2015_2}). The strong rate of convergence for SDEs whose drift coefficient $b$ is H\"older continuous is studied in \cite{Gyongy, MeTa, NT_2015_2}. The above mentioned papers contain just a few selected results, and a number of further and partially significantly improved approximation results for SDEs with irregular coefficients are available in the literature; see, e.g., \cite{AKU, CS, HaTsu, HJK, KLY, LeSz, MT, NT_2015_1, Y} and the references therein. In this paper we are interested in the strong approximation of SDEs with discontinuous diffusion coefficients. These SDEs appear in many applied domains such as stochastic control and quantitative finance (see \cite{CE, AI}). For such SDEs, the existence and uniqueness of solutions was studied in \cite{Nakao, LeGall, CE}; the weak convergence of the EM approximation was shown in \cite{Y}. To the best of our knowledge, the strong convergence of the EM approximation of SDEs with discontinuous diffusion coefficients has not been considered before in the literature. It is worth noting that the key ingredients to establish the strong rate of convergence of the EM approximation for SDEs with discontinuous drift are either the Krylov estimate (see \cite{KLY, Gyongy}) or the Gaussian bound estimate for the density of the numerical solution (\cite{Lemaire, NT_MCOM, NT_2015_2}). However, these estimates seem to be no longer available for SDEs with discontinuous diffusion coefficients. Therefore, in this paper we develop another method, which is based on an argument with local time, to overcome this obstacle.
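For concreteness, the EM scheme recalled above can be sketched in a few lines; this is our own illustration, with function names and parameters that are not taken from the cited literature.

```python
import math
import random

def euler_maruyama(b, sigma, x0, T, n, rng):
    """One path of the Euler-Maruyama approximation X^{(n)} of the SDE (1),
    using n equidistant steps on [0, T]; returns the terminal value."""
    dt = T / n
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment over one step
        x = x + b(x) * dt + sigma(x) * dw
    return x

# example: a discontinuous diffusion coefficient of the kind studied here
sigma_jump = lambda x: 1.0 + (1.0 if x >= 0 else 0.0)
x_T = euler_maruyama(lambda x: 0.0, sigma_jump, 0.1, 1.0, 1000, random.Random(0))
```

The scheme is well defined pathwise even when $\sigma$ is discontinuous, as in the example $\sigma(x) = 1 + {\bf 1}_{x \geq 0}$ below; quantifying its strong error in that case is precisely the subject of this paper.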
The remainder of the paper is structured as follows. In the next section, we introduce some notations and assumptions for our framework, together with the main results. All proofs are deferred to Section 3. \section{Main results} \subsection{Notations} Throughout this paper, the following notations are used. For any continuous semimartingale $Y$, we denote by $L^x_t(Y)$ the symmetric local time of $Y$ up to time $t$ at the level $x \in \mathbb{R}$ (see \cite{LeGall}). For a bounded measurable function $f$ on $\mathbb{R}$, we define $\|f\|_{\infty}:=\sup_{x \in \mathbb{R}} |f(x)|$. We denote by $L^1(\mathbb{R})$ the space of all integrable functions with respect to the Lebesgue measure on $\mathbb{R}$ with semi-norm $\|f\|_{L^1(\mathbb{R})}:=\int_{\mathbb{R}}|f(x)|dx$. For each $\beta \in (0,1]$ and $\kappa >0$, we denote by $H^{\beta, \kappa}$ the set of all functions $f:\mathbb{R} \to \mathbb{R}$ such that there exists a measurable subset $S(f)$ of $\mathbb{R}$ satisfying \begin{itemize} \item[(i)] $\|f\|_{\beta} := \|f\|_\infty+ \sup_{x<y; [x,y]\cap S(f) = \emptyset} \dfrac{|f(x)-f(y)|}{|x-y|^{\beta}} < \infty$; and \item[(ii)] $C_{\beta,\kappa}:= \sup_{K\geq 1}\sup_{\varepsilon>0} \dfrac{\lambda(S(f)^\varepsilon \cap [-K,K])}{K \varepsilon^\kappa} < +\infty$, where $\lambda$ denotes the Lebesgue measure on $\mathbb{R}$ and $S(f)^\varepsilon$ is the $\varepsilon$-neighbourhood of $S(f)$, i.e., $S(f)^\varepsilon = \{y \in \mathbb{R}: \text{ there exists } x \in S(f) \text{ such that } |x-y|\leq \varepsilon\}.$ \end{itemize} Here are some remarks on the class $H^{\beta, \kappa}$. \begin{Rem} \begin{enumerate} \item $H^{\beta, \kappa}$ is a vector space over $\mathbb{R}$, i.e., if $a, b \in \mathbb{R}$ and $f, g \in H^{\beta, \kappa}$, then $af + bg \in H^{\beta, \kappa}$.
\item A bounded function $f$ is called piecewise $\beta$-H\"older if there exist a positive constant $L$ and a sequence $-\infty = s_0 < s_1 < s_2 < \ldots < s_m < s_{m+1}= \infty$ such that $|f(u) - f(v)| \leq L|u-v|^\beta$ for any $u,v$ satisfying $s_k < u < v < s_{k+1}$. It is easy to verify that such a function satisfies $f \in H^{\beta,1}$ with $S(f) = \{s_1, \ldots, s_m\}$ and $C_{\beta, 1} \leq 2m$. \item The following $\zeta$ is a non-trivial example of a function of $H^{\beta, \kappa}$ with $\kappa < 1$. For each $\hat{\beta}, \kappa \in (0,1)$, we denote \begin{equation}\label{exp:zeta} \zeta(x) = \begin{cases} \frac{x-1}{2x-1} & \text{ if } x \leq 0,\\ 1+\frac{\log 2}{\log(n+1)}x^{\hat{\beta}} & \text{ if } (n+1)^{-1/(1-\kappa)} \leq x < n^{-1/(1-\kappa)} \text{ and } \ n \in \mathbb{N}, \\ \frac{3x+1}{x+1} &\text{ if } x \geq 1.\end{cases} \end{equation} It can be shown that $\zeta$ is a strictly increasing function with infinitely many discontinuity points, which accumulate at $0$, that $\frac{1}{2} < \zeta < 3$, and that $\zeta \in H^{\beta,\kappa}$ with $\beta = \frac{1+\hat{\beta}-\kappa}{2-\kappa}$, $S(\zeta) = \{ n^{-1/(1-\kappa)}, n = 1, 2,\ldots \}$ and $C_{\beta,\kappa} \leq 3$. \end{enumerate} \end{Rem} \subsection{Main results} We need the following assumptions on the diffusion coefficient $\sigma$. \begin{Ass}\label{Ass_1} \begin{itemize} \item[(i)] There exists a bounded and strictly increasing function $f_{\sigma}$ such that for any $x,y \in \mathbb{R}$, \begin{align*} |\sigma(x)-\sigma(y)|^2 \leq |f_{\sigma}(x)-f_{\sigma}(y)|. \end{align*} \item[(ii)] $\sigma$ is bounded and uniformly positive, i.e., there exist positive constants $\overline{\sigma}$ and $\underline{\sigma}$ such that for any $x \in \mathbb{R}$, \begin{align*} \underline{\sigma} \leq \sigma(x) \leq \overline{\sigma}.
\end{align*} \end{itemize} \end{Ass} Le Gall \cite{LeGall} has shown that if $b$ is bounded and measurable, and $\sigma$ satisfies Assumption \ref{Ass_1}, then there exists a unique strong solution to SDE \eqref{SDE_1} (see also \cite{Nakao}). We now give some remarks on Assumption \ref{Ass_1}. \begin{Rem} \begin{enumerate} \item The function $\sigma(x) = 1 + {\bf 1}_{x \geq 0}$ satisfies Assumption \ref{Ass_1} and belongs to $H^{1,1}$. \item The function $\zeta$ defined in \eqref{exp:zeta} also satisfies Assumption \ref{Ass_1}. \item If $a, b >0$ and $\sigma_1, \sigma_2$ satisfy Assumption \ref{Ass_1}, then $a\sigma_1 + b\sigma_2$ also satisfies Assumption \ref{Ass_1}. \item Let $f_1, f_2$ be two strictly increasing, piecewise $1$-H\"older functions. Let $\rho$ be a $1/2$-H\"older continuous function satisfying $0 < \inf_{x \in \mathbb{R}} \rho(x) \leq \sup_{x \in \mathbb{R}} \rho(x) < \infty$. Then $\sigma := \rho\circ (f_1-f_2)$ is piecewise $1/2$-H\"older and it satisfies Assumption \ref{Ass_1} with $f_\sigma = C(f_1 + f_2)$ for some positive constant $C$. \end{enumerate} \end{Rem} We are now in a position to state the main result of this paper. \begin{Thm} \label{Main_1} Let Assumption \ref{Ass_1} hold, and let $b, \sigma \in H^{\beta, \kappa}$ for some $\beta \in (0,1]$ and $\kappa >0$. \begin{enumerate} \item[(i)] There exists a constant $C$ such that for all $n \geq 3$, \begin{equation} \label{logn2} \sup_{0\leq t \leq T} \mathbb{E}[|X_t - X^{(n)}_t|] \leq \frac{Ce^{C\sqrt{\log \log n}}}{\log n}. \end{equation} \item[(ii)] Moreover, if $ b \in L^1(\mathbb{R})$, then there exists a constant $C$ such that for all $n \geq 3$, \begin{equation} \label{logn} \sup_{0\leq t \leq T} \mathbb{E}[|X_t - X^{(n)}_t|] \leq \frac{C}{\log n}. \end{equation} \end{enumerate} \end{Thm} The estimates \eqref{logn2} and \eqref{logn} were obtained in \cite{Gyongy, NT_MCOM, NT_2015_2} under the stronger assumption that $\sigma$ is $1/2$-H\"older continuous on $\mathbb{R}$.
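As an informal numerical companion to Theorem \ref{Main_1} (not part of the proof), the strong error for the discontinuous coefficient $\sigma(x) = 1 + {\bf 1}_{x \geq 0}$ can be estimated by coupling a coarse EM path with an EM path on a much finer grid driven by the same Brownian increments, the finer path serving as a proxy for $X$. The refinement factor, sample size, and initial condition below are illustrative choices of ours.

```python
import math
import random

def strong_error_proxy(n_coarse, n_paths=500, T=1.0, x0=0.1, seed=1):
    """Monte-Carlo proxy for E|X_T - X_T^{(n)}| for dX = sigma(X) dW with
    the discontinuous coefficient sigma(x) = 1 + 1_{x >= 0}. The exact
    solution is replaced by an EM path on a grid `mult` times finer,
    driven by the same Brownian increments (a heuristic reference, not
    the local-time argument used in the paper)."""
    sigma = lambda x: 1.0 + (1.0 if x >= 0.0 else 0.0)
    rng = random.Random(seed)
    mult = 32                        # refinement factor of the reference grid
    n_fine = n_coarse * mult
    dt = T / n_fine
    total = 0.0
    for _ in range(n_paths):
        xf = xc = x0                 # fine (reference) and coarse EM states
        acc = 0.0                    # Brownian increment over one coarse step
        for k in range(n_fine):
            dw = rng.gauss(0.0, math.sqrt(dt))
            xf += sigma(xf) * dw
            acc += dw
            if (k + 1) % mult == 0:  # a coarse step is completed
                xc += sigma(xc) * acc
                acc = 0.0
        total += abs(xf - xc)
    return total / n_paths
```

Running this for increasing $n$ shows the error proxy decreasing in $n$, consistent with the strong convergence guaranteed by the theorem.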
\section{Proof of main results} \subsection{Some auxiliary estimates} In this section, we derive a key estimate (Lemma \ref{key_sigma_0}) for proving the main theorem. We first introduce the following standard estimate (see Remark 1.2 in \cite{Gyongy}). \begin{Lem} \label{Lem_1} Suppose that $b$ and $\sigma$ are bounded and measurable. Then for any $q>0$, there exists $C_q \equiv C(q,\|b\|_{\infty}, \|\sigma\|_{\infty}, T) $ such that for all $n \in \mathbb{N}$, \begin{align*} \sup_{t \in [0,T]} \mathbb{E}[|X_t^{(n)}-X_{\eta_n(t)}^{(n)}|^q]\leq \frac{C_q}{n^{q/2}}. \end{align*} \end{Lem} The next estimate is a uniform $L^2$-bound on the local time of the solution of SDE \eqref{SDE_1} and of its EM approximation. \begin{Lem}\label{local_time} Suppose that $b$ is bounded and measurable, and $\sigma$ is measurable and satisfies Assumption \ref{Ass_1}-(ii). For each $\theta \in [0,1]$, define \begin{align*} V_t^{(n)}(\theta)&:=(1-\theta)X_t+\theta X_t^{(n)}\\ &=x_0 +\int_{0}^{t} \left\{ (1-\theta)b(X_s) + \theta b(X_{\eta_n(s)}^{(n)}) \right\} ds +\int_{0}^{t} \left\{ (1-\theta)\sigma(X_s) + \theta \sigma(X_{\eta_n(s)}^{(n)}) \right\} dW_s. \end{align*} Then it holds that \begin{align}\label{esti_local_time_0} \sup_{\theta \in [0,1], x \in \mathbb{R}} \mathbb{E}[|L_T^x(V^{(n)}(\theta))|^2] &\leq 12\|b\|_{\infty}^2T^2+ 6 \overline{\sigma}^2 T.
\end{align} \end{Lem} \begin{proof} By using the symmetric It\^o-Tanaka formula, we have \begin{align*} L_T^x(V^{(n)}(\theta)) &=|V_T^{(n)}(\theta)-x|-|x_0-x|-\int_0^T \left( {\bf 1}(V_s^{(n)}(\theta)>x)-{\bf 1}(V_s^{(n)}(\theta)<x) \right) dV_s^{(n)}(\theta)\\ &\leq |V_T^{(n)}(\theta)-x_0|+\left| \int_0^T \left( {\bf 1}(V_s^{(n)}(\theta)>x)-{\bf 1}(V_s^{(n)}(\theta)<x) \right) dV_s^{(n)}(\theta) \right|\\ &\leq 2\int_{0}^{T} \left| (1-\theta)b(X_s) + \theta b(X_{\eta_n(s)}^{(n)}) \right| ds +\left| \int_{0}^{T} \left\{ (1-\theta)\sigma(X_s) + \theta \sigma(X_{\eta_n(s)}^{(n)}) \right\} dW_s\right|\\ &+\left| \int_{0}^{T} \left( {\bf 1}(V_s^{(n)}(\theta)>x)-{\bf 1}(V_s^{(n)}(\theta)<x) \right) \left\{ (1-\theta)\sigma(X_s) + \theta \sigma(X_{\eta_n(s)}^{(n)}) \right\} dW_s\right|. \end{align*} Since $b$ and $\sigma$ are bounded, it follows from the inequality $(a+b+c)^2\leq 3(a^2+b^2+c^2)$ and the $L^2$-isometry that \begin{align*} \sup_{\theta \in [0,1], x \in \mathbb{R}} \mathbb{E}[|L_T^x(V^{(n)}(\theta))|^2] \notag &\leq 12\|b\|_{\infty}^2T^2 + 6 \sup_{\theta \in [0,1], x \in \mathbb{R}}\int_0^T \mathbb{E} \Big[ \big| (1 - \theta) \sigma(X_s) + \theta \sigma(X^{(n)}_{\eta_n(s)})\big|^2 \Big] ds\\ &\leq 12\|b\|_{\infty}^2T^2 + 6 \overline{\sigma}^2T. \end{align*} This completes the proof. \end{proof} The following lemma, which is similar to Lemma 2.2 in \cite{Y}, plays a crucial role in our argument. \begin{Lem}\label{tight_1} Assume that $b$ and $\sigma$ are bounded and measurable. For any $\varepsilon, \chi>0$ such that $\delta:=\frac{\chi \varepsilon^4}{8(T^2\|b\|_{\infty}^4+2^7\overline{\sigma}^4)}\leq T$, it holds that for any $t \geq 0$ and $n \in \mathbb{N}$, $ \mathbb{P}(\sup_{t \leq r \leq t+\delta}|X_r^{(n)}-X_t^{(n)}| \geq \varepsilon) \leq \delta \chi.$ \end{Lem} \begin{proof} Let $t\in [0,T]$ be fixed. We define $Z_s^{(n)}:=X_{t+s}^{(n)}-X_{t}^{(n)}$.
Then, using the Burkholder--Davis--Gundy inequality, it holds that for any $\delta \in [0,T]$, \begin{align*} \mathbb{E}\left[\sup_{0\leq s \leq \delta}|Z_s^{(n)}|^4\right] &\leq 8 \mathbb{E}\left[\sup_{0 \leq s \leq \delta} \left|\int_{t}^{t+s} b(X_{\eta_n(r)}^{(n)})dr\right|^4\right] +8 \mathbb{E}\left[\sup_{0 \leq s \leq \delta} \left|\int_{t}^{t+s} \sigma(X_{\eta_n(r)}^{(n)})dW_r\right|^4\right]\\ &\leq 8 \delta^3 \mathbb{E}\left[ \int_{t}^{t+\delta} \left| b(X_{\eta_n(r)}^{(n)})\right|^4 dr\right] +2^{10} \delta \mathbb{E}\left[ \int_{t}^{t+\delta} \left|\sigma(X_{\eta_n(r)}^{(n)})\right|^4dr\right]\\ &\leq 8 \|b\|^4_{\infty} \delta^4 +2^{10} \overline{\sigma}^4 \delta^2 \leq 8\left( \|b\|^4_{\infty}T^2+ 2^7 \overline{\sigma}^4 \right) \delta^2. \end{align*} Hence, for any $\varepsilon, \chi>0$ such that $\delta:=\frac{\chi \varepsilon^4}{8(T^2\|b\|_{\infty}^4+2^7\overline{\sigma}^4)} \leq T$, Markov's inequality yields \begin{align*} \mathbb{P}\left(\sup_{t \leq s \leq t+\delta}|X_s^{(n)}-X_t^{(n)}| \geq \varepsilon \right) &\leq \frac{1}{\varepsilon^4} \mathbb{E}\left[\sup_{t \leq s \leq t+\delta}|X_s^{(n)}-X_t^{(n)}|^4\right] = \frac{1}{\varepsilon^4} \mathbb{E}\left[\sup_{0 \leq s \leq \delta} |Z_s^{(n)}|^4 \right]\\ &\leq \frac{8\left( \|b\|^4_{\infty}T^2+ 2^7 \overline{\sigma}^4 \right) \delta^2}{\varepsilon^4} =\delta \chi, \end{align*} which concludes the statement. \end{proof} Lemma \ref{tight_1} directly implies the following result. \begin{Lem}\label{tight_3} Assume that $b$ and $\sigma$ are bounded and measurable. Let $(\gamma_n)_{n\in\mathbb{N}}$ be a decreasing sequence such that $\gamma_n \in (0,1]$, $\gamma_n \downarrow 0$ and $\gamma_n n^2 \to \infty$ as $n \to \infty$.
Denote $ \varepsilon_n :=\frac{\widetilde{c}}{\gamma_n^{1/4} n^{1/2}},~ \widetilde{c} :=2^{3/4}T^{1/2}\{T^2\|b\|_{\infty}^4+ 2^7 \overline{\sigma}^4\}^{1/4}, \chi_n:=\frac{\gamma_n n}{T}, \delta_n:=\frac{\chi_n \varepsilon_n^4}{8(T^2\|b\|_{\infty}^4+ 2^7 \overline{\sigma}^4)}=\frac{T}{n}.$ For each $k=0,\ldots,n-1$, we define \begin{align*} \Omega_{k,n,\varepsilon_n} :=\left\{ \omega \in \Omega \bigg| \sup_{t_k^{(n)} \leq s \leq t_{k+1}^{(n)}}|X_s^{(n)}(\omega)-X_{t_k^{(n)}}^{(n)}(\omega)| \geq \varepsilon_n \right\}. \end{align*} Then it holds that $\mathbb{P}(\Omega_{k,n,\varepsilon_n}) \leq \delta_n \chi_n = \gamma_n$. \end{Lem} Now we state a key lemma of our argument. \begin{Lem}\label{key_sigma_0} Let Assumption \ref{Ass_1}-(ii) hold and the drift coefficient $b$ be bounded and measurable. Let $f \in H^{\beta,\kappa}$ for some $\beta \in (0,1]$. Then for any $p\geq 1$ and $0< \alpha < \frac{p\beta}{2} \wedge \frac{2\kappa}{\kappa+4}$, there exists a positive constant $C_p^*(f)= C^*(p,\alpha, \beta, \kappa,T,x_0,\|f\|_\beta, C_{\beta,\kappa}, \|b\|_\infty, \overline{\sigma}, \underline{\sigma})$ which does not depend on $n$ such that for each $n \geq 3$, \begin{align} \label{eqnL0} \int_{0}^{T}\mathbb{E}\left[\left|f(X_s^{(n)})-f(X_{\eta_n(s)}^{(n)})\right|^p \right]ds \leq \frac{C_p^*(f)}{n^\alpha \log n}.
\end{align} \end{Lem} \begin{proof} From Lemma \ref{tight_3} and the boundedness of $f$, it holds that \begin{align}\label{key_sigma_1} &\int_{0}^{T}\mathbb{E}\left[\left|f(X_s^{(n)})-f(X_{\eta_n(s)}^{(n)}) \right|^p\right]ds \notag \\ &=\sum_{k=0}^{n-1}\int_{t_{k}^{(n)}}^{t_{k+1}^{(n)}}\mathbb{E}\left[\left|f(X_s^{(n)})-f(X_{t_{k}^{(n)}}^{(n)}) \right|^p \left({\bf 1}_{\Omega_{k,n,\varepsilon_n}}+{\bf 1}_{\Omega_{k,n,\varepsilon_n}^c} \right)\right]ds \notag\\ &\leq 2^p\|f\|_{\infty}^pT \gamma_n +\sum_{k=0}^{n-1}\int_{t_{k}^{(n)}}^{t_{k+1}^{(n)}}\mathbb{E}\left[\left|f(X_s^{(n)})-f(X_{t_{k}^{(n)}}^{(n)}) \right|^p {\bf 1}_{\Omega_{k,n,\varepsilon_n}^c} \right]ds. \end{align} We estimate the second term of \eqref{key_sigma_1} as follows \begin{align} &\sum_{k=0}^{n-1}\int_{t_{k}^{(n)}}^{t_{k+1}^{(n)}}\mathbb{E}\left[\left|f(X_s^{(n)})-f(X_{t_{k}^{(n)}}^{(n)}) \right|^p {\bf 1}_{\Omega_{k,n,\varepsilon_n}^c} \right]ds \notag\\ =& \sum_{k=0}^{n-1}\int_{t_{k}^{(n)}}^{t_{k+1}^{(n)}}\mathbb{E}\left[\left|f(X_s^{(n)})-f(X_{t_{k}^{(n)}}^{(n)}) \right|^p {\bf 1}_{\Omega_{k,n,\varepsilon_n}^c} {\bf 1}_{X^{(n)}_s\in S^{\varepsilon_n}(f)}\right]ds \notag \\ & \qquad + \sum_{k=0}^{n-1}\int_{t_{k}^{(n)}}^{t_{k+1}^{(n)}}\mathbb{E}\left[\left|f(X_s^{(n)})-f(X_{t_{k}^{(n)}}^{(n)}) \right|^p {\bf 1}_{\Omega_{k,n,\varepsilon_n}^c} {\bf 1}_{X^{(n)}_s \not \in S^{\varepsilon_n}(f)}\right]ds. \label{eqnL1} \end{align} On the set $\Omega_{k,n,\varepsilon_n}^c \cap \big\{ X^{(n)}_s \not \in S^{\varepsilon_n}(f)\big\}$, it holds that $S(f) \cap [X^{(n)}_s \wedge X^{(n)}_{t^{(n)}_k}, X^{(n)}_s \vee X^{(n)}_{t^{(n)}_k}] =\emptyset$, thus, $$ \left|f(X_s^{(n)})-f(X_{t_{k}^{(n)}}^{(n)}) \right|^p {\bf 1}_{\Omega_{k,n,\varepsilon_n}^c} {\bf 1}_{X^{(n)}_s \not \in S^{\varepsilon_n}(f)} \leq \|f\|_\beta^p \left|X_s^{(n)}- X_{t_{k}^{(n)}}^{(n)} \right|^{p\beta}. 
$$ This implies that the second term of \eqref{eqnL1} is bounded by \begin{align} \label{eqnL2} \|f\|_\beta^p \sum_{k=0}^{n-1}\int_{t_{k}^{(n)}}^{t_{k+1}^{(n)}}\mathbb{E}\left[ \left|X_s^{(n)}- X_{t_{k}^{(n)}}^{(n)} \right|^{p\beta} \right]ds \leq \|f\|_\beta^p T C_{p\beta}n^{-p\beta/2}, \end{align} where the last inequality follows from Lemma \ref{Lem_1}. For each constant $K_n \geq 1\vee (|x_0|+ T\|b\|_\infty)$, the first term of \eqref{eqnL1} is bounded by \begin{align} &2^p\|f\|_\infty^p \sum_{k=0}^{n-1}\int_{t_{k}^{(n)}}^{t_{k+1}^{(n)}} \Big( \mathbb{E}\left[ {\bf 1}_{X^{(n)}_s\in S^{\varepsilon_n}(f)\cap[-K_n,K_n]}\right]+ \mathbb{E}\left[ {\bf 1}_{X^{(n)}_s\in S^{\varepsilon_n}(f)\backslash[-K_n,K_n]}\right]\Big)ds \notag \\ \leq & 2^p\|f\|_\infty^p \int_0^T \mathbb{E}\left[ {\bf 1}_{X^{(n)}_s\in S^{\varepsilon_n}(f)\cap[-K_n,K_n]}\right]ds + 2^p\|f\|_{\infty}^p\int_0^T \mathbb{E}\left[ {\bf 1}_{|X^{(n)}_s| \geq K_n}\right]ds. \label{eqnL3} \end{align} Since $\sigma$ is uniformly elliptic, we have $ \langle X^{(n)}\rangle_t \geq \underline{\sigma}^2 t$, and we obtain \begin{align*} \int_0^T \mathbb{E}\left[ {\bf 1}_{X^{(n)}_s\in S^{\varepsilon_n}(f)\cap[-K_n,K_n]}\right]ds &\leq \underline{\sigma}^{-2} \mathbb{E}\left[ \int_0^T {\bf 1}_{X^{(n)}_s\in S^{\varepsilon_n}(f)\cap[-K_n,K_n]} d\langle X^{(n)}\rangle_s\right] \\ & = \underline{\sigma}^{-2} \mathbb{E}\left[ \int_\mathbb{R} {\bf 1}_{ S^{\varepsilon_n}(f)\cap[-K_n,K_n]}(x)L_T^{x}(X^{(n)})dx \right], \end{align*} where the last equality follows from the occupation time formula.
Moreover, it follows from Lemma \ref{local_time} that \begin{align*} \mathbb{E}\left[ \int_\mathbb{R} {\bf 1}_{S^{\varepsilon_n}(f)\cap[-K_n,K_n]}(x) L_T^{x}(X^{(n)})dx \right] &\leq \int_\mathbb{R} {\bf 1}_{S^{\varepsilon_n}(f)\cap[-K_n,K_n]}(x) \mathbb{E}[L_T^{x}(X^{(n)})]dx \\ &\leq \sup_{x \in \mathbb{R}} \mathbb{E}[L_T^{x}(X^{(n)})] \lambda\Big(S^{\varepsilon_n}(f)\cap[-K_n,K_n]\Big)\\ &\leq \{12\|b\|_{\infty}^2T^2+ 6 \overline{\sigma}^2 T\}^{1/2} C_{\beta,\kappa} K_n\varepsilon_n^\kappa. \end{align*} Now we consider the second term of \eqref{eqnL3}. For each $s \in [0,T]$, \begin{align*} \mathbb{E}\left[ {\bf 1}_{|X^{(n)}_s| \geq K_n}\right] &\leq \mathbb{P} \Big( \Big| \int_0^s \sigma(X^{(n)}_{\eta_n(u)}) dW_u\Big| \geq K_n - \Big| x_0+ \int_0^s b(X^{(n)}_{\eta_n(u)}) du\Big|\Big)\\ &\leq \mathbb{P} \Big( \Big| \int_0^s \sigma(X^{(n)}_{\eta_n(u)}) dW_u\Big| \geq K_n - \|b\|_\infty T - |x_0|\Big). \end{align*} Since $\langle \int_{0}^{\cdot} \sigma(X_{\eta_n(s)}^{(n)})dW_s \rangle_t \leq \overline{\sigma}^2 T$ almost surely, from Proposition 6.8 of \cite{Shigekawa} and the inequality $(a-b)^2 \geq a^2/2-b^2$ for any $a,b \in \mathbb{R}$, we have \begin{align*} &\mathbb{P}\left(\sup_{0 \leq t \leq T} \left| \int_{0}^{t} \sigma(X_{\eta_n(s)}^{(n)})dW_s \right|\geq K_n -\|b\|_{\infty}T - |x_0|\right) \notag\\ &\leq 2\exp \left(-\frac{(K_n-|x_0|-\|b\|_{\infty}T)^2}{2\overline{\sigma}^2T}\right) \leq 2\exp \left(\frac{(|x_0|+\|b\|_{\infty}T)^2}{2\overline{\sigma}^2T}\right) \exp\left(-\frac{K_n^2}{4\overline{\sigma}^2T}\right). \end{align*} This implies \begin{align} \label{key_sigma_13} \int_0^T \mathbb{E}\left[ {\bf 1}_{|X^{(n)}_s| \geq K_n}\right]ds \leq 2T\exp \left(\frac{(|x_0|+\|b\|_{\infty}T)^2}{2\overline{\sigma}^2T}\right) \exp\left(-\frac{K_n^2}{4\overline{\sigma}^2T}\right). 
\end{align} Gathering together the estimates \eqref{key_sigma_1}--\eqref{key_sigma_13}, we get \begin{align} \int_{0}^{T}\mathbb{E}\left[\left|f(X_s^{(n)})-f(X_{\eta_n(s)}^{(n)})\right|^p \right]ds \leq & 2^p\|f\|_{\infty}^pT \gamma_n + \|f\|_\beta^p TC_{p\beta}n^{-p\beta/2} \notag\\ & + 2^p \|f\|_{\infty}^p \underline{\sigma}^{-2} \{12\|b\|_{\infty}^2T^2+6 \overline{\sigma}^2 T\}^{1/2} C_{\beta,\kappa} K_n\varepsilon_n^\kappa \notag\\ & + 2^{p+1} \|f\|_{\infty}^p T\exp \left(\frac{(|x_0|+\|b\|_{\infty}T)^2}{2\overline{\sigma}^2T}\right) \exp\left(-\frac{K_n^2}{4\overline{\sigma}^2T}\right). \label{eqnL6} \end{align} For each $0< \alpha < \frac{p\beta}{2} \wedge \frac{2\kappa}{\kappa+4}$, by choosing $K_n = (1+|x_0|+T\|b\|_\infty + 2\overline{\sigma}\sqrt{T\alpha}) \sqrt{\log n}$ and $\gamma_n = \frac{1}{n^\alpha \log n}$, we obtain \eqref{eqnL0} from \eqref{eqnL6}. \end{proof} \subsection{Method of removal of drift} The following drift-removal transformation plays a crucial role in our argument. Suppose that $b \in L^1(\mathbb{R})$. The function $\varphi (x) := \int_0^x \exp\Big(-2 \int_0^y \frac{b(z)}{\sigma^2(z)} dz \Big) dy$ is well-defined since $\sigma^2$ is uniformly elliptic. Define $Y_t:=\varphi(X_t)$ and $Y_t^{(n)}:=\varphi(X_t^{(n)})$. Then by It\^o's formula we have \begin{align*} Y_t = \varphi(x_0) + \int_0^t \varphi'(X_s) \sigma(X_s)dW_s, \end{align*} and \begin{align*} Y_t^{(n)} = \varphi(x_0) +\int_0^t \left( \varphi'(X_s^{(n)}) b(X_{\eta_n(s)}^{(n)})+\frac{1}{2}\varphi''(X_s^{(n)}) \sigma^2(X_{\eta_n(s)}^{(n)}) \right) ds + \int_0^t \varphi'(X_s^{(n)}) \sigma(X_{\eta_n(s)}^{(n)})dW_s. \end{align*} To simplify the notation, we denote $K_\sigma = \overline{\sigma} \vee \underline{\sigma}^{-1}$ and $C_0 = e^{2K_\sigma^2 \|b\|_{L^1(\mathbb{R})}}$. We will make repeated use of the following elementary lemma. \begin{Lem}(\cite{NT_2015_2}) \label{PDE_2} Suppose that $b \in L^1(\mathbb{R})$ and Assumption \ref{Ass_1}-(ii) holds.
\begin{itemize} \item[(i)] For any $x \in \mathbb{R}$, $ C_0^{-1} \leq \varphi'(x)=\exp\Big(-2 \int_0^x \frac{b(z)}{\sigma^2(z)} dz \Big) \leq C_0.$ \item[(ii)] For any $x \in \mathbb{R}$, $|\varphi''(x)| \leq 2K_\sigma^2 \|b\|_{\infty} \|\varphi'\|_{\infty} \leq 2\|b\|_\infty K_\sigma^2 C_0.$ \item[(iii)] For any $z,w \in Dom(\varphi^{-1})$, \begin{align}\label{PDE_4} |\varphi^{-1}(z)-\varphi^{-1}(w)| \leq C_0 |z-w|. \end{align} \end{itemize} \end{Lem} \subsection{Yamada and Watanabe approximation technique} Under Assumption 2.2, by using the Yamada--Watanabe approximation technique, Le Gall \cite{LeGall} showed that pathwise uniqueness holds for SDE \eqref{SDE_1}. We also use this technique to prove the main result (see \cite{Yamada} or \cite{Gyongy}). For each $\delta \in (1,\infty)$ and $\varepsilon \in (0,1)$, we define a continuous function $\psi _{\delta, \varepsilon}: \mathbb{R} \to \mathbb{R}^+$ with $\text{supp}\: \psi _{\delta, \varepsilon} \subset [\varepsilon/\delta, \varepsilon]$ such that $\int_{\varepsilon/\delta}^{\varepsilon} \psi _{\delta, \varepsilon}(z) dz = 1 \text{ and } 0 \leq \psi _{\delta, \varepsilon}(z) \leq \frac{2}{z \log \delta}, \:\:\:z > 0.$ Since $\int_{\varepsilon/\delta}^{\varepsilon} \frac{2}{z \log \delta} dz=2$, there exists such a function $\psi_{\delta, \varepsilon}$.
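For concreteness, one admissible construction (a sketch; the argument below only uses the stated properties of $\psi_{\delta,\varepsilon}$) is the following. Set $s:=(\log \delta)/4$ and let $\chi$ be the continuous piecewise linear function which vanishes outside $(\varepsilon/\delta, \varepsilon)$, equals $1$ on $[e^{s}\varepsilon/\delta,\, e^{-s}\varepsilon]$ and is linear in between. Since
\begin{align*}
I:=\int_{\varepsilon/\delta}^{\varepsilon} \frac{2\chi(z)}{z \log \delta}\, dz \geq \int_{e^{s}\varepsilon/\delta}^{e^{-s}\varepsilon} \frac{2}{z \log \delta}\, dz = \frac{2(\log \delta - 2s)}{\log \delta} = 1,
\end{align*}
the function $\psi_{\delta,\varepsilon}(z):=\frac{2\chi(z)}{I z \log \delta}$ is continuous, supported in $[\varepsilon/\delta, \varepsilon]$, integrates to $1$, and satisfies $\psi_{\delta,\varepsilon}(z) \leq \frac{2}{z\log\delta}$ because $I \geq 1$.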
We define a function $\phi_{\delta, \varepsilon} \in C^2(\mathbb{R};\mathbb{R})$ by $\phi_{\delta, \varepsilon}(x):=\int_0^{|x|}\int_0^y \psi _{\delta, \varepsilon}(z)dzdy.$ It is easy to verify that $\phi_{\delta, \varepsilon}$ has the following useful properties: \begin{align} &|x| \leq \varepsilon + \phi_{\delta, \varepsilon}(x), \text{ for any $x \in \mathbb{R} $}, \label{phi3}\\ &0 \leq |\phi'_{\delta, \varepsilon}(x)| \leq 1, \text{ for any $x \in \mathbb{R}$} \label{phi2}, \\ &\phi''_{\delta, \varepsilon}(\pm|x|)=\psi_{\delta, \varepsilon}(|x|) \leq \frac{2}{|x|\log \delta}{\bf 1}_{[\varepsilon/\delta, \varepsilon]}(|x|), \text{ for any $x \in \mathbb{R} \setminus\{0\}$}. \label{phi4} \end{align} From \eqref{PDE_4} and \eqref{phi3}, for any $t \in [0,T]$, we have \begin{align}\label{esti_X1} |X_t-X_t^{(n)}| \leq C_0 |Y_t-Y_t^{(n)}| \leq C_0 \left( \varepsilon + \phi_{\delta,\varepsilon}(Y_t-Y_t^{(n)}) \right). \end{align} Using It\^o's formula, we have \begin{align}\label{esti_X2} \phi_{\delta,\varepsilon}(Y_t-Y_t^{(n)}) =M_t^{n,\delta,\varepsilon} +I_t^{(n)} +J_t^{(n)}, \end{align} where \begin{align*} M_t^{n,\delta,\varepsilon} &:=\int_0^t \phi'_{\delta,\varepsilon}(Y_s-Y_s^{(n)}) \left\{ \varphi'(X_s)\sigma(X_s) - \varphi'(X_s^{(n)}) \sigma(X_{\eta_n(s)}^{(n)}) \right\}dW_s,\\ I_t^{(n)} &:=-\int_0^t \phi'_{\delta,\varepsilon}(Y_s-Y_s^{(n)}) \left\{ \varphi'(X_s^{(n)})b(X_{\eta_n(s)}^{(n)}) +\frac{1}{2} \varphi''(X_s^{(n)}) \sigma^2(X_{\eta_n(s)}^{(n)}) \right\} ds,\\ J_t^{(n)} &:=\frac{1}{2}\int_0^t \phi''_{\delta,\varepsilon}(Y_s-Y_s^{(n)}) \left| \varphi'(X_s) \sigma(X_s) - \varphi'(X_s^{(n)}) \sigma(X_{\eta_n(s)}^{(n)}) \right|^2 ds. \end{align*} \subsection{Proof of Theorem \ref{Main_1}} We will only present the detailed proof for the case $b \in L^1(\mathbb{R})$. The proof for the case $b \not \in L^1(\mathbb{R})$ is based on the localisation technique given in \cite{NT_2015_2} and is therefore omitted.
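Before estimating the terms $M^{n,\delta,\varepsilon}$, $I^{(n)}$ and $J^{(n)}$ above, we record the one-line verification of \eqref{phi3}: for $|x| \geq \varepsilon$, since $\int_0^y \psi_{\delta,\varepsilon}(z)\,dz = 1$ for every $y \geq \varepsilon$,
\begin{align*}
\phi_{\delta,\varepsilon}(x) = \int_0^{|x|}\int_0^y \psi_{\delta,\varepsilon}(z)\,dz\,dy \geq \int_{\varepsilon}^{|x|}\int_0^y \psi_{\delta,\varepsilon}(z)\,dz\,dy = |x|-\varepsilon,
\end{align*}
while for $|x| < \varepsilon$ the bound \eqref{phi3} is trivial because $\phi_{\delta,\varepsilon} \geq 0$.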
We fix $n \geq 3$ and a constant $0< \alpha < \frac{\beta}{2} \wedge \frac{2\kappa}{\kappa+4}$. We first consider $I_t^{(n)}$. Since $\varphi'' = - \frac{2b\varphi'}{\sigma^2}$, \begin{align} |I_t^{(n)}| &\leq \int_0^T \left|\phi'_{\delta,\varepsilon}(Y_s-Y_s^{(n)}) \varphi'(X_s^{(n)}) \right| \left|b(X_{\eta_n(s)}^{(n)}) - \frac{b(X_s^{(n)}) \sigma^2(X_{\eta_n(s)}^{(n)})}{\sigma^2(X_s^{(n)})} \right| ds. \notag \end{align} Thanks to Lemma \ref{PDE_2} and the estimate \eqref{phi2}, we have \begin{align} |I_t^{(n)}| &\leq K_\sigma^2 C_0 \int_0^T \left|b(X_{\eta_n(s)}^{(n)}) \sigma^2(X_s^{(n)}) - b(X_s^{(n)}) \sigma^2(X_{\eta_n(s)}^{(n)}) \right| ds \notag\\ &\leq K_\sigma^2 C_0 \int_0^T \left\{ K_\sigma^2 \left|b(X_s^{(n)}) - b(X_{\eta_n(s)}^{(n)}) \right| +\|b\|_{\infty} \left| \sigma^2(X_s^{(n)}) - \sigma^2(X_{\eta_n(s)}^{(n)}) \right| \right\}ds. \notag \end{align} It follows from Lemma \ref{key_sigma_0} that \begin{equation} \label{eqnL7} \mathbb{E}[|I_t^{(n)}|] \leq \frac{C_I}{n^\alpha \log n}, \end{equation} where $C_I:=K_{\sigma}^2 C_0\{K_{\sigma}^2 C_1^*(b)+2\|b\|_{\infty} \overline{\sigma} C_1^*(\sigma) \}$. Now we estimate $J_t^{(n)}$.
From \eqref{phi4}, we have \begin{align*} J_t^{(n)} &\leq \int_0^T \frac{{\bf 1}_{[\varepsilon/\delta,\varepsilon]}(|Y_s-Y_s^{(n)}|)}{|Y_s-Y_s^{(n)}| \log \delta} \left| \varphi'(X_s) \sigma(X_s) - \varphi'(X_s^{(n)}) \sigma(X_{\eta_n(s)}^{(n)}) \right|^2 ds\\ &\leq 3(J_T^{1,n}+J_T^{2,n}+J_T^{3,n}), \end{align*} where \begin{align*} J_t^{1,n} &:= \int_0^t \frac{{\bf 1}_{[\varepsilon/\delta,\varepsilon]}(|Y_s-Y_s^{(n)}|)}{|Y_s-Y_s^{(n)}| \log \delta} |\sigma(X_s)|^2 \left| \varphi'(X_s) - \varphi'(X_s^{(n)}) \right|^2 ds,\\ J_t^{2,n} &:=\int_0^t \frac{{\bf 1}_{[\varepsilon/\delta,\varepsilon]}(|Y_s-Y_s^{(n)}|)}{|Y_s-Y_s^{(n)}| \log \delta} |\varphi'(X_s^{(n)})|^2 \left| \sigma(X_s) - \sigma(X_s^{(n)}) \right|^2 ds, \\ J_t^{3,n} &:=\int_0^t \frac{{\bf 1}_{[\varepsilon/\delta,\varepsilon]}(|Y_s-Y_s^{(n)}|)}{|Y_s-Y_s^{(n)}| \log \delta} |\varphi'(X_s^{(n)})|^2 \left| \sigma(X_s^{(n)}) - \sigma(X_{\eta_n(s)}^{(n)}) \right|^2 ds. \end{align*} From Lemma \ref{PDE_2}-(ii), $\varphi'$ is Lipschitz continuous with Lipschitz constant $\|\varphi''\|_{\infty}$. Hence, we have \begin{align}\label{esti_J1} J_T^{1,n} &\leq \frac{K_\sigma^2 \|\varphi''\|_{\infty}^2}{\log \delta} \int_0^T \frac{{\bf 1}_{[\varepsilon/\delta,\varepsilon]}(|Y_s-Y_s^{(n)}|)}{|Y_s-Y_s^{(n)}|} \left| X_s - X_s^{(n)} \right|^2 ds \notag\\ &\leq \frac{K_\sigma^2 \|\varphi''\|_{\infty}^2 C_0^2}{\log \delta} \int_0^T{\bf 1}_{[\varepsilon/\delta,\varepsilon]}(|Y_s-Y_s^{(n)}|) \left| Y_s - Y_s^{(n)} \right| ds \notag\\ &\leq \frac{C_{J,1} \varepsilon}{\log \delta}, \end{align} where $C_{J,1}:=4K_\sigma^6 C_0^4 \|b\|_\infty^2 T$. Next we consider $J_T^{2,n}$. 
We first note that by \eqref{PDE_4}, $$J_T^{2,n} \leq \frac{C^3_0}{\log \delta} \int_0^T \frac{\left| \sigma(X_s) - \sigma(X_s^{(n)}) \right|^2}{|X_s-X_s^{(n)}|} {\bf 1}_{|X_s- X^{(n)}_s|\geq \varepsilon/(C_0\delta)} ds.$$ Recall that by Assumption \ref{Ass_1}-(i), there exists a bounded and strictly increasing function $f_{\sigma} : \mathbb{R} \to \mathbb{R}$ such that for any $x,y \in \mathbb{R}$, \begin{align*} |\sigma(x)-\sigma(y)|^2 \leq |f_{\sigma}(x)-f_{\sigma}(y)|. \end{align*} We consider an approximation $f_{\sigma,\ell} \in C^1(\mathbb{R})$ of $f_{\sigma}$ which is also a strictly increasing function and satisfies $\|f_{\sigma, \ell}\|_{\infty} \leq \|f_{\sigma}\|_{\infty}$ and $f_{\sigma,\ell} \uparrow f_{\sigma}$ as $\ell \to \infty$ on $\mathbb{R}$. Then by using Fatou's lemma and the mean value theorem, we have \begin{align}\label{pr_1_5} J_T^{2,n} &\leq \frac{C^3_0}{\log \delta} \int_0^T \frac{|f_{\sigma}(X_s)-f_{\sigma}(X_s^{(n)})|}{|X_s-X_s^{(n)}|} {\bf 1}_{|X_s- X^{(n)}_s|\geq \varepsilon/(C_0\delta)} ds \notag\\ &\leq \liminf_{\ell \to \infty} \frac{C^3_0}{\log \delta} \int_0^T \frac{|f_{\sigma, \ell}(X_s)-f_{\sigma, \ell}(X_s^{(n)})| }{|X_s-X_s^{(n)}|} {\bf 1}_{|X_s- X^{(n)}_s|\geq \varepsilon/(C_0\delta)} ds \notag\\ &\leq \liminf_{\ell \to \infty} \frac{C_0^3}{\log \delta} \int_0^T ds \int_0^1 d\theta f'_{\sigma, \ell}(V_s^{(n)}(\theta)), \end{align} where $V^{(n)}(\theta)=(V_t^{(n)}(\theta))_{0 \leq t \leq T}$ is defined in Lemma \ref{local_time}.
Since $\sigma \geq \underline{\sigma}$, the quadratic variation of $V^{(n)}(\theta)$ satisfies \begin{align*} \langle V^{(n)}(\theta) \rangle_t = \int_{0}^{t} \left\{ (1-\theta)\sigma(X_s) + \theta \sigma(X_{\eta_n(s)}^{(n)}) \right\}^2 ds \geq \underline{\sigma}^2 t, \end{align*} which implies \begin{align* \int_0^T ds \int_0^1 d \theta f'_{\sigma, \ell}(V_s^{(n)}(\theta)) &\leq \underline{\sigma}^{-2} \int_0^1 d \theta \int_0^T d \langle V^{(n)}(\theta) \rangle_{s} f'_{\sigma, \ell}(V_s^{(n)}(\theta)) \notag\\ &= \underline{\sigma}^{-2} \int_{\mathbb{R}} dx f'_{\sigma, \ell}(x) \int_0^1 d \theta L_T^x(V^{(n)}(\theta)), \end{align*} where the last equality follows from the occupation time formula. Using Lemma \ref{local_time} and the estimate $\|f'_{\sigma, \ell}\|_{L^1(\mathbb{R})} \leq 2 \|f_{\sigma, \ell}\|_{\infty} \leq 2 \|f_{\sigma} \|_{\infty}$, we have \begin{align*} \mathbb{E}\left[ \int_0^T ds \int_0^1 d \theta f'_{\sigma, \ell}(V_s^{(n)}(\theta)) \right] &\leq \underline{\sigma}^{-2} \int_{\mathbb{R}} dx f'_{\sigma, \ell}(x) \int_0^1 d \theta \mathbb{E} [L_T^x(V^{(n)}(\theta))] \\ &\leq \underline{\sigma}^{-2} \|f'_{\sigma, \ell}\|_{L^{1}(\mathbb{R})} \sup_{\theta \in [0,1], x \in \mathbb{R}} \mathbb{E} [|L_T^x(V^{(n)}(\theta))|^2]^{1/2}\\ &\leq 2 \underline{\sigma}^{-2} \|f_{\sigma}\|_{\infty} \{ 12\|b\|_{\infty}^2T^2+6 \overline{\sigma}^2 T\}^{1/2}. \end{align*} By plugging this estimate into \eqref{pr_1_5} and using Fatou's lemma, we obtain the following estimate for the expectation of $J^{2,n}_T$: \begin{align}\label{pr_1_7} \mathbb{E}[J^{2,n}_T] &\leq \frac{C_{J,2}}{\log \delta}, \end{align} where $C_{J,2}:=2C_0^3\underline{\sigma}^{-2} \|f_{\sigma}\|_{\infty} \{ 12\|b\|_{\infty}^2T^2+6 \overline{\sigma}^2 T\}^{1/2}$. Finally, we estimate $J^{3,n}_T$ as follows \begin{align*} \mathbb{E}[J^{3,n}_T] \leq \frac{C_0^2 \delta}{\varepsilon \log \delta} \int_0^T \mathbb{E}\Big[\left|\sigma(X^{(n)}_s)- \sigma(X^{(n)}_{\eta_n(s)})\right|^2\Big]ds.
\end{align*} Applying Lemma \ref{key_sigma_0}, we get \begin{equation} \label{eqnL8} \mathbb{E}[J^{3,n}_T] \leq \frac{\delta}{\varepsilon \log \delta} \frac{C_{J,3}}{n^\alpha \log n}, \end{equation} where $C_{J,3}:=C_0^2 C_2^*(\sigma)$. Since $\mathbb{E} [M_t^{n,\delta, \varepsilon}] = 0$, it follows from \eqref{esti_X1}--\eqref{eqnL8} that there exists a positive constant $C$ which does not depend on $n$ such that \begin{align*} \sup_{0\leq t \leq T}\mathbb{E}[|X_t - X^{(n)}_t|] \leq C \Big( \varepsilon + \frac{1}{n^\alpha \log n} + \frac{\varepsilon}{\log \delta} + \frac{1}{\log \delta} + \frac{\delta}{\varepsilon \log \delta} \frac{1}{n^\alpha \log n}\Big). \end{align*} By choosing $\varepsilon = \frac{1}{\log n}$ and $\delta = n^\alpha$, we obtain the desired result. \qed \section*{Acknowledgements} The authors thank Arturo Kohatsu-Higa, Miguel Martinez and Toshio Yamada for their helpful comments. This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED). The second author was supported by JSPS KAKENHI Grant Number 16J00894.
\section{Conclusion} \label{sec:Conclusion} In this paper, we proposed novel machine learning-based self-compensating approximate accelerators for enhancing the efficiency of approximate computing applications. In contrast to state-of-the-art error reduction methodologies, the proposed generic self-compensating methodology offers an opportunity for error reduction without requiring similar approximate computing elements. The proposed decision tree-based compensation module, illustrated through approximate accelerators, achieves a noteworthy enhancement in accuracy without compromising power consumption or speed. This work yields significant new insights into the potential of approximate computing in complex hardware designs, and can guide designers in addressing the challenging problem of error reduction. For future work, we aim to investigate complex accelerators with heterogeneous arithmetic components, considering error metrics other than ED, as well as other error-tolerant applications. Machine learning-based models other than decision trees may also be investigated. \section{Introduction} \label{sec:introduction} Dedicated hardware accelerators are extensively being advocated for use in complex heterogeneous systems-on-chip to process large data more efficiently than pure software processing \cite{HwAcc}. Moreover, hardware accelerators have reduced power consumption, reduced latency and increased parallelism. These features make them quite suitable for image and digital signal processing (DSP) applications. Approximate computing (AC) or best-effort computing \cite{AC1} is being adopted as a new design paradigm, in both hardware and software \cite{S1}, for error-resilient applications, due to the increased benefits of approximation, i.e., simplified circuit design with reduced silicon area, delay and power consumption.
Several designs of approximate arithmetic components, e.g., adders \cite{XORFA}, dividers \cite{Div2} and multipliers \cite{MasadehGLS}, have been presented. Such approximate components are integrated to form \textit{approximate hardware accelerators} (AxAcc), which are suitable for error-tolerant, computationally intensive applications, e.g., big-data and image processing. These applications can tolerate error due to the following factors \cite{ComputingEfficiency}: 1) the lack of a unique, golden result, where a range of results are equally acceptable, 2) no guarantee or need to find the best solution, where a good-enough result is sufficient, 3) the input data is noisy with an iterative-refinement nature, and 4) a reduced quality is tolerable due to perceptual, i.e., visual or hearing, human limitations. The approximation error persists permanently during the entire lifetime of the \textit{approximate hardware accelerators} (AxAcc). Thus, when high error cannot be afforded, it is necessary to develop techniques that alleviate the approximation error and enhance the accuracy with minimal overhead. Therefore, it is crucial to tackle this issue at the early design stage and change the architecture of \textit{approximate hardware accelerators} by building a lightweight internal error compensation/recovery module with minimal overhead, i.e., area, power and delay. Despite the unprecedented power saving and reduced execution time introduced by design approximation, it is still an immature computing paradigm \cite{AxCTesting}, where, to the best of our knowledge, a formal model of the impact of approximation on the accuracy metric is still missing \cite{AxCSecurity}. Moreover, the accuracy of approximate designs is highly \textit{input-dependent} \cite{ErrorTR}, and relatively little is known about enhancing the accuracy of approximation in a disciplined manner.
In this paper, we propose a novel machine learning (ML)-based self-compensating approximate accelerator, aiming to improve the accuracy of the approximated results. There is no clear relationship between the inputs of approximate accelerators and their errors. Therefore, such accelerators are designed by employing an ML-based compensation module to capture the input dependency of the error. This leads to a noteworthy reduction in error magnitude, with negligible overhead. As a proof of concept, we consider \textit{approximate hardware accelerators} with 8-bit approximate array multipliers \cite{MasadehGLS}. Such accelerators have 9 bits of the results being approximated. Also, they utilize full adder (FA) cells known as approximate mirror adder 5 (\textit{AMA5}) \cite{Vaibhav}, which provide a simplified design with reduced area, power and delay. The challenge is to build an efficient compensation module which considers the value of the inputs. Thus, machine learning techniques are used to capture such dependency. Finally, we consider an image blending application, where two images are multiplied pixel-by-pixel, to demonstrate a practical application of \textit{self-compensating approximate hardware accelerators}. The rest of the paper is structured as follows: Section \ref{sec:RelatedWork} introduces the related work. Section \ref{sec:Methodology} explains our proposed methodology to enhance the accuracy of approximate hardware accelerators. The obtained results utilizing image processing are described in Section \ref{sec:Results}. Section \ref{sec:Conclusion} concludes the paper and highlights the future work. \section{Methodology} \label{sec:Methodology} In a self-compensating approximate accelerator, we propose to integrate an input-dependent compensation module in such a way that the accumulated error is reduced. The design of a simplified accelerator with two approximate multipliers is shown in Figure \ref{fig:Accelerator}(a).
The magnitude of error $e1$ depends on inputs \textit{A} and \textit{B}, while the magnitude of error $e2$ depends on inputs \textit{C} and \textit{D}. In general, $e1 \neq e2$ unless $\lbrace A, B \rbrace = \lbrace C,D \rbrace$. It is important to note that most of the previous work did not consider the input dependency of the approximation error. The final accelerator error is $e = e1 + e2$, whose maximum magnitude is $|e1| + |e2|$. In this paper, without loss of generality, we consider accelerators constructed utilizing 8-bit approximate array multipliers based on \textit{AMA5} FAs with 9 bits of the results being approximated~\cite{MasadehGLS}. However, the proposed methodology is applicable to any approximate accelerator design, e.g., approximate multiply-accumulate units \cite{AxMAC}. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{Figures/Accelerator3.jpg} \caption{\small{Simplified Architecture for Accelerator of Two Approximate Multipliers, (a) Without Error Compensation, (b) With Error Compensation Module per Approximate Component, (c) With Error Compensation Module per Approximate Accelerator.}} \label{fig:Accelerator} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{Figures/ax1.jpg} \caption{Design Flow for Approximate Accelerator Compensation Module} \label{fig:SCAxAcc} \vspace{-0.2cm} \end{figure} The main challenge in the design of self-compensating accelerators is the development of the input-dependent compensation module that has minimal area, delay and power overhead. An overview of the proposed design methodology is given in Figure \ref{fig:SCAxAcc}, and its steps are explained next. The fundamental step in the proposed flow is designing an approximate multiplier, which is the essential building component of the accelerator.
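To make the input dependency of the error concrete, the sketch below emulates such a multiplier in software: an 8-bit unsigned multiplier built purely from full-adder cells, where the cells feeding the low-order result columns are replaced by AMA5-style cells. Both the AMA5 truth table used here (the two-buffer model, $Sum=B$, $Cout=A$) and the column-wise cell placement are illustrative assumptions; the actual array of our design may wire the cells differently.

```python
import random

def exact_fa(a, b, cin):
    # Exact full adder: sum and carry-out of three input bits.
    return a ^ b ^ cin, (a & b) | (cin & (a ^ b))

def ama5_fa(a, b, cin):
    # AMA5-style cell modelled as two buffers: sum = b, carry-out = a.
    # (Modelling assumption for illustration; see the cited AMA5 design.)
    return b, a

def multiply(x, y, approx_cols=0):
    """8x8 unsigned multiplier from full-adder cells. Result columns
    below `approx_cols` use the AMA5 cell (placement is illustrative)."""
    cols = [[] for _ in range(24)]
    for i in range(8):
        for j in range(8):
            cols[i + j].append(((x >> i) & 1) & ((y >> j) & 1))
    out = 0
    for k in range(len(cols) - 1):
        bits = cols[k]
        fa = ama5_fa if k < approx_cols else exact_fa
        while len(bits) > 1:           # reduce the column to a single bit
            a, b = bits.pop(), bits.pop()
            c = bits.pop() if bits else 0
            s, cout = fa(a, b, c)
            bits.append(s)
            cols[k + 1].append(cout)   # carry ripples into the next column
        if bits:
            out |= bits[0] << k
    return out

random.seed(0)
pairs = [(random.randrange(256), random.randrange(256)) for _ in range(2000)]
assert all(multiply(x, y) == x * y for x, y in pairs)  # exact when nothing is approximated
eds = [abs(x * y - multiply(x, y, approx_cols=9)) for x, y in pairs]
print("error rate on sample:", sum(e > 0 for e in eds) / len(pairs))
print("mean ED on sample:", sum(eds) / len(pairs))
```

With `approx_cols=0` the model reduces to an exact array multiplier, which is the sanity check above; sweeping `approx_cols` shows how the ED distribution grows as more columns are approximated.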
Table \ref{tab:ApproxMult} shows the design characteristics of the 8-bit approximate multiplier including its area, delay, power and energy consumption. Moreover, in order to show the benefits of such approximation, the characteristics of the exact multiplier are also shown in Table \ref{tab:ApproxMult}. We evaluate the power, area, delay and energy utilizing the XC6VLX75T FPGA, which belongs to the Virtex-6 family, and the FF484 package. We use Mentor Graphics \textit{Modelsim} \cite{ModelSim}, \textit{Xilinx XPower Analyser} and \textit{Xilinx Integrated Synthesis Environment} (ISE 14.7) tool suite. \begin{table}[t!] \centering \caption{Characteristics of Approximate Accelerator Components, i.e., Approximate Multiplier and Compensation Module} \label{tab:ApproxMult} \resizebox{0.99\columnwidth}{!}{% \begin{tabular}{c|cccccc} \hline \multicolumn{1}{|c|}{\textbf{Design}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Dynamic\\ Power (mW)\end{tabular}}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Slice\\ LUTs\end{tabular}}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Occupied\\ Slices\end{tabular}}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Period\\ (ns)\end{tabular}}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Frequency\\ (MHz)\end{tabular}}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Energy\\ (pj)\end{tabular}}} \\ \hline \hline \multicolumn{1}{|c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Exact\\ Multiplier\end{tabular}}} & \multicolumn{1}{c|}{442} & \multicolumn{1}{c|}{85} & \multicolumn{1}{c|}{33} & \multicolumn{1}{c|}{8.747} & \multicolumn{1}{c|}{114.32} & \multicolumn{1}{c|}{3866.2} \\ \hline \multicolumn{1}{|c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Approximate\\ Multiplier\end{tabular}}} & \multicolumn{1}{c|}{113} & \multicolumn{1}{c|}{31} & \multicolumn{1}{c|}{11} & \multicolumn{1}{c|}{4.625} & \multicolumn{1}{c|}{216.22} & \multicolumn{1}{c|}{522.6} \\ \hline 
\multicolumn{1}{|c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Compensation\\ Module \end{tabular}}} & \multicolumn{1}{c|}{2.79} & \multicolumn{1}{c|}{23} & \multicolumn{1}{c|}{8} & \multicolumn{1}{c|}{2.213} & \multicolumn{1}{c|}{451.88} & \multicolumn{1}{c|}{6.6} \\ \hline \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \end{tabular} } \vspace{-0.5cm} \end{table} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{Figures/ED_hist2.png} \caption{Histogram Distribution of the Error Distance (ED) of the Approximate Multiplier} \label{fig:HistogramED} \end{figure} Since the magnitude of the approximation error is input dependent, we apply an exhaustive simulation by having $2^8=256$ different values for each input. Thus, we have $256*256=65,536$ different input combinations with their associated error distance (ED), which constitute our training data. Figure \ref{fig:HistogramED} shows the histogram distribution for the ED of the approximate multiplier. Accordingly, we can make the following observations regarding the ED: \begin{itemize} \item Out of the 65,536 possible input combinations, 62,420 have inexact results; thus the error rate (ER) is 95.25\%. \item Approximate computing relies on the principle of fail \textit{small} or fail \textit{rare}. Therefore, a high error rate (ER), i.e., 95.25\%, requires a small ED value to obtain an acceptable final result. \item Small errors occur more frequently than large errors. For example, we have only 1575 input combinations with ED$>$500, which is about 2.48\% of the erroneous inputs. Considering such extreme values in ED may simplify building the compensation module. \item The error distance takes 176 distinct values, where the minimum ED is 4, the maximum ED is 756 and the average is 185.
\end{itemize} Generally, whenever the error occurs for a small fraction of input combinations, i.e., the error rate (ER) is low, an approximate design with simple error correction, such as adding a constant corrective magnitude, performs better than the exact design. However, our approximate accelerator has an ER of 95.25\%. Such a high ER makes simple error correction inapplicable. \begin{figure*}[t!] \centering \includegraphics[width=0.8\textwidth]{Figures/Tree3.png} \caption{The Structure of the Decision Tree-based Model} \label{fig:DT} \end{figure*} In order to predict the ED based on the values of the inputs, we use a \textit{lightweight} machine learning-based algorithm, i.e., a classification decision tree (DT) based on the \textit{C5.0} algorithm~\cite{C50}, implemented in R~\cite{R}, a programming and statistical computing language. Decision trees, which are fast, memory-efficient and structurally simple, are well suited to model the non-linear relationship between the inputs and the error distance. We observe that inputs of the approximate design with close magnitudes are associated with very close EDs. Consequently, we quantize the inputs based on their magnitudes into 16 different clusters. Thus, the model has $16*16=256$ different input combinations rather than $256*256=65,536$, which simplifies its internal structure. Figure \ref{fig:DT} shows the structure of the decision tree that we obtained. The leaves of the tree represent the expected values of the ED that should be added to correct the final result, while the internal nodes represent the \textit{conditional decision points}, which are the inputs of the model, i.e., the first input (\textit{Input1}) and the second input (\textit{Input2}) of the approximate design. The values associated with the connections between the \textit{conditional decision points} represent the cluster of the inputs, i.e., from 1 to 16.
For example, the first branch in Figure \ref{fig:DT} examines the class of \textit{Input1}, then it branches to the left if the class is $\leq$9 or to the right if it is $>$9. To show the effectiveness of the proposed \textit{compensation module}, we perform an accuracy evaluation using its MATLAB implementation. Moreover, we evaluate its power, area, delay and energy. Table \ref{tab:ApproxMult} shows the obtained results, where the power consumption of the module is about 2.8\textit{mW}, which adds about 2.4\% to the power of the approximate multiplier. Similarly, the introduced area, delay and energy overhead of the module with respect to the approximate multiplier is about 42.5\%, 32.4\% and 1.2\%, respectively. Such overhead is insignificant given that multiple instances of the approximate multiplier are integrated within the approximate accelerator. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{Figures/Area_power.png} \caption{Power, Area, Delay and Energy of Approximate Accelerator Components} \label{fig:AxC_Module} \end{figure} Figure \ref{fig:AxC_Module} shows a relative representation of the power, area, delay and energy of the approximate multiplier, the compensation module, as well as the exact multiplier. Despite the module's added overhead, the approximate multiplier with the accompanying module (as shown in Figure \ref{fig:Accelerator}(b)) achieves a reduction of 73.8\%, 38.1\%, 21.8\% and 86.3\% in power, area, delay and energy, respectively, compared to the exact multiplier. The error of the approximate multiplier, i.e., $e1$, will be reduced to $e1^{C}$, which represents $e1$ after being alleviated by the compensation module at the component level, where $e1^{C}\ll e1$.
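The component-level compensation flow just described can be sketched end-to-end in a few lines. The AMA5-based multiplier itself is not reproduced here; as a stand-in assumption, a multiplier that truncates the two least-significant bits of each operand plays the role of the approximate design, and a per-cluster mean-ED lookup plays the role of the trained decision tree (the same 16$\times$16 clustering of the 8-bit inputs; a depth-limited tree over the cluster indices would realize the same mapping).

```python
def approx_mul(a, b):
    # Stand-in approximate multiplier (assumption): truncate the two
    # least-significant bits of each 8-bit operand; NOT the AMA5 design.
    return (a & ~0x3) * (b & ~0x3)

def error_distance(a, b):
    return a * b - approx_mul(a, b)

from collections import defaultdict

# "Training": quantize each input into 16 magnitude clusters and record the
# mean ED per cluster pair -- the mapping a shallow decision tree over the
# cluster indices would learn from the 65,536 training pairs.
sums, counts = defaultdict(int), defaultdict(int)
for a in range(256):
    for b in range(256):
        key = (a >> 4, b >> 4)          # 16 clusters per input
        sums[key] += error_distance(a, b)
        counts[key] += 1
compensation = {k: round(sums[k] / counts[k]) for k in sums}

def compensated_mul(a, b):
    return approx_mul(a, b) + compensation[(a >> 4, b >> 4)]

# Exhaustive check: compensating by the (rounded) per-cluster mean cannot
# increase the total squared ED (variance decomposition), so it shrinks.
raw_sq  = sum(error_distance(a, b) ** 2
              for a in range(256) for b in range(256))
comp_sq = sum((a * b - compensated_mul(a, b)) ** 2
              for a in range(256) for b in range(256))
print(comp_sq < raw_sq)                  # True
```

Rounding keeps the compensation value an integer, as a hardware module would require; the residual error $e1^{C}$ corresponds to the per-cluster deviation from that mean.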
Moreover, in order to amortize the overhead of the proposed module, we propose another architectural configuration with a single compensation module for the whole approximate accelerator, as shown in Figure \ref{fig:Accelerator}(c), rather than a dedicated module for each approximate component, as shown in Figure \ref{fig:Accelerator}(b). This design is applicable when the data processed by the different components have similar values, e.g., adjacent image pixels, so that the introduced errors are roughly similar. In image processing applications, the accelerator processes adjacent image pixels, which usually have close values. Therefore, for image blending in multiplicative mode, where the pixels of the two images are multiplied pixel-by-pixel, we propose to divide the image into three color components, i.e., red, green and blue. Each color component is processed on a separate accelerator. For that, the compensation module of the approximate accelerator evaluates the average pixel value of each frame color component. Based on that, a compensation value is calculated (predicted by the decision tree-based model) and then added to all the pixels of the frame color component. Thus, the error of the approximate accelerator, i.e., $e1+e2$, will be reduced to $e1^A + e2^A$ by the error compensation module at the accelerator level. Section \ref{sec:Results} evaluates the accuracy of the implemented \textit{compensation module}. \section{Related Work} \label{sec:RelatedWork} There has been significant work on designing approximate components and accelerators. However, to the best of our knowledge, very few works target enhancing the accuracy of approximate accelerators. While most prior works focus on error prediction, in this paper we aim to overcome the approximation error through input-dependent error compensation.
The authors of \cite{Xu} approximated different designs given as behavioral descriptions based on the expected coarse-grained input data distributions. Then, they used these approximate designs to build an adaptive hardware accelerator based on the applied workload. However, the proposed approximate circuits heavily depend on the training data used during the approximation process, since not all possible workload distributions can be precharacterized; thus the real workload may differ completely from the training one. The authors of \cite{Marcelo} performed a design-space exploration of state-of-the-art approximate designs and proposed a flow for designing approximate coarse-grained reconfigurable arrays (CGRAs). Green \cite{Green} and SAGE \cite{Sage} check the output quality of approximate programs through sampling techniques, and fall back to a more accurate configuration if the approximation error is high. However, \cite{Marcelo} -- \cite{Sage} are inadequate for fine-grained input data. A machine learning-based technique has been proposed in \cite{MasadehDATE2019}, aiming to control the quality of approximate computing by selecting the most suitable approximate design based on the inputs. Nevertheless, this technique requires a set of approximate designs to choose from, which is not always available. A fault recovery method utilizing machine learning to ameliorate the effect of permanent faults has been proposed in \cite{Taher}, assuming that the number of unique values of the error distance (ED) is very low, i.e., less than 5. However, such an assumption is unrealistic, since the ED may range from 1 to $2^n$ depending on the fault location, where \textit{n} is the number of circuit inputs. Recently, a self-compensating accelerator has been proposed in \cite{MAZAHIR20199} by integrating approximate components with their complementary designs, i.e., designs having the same error magnitude with opposite polarity.
However, obtaining such complementary components is not always guaranteed; e.g., the approximate multiplier based on \textit{AMA5}, which is utilized in this work, does not have a complementary design. Moreover, the approximate design and its complementary design may have different characteristics, i.e., area, power, delay and energy. Aiming to avoid the overhead of adapting the design while improving its accuracy, in this paper we investigate a novel ML-based approach to build an input-dependent \textit{compensation module} for approximate accelerators. The proposed approach targets the high error rate (ER) of the approximate accelerator, aiming to lower the magnitude of the error distance (ED). Our work is orthogonal to the previous related work in that we utilize an ML-based model, i.e., a decision tree, to capture the input dependency of the error. As a proof of concept, we utilize an approximate hardware accelerator with approximate multipliers based on \textit{AMA5} FAs. \section{Results and Discussion} \label{sec:Results} \begin{figure*}[t!] \centering \includegraphics[width=0.90\textwidth, height=8cm]{Figures/HistogramED4.jpg} \caption{Distribution of Error Distance (ED) of Approximate Multiplier with/without the Error Compensation Module} \label{fig:HistogramED_Correction} \end{figure*} \begin{figure*}[t!] \centering \includegraphics[width=0.90\textwidth]{Figures/Blending_PSNR2.png} \caption{Output Quality (PSNR) of Image Blending, (a) Without Error Compensation, (b) With Error Compensation Module per Approximate Component, (c) With Error Compensation Module per Approximate Accelerator} \label{fig:BlendingPSNR} \end{figure*} This section presents the experimental results obtained by introducing the \textit{compensation module} both at the component and at the accelerator level. In order to evaluate the performance of the compensation module, which is shown in Figure \ref{fig:Accelerator}(b), we perform an exhaustive simulation of the approximate multiplier.
Figure \ref{fig:HistogramED_Correction} shows the histogram of the error distance of the approximate multiplier without compensation as well as with the compensation module integrated into the approximate component. The module enhances the accuracy of the result by adding a compensation value, predicted by the decision tree-based model, that reduces the final error distance (ED). Clearly, there is a significant reduction in the error characteristics, i.e., in both error magnitude and error frequency. As summarized in the table shown in Figure \ref{fig:HistogramED_Correction}, the proposed compensation module reduces the maximum ED of the multiplier from 756 to 520, while the mean ED decreases from 185 to 110. The number of input combinations with an erroneous result where ED$>$500 is reduced from 1575 to 16, which is a significant quality improvement. Similarly, the number of input combinations with ED$>$400 and ED$>$300 is notably reduced from 5454 to 218, and from 12922 to 1458, respectively. This noteworthy improvement in the quality of results validates the importance of the added compensation module. Moreover, the number of distinctive values of the ED is lowered from 176 to 129. Without the proposed compensation module, the approximate multiplier has 3116 error-free input combinations, i.e., the error rate is 95.25\%. However, adding an ML-based compensation module reduces the error-free input combinations to 2177, i.e., the error rate rises to 96.68\%, by erroneously adding a compensation value to error-free results. This is due to \textit{model imperfection}; nevertheless, the final accuracy improves significantly. Similarly, in some cases, the compensation module increases the ED rather than reducing it. Overall, there is a significant reduction in error magnitude and error frequency, which enhances the final accuracy of the targeted error-resilient application.
In order to evaluate the proposed self-compensating approximate accelerators in practical applications, we deployed them in image blending, where two images are multiplied pixel-by-pixel. The images used in blending and their corresponding accurate results are shown in Figure \ref{fig:BlendingExamples}, where the size of each image is $250$x$400$ pixels. Two configurations of compensation modules are used: 1) a compensation module for each approximate component; and 2) a single compensation module for all approximate components. The Peak Signal-to-Noise Ratio (PSNR) of the obtained results is shown in Figure \ref{fig:BlendingPSNR}, which shows that the output quality is improved by the error compensation. As shown in Figure \ref{fig:BlendingPSNR}, all blending examples have an improved quality, i.e., PSNR, whenever the compensation module is used. Clearly, the improvement in the output quality when the compensation module is incorporated at the component level is higher than when the module is used at the accelerator level. The shown results of image blending with error compensation have an enhanced quality, where the increase in the PSNR ranges from 2.6dB to 4.7dB with an average of 4.2dB for the considered examples. Thus, we obtain an average of 9\% improvement in the final quality of the image blending application with negligible overhead. Using the compensation module at the accelerator level achieves a lower accuracy enhancement, since a single compensation value is applied to all 100,000 pixels of a frame color component. Obviously, the accuracy of approximate accelerators can be enhanced by integrating the compensation module at a finer granularity.
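As a rough illustration of the accelerator-level scheme, the sketch below applies one compensation value to an entire color channel and checks the PSNR gain. The truncation-based multiplier again stands in for the AMA5 design, and the per-frame mean error stands in for the decision-tree prediction; both substitutions, and the synthetic channel data, are assumptions for illustration only.

```python
import math, random

def approx_mul(a, b):
    # Same truncation stand-in as before (assumption, not the AMA5 design).
    return (a & ~0x3) * (b & ~0x3)

def psnr(ref, img, peak=255 * 255):
    # peak = 255*255 because we compare raw pixel products.
    mse = sum((r - x) ** 2 for r, x in zip(ref, img)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

random.seed(0)
# One color channel of two images blended multiplicatively (pixel products).
chan_a = [random.randrange(256) for _ in range(400)]
chan_b = [random.randrange(256) for _ in range(400)]

exact  = [a * b for a, b in zip(chan_a, chan_b)]
approx = [approx_mul(a, b) for a, b in zip(chan_a, chan_b)]

# Accelerator-level compensation: a SINGLE value for the whole channel.
# The paper predicts it from the channel's average pixel value via the
# decision tree; here the mean frame error is used as an oracle stand-in.
comp_value = round(sum(e - x for e, x in zip(exact, approx)) / len(exact))
compensated = [x + comp_value for x in approx]

print(psnr(exact, compensated) > psnr(exact, approx))   # True
```

Because a single value is shared by every pixel, only the channel-wide bias is removed; pixel-level deviations remain, which is why the component-level configuration achieves the larger PSNR gain.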
\section{Introduction} \setcounter{equation}{0}\hspace{0.25in}In an instrumental variable (IV) model, researchers often rely on asymptotic approximations when making inference on the structural coefficients. These approximations, however, can be poor when instruments are weakly correlated with the endogenous regressors as explained by \citet{NelsonStartz90}, \citet{BoundJaegerBaker95}% , \citet{Dufour97}, and \citet{StaigerStock97}. The goal is to find reliable econometric methods regardless of how strong the instruments are. There has been some progress in the IV model with one endogenous variable and $k$ instruments when errors are homoskedastic. \citet{AndersonRubin49} propose a test statistic which has an asymptotic chi-square-$k$ distribution regardless of how weak the instruments are. \citeauthor{Moreira01} (% \citeyear{Moreira01}, \citeyear{Moreira09a}) shows that the Anderson-Rubin statistic is optimal in the just-identified model, but points out potential power gains when there exists more than one instrument. \citet{Kleibergen02} and \citet{Moreira02} show that a score (LM) test statistic has a standard chi-square-one distribution whether the instruments are weak or not. % \citet{Moreira03} proposes to replace the critical value number by conditional quantiles of test statistics. These conditional tests are similar by construction, hence have correct size. He applies the conditional method to the likelihood ratio (LR) statistic and the two-sided Wald statistic. \citet{AndrewsMoreiraStock06} (hereinafter, AMS06) show that the conditional likelihood ratio (CLR) test satisfies natural orthogonal invariance conditions and is nearly optimal. \citet{AndrewsMoreiraStock07} find that conditional Wald (CW) tests, however, have poor behavior and object to their use in empirical work. \citet{MillsMoreiraVilela14} show that the bad performance of CW tests is due to the asymmetric distribution of one-sided Wald statistics when instruments are weak. 
By extending % \citeauthor{Moreira03}'s (\citeyear{Moreira03}) conditional approach, they find approximately unbiased Wald tests whose power is comparable to the CLR test. While use of the IV model with homoskedastic errors was important to advance the literature on weak identification, the IV model with heteroskedastic and autocorrelated (HAC) errors is considerably more relevant for applied researchers. Some of the theoretical findings for homoskedastic errors are easily extended for more complicated stochastic processes, whereas others are not. Important work by \citet{StockWright00}, \citet{GuggenbergerSmith05}% , \citet{Kleibergen06}, \citet{Otsu06}, and \citet{AndrewsMikusheva15}, among others, extends the tests conceived for the simple homoskedastic IV model to the generalized method of moments (GMM) and generalized empirical likelihood (GEL) frameworks. Their tests are of course applicable to the HAC-IV model, but it is unknown whether these adaptations are optimal. The purpose of this paper is exactly this: to develop a theory of optimal two-sided tests for the HAC-IV model. We are able to find a statistic that is pivotal and independent of a second statistic, which is sufficient and complete for the instruments' coefficients under the null. We show that the invariance argument of AMS06 for homoskedastic errors is only applicable if a (long-run) variance has a Kronecker product structure. This limitation has profound consequences for the behavior of weighted-average power (WAP) tests. We choose two priors for the structural parameter and the instruments' coefficients and denote the associated test statistics MM1 and MM2. The priors are chosen to illustrate the effect of a poor weight choice on the power of WAP\ tests. Although priors vanish asymptotically as in the Bernstein-von Mises theorem, the associated tests can behave quite differently in finite samples (or under the weak-instrument asymptotics). 
When a variance matrix has a Kronecker product structure, both test statistics are orthogonally invariant, but only MM2 satisfies an additional \emph{sign} invariance argument that preserves the two-sided hypothesis testing problem. As a consequence, a WAP similar test based on the MM1 statistic can behave as a one-sided test and have poor power even with homoskedastic errors (this problem is analogous to the conditional Wald tests documented by \citet{AndrewsMoreiraStock07}), while the WAP similar test using the MM2 statistic has overall good power with a Kronecker-product variance matrix. Other weight choices face the same difficulties as the MM1 statistic for the HAC-IV model, including the recently proposed WAP similar test by \citet{Olea15}, denoted ECS (HAC-IV). When the (long-run) variance matrix does not have a Kronecker product representation and the model is just-identified, the Anderson-Rubin test (among other equivalent tests) is the uniformly most powerful unbiased test. In the over-identified model, we show theoretically that it is possible to find a weight so that the test is approximately unbiased and admissible. The lack of invariance, however, makes it harder to construct such weights. In practice, we endogenize this search by imposing in the WAP maximization problem a boundary condition based on the local power around the null hypothesis. This locally unbiased (LU) condition is a weaker requirement than unbiasedness, so it does not rule out admissibility. The WAP-LU tests are found with non-linear algorithms, which makes them difficult to implement. We then propose a stronger requirement than LU, denoted the strongly unbiased (SU) condition. The resulting class of tests includes several two-sided tests robust to weak IV, including the Anderson-Rubin, score, (pseudo) likelihood ratio tests by \citet{Kleibergen06} and % \citet{AndrewsGuggenberger14b}, and I. \citeauthor{Andrews15}' (% \citeyear{Andrews15}) PI-CLC tests. 
Two-sided optimal tests also satisfy the SU condition asymptotically when the sample size is large and instruments are strong. The WAP-SU tests have power close to the WAP-LU\ tests based on the MM1 and MM2 weights, with the advantage being that the WAP-SU tests are easy to implement with a standard linear programming software package. We refer to the WAP-SU tests based on our weights as MM1-SU and MM2-SU tests. We follow I. \citet{Andrews15} and implement numerical simulations based on % \citet{Yogo04}. We choose, however, \citeauthor{Yogo04}'s (\citeyear{Yogo04}% ) design where the endogenous variable is the real stock return and the instruments are genuinely weak. We find that, as our theory predicts, the WAP similar tests can be quite erratic. In some designs, they behave as usual two-sided tests and have good power. In other designs they behave as one-sided tests and have power near zero. We do not recommend the MM1 and MM2 similar tests for empirical researchers. The MM2-SU test, however, outperforms other tests (including the MM1-SU test) and when it occasionally has less power than competing tests, the power loss is small. We recommend the use of the MM2-SU\ test in empirical work. Our asymptotic analysis is quite general and encompasses all WAP similar and WAP-SU tests whose weight does not depend strongly on the sample size. The remainder of this paper is organized as follows. Section \ref{IV Model and Statistics} introduces the HAC-IV model and presents the test statistics, including the MM1 and MM2 statistics. Sections \ref{Maximization Sec} and \ref{Conditions Sec} discuss the power maximization problem and the WAP-LU\ and WAP-SU\ tests. Section \ref{Numerical Sec} presents power curves and the role of LU\ and SU conditions in obtaining WAP tests with overall good power. Section \ref{Asymptotic Sec} develops an asymptotic framework that encompasses the weak IV and strong IV asymptotics. Section \ref% {Application Sec} revisits the work of I. 
\citet{Andrews15} and % \citet{Yogo04} on testing the elasticity of intertemporal substitution, with one important modification. Section \ref{Conclusion Sec} contains concluding remarks. All proofs are given in the appendices. \section{The IV Model and Statistics \label{IV Model and Statistics}} Consider the instrumental variable model \begin{eqnarray*} y_{1} &=&y_{2}\beta +u \\ y_{2} &=&Z\pi +v_{2}, \end{eqnarray*}% where $y_{1}$ and $y_{2}$ are $n$ $\times $ $1$ vectors of observations on two endogenous variables, $Z$ is an $n\times k$ matrix of nonrandom exogenous variables having full column rank, and $u$ and $v_{2}$ are $n$ $% \times $ $1$ unobserved disturbance vectors having mean zero. The goal here is to test the null hypothesis $H_{0}:\beta =\beta _{0}$ against the alternative hypothesis $H_{1}:\beta \neq \beta _{0}$, treating $\pi $ as a nuisance parameter. We do not include covariates in this model, but we note that they can easily be handled by the usual projection arguments; see AMS06. We look at the reduced-form model for $Y=\left[ y_{1},y_{2}\right] $:% \begin{equation} Y=Z\pi a^{\prime }+V, \label{(reduced-form IV)} \end{equation}% where $a=\left( \beta ,1\right) ^{\prime }$ and $V=\left[ v_{1},v_{2}\right] =\left[ u+v_{2}\beta ,v_{2}\right] $ is the $n\times 2$ matrix of reduced-form errors. We allow the errors to be heteroskedastic and autocorrelated. Let $P_{1}=Z\left( Z^{\prime }Z\right) ^{-1/2}$ and let $% \left[ P_{1},P_{2}\right] \in \mathcal{O}_{n}$, the group of $n\times n$ orthogonal matrices. Pre-multiplying the reduced-form model (\ref% {(reduced-form IV)}) by $\left[ P_{1},P_{2}\right] ^{\prime }$, we obtain the pair of statistics $P_{1}^{\prime }Y$ and $P_{2}^{\prime }Y$. In this section, we assume that $\left( Z^{\prime }Z\right) ^{-1/2}Z^{\prime }V$ is normally distributed with known variance matrix $\Sigma $ (this assumption will be relaxed later at the cost of asymptotic approximations). 
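Before turning to the statistics, a minimal simulation of this model may help fix ideas. All numerical choices below (sample size, number of instruments, the first-stage coefficient, and the error correlation) are illustrative assumptions, not values from the paper; the point is only that a tiny first-stage coefficient yields a small concentration parameter, so the 2SLS estimator is unreliable and weak-instrument-robust tests are called for.

```python
# Illustrative simulation of the IV model y1 = y2*beta + u, y2 = Z*pi + v2,
# with weak instruments. Parameter values are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, k, beta = 500, 4, 1.0
Z = rng.standard_normal((n, k))
pi = np.full(k, 0.02)                    # weak instruments: tiny first stage

# correlated structural and first-stage errors induce endogeneity
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
u, v2 = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

y2 = Z @ pi + v2
y1 = y2 * beta + u

# 2SLS: project y2 on Z, then regress y1 on the fitted values
y2_hat = Z @ np.linalg.lstsq(Z, y2, rcond=None)[0]
beta_2sls = (y2_hat @ y1) / (y2_hat @ y2)

# concentration parameter pi'Z'Z pi -- close to zero in this design
print(pi @ Z.T @ Z @ pi)
```

With a first-stage coefficient this small, the usual normal approximation for `beta_2sls` is poor, which is exactly the setting the pivotal statistics introduced next are designed to handle.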
The statistic $P_{2}^{\prime }Y$ is ancillary and we do not have previous knowledge about the correlation structure on $V$. In consequence, we consider tests based on $R=P_{1}^{\prime }Y$:% \begin{equation*} R=\mu a^{\prime }+\left( Z^{\prime }Z\right) ^{-1/2}Z^{\prime }V, \end{equation*}% where $\mu =\left( Z^{\prime }Z\right) ^{1/2}\pi $. It is convenient to find the one-to-one transformation of $R$ given by the pair% \begin{eqnarray} S &=&\left[ \left( b_{0}^{\prime }\otimes I_{k}\right) \Sigma \left( b_{0}\otimes I_{k}\right) \right] ^{-1/2}\left( b_{0}^{\prime }\otimes I_{k}\right) \overline{R}\text{ and} \label{(Defns of S and T)} \\ T &=&\left[ \left( a_{0}^{\prime }\otimes I_{k}\right) \Sigma ^{-1}\left( a_{0}\otimes I_{k}\right) \right] ^{-1/2}\left( a_{0}^{\prime }\otimes I_{k}\right) \Sigma ^{-1}\overline{R}, \notag \end{eqnarray}% where $\overline{R}=vec\left[ \left( Z^{\prime }Z\right) ^{-1/2}Z^{\prime }Y% \right] $, $a_{0}=\left( \beta _{0},1\right) ^{\prime }$ and $b_{0}=\left( 1,-\beta _{0}\right) ^{\prime }$. The pair $S$ and $T$ have three important properties: \emph{(i)} they are independent; \emph{(ii)} $S$ is pivotal; and \emph{(iii)} $T$ is complete and sufficient for $\mu $ under the null. More specifically, the statistics $S$ and $T$ have distribution% \begin{eqnarray} S &\sim &N\left( \left( \beta -\beta _{0}\right) C_{\beta _{0}}\mu ,I_{k}\right) \text{ and }T\sim N\left( D_{\beta }\mu ,I_{k}\right) \text{, where} \label{(Dist S and T)} \\ C_{\beta _{0}} &=&\left[ \left( b_{0}^{\prime }\otimes I_{k}\right) \Sigma \left( b_{0}\otimes I_{k}\right) \right] ^{-1/2}\text{ and} \notag \\ D_{\beta } &=&\left[ \left( a_{0}^{\prime }\otimes I_{k}\right) \Sigma ^{-1}\left( a_{0}\otimes I_{k}\right) \right] ^{-1/2}\left( a_{0}^{\prime }\otimes I_{k}\right) \Sigma ^{-1}\left( a\otimes I_{k}\right) . 
\notag \end{eqnarray}% The joint density $f_{\beta ,\mu }\left( s,t\right) $ is given by% \begin{eqnarray*} f_{\beta ,\mu }\left( s,t\right) &=&\left( 2pi\right) ^{-k/2}\exp \left( -% \frac{\left\Vert s-\left( \beta -\beta _{0}\right) C_{\beta _{0}}\mu \right\Vert ^{2}}{2}\right) \times \left( 2pi\right) ^{-k/2}\exp \left( -% \frac{\left\Vert t-D_{\beta }\mu \right\Vert ^{2}}{2}\right) \\ &=&f_{\beta ,\mu }^{S}\left( s\right) \times f_{\beta ,\mu }^{T}\left( t\right) , \end{eqnarray*}% where $pi=3.1415...$ and $f_{\beta ,\mu }^{S}\left( s\right) $ and $f_{\beta ,\mu }^{T}\left( t\right) $ are the marginal densities for $S$ and $T$. Examples of test statistics based on $S$ and $T$ are the Anderson-Rubin (AR), the score or Lagrange multiplier (LM), and the quasi likelihood ratio (LR) statistics. \citet{AndersonRubin49} propose to use a pivotal statistic. In our model the Anderson-Rubin statistic is given by \begin{equation} AR=S^{\prime }S. \label{(AR stat)} \end{equation}% In Appendix A, we derive the $LM$ and $LR$ statistics under the assumption that the errors are normal. For any full column rank matrix $X$, let $% N_{X}=X\left( X^{\prime }X\right) ^{-1}X^{\prime }$ and $M_{X}=I-N_{X}$. Then the $LM$ statistic simplifies to% \begin{equation} LM=S^{\prime }N_{C_{\beta _{0}}D_{\beta _{0}}^{-1}T}S\text{.} \label{(LM stat)} \end{equation}% The likelihood ratio statistic is given by% \begin{equation} LR=\max_{a}\overline{R}^{\prime }\Sigma ^{-1/2}N_{\Sigma ^{-1/2}(a\otimes I_{k})}\Sigma ^{-1/2}\overline{R}-T^{\prime }T. \label{(LR stat)} \end{equation}% The $LR$ statistic is apparently not a simple function of $S$ and $T$ (which makes it difficult to implement the test coupled with conditional critical values). \citet{Kleibergen06} instead adapts the formula for the likelihood ratio statistic derived by \citet{Moreira03} in the homoskedastic IV model to the GMM framework. 
For the HAC-IV model, this quasi likelihood ratio statistic becomes% \begin{equation} QLR=\frac{AR-r\left( T\right) +\sqrt{\left( AR-r\left( T\right) \right) ^{2}+4LM\cdot r\left( T\right) }}{2}, \label{(QLR stat)} \end{equation}% where $AR$ and $LM$ are defined in (\ref{(AR stat)}) and (\ref{(LM stat)}), and $r\left( T\right) =T^{\prime }T$. \citet{AndrewsGuggenberger14b} use a Kronecker product $\Omega \otimes \Phi $ (where $\Omega $ and $\Phi $ are positive-definite matrices respectively with dimensions $2\times 2$ and $% k\times k$) approximation to the variance $\Sigma $; see % \citet{VanLoanPtsianis93} for more details on Kronecker product approximations. We now present two novel WAP\ statistics based on the weighted-average density% \begin{equation} h_{\Lambda }\left( s,t\right) =\int f_{\beta ,\mu }\left( s,t\right) \text{ }% d\Lambda \left( \beta ,\mu \right) . \label{(WAP density)} \end{equation}% These weight functions use the Kronecker product $\Omega \otimes \Phi $ approximation to $\Sigma $ with the Frobenius norm (i.e.,\ the norm of a matrix $X$ is given by $\left\Vert X\right\Vert =\sqrt{tr\left( X^{\prime }X\right) }$). For the MM1 statistic $h_{1}\left( s,t\right) $, we choose $% \Lambda \left( \beta ,\mu \right) $ to be $N\left( \beta _{0},1\right) \times N\left( 0,\sigma ^{2}\Phi \right) $. For the MM2 statistic $% h_{2}\left( s,t\right) $, we first define the identity $\tan \left( \theta \right) \equiv d_{\beta \left( \theta \right) }/c_{\beta \left( \theta \right) }$, where% \begin{equation} c_{\beta }=(\beta -\beta _{0})\cdot (b_{0}^{\prime }\Omega b_{0})^{-1/2}% \text{ and }d_{\beta }=a^{\prime }\Omega ^{-1}a_{0}\cdot (a_{0}^{\prime }\Omega ^{-1}a_{0})^{-1/2}. 
\label{(c_b and d_b)} \end{equation}% We choose $\Lambda \left( \beta ,\mu \right) $ so that the prior for $\theta $ and $\mu $ are \textit{Unif}$\left[ -pi,pi\right] \times N\left( 0,\left\Vert l_{\beta \left( \theta \right) }\right\Vert ^{-2}\zeta \cdot \Phi \right) $, where $l_{\beta }=\left( c_{\beta },d_{\beta }\right) ^{\prime }$. In Appendix A, we show that the MM1 and MM2 statistics are \begin{eqnarray} h_{1}\left( s,t\right) &\hspace{-0.08in}=\hspace{-0.08in}&\left( 2pi\right) ^{-k-1/2}\int \left\vert \Psi _{\beta ,\sigma ^{2}}\right\vert ^{-1/2}\exp \left( -\frac{\left( s^{\prime },t^{\prime }\right) \Psi _{\beta ,\sigma ^{2}}^{-1}\left( s^{\prime },t^{\prime }\right) ^{\prime }+\left( \beta -\beta _{0}\right) ^{2}}{2}\right) d\beta \label{(h densities)} \\ h_{2}\left( s,t\right) &\hspace{-0.08in}=\hspace{-0.08in}&\left( 2pi\right) ^{-\left( k+1\right) }\int_{-pi}^{pi}\left\vert \Psi _{\beta \left( \theta \right) ,\left\Vert l_{\beta \left( \theta \right) }\right\Vert ^{-2}\zeta }\right\vert ^{-1/2}\exp \left( -\frac{\left( s^{\prime },t^{\prime }\right) \Psi _{\beta \left( \theta \right) ,\left\Vert l_{\beta \left( \theta \right) }\right\Vert ^{-2}\zeta }^{-1}\left( s^{\prime },t^{\prime }\right) ^{\prime }}{2}\right) d\theta , \notag \end{eqnarray}% where the matrix $\Psi _{\beta ,\sigma ^{2}}$ is given by% \begin{equation} \Psi _{\beta ,\sigma ^{2}}=I_{2}\otimes I_{k}+\sigma ^{2}\left[ \begin{array}{cc} \left( \beta -\beta _{0}\right) ^{2}C_{\beta _{0}}\Phi C_{\beta _{0}} & \left( \beta -\beta _{0}\right) C_{\beta _{0}}\Phi D_{\beta }^{\prime } \\ \left( \beta -\beta _{0}\right) D_{\beta }\Phi C_{\beta _{0}} & D_{\beta }\Phi D_{\beta }^{\prime }% \end{array}% \right] . \label{(Psi_b,sigma2)} \end{equation} \subsection{Kronecker Variance Matrix} We consider here the special case where $\Sigma =\Omega \otimes \Phi $ exactly. This framework is particularly interesting for two reasons. 
First, it encompasses the homoskedastic case by taking $\Phi $ to be the identity matrix. We will show that the $S$ and $T$ statistics for a general error structure simplify to the original statistics of \citeauthor{Moreira01} (% \citeyear{Moreira01}, \citeyear{Moreira09a}) for the homoskedastic model. Second, the model where $\Sigma $ has a Kronecker product structure enjoys natural invariance properties. Some statistics are invariant but others are not. This has profound consequences for testing procedures based on these statistics. Indeed, typical tests based on noninvariant statistics (such as those using a constant critical value or \citeauthor{Moreira03}'s (\citeyear{Moreira03}) conditional critical value function) behave as one-sided tests for parts of the parameter space. We will illustrate this problem numerically in Section % \ref{Numerical Sec}. When $\Sigma =\Omega \otimes \Phi $, the statistics $S$ and $T$ defined in (% \ref{(Defns of S and T)}) simplify to \begin{eqnarray} S &=&\Phi ^{-1/2}(Z^{\prime }Z)^{-1/2}Z^{\prime }Yb_{0}\cdot (b_{0}^{\prime }\Omega b_{0})^{-1/2}\text{ and } \label{(Defns of S and T kron)} \\ T &=&\Phi ^{-1/2}(Z^{\prime }Z)^{-1/2}Z^{\prime }Y\Omega ^{-1}a_{0}\cdot (a_{0}^{\prime }\Omega ^{-1}a_{0})^{-1/2}. \notag \end{eqnarray}% Their distribution is given by% \begin{equation} S\sim N\left( c_{\beta }\Phi ^{-1/2}\mu ,I_{k}\right) \text{ and }T\sim N\left( d_{\beta }\Phi ^{-1/2}\mu ,I_{k}\right) . \label{(Dist S and T kron)} \end{equation}% AMS06 use invariance arguments for the special case $\Phi =I_{k}$. However, the parameter $\mu _{\Phi }=\Phi ^{-1/2}\mu $ is unknown because $\mu $ is unknown. Hence, AMS06's invariance argument applies to the new parameter $% \mu _{\Phi }=\Phi ^{-1/2}\mu $. Specifically, let $g\in \mathcal{O}_{n}$ and consider the transformation in the sample space% \begin{equation*} g\circ \left( S,T\right) =\left( gS,gT\right) . 
\end{equation*}%
The induced transformation in the parameter space is%
\begin{equation*}
g\circ \left( \beta ,\mu _{\Phi }\right) =\left( \beta ,g\mu _{\Phi }\right) .
\end{equation*}
Invariant tests depend on the data only through
\begin{equation}
Q=\left[
\begin{array}{cc}
Q_{S} & Q_{ST} \\
Q_{ST} & Q_{T}
\end{array}
\right] =\left[
\begin{array}{cc}
S^{\prime }S & S^{\prime }T \\
S^{\prime }T & T^{\prime }T
\end{array}
\right] .  \label{(Q def)}
\end{equation}%
The density of $Q$ at $q$ for the parameters $\beta $ and $\lambda =\pi ^{\prime }\left( Z^{\prime }Z\right) ^{1/2}\Phi ^{-1}\left( Z^{\prime }Z\right) ^{1/2}\pi $ is given by
\begin{eqnarray*}
&&f_{\beta ,\lambda }(q_{S},q_{ST},q_{T})=K_{0}\exp (-\lambda (c_{\beta }^{2}+d_{\beta }^{2})/2)\left\vert q\right\vert ^{(k-3)/2} \\
&&\hspace{1.03in}\times \exp (-(q_{S}+q_{T})/2)(\lambda \xi _{\beta }(q))^{-(k-2)/4}I_{(k-2)/2}(\sqrt{\lambda \xi _{\beta }(q)}),
\end{eqnarray*}%
where $K_{0}^{-1}=2^{(k+2)/2}\pi ^{1/2}\Gamma _{(k-1)/2}$, $\Gamma _{(\cdot )}$ is the gamma function, $I_{(k-2)/2}(\cdot )$ denotes the modified Bessel function of the first kind, and
\begin{equation}
\xi _{\beta }(q)=c_{\beta }^{2}q_{S}+2c_{\beta }d_{\beta }q_{ST}+d_{\beta }^{2}q_{T}.  \label{(Defn of Xi_Beta)}
\end{equation}
The following proposition shows that the WAP densities $h_{1}\left( s,t\right) $ and $h_{2}\left( s,t\right) $ are invariant when the covariance matrix is a Kronecker product. Indeed, the Kronecker product approximation $\Omega \otimes \Phi $ to $\Sigma $ in the definition of the weights was chosen exactly to guarantee that the test statistics are \emph{orthogonal} invariant. AMS06 show there also exists a \emph{sign} transformation that preserves the two-sided hypothesis testing problem. Consider the group $\mathcal{O}_{1}$, which contains only two elements: $\overline{g}\in \left\{ -1,1\right\} $.
The group transformation in the sample is \begin{equation*} \overline{g}\circ \left( Q_{S},Q_{ST},Q_{T}\right) =\left( Q_{S},\overline{g}% \cdot Q_{ST},Q_{T}\right) , \end{equation*}% whose maximal invariant is $Q_{S}$, $\left\vert Q_{ST}\right\vert $, and $% Q_{T}$. This group yields a\ transformation in the parameter space. For $% \overline{g}=-1$, AMS06 show that this transformation is% \begin{eqnarray} \overline{g}\circ \left( \beta ,\lambda \right) &=&\left( \beta _{0}-\frac{% d_{\beta _{0}}(\beta -\beta _{0})}{d_{\beta _{0}}+2j_{\beta _{0}}(\beta -\beta _{0})},\lambda \frac{(d_{\beta _{0}}+2j_{\beta _{0}}(\beta -\beta _{0}))^{2}}{d_{\beta _{0}}^{2}}\right) ,\text{ where} \notag \\ j_{\beta _{0}}\hspace{-0.08in} &=&\hspace{-0.08in}\frac{e_{1}^{\prime }\Omega ^{-1}a_{0}}{(a_{0}^{\prime }\Omega ^{-1}a_{0})^{-1/2}}\text{ and }% e_{1}=(1,0)^{\prime }. \label{(Defn of Beta2* and Lambda2*)} \end{eqnarray}% (by the definition of a group, the parameter remains unaltered at $\overline{% g}=1$). The transformation in (\ref{(Defn of Beta2* and Lambda2*)}) flips the sign of $\beta -\beta _{0}$ for $\beta \neq \beta _{AR}$ defined as \begin{equation} \beta _{AR}=\frac{\omega _{11}-\omega _{12}\beta _{0}}{\omega _{12}-\omega _{22}\beta _{0}}\text{ where }\Omega =\left[ \omega _{i,l}\right] \text{.} \label{(Defn of beta_AR)} \end{equation}% So the \emph{sign} transformation preserves the two-sided hypothesis testing problem $H_{0}:\beta =\beta _{0}$ against $H_{1}:\beta \neq \beta _{0}$, but not the one-sided, e.g., testing $H_{0}:\beta \leq \beta _{0}$ against $% H_{1}:\beta >\beta _{0}$. \bigskip \begin{proposition} \label{Invariant WAP HAC-IV Prop} The following holds when $\Sigma =\Omega \otimes \Phi $:\newline \emph{(i)} The weighted-average densities $h_{1}\left( s,t\right) $ and $% h_{2}\left( s,t\right) $ are invariant to orthogonal transformations. 
That is, they depend on the data only through $Q$; and\newline
\emph{(ii)} The weighted-average density $h_{2}\left( s,t\right) $ is invariant to sign transformations. It depends on the data only through $Q_{S}$, $\left\vert Q_{ST}\right\vert $, and $Q_{T}$.\newline
\end{proposition}

\bigskip

The MM1 statistic is not \emph{sign} invariant. We can create a weighted-average statistic that is sign invariant by replacing the weight in $h_{1}=\int f_{\beta ,\lambda }\left( q_{S},q_{ST},q_{T}\right) $ $d\Lambda _{1}\left( \beta ,\lambda \right) $ by
\begin{equation}
\Lambda \left( \beta ,\lambda \right) =\frac{\Lambda _{1}\left( \beta ,\lambda \right) +\Lambda _{1}\left( \overline{g}\circ \left( \beta ,\lambda \right) \right) }{2},  \label{(correct weight)}
\end{equation}%
for $\overline{g}=-1$. We note that
\begin{equation*}
\int f_{\beta ,\lambda }(q_{S},q_{ST},q_{T})\text{ }d\Lambda \left( \beta ,\lambda \right) =\int \int f_{\beta ,\lambda }(q_{S},q_{ST},q_{T})\text{ }d\Lambda _{1}\left( \overline{g}\circ \left( \beta ,\lambda \right) \right) \text{ }\nu \left( d\overline{g}\right) ,
\end{equation*}%
where $\nu $ is the Haar probability measure on the group $\mathcal{O}_{1}$: $\nu \left( \left\{ 1\right\} \right) =\nu \left( \left\{ -1\right\} \right) =1/2$. Because
\begin{eqnarray*}
\int f_{\beta ,\lambda }(q_{S},-q_{ST},q_{T})\ d\Lambda \left( \beta ,\lambda \right) &=&\int f_{\left( -1\right) \circ \left( \beta ,\lambda \right) }(q_{S},q_{ST},q_{T})\text{ }d\Lambda \left( \beta ,\lambda \right) \\
&=&\int f_{\beta ,\lambda }(q_{S},q_{ST},q_{T})\text{ }d\Lambda \left( \beta ,\lambda \right) ,
\end{eqnarray*}%
the weighted-average statistic based on (\ref{(correct weight)}) depends only on $q_{S}$, $\left\vert q_{ST}\right\vert $, and $q_{T}$. The MM2 statistic, by contrast, is already sign invariant: the MM2 prior for $\beta $ and $\mu $ was chosen precisely so that the resulting statistic has this property.
Tests based on $h_{2}\left( s,t\right) $ are naturally two-sided tests for the null $H_{0}:\beta =\beta _{0}$ against the alternative $H_{1}:\beta \neq \beta _{0}$ when $\Sigma =\Omega \otimes \Phi $. This important property does not hold for standard tests based on $h_{1}\left( s,t\right) $. The WAP test (denoted ECS-HACIV) proposed recently by \citet{Olea15} is not \emph{sign} invariant either. Sections \ref{Numerical Sec} and \ref{Application Sec} present numerical simulations showing that all these WAP similar tests can behave like one-sided tests for some parameter values. In the next section, we will discuss ways to circumvent this problem whether or not $\Sigma $ has a Kronecker product structure.

\section{Weighted-Average Power Tests \label{Maximization Sec}}

So far, we have only described test statistics. Coupled with critical values, these statistics yield the test procedures commonly used in the literature. The Anderson-Rubin test rejects the null when $AR>c\left( k\right) $, where $c\left( d\right) $ is the $1-\alpha $ quantile of a chi-square distribution with $d$ degrees of freedom. The LM test rejects the null when $LM>c\left( 1\right) $. The conditional tests reject the null when each test statistic satisfies $\psi \left( S,T\right) >\kappa \left( T\right) $. Each critical value function $\kappa \left( T\right) $ is the null conditional quantile of $\psi $ given $T=t$; see \citet{Moreira03} for details (we omit the dependence of the critical value function on the statistic $\psi $ when there is no ambiguity). For example, the CQLR test rejects the null when the QLR statistic defined in (\ref{(QLR stat)}) is larger than the conditional critical value.

Our goal in this section is to find optimal tests. Specifically, a test is defined to be a measurable function $\phi \left( s,t\right) $ that is bounded by $0$ and $1$.
For a given outcome, the test rejects the null with probability $\phi \left( s,t\right) $ and accepts the null with probability $% 1-\phi \left( s,t\right) $, e.g., the Anderson-Rubin test is simply $I\left( AR>c\left( k\right) \right) $ where $I\left( \cdot \right) $ is the indicator function. The test is said to be nonrandomized if $\phi $ only takes values $0$ and $1$; otherwise, it is called a randomized test. We note that% \begin{equation*} E_{\beta ,\mu }\phi \left( S,T\right) \equiv \int \phi \left( s,t\right) f_{\beta ,\mu }\left( s,t\right) \text{ }d\left( s,t\right) \end{equation*}% is the probability of rejecting the null when the parameters are $\beta $ and $\mu $. The object $E_{\beta ,\mu }\phi \left( S,T\right) $ taken as a function of $\beta $ and $\mu $ gives the power curve for the test $\phi $. In particular, $E_{\beta _{0},\mu }\phi \left( S,T\right) $ gives the null rejection probability. By Tonelli's theorem, we can write \begin{equation} E_{\Lambda }\phi \left( S,T\right) =\int E_{\beta ,\mu }\phi \left( s,t\right) d\Lambda \left( \beta ,\mu \right) =\int \phi \left( s,t\right) h_{\Lambda }\left( s,t\right) \text{ }d\left( s,t\right) , \end{equation}% where $h_{\Lambda }\left( s,t\right) $ is defined in (\ref{(WAP density)}). Hence, $E_{\Lambda }\phi \left( S,T\right) $ is the weighted-average power for the measure $\Lambda \left( \beta ,\mu \right) $. A natural first step is to find tests that maximize WAP and have size no larger than $\alpha $. That is,% \begin{equation} \max_{0\leq \phi \leq 1}E_{\Lambda }\phi \left( S,T\right) \text{, where }% E_{\beta _{0},\mu }\phi \left( S,T\right) \leq \alpha ,\forall \mu . \end{equation}% Since the parameter $\mu $ is unknown, finding a WAP test with correct size is nontrivial. The task entails finding a least favorable distribution $% \Lambda _{0}$ to construct the WAP test as described in Section 3.8 of % \citet{LehmannRomano05}. 
This test rejects the null when the likelihood ratio is large:% \begin{equation} \frac{h_{\Lambda }\left( s,t\right) }{\int f_{\beta _{0},\mu }^{T}\left( t\right) \text{ }d\overline{\Lambda }\left( \mu \right) }>\kappa , \end{equation}% where $\kappa \cdot \overline{\Lambda }$ is really a Lagrange multiplier in an infinite-dimensional space; see Lemma 3 of \citet{MoreiraMoreira10} for details\footnote{% Also available as Lemma 2 in the most recent version, % \citet{MoreiraMoreira13}. Both versions are available on Marcelo Moreira's website: http://www.fgv.br/professor/mjmoreira/}. For a parameter $\mu $ of small dimension, we can apply numerical algorithms to approximate the WAP test (such as the one by \citet{ElliottMuellerWatson15} or the linear programming algorithm of \citet{MoreiraMoreira13}). The task of finding tests with correct size is simplified if we can find optimal similar tests:% \begin{equation} \max_{0\leq \phi \leq 1}E_{\Lambda }\phi \left( S,T\right) \text{, where }% E_{\beta _{0},\mu }\phi \left( S,T\right) =\alpha ,\forall \mu . \label{(WAP similar)} \end{equation}% Because the statistic $T$ is sufficient and complete under the null, any similar test is conditionally similar (for almost all levels $T=t$). Hence, we can solve% \begin{equation*} \max_{0\leq \phi \leq 1}E_{\Lambda }\phi \left( S,t\right) \text{, where }% E_{\beta _{0}}\phi \left( S,t\right) =\alpha . \end{equation*}% The WAP similar test rejects the null when% \begin{equation} \frac{h_{\Lambda }\left( s,t\right) }{f_{\beta _{0}}^{S}\left( s\right) \cdot h_{\Lambda }^{T}\left( t\right) }>\kappa \left( t\right) , \label{(WAP similar hT)} \end{equation}% where $\kappa \left( t\right) $ is a conditional critical value function and $h_{\Lambda }^{T}\left( t\right) =\int h_{\Lambda }\left( s,t\right) $ $ds$. 
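In practice, the conditional critical value function $\kappa \left( t\right) $ in (\ref{(WAP similar hT)}) can be approximated by simulation: under the null, $S$ is standard normal and independent of $T$, so conditioning on $T=t$ amounts to holding $t$ fixed while drawing $S$. A minimal sketch (the statistic \texttt{psi} is a generic placeholder; the sanity check uses the Anderson-Rubin statistic $S^{\prime }S$, whose conditional quantile is simply the $\chi ^{2}\left( k\right) $ quantile):

```python
import numpy as np
from scipy.stats import chi2

def conditional_cv(psi, t, k, alpha=0.05, J=200_000, seed=0):
    """Approximate the 1-alpha null conditional quantile of psi(S, t).

    Under H0, S ~ N(0, I_k) independently of T, so conditioning on
    T = t amounts to holding t fixed while simulating S.
    """
    rng = np.random.default_rng(seed)
    draws = rng.standard_normal((J, k))   # S^(j) ~ N(0, I_k) under the null
    vals = psi(draws, t)                  # vectorized statistic over draws
    return np.quantile(vals, 1.0 - alpha)

# Sanity check with the AR statistic S'S (which does not depend on t):
ar = lambda s, t: np.sum(s**2, axis=1)
cv = conditional_cv(ar, t=None, k=5)
# cv should be close to the chi-square(5) 95% quantile
```

The same routine applies to the ratio statistic in (\ref{(WAP similar hT)}) once $h_{\Lambda }$ is computable, since only the draws of $S$ need to be simulated.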
By Tonelli's theorem, \begin{eqnarray*} h_{\Lambda }^{T}\left( t\right) &=&\int \int f_{\beta ,\mu }\left( s,t\right) \text{ }d\Lambda \left( \beta ,\mu \right) \text{ }ds \\ &=&\int \int f_{\beta ,\mu }\left( s,t\right) ds\text{ }d\Lambda \left( \beta ,\mu \right) \\ &=&\int f_{\beta ,\mu }^{T}\left( t\right) \text{ }d\Lambda \left( \beta ,\mu \right) . \end{eqnarray*} For arbitrary weights $\Lambda $, neither the WAP test with correct size nor the WAP similar test is guaranteed to have overall good power in finite samples\footnote{% As the geneticist and statistician Anthony W. F. \citet[p. 60]{Edwards92} remarks, \textquotedblleft It is sometimes said, in defence of the Bayesian concept, that the choice of prior distribution is unimportant in practice, because it hardly influences the posterior distribution at all when there are moderate amounts of data. The less said about this `defence' the better.\textquotedblright}. Take for a moment the case where $\Sigma =\Omega \otimes \Phi $. The WAP tests based on $% h_{1}\left( s,t\right) $ can have very low power for some parameter values. Because the WAP test with correct size and the WAP similar test based on the MM1 weight are not sign invariant, they can actually behave like one-sided tests for parts of the parameter space. This issue is analogous to the problem with conditional Wald tests found by % \citet{AndrewsMoreiraStock07} which leads them to give a very specific recommendation: \textquotedblleft \textit{The evident conclusion for applied work is that researchers choosing among these tests (including conditional Wald) should use the CLR test. 
The strong asymptotic bias and often low power of the conditional Wald tests indicate that they can yield misleading inferences and are not useful, even as robustness checks.}% \textquotedblright\ For our purposes we can of course circumvent this problem by replacing $h_{1}\left( s,t\right) $ by a sign invariant weight given by (\ref{(correct weight)}) or by the density $h_{2}\left( s,t\right) $% . However, this solution relies on model symmetries (i.e., sign invariance) and only works for Kronecker covariance matrices. On the other hand, \citet{MillsMoreiraVilela14} find approximately unbiased Wald tests which have overall good power. Their procedure only works for the model with homoskedastic errors, but it does hint that imposing additional constraints can actually help to obtain optimal tests with overall good power for general $\Sigma $. \section{Two-Sided Boundary Conditions \label{Conditions Sec}} The WAP similar test based on $h_{2}\left( s,t\right) $ is a two-sided test in the homoskedastic case precisely because the sign-group of transformations preserves the two-sided testing problem when $\Sigma =\Omega \otimes \Phi $. More specifically, because this test depends only on $Q_{S}$% , $\left\vert Q_{ST}\right\vert $, and $Q_{T}$ it is locally unbiased; see Corollary 1 of \citet{AndrewsMoreiraStock06b}. When errors are autocorrelated and heteroskedastic, however, the covariance $\Sigma $ typically does not have a Kronecker product structure. In this case, the WAP similar test (or a WAP test with correct size) based on $h_{2}\left( s,t\right) $ may not have good power for parts of the parameter space. Worse yet, when the covariance matrix lacks Kronecker product structure, there is actually no sign invariance argument to accommodate two-sided testing. 
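In practice, whether a given $\Sigma $ admits such a Kronecker factorization can itself be checked numerically: viewing $\Sigma $ as a $2\times 2$ grid of $k\times k$ blocks, $\Sigma =\Omega \otimes \Phi $ holds exactly when the $4\times k^{2}$ matrix stacking the vectorized blocks has rank one (the nearest-Kronecker-product rearrangement of Van Loan and Pitsianis). A sketch under that block ordering:

```python
import numpy as np

def kron_defect(Sigma, k):
    """Second singular value of the block rearrangement of Sigma.

    Sigma is 2k x 2k, viewed as a 2 x 2 grid of k x k blocks.
    The value is zero exactly when Sigma = Omega kron Phi.
    """
    rows = [Sigma[i*k:(i+1)*k, j*k:(j+1)*k].ravel()
            for i in range(2) for j in range(2)]
    R = np.vstack(rows)                      # 4 x k^2 rearrangement
    return np.linalg.svd(R, compute_uv=False)[1]

k = 4
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)); Omega = A @ A.T + np.eye(2)
B = rng.standard_normal((k, k)); Phi = B @ B.T + np.eye(k)
Sigma = np.kron(Omega, Phi)
print(kron_defect(Sigma, k))                 # ~0: exact Kronecker product
Sigma_h = Sigma + np.diag(np.arange(2 * k))  # heteroskedastic perturbation
print(kron_defect(Sigma_h, k))               # > 0: no such factorization
```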
\bigskip

\begin{proposition}
\label{No sign invariance Prop} Assume that we cannot write $\Sigma $ as $\Omega \otimes \Phi $ for a $2\times 2$ matrix $\Omega $ and a $k\times k$ matrix $\Phi $, both symmetric and positive definite. Then for the data group of transformations $\left[ S,T\right] \rightarrow \left[ \pm S,T\right] $, there exists no group of transformations in the parameter space which preserves the testing problem.
\end{proposition}

\bigskip

Proposition \ref{No sign invariance Prop} asserts that we cannot simplify the two-sided hypothesis testing problem using sign invariance arguments. It is then much more difficult to find a weight so that the test is, loosely speaking, two-sided. An unbiasedness condition instead adjusts the weights automatically (whether $\Sigma $ has a Kronecker product structure or not). Hence, we can seek approximately optimal unbiased tests.

An important property of WAP tests is admissibility. Theorem \ref{Admissibility Thm} below shows that the WAP unbiased tests are admissible. The proof follows exactly the same steps as the proof for admissibility of WAP similar tests of \citet{MoreiraMoreira13} (see Comment 1 after their Theorem 4)\footnote{\citet{Olea15} provides an alternative proof that similar tests are admissible by contradiction.}. For completeness, we provide a proof in the appendix for the following theorem.

\bigskip

\begin{theorem}
\label{Admissibility Thm} Let $\left( \beta ,\mu \right) \in \mathbb{B}\times \mathbb{P}$, where both sets are compact. Assume that the weight $\Lambda $ appearing in (\ref{(WAP density)}) has full support on $\mathbb{B}\times \mathbb{P}$. Then there exists a sequence of Bayes tests $\phi _{m}\left( s,t\right) $ which weakly converges (in the weak* topology on the space $\mathcal{L}_{\infty }(\mathbb{R}^{2k})$) to the WAP unbiased test. In particular, the WAP\ unbiased test is admissible.
\end{theorem}

\textbf{Comments: 1.
}The weak convergence guarantees, for example, that the limiting power function of $\phi _{m}\left( s,t\right) $ is the power function of the WAP\ unbiased test. See \citet{MoreiraMoreira13} for details on weak convergence of tests. \textbf{2. }The theorem assumes the parameter space is compact. It may be possible to drop this assumption with some additional technical conditions; see \citet{Lehmann52}. The compactness assumption, however, may not be overly restrictive in practice. First, one could argue that we can pin down a region large enough in which the parameter lies. Second, the usual mathematical and statistical software packages have limited numerical accuracy, so for all practical purposes the weight $\Lambda $ in the average density $h_{\Lambda }\left( s,t\right) $ has support in a compact set. \bigskip Proposition \ref{No sign invariance Prop} shows that there is no sign group structure which preserves the null and alternative. This makes the task of finding a weight function $h_{\Lambda }\left( s,t\right) $ which yields a WAP\ unbiased test difficult with HAC errors. Instead of seeking a weight function $\Lambda $ so that the WAP test is approximately unbiased, we can select an arbitrary weight and find the optimal test among unbiased tests; see \citet{MoreiraMoreira13}. In practice, it would be computationally intensive to handle so many constraints of the form $E_{\beta ,\mu }\phi \left( S,T\right) \geq E_{\beta _{0},\mu _{0}}\phi \left( S,T\right) $ for any scalar $\beta $ and $k$-dimensional vectors $\mu $ and $\mu _{0}$, especially when $k$ is large. Instead we choose two different restrictions. The first condition is based on the local power around the null hypothesis. It is a weaker condition than unbiasedness, so it does not rule out admissibility. The second condition is a stronger requirement but is easier to implement. Better yet, numerical simulations will show it yields little power reduction compared to the first condition. 
Both conditions and their associated WAP tests are presented next. \subsection{Locally Unbiased (LU) Condition \label{LU subsec}} If the test is unbiased, the derivative of the power function must be equal to zero under the null. The next proposition uses this fact and completeness of $T$ to provide a necessary condition for a test to be unbiased. This locally unbiased (LU) condition states that the test must be similar and uncorrelated with linear combinations (which depend on the instruments' coefficient $\mu $) of the pivotal statistic $S$. \bigskip \begin{proposition} \label{LU Prop} A test is said to be locally unbiased (LU) if% \begin{equation} E_{\beta _{0},\mu }\phi \left( S,T\right) =\alpha \text{ and }E_{\beta _{0},\mu }\phi \left( S,T\right) S^{\prime }C_{\beta _{0}}\mu =0\text{, }% \forall \mu . \tag{LU} \label{(LU eq)} \end{equation}% If a test is unbiased, then it is LU. \end{proposition} \bigskip In the case $k=1$ where the model is exactly identified, we have an optimality result for any choice of $\Lambda $. The Anderson-Rubin test is the uniformly most powerful unbiased (UMPU) test and has power function depending on the noncentrality parameter $\left( \beta -\beta _{0}\right) ^{2}C_{\beta _{0}}^{2}\mu ^{2}$. We can prove this result directly from Theorem 2-(a) of \citeauthor{Moreira01} (\citeyear{Moreira01}, % \citeyear{Moreira09a}) for homoskedastic errors (with the scalar $\mu $ and matrix $\Omega $ being replaced by $\mu _{\Phi }$ and $\Sigma $). As this setup resembles the just-identified model with homoskedastic errors, optimality of the Anderson-Rubin test for HAC errors and $k=1$ follows straightforwardly. 
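Since $AR=S^{2}$ in the just-identified case, with $S$ normal with unit variance, the power of the Anderson-Rubin test is a noncentral $\chi ^{2}\left( 1\right) $ tail probability. A quick Monte Carlo check of this fact (the noncentrality value below is an arbitrary illustration):

```python
import numpy as np
from scipy.stats import ncx2, chi2

alpha, delta = 0.05, 1.5          # delta: assumed mean of S under H1
c1 = chi2.ppf(1 - alpha, df=1)    # c(1), the chi-square(1) critical value

rng = np.random.default_rng(0)
S = delta + rng.standard_normal(500_000)   # S ~ N(delta, 1) under H1
mc_power = np.mean(S**2 > c1)              # Monte Carlo rejection rate

# S^2 follows a noncentral chi-square(1) with noncentrality delta^2:
analytic = 1 - ncx2.cdf(c1, df=1, nc=delta**2)
print(mc_power, analytic)                  # the two should agree closely
```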
\bigskip \begin{proposition} \label{Just Ident Prop} If $k=1$, the Anderson-Rubin test is the uniformly most powerful unbiased test and has a power function given by% \begin{equation*} P_{\beta ,\mu }\left( AR>c\left( 1\right) \right) =1-G\left( c\left( 1\right) ;\frac{\left( \beta -\beta _{0}\right) ^{2}\mu ^{2}}{b_{0}^{\prime }\Sigma b_{0}}\right) , \end{equation*}% where $G\left( \cdot ;\delta ^{2}\right) $ is the noncentral $\chi ^{2}\left( 1\right) $ distribution function with noncentrality parameter $% \delta ^{2}$. Furthermore, the LM\ and CQLR tests are equivalent to the Anderson-Rubin test, and are also optimal. \end{proposition} \bigskip Following Proposition \ref{LU Prop}, the WAP-LU test solves \begin{equation} \underset{0\leq \phi \leq 1}{\max }E_{\Lambda }\phi \left( S,T\right) \text{% , where }E_{\beta _{0},\mu }\phi \left( S,T\right) =\alpha \text{ and }% E_{\beta _{0},\mu }\phi \left( S,T\right) S^{\prime }C_{\beta _{0}}\mu =0,\forall \mu . \label{(WAP-LU)} \end{equation}% The optimal tests based on $h_{1}\left( s,t\right) $ and $h_{2}\left( s,t\right) $ are denoted respectively MM1-LU and MM2-LU tests. In the just-identified model, the MM1-LU test is shown to be the uniformly most powerful unbiased test. The MM2-LU\ test is equivalent to the MM2 similar test and is also optimal. \bigskip \begin{proposition} \label{Just Ident LU prop} The following hold when $k=1$:\newline \emph{(a)} The MM2-LU\ and MM2 similar tests are equivalent and uniformly most powerful unbiased tests.\newline \emph{(b)} Both MM1-LU and MM2-LU tests are uniformly most powerful unbiased tests. \end{proposition} \textbf{Comments: 1. }The MM2 similar test automatically satisfies the LU condition when $k=1$. Hence, the MM2-LU and MM2 similar tests are equivalent when the model is exactly identified. \textbf{2.} The MM1 similar test is not locally unbiased even when $k=1$. 
Close inspection of the weighted density $h_{1}\left( s,t\right) $ shows that $d_{\beta }/c_{\beta }$ is the relative contribution of the one-sided $S\cdot T$ statistic to the $AR=S^{2}$ statistic. If $\Sigma $ is close to being singular (that is, $\left\vert \Sigma \right\vert $ is near zero), the ratio $d_{\beta }/c_{\beta }$ can diverge to infinity. The MM1 test can then behave as a one-sided test. We will illustrate this problem numerically in Section \ref{Numerical Sec}.

\bigskip

In the case $k>1$ where the model is overidentified, we no longer have a uniformly most powerful unbiased test. However, we can still find WAP tests which are locally unbiased. Relaxing both constraints in (\ref{(WAP-LU)}) assures us of the existence of Lagrange multipliers; see \citet{MoreiraMoreira13}. Therefore, we solve the approximated maximization problem:%
\begin{eqnarray}
\underset{0\leq \phi \leq 1}{\max }E_{\Lambda }\phi \left( S,T\right) \text{, where }\alpha -\epsilon &\leq &E_{\beta _{0},\mu }\phi \left( S,T\right) \leq \alpha +\epsilon ,\forall \mu  \label{(relaxed optimization Boundary 1 eq)} \\
\text{and }E_{\beta _{0},\mu _{l}}\phi \left( S,T\right) S^{\prime }C_{\beta _{0}}\mu _{l} &=&0,\text{ for }l=1,...,m,  \notag
\end{eqnarray}%
when $\epsilon $ is small and the number of discretizations $m$ is large. The optimal test rejects the null hypothesis when%
\begin{equation}
h_{\Lambda }\left( s,t\right) -s^{\prime }C_{\beta _{0}}\sum_{l=1}^{m}c_{l}^{\epsilon }\mu _{l}f_{\beta _{0},\mu _{l}}\left( s,t\right) >\int f_{\beta _{0},\mu }\left( s,t\right) \text{ }d\Lambda _{\epsilon }\left( \mu \right) ,  \label{(WAP-LU test)}
\end{equation}%
where the measure $\Lambda _{\epsilon }$ and the scalars $c_{l}^{\epsilon }$, $l=1,...,m$, are the multipliers associated with the boundary constraints in the maximization problem (\ref{(relaxed optimization Boundary 1 eq)}).
We can use $f_{\beta _{0},\mu }\left( s,t\right) =$ $f_{\beta _{0}}^{S}\left( s\right) \times f_{\beta _{0},\mu }^{T}\left( t\right) $ to write (\ref{(WAP-LU test)}) as% \begin{equation} \frac{h_{\Lambda }\left( s,t\right) }{f_{\beta _{0}}^{S}\left( s\right) }% -s^{\prime }C_{\beta _{0}}\sum_{l=1}^{m}c_{l}^{\epsilon }\mu _{l}f_{\beta _{0},\mu _{l}}^{T}\left( t\right) >\int f_{\beta _{0},\mu }^{T}\left( t\right) \text{ }d\Lambda _{\epsilon }\left( \mu \right) . \end{equation}% Letting $\epsilon \downarrow 0$, the optimal test rejects the null hypothesis when \begin{equation} \frac{h_{\Lambda }\left( s,t\right) }{f_{\beta _{0}}^{S}\left( s\right) }% -s^{\prime }C_{\beta _{0}}\sum_{l=1}^{m}c_{l}\mu _{l}f_{\beta _{0},\mu _{l}}^{T}\left( t\right) >\kappa \left( t\right) , \end{equation}% where $\kappa \left( t\right) $ is the conditional $1-\alpha $ quantile of% \begin{equation} \frac{h_{\Lambda }\left( S,t\right) }{f_{\beta _{0}}^{S}\left( S\right) }% -S^{\prime }C_{\beta _{0}}\sum_{l=1}^{m}c_{l}\mu _{l}f_{\beta _{0},\mu _{l}}^{T}\left( t\right) . \end{equation}% This representation is very convenient as we can find \begin{equation} \kappa \left( t\right) =\lim_{\epsilon \downarrow 0}\int f_{\beta _{0},\mu }^{T}\left( t\right) \text{ }d\Lambda _{\epsilon }\left( \mu \right) \end{equation}% by numerical approximations of the conditional distribution instead of searching for an infinite-dimensional multiplier $\Lambda _{\epsilon }$. We then search for the values $c_{l}$ so that \begin{equation} E_{\beta _{0},\mu _{l}}\phi \left( S,T\right) S^{\prime }C_{\beta _{0}}\mu _{l}=\int \phi \left( s,t\right) s^{\prime }C_{\beta _{0}}\mu _{l}f_{\beta _{0}}^{S}\left( s\right) f_{\beta _{0},\mu _{l}}^{T}\left( t\right) =0, \end{equation}% by taking into consideration that $\kappa \left( t\right) $ depends on $% c_{l} $, $l=1,...,m$. 
We can find $c_{l}$, $l=1,...,m$ with a nonlinear numerical algorithm\footnote{% The two-step procedure just described is the usual \emph{substitution method} for a system of equations, but here we have an uncountable number of equations and unknowns.}. As an alternative procedure, we consider a condition stronger than the LU condition which is simpler to implement numerically. This strategy turns out to be useful because it provides a simple way to implement tests with overall good power. We explain this alternate condition next. \subsection{Strongly Unbiased (SU) Condition \label{SU subsec}} The LU\ condition asserts that the test $\phi $ is uncorrelated with a linear combination indexed by the instruments' coefficients $\mu $ and the pivotal statistic $S$. We note that the LU\ condition trivially holds if \begin{equation} E_{\beta _{0},\mu }\phi \left( S,T\right) =\alpha \text{ and }E_{\beta _{0},\mu }\phi \left( S,T\right) S=0,\forall \mu . \tag{SU} \label{(SU eq)} \end{equation}% That is, the test $\phi $ is uncorrelated with the $k$-dimensional statistic $S$ itself under the null. This strongly unbiased (SU)\ condition states that the test $\phi \left( S,T\right) $ is uncorrelated with $S$ for all instruments' coefficients $\mu $. The WAP-SU test based on the weight $% \Lambda $ solves% \begin{equation} \underset{0\leq \phi \leq 1}{\max }E_{\Lambda }\phi \left( S,T\right) \text{% , where }E_{\beta _{0},\mu }\phi \left( S,T\right) =\alpha \text{ and }% E_{\beta _{0},\mu }\phi \left( S,T\right) S=0,\forall \mu . \label{(WAP-SU)} \end{equation}% The optimal tests based on $h_{1}\left( s,t\right) $ and $h_{2}\left( s,t\right) $ are denoted respectively MM1-SU and MM2-SU tests. When $k=1$, the LU\ and SU conditions are equivalent (hence, the MM1-SU and MM2-SU tests are uniformly most powerful unbiased). When $k>1$, the following lemma proves the LU\ condition is strictly weaker than the SU condition. 
Hence, finding WAP similar tests that satisfy the SU instead of the LU\ condition in theory may entail unnecessary power losses. In practice, numerical simulations in Section \ref{Numerical Sec} indicate that there is little power gain --if any-- by using the LU\ instead of the SU\ condition (with the MM1-SU and MM2-SU tests having the advantage of being easier to implement). \bigskip \begin{lemma} \textbf{\label{LU not SU Lemma} }Define the integral% \begin{equation*} F_{\phi }(\mu _{1},\mu _{2})=E_{\beta _{0},D_{\beta _{0}}^{-1}\mu _{2}}\phi \left( s,t\right) s^{\prime }C_{\beta _{0}}\mu _{1}=\int \phi \left( s,t\right) s^{\prime }C_{\beta _{0}}\mu _{1}\cdot f_{\beta _{0}}^{S}\left( s\right) f_{\beta _{0},D_{\beta _{0}}^{-1}\mu _{2}}^{T}\left( t\right) \text{ }d\left( s,t\right) . \end{equation*}% For $k>1$, there exists a test function $\phi :\left[ S,T\right] \rightarrow % \left[ 0,1\right] $ such that $F_{\phi }(\mu _{1},\mu _{1})=0$ for all $\mu _{1}$, and $F_{\phi }(\mu _{1},\mu _{2})\neq 0$, for some $\mu _{1}$ and $% \mu _{2}$. \end{lemma} \bigskip Because the statistic $T$ is complete, we can carry on power maximization in (\ref{(WAP-SU)}) for each level of $T=t$:% \begin{equation} \underset{0\leq \phi \leq 1}{\max }E_{\Lambda }\phi \left( S,t\right) \text{% , where }E_{\beta _{0}}\phi \left( S,t\right) =\alpha \text{ and }E_{\beta _{0}}\phi \left( S,t\right) S=0, \label{(boundary SU eq)} \end{equation}% where the expectation is taken with respect to $S$ only. The WAP-SU test rejects the null when \begin{equation*} \frac{h_{\Lambda }\left( s,t\right) }{f_{\beta _{0}}^{S}\left( s\right) \cdot h_{\Lambda }^{T}\left( t\right) }>\kappa \left( s,t\right) , \end{equation*}% where the function $\kappa \left( s,t\right) =\overline{\kappa }_{0}\left( t\right) +s^{\prime }\overline{\kappa }_{1}\left( t\right) $ is such that the optimal test satisfies the SU condition. The term $h_{\Lambda }^{T}\left( t\right) $ can be absorbed in the critical value function. 
For numerical stability, however, we recommend keeping it so that the numerator and denominator are of the same order of magnitude.

In practice, we can find $\overline{\kappa }_{0}\left( t\right) $ and $\overline{\kappa }_{1}\left( t\right) $ using linear programming based on simulations for the statistic $S$. Consider the approximated problem
\begin{eqnarray*}
\max_{0\leq x^{\left( j\right) }\leq 1} &&\text{ }J^{-1}\sum_{j=1}^{J}x^{\left( j\right) }\frac{h_{\Lambda }\left( s^{\left( j\right) },t\right) }{h_{\Lambda }^{T}\left( t\right) }\exp \left( s^{\left( j\right) \prime }s^{\left( j\right) }/2\right) \left( 2\pi \right) ^{k/2} \\
\text{s.t.} &&\text{ }J^{-1}\sum_{j=1}^{J}x^{\left( j\right) }=\alpha \text{ and} \\
&&\text{ }J^{-1}\sum_{j=1}^{J}x^{\left( j\right) }s_{l}^{\left( j\right) }=0,\text{ for }l=1,...,k.
\end{eqnarray*}%
Each draw $S^{\left( j\right) }$ is iid standard normal:
\begin{equation*}
S^{\left( j\right) }=\left[
\begin{array}{c}
S_{1}^{\left( j\right) } \\
\vdots \\
S_{k}^{\left( j\right) }
\end{array}
\right] \sim N\left( 0,I_{k}\right) .
\end{equation*}%
We note that for the linear programming, the only term which depends on $T=t$ is $h_{\Lambda }\left( s^{\left( j\right) },t\right) /h_{\Lambda }^{T}\left( t\right) $. The multipliers for this linear programming problem are the critical value functions $\overline{\kappa }_{0}\left( t\right) $ and $\overline{\kappa }_{1}\left( t\right) $. To speed up the numerical algorithm, we can use the same sample $S^{\left( j\right) }$, $j=1,...,J,$ for every level $T=t$.

Finally, we use the WAP test found in (\ref{(boundary SU eq)}) to find a useful \textit{two-sided power envelope}. The next proposition finds the optimal test for any given alternative which satisfies the SU\ condition.
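The linear program above is small enough to hand to an off-the-shelf solver. A minimal sketch with \texttt{scipy.optimize.linprog}, in which the objective weights $w_{j}$ stand in for the ratio $h_{\Lambda }\left( s^{\left( j\right) },t\right) /h_{\Lambda }^{T}\left( t\right) $ times the importance-sampling correction (here a toy positive weight, since the actual values depend on the chosen $\Lambda $ and on $t$):

```python
import numpy as np
from scipy.optimize import linprog

alpha, k, J = 0.05, 3, 2000
rng = np.random.default_rng(2)
S = rng.standard_normal((J, k))      # S^(j) iid N(0, I_k)
w = 1.0 + S[:, 0]**2                 # toy stand-in for the WAP weights

# max (1/J) sum_j x_j w_j   s.t.  (1/J) sum_j x_j            = alpha,
#                                 (1/J) sum_j x_j S_l^(j)    = 0, l = 1..k,
#                                 0 <= x_j <= 1.
A_eq = np.vstack([np.ones(J), S.T]) / J
b_eq = np.concatenate([[alpha], np.zeros(k)])
res = linprog(-w / J, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")

x = res.x
print(res.status, x.mean())          # status 0; mean rejection rate = alpha
```

The dual variables attached to the equality constraints are the simulated counterparts of $\overline{\kappa }_{0}\left( t\right) $ and $\overline{\kappa }_{1}\left( t\right) $.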
\bigskip

\begin{proposition}
\label{POSU Prop} The optimal SU test for a point alternative $\left( \beta ,\mu \right) $ rejects the null hypothesis when
\begin{equation}
\frac{\left( s^{\prime }C_{\beta _{0}}\mu \right) ^{2}}{\mu ^{\prime }C_{\beta _{0}}^{2}\mu }>c(1).  \label{(POSU test eq)}
\end{equation}%
This test is denoted the Point Optimal Strongly Unbiased (POSU) test and has power given by%
\begin{equation*}
P_{\beta ,\mu }\left( \frac{\left( S^{\prime }C_{\beta _{0}}\mu \right) ^{2}}{\mu ^{\prime }C_{\beta _{0}}^{2}\mu }>c\left( 1\right) \right) =1-G\left( c\left( 1\right) ;\left( \beta -\beta _{0}\right) ^{2}\mu ^{\prime }C_{\beta _{0}}^{2}\mu \right) ,
\end{equation*}%
where $G\left( \cdot ;\delta ^{2}\right) $ is the noncentral $\chi ^{2}\left( 1\right) $ distribution function with noncentrality parameter $\delta ^{2}$.
\end{proposition}

\textbf{Comments: 1. }The POSU test does not depend on $\beta $ but does depend on the direction of the vector $C_{\beta _{0}}\mu $.

\textbf{2.} When $k=1$, the Anderson-Rubin and POSU\ tests are the same.

\bigskip

The power plot of $1-G\left( c\left( 1\right) ;\left( \beta -\beta _{0}\right) ^{2}\mu ^{\prime }C_{\beta _{0}}^{2}\mu \right) $ as $\beta $ and $\mu $ change yields the two-sided power envelope. This power envelope is the two-sided analogue of the one-sided power envelope among similar tests. This power upper bound, based on the Point Optimal Similar (POS) test for the alternative $\left( \beta ,\mu \right) $, is given by the plot of $1-\Phi \left( \sqrt{c\left( 1\right) }-\left\vert \beta -\beta _{0}\right\vert \sqrt{\mu ^{\prime }C_{\beta _{0}}^{2}\mu }\right) $, where $\Phi \left( \cdot \right) $ is the standard normal distribution function.

\section{Numerical Evaluation of WAP Tests \label{Numerical Sec}}

In this section, we provide numerical simulations for WAP\ tests based on the MM statistics. The MM tests are WAP similar tests based on $h_{1}\left( s,t\right) $ and $h_{2}\left( s,t\right) $.
The MM-LU\ and MM-SU tests also satisfy respectively the locally unbiased and strongly unbiased conditions. The goal in this section is to numerically illustrate the importance of using two-sided conditions to obtain tests with overall good power. We can write \begin{equation*} \Omega =\left[ \begin{array}{cc} \omega _{11}^{1/2} & 0 \\ 0 & \omega _{22}^{1/2}% \end{array}% \right] P_{\Omega }\left[ \begin{array}{cc} 1+\rho & 0 \\ 0 & 1-\rho% \end{array}% \right] P_{\Omega }^{\prime }\left[ \begin{array}{cc} \omega _{11}^{1/2} & 0 \\ 0 & \omega _{22}^{1/2}% \end{array}% \right] , \end{equation*}% where $P_{\Omega }$ is an orthogonal matrix and $\rho =\omega _{12}/\omega _{11}^{1/2}\omega _{22}^{1/2}$. For the numerical simulations, we specify $% \omega _{11}=\omega _{22}=1$. We use the decomposition of $\Omega $ to perform numerical simulations for a class of covariance matrices:% \begin{equation*} \Sigma =P_{\Omega }\left[ \begin{array}{cc} 1+\rho & 0 \\ 0 & 0% \end{array}% \right] P_{\Omega }^{\prime }\otimes diag\left( \varsigma _{1}\right) +P_{\Omega }\left[ \begin{array}{cc} 0 & 0 \\ 0 & 1-\rho% \end{array}% \right] P_{\Omega }^{\prime }\otimes diag\left( \varsigma _{2}\right) , \end{equation*}% where $\varsigma _{1}$ and $\varsigma _{2}$ are $k$-dimensional vectors. We consider two possible choices for $\varsigma _{1}$ and $\varsigma _{2}$. For the first design, we set $\varsigma _{1}=\varsigma _{2}=\left( 1/\varepsilon -1,1,...,1\right) ^{\prime }$. The covariance matrix then simplifies to a Kronecker product: $\Sigma =\Omega \otimes diag\left( \varsigma _{1}\right) $. For the non-Kronecker design, we set $\varsigma _{1}=\left( 1/\varepsilon -1,1,...,1\right) ^{\prime }$ and $\varsigma _{2}=$ $\left( 1,...,1,1/\varepsilon -1\right) ^{\prime }$. This setup captures the data asymmetry in extracting information about the parameter $\beta $ from each instrument. 
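The decomposition above is straightforward to implement; the sketch below (with $\omega _{11}=\omega _{22}=1$, so that $P_{\Omega }$ collects the eigenvectors of $\Omega $) builds $\Sigma $ for both designs and, in the first design, recovers the Kronecker form $\Sigma =\Omega \otimes diag\left( \varsigma _{1}\right) $.

```python
import numpy as np

def build_sigma(rho, s1, s2):
    """Sigma = P diag(1+rho, 0) P' kron diag(s1) + P diag(0, 1-rho) P' kron diag(s2),
    where P holds the eigenvectors of Omega = [[1, rho], [rho, 1]]."""
    P = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    E1 = P @ np.diag([1.0 + rho, 0.0]) @ P.T
    E2 = P @ np.diag([0.0, 1.0 - rho]) @ P.T
    return np.kron(E1, np.diag(s1)) + np.kron(E2, np.diag(s2))

k, rho, eps = 5, 0.9, 1.0 / 6.0                  # eps = (k + 1)^{-1}
s1 = np.r_[1.0 / eps - 1.0, np.ones(k - 1)]      # (1/eps - 1, 1, ..., 1)'
s2_kron = s1.copy()                              # Kronecker design
s2_non = np.r_[np.ones(k - 1), 1.0 / eps - 1.0]  # non-Kronecker design
Sigma_kron = build_sigma(rho, s1, s2_kron)
Sigma_non = build_sigma(rho, s1, s2_non)
Omega = np.array([[1.0, rho], [rho, 1.0]])
```

In the Kronecker design the two rank-one pieces recombine into $\Omega $, so the covariance matrix collapses to a single Kronecker product.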
For small $\varepsilon $, the angle between $\varsigma _{1}$ and $\varsigma _{2}$ is nearly $90^{\circ }$. We report numerical simulations for $\varepsilon =\left( k+1\right) ^{-1}$. As $k$ increases, the vector $\varsigma _{1}$ becomes orthogonal to $\varsigma _{2}$ in the non-Kronecker design. We set the parameter $\mu =\left( \lambda ^{1/2}/\sqrt{k}\right) 1_{k}$ for $% k=2,5,10,20$ and $\rho =-0.5,0.2,0.5,0.9$. We choose $\lambda /k=0.5,1,2,4,8,16$, which span the range from weak to strong instruments. We focus on tests with significance level 5\% for testing $\beta _{0}=0$. To conserve space, we report here only power plots for $k=5$, $\rho =0.9$, and $% \lambda /k=2,8$. The full set of simulations is available on Marcelo Moreira's website. We present plots for the power envelope and power functions against various alternative values of $\beta $ and $\lambda $. All results reported here are based on 1,000 Monte Carlo simulations. We plot power as a function of the rescaled alternative $\left( \beta -\beta _{0}\right) \lambda ^{1/2}$, which reflects the difficulty in making inference on $\beta $ for different instruments' strength. \begin{figure}[tbh] \caption{Power Comparison (Kronecker Variance)} \label{fig:Kronecker}\centering \bigskip \minipage{0.5\textwidth} \centering % \includegraphics[width=5.5cm]{eminemHacivKron1rho09k5f2MM1.pdf} \endminipage% \hfill \minipage{0.5\textwidth} \centering % \includegraphics[width=5.5cm]{eminemHacivKron1rho09k5f8MM1.pdf} \endminipage % \hfill \minipage{0.5\textwidth} \bigskip \centering % \includegraphics[width=5.5cm]{eminemHacivKron1rho09k5f2MM2.pdf} \endminipage% \hfill \minipage{0.5\textwidth} \bigskip \centering % \includegraphics[width=5.5cm]{eminemHacivKron1rho09k5f8MM2.pdf} \endminipage % \hfill \end{figure} Figure \ref{fig:Kronecker} reports numerical results for the Kronecker product design. 
All four pictures present the power envelope and power curves for two existing tests, the Anderson-Rubin ($AR$) and score ($LM$) tests. The first two graphs plot the power curves for the three WAP tests based on the MM1 statistic with $\sigma ^{2}=10$. All three tests reject the null when the $h_{1}\left( s,t\right) $ statistic is larger than an adjusted critical value function. In practice, we approximate these critical value functions with 10,000 replications. The MM1 test sets the critical value function to be the 95\% empirical quantile of $h_{1}\left( S,t\right) $. The MM1-SU\ test uses a conditional linear programming algorithm to find its critical value function. The MM1-LU test uses a nonlinear optimization package. The AR test has power considerably lower than the power envelope both when instruments are weak ($\lambda /k=2$) and when they are strong ($\lambda /k=8$). The LM test does not perform well when instruments are weak, and its power function is not monotonic even when instruments are strong. These two facts about the AR\ and LM tests are well documented in the literature; see \citet{Moreira03} and AMS06. The figure also reveals some salient findings for the tests based on the MM1 statistic. First, all MM1-based tests have correct size. Second, the MM1 similar test can be so severely biased that it has zero power in parts of the parameter space. Hence, a naive choice for the density can yield a WAP test with overall poor power. We can eliminate this problem by imposing an unbiased condition when selecting an optimal test. The MM1-SU\ test is easy to implement and has power closer to the power upper bound. When instruments are weak, its power lies moderately below the reported power envelope. This is expected, as the number of parameters is large\footnote{The MM1-SU power is nevertheless close to the two-sided power envelope for orthogonally invariant tests as in AMS06 (which is applicable to this design, but not reported here).}.
When instruments are strong, its power is virtually the same as the power envelope. To support the use of the MM1-SU\ test, we also consider the MM1-LU test, which imposes a weaker unbiased condition. Close inspection of the graphs shows that the derivative of the power function of the MM1 test is different from zero at $\beta =\beta _{0}$. This observation suggests that the power curve of the WAP test would change considerably if we were to force the power derivative to be zero at $\beta =\beta _{0}$. Indeed, we implement the MM1-LU test where the locally unbiased condition is imposed at only one point, the true parameter $\mu $. This parameter is of course unknown to the researcher, and this test is not feasible. However, by imposing the locally unbiased condition for other values of the instruments' coefficients as well, the WAP test would only become smaller ---not larger. The power curves of the MM1-LU\ and MM1-SU\ tests are very close, which shows that there is not much to be gained by relaxing the strongly unbiased condition. The last two graphs plot the power curves for the three WAP tests based on the MM2 statistic with $\zeta =10$. By using the density $h_{2}\left( s,t\right) $, we avoid the pitfalls of the MM1 test. Recall that $h_{2}\left( s,t\right) $ is invariant to those data transformations which preserve the two-sided hypothesis testing problem. Hence, the MM2 similar test is unbiased and has overall good power without imposing any additional unbiased conditions. The graphs illustrate this theoretical finding, as the MM2, MM2-SU, and MM2-LU tests have numerically the same power curves. This conclusion changes dramatically when the covariance matrix is no longer a Kronecker product.
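As an implementation note, the simulation-based critical value functions used above reduce to conditional quantiles: for the MM1 similar test, $\kappa \left( t\right) $ is the 95\% empirical quantile of $h_{1}\left( S,t\right) $ over draws $S\sim N\left( 0,I_{k}\right) $. A generic sketch, with a placeholder statistic standing in for $h_{1}$ (whose closed form depends on the prior):

```python
import numpy as np
from scipy.stats import chi2

def mc_critical_value(h, t, k, alpha=0.05, reps=10_000, seed=0):
    """1 - alpha empirical quantile of h(S, t) over draws S ~ N(0, I_k):
    the conditional critical value function kappa(t) of a similar test."""
    rng = np.random.default_rng(seed)
    draws = rng.standard_normal((reps, k))
    vals = np.array([h(s, t) for s in draws])
    return np.quantile(vals, 1.0 - alpha)

# Placeholder statistic: h(s, t) = s's, which is chi^2_k under the null,
# so kappa should sit near the 95% chi^2_k quantile.
kappa = mc_critical_value(lambda s, t: s @ s, t=None, k=5)
```

Because the null distribution of $S$ is free of nuisance parameters, the same Monte Carlo draws can be reused for every conditioning value $t$, as noted earlier.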
\begin{figure}[tbh] \caption{Power Comparison (Non-Kronecker Variance)} \label{fig:Non-Kronecker}\centering \bigskip \minipage{0.5\textwidth} % \centering \includegraphics[width=5.5cm]{eminemHacivKron0rho09k5f2MM1.pdf} % \endminipage\hfill \minipage{0.5\textwidth} \centering % \includegraphics[width=5.5cm]{eminemHacivKron0rho09k5f8MM1.pdf} \endminipage % \hfill \minipage{0.5\textwidth} \bigskip \centering % \includegraphics[width=5.5cm]{eminemHacivKron0rho09k5f2MM2.pdf} \endminipage% \hfill \minipage{0.5\textwidth} \bigskip \centering % \includegraphics[width=5.5cm]{eminemHacivKron0rho09k5f8MM2.pdf} \endminipage % \hfill \end{figure} Figure \ref{fig:Non-Kronecker} presents the power curves for all reported tests for the non-Kronecker design. Both MM1 and MM2 tests are severely biased and have overall bad power. For each design, we can make the tests approximately unbiased by choosing the $\sigma ^{2}$ and $\zeta $ parameters large enough. However, this unbiasedness control is pointwise in the parameter space. We can always find a design such that each test behaves as a one-sided test and has very low power in parts of the parameter space. Hence, the strong asymptotic bias and often-low power of the conditional Wald tests found by \citet{AndrewsMoreiraStock07} also hold for the MM1 (even for the homoskedastic IV model) and MM2 similar tests (only for the HAC-IV model). These WAP similar tests are highly biased with power equal to zero in some parts of the parameter space. Therefore, just as % \citet{AndrewsMoreiraStock07} object to the use of conditional Wald tests, we do not recommend the MM1 and MM2 similar tests for empirical researchers. Proposition \ref{No sign invariance Prop} shows that we cannot find a group of data transformations which preserve the two-sided testing problem with heteroskedastic-autocorrelated errors. Hence, a choice for the density for the WAP test based on symmetry considerations is not obvious. 
The correct density choice can be particularly difficult due to the large parameter dimension (the coefficients $\mu $ and the covariance $\Sigma $). Instead, we can endogenize the weight choice so that the WAP test will be automatically unbiased. This is done by the MM1-LU and MM2-LU tests. These two tests perform as well as the MM1-SU and MM2-SU\ tests. Because the latter two tests are easy to implement, we recommend their use in empirical practice.

\section{Asymptotic Theory \label{Asymptotic Sec}}

None of the theoretical and numerical results so far relies on the sample size $n$, as we have assumed the statistics $S$ and $T$ to be exactly normally distributed with known variance $\Sigma $. In this section, we relax this assumption at the cost of asymptotic approximations. Let $z_{i}$ and $v_{i}$ denote the $i$-th rows of $Z$ and $V$, respectively, written as column vectors of dimensions $k$ and $2$. We make the following two assumptions as the sample size $n$ grows.

\bigskip

\noindent \textbf{Assumption 1. }$n^{-1}Z^{\prime }Z=n^{-1}\sum_{i=1}^{n}z_{i}z_{i}^{\prime }\rightarrow _{p}D_{Z}$ for some positive definite $k\times k$ matrix $D_{Z}$.\smallskip

\bigskip

\noindent \textbf{Assumption 2. }$n^{-1/2}\sum_{i=1}^{n}\left( v_{i}\otimes z_{i}\right) \rightarrow _{d}N(0,\Sigma _{\infty })$ for some positive definite $2k\times 2k$ matrix $\Sigma _{\infty }$.

\bigskip

Assumption 1 holds, for example, for stationary and ergodic $z_{i}$ by Birkhoff's Ergodic Theorem. Assumption 2 holds under suitable conditions by a central limit theorem (CLT). It also requires that the long-run covariance matrix $\Sigma _{\infty }$ be positive definite, as is usual in the literature. We no longer omit the dependence of $\Sigma $ on the sample size $n$ and, hereinafter, write $\Sigma _{n}$. Assumption 2 asserts that $\Sigma _{\infty }$ is the limit of $\Sigma _{n}$ as $n$ grows.
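Assumption 1 is easy to visualize by simulation; in the sketch below (with iid rows, an assumption made only for illustration), the sample second-moment matrix $n^{-1}Z^{\prime }Z$ settles near its limit $D_{Z}$.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 3, 200_000
D_Z = np.diag([1.0, 2.0, 3.0])                   # illustrative limit matrix
Z = rng.standard_normal((n, k)) @ np.sqrt(D_Z)   # iid rows with E[z z'] = D_Z
DZ_hat = Z.T @ Z / n                             # Assumption 1: -> D_Z in probability
```

Under serial dependence the same limit obtains for stationary ergodic rows, but $\Sigma _{\infty }$ in Assumption 2 then differs from the contemporaneous covariance, which is why a long-run variance estimator is needed below.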
Let $\widehat{\Sigma }_{n}$ be a consistent estimator of $% \Sigma _{\infty }$ based on $\{\left( \widehat{v}_{i}\otimes z_{i}\right) :i\leq n\}$, where $\widehat{v}_{i}$ are reduced-form residuals. There are many HAC estimators in the literature that can be used for this purpose; see, e.g., \citet{NeweyWest87} and \citet{Andrews91}. For brevity, we do not provide an explicit set of conditions under which one or more of these HAC estimators is consistent; see \citet{Jansson02} for details. We note, however, that the presence of weak instruments does not complicate standard proofs of the consistency of HAC estimators. Indeed, the convergence for most estimators holds uniformly over all true parameters $\beta $ and $\pi $. We now introduce feasible versions of $S_{n}$ and $T_{n}$ with the variance $% \Sigma _{n}$ replaced by the estimator $\widehat{\Sigma }_{n}$:% \begin{eqnarray} \widehat{S}_{n} &=&\left[ \left( b_{0}^{\prime }\otimes I_{k}\right) \widehat{\Sigma }_{n}\left( b_{0}\otimes I_{k}\right) \right] ^{-1/2}\left( b_{0}^{\prime }\otimes I_{k}\right) \overline{R}_{n}\text{ and} \label{(S^ and T^ defn)} \\ \widehat{T}_{n} &=&\left[ \left( a_{0}^{\prime }\otimes I_{k}\right) \widehat{\Sigma }_{n}^{-1}\left( a_{0}\otimes I_{k}\right) \right] ^{-1/2}\left( a_{0}^{\prime }\otimes I_{k}\right) \widehat{\Sigma }_{n}^{-1}% \overline{R}_{n}, \notag \end{eqnarray}% where $\overline{R}_{n}=vec\left[ \left( Z^{\prime }Z\right) ^{-1/2}Z^{\prime }Y\right] $. Likewise, we define the feasible statistic $% \widehat{\psi }_{n}$ as $\psi \left( S,T,\Sigma ,D_{Z}\right) $ with the arguments being replaced by their sample analogues:% \begin{equation} \widehat{\psi }_{n}=\psi (\widehat{S}_{n},\widehat{T}_{n},\widehat{\Sigma }% _{n},\widehat{D}_{Z})\text{, where }\widehat{D}_{Z}=n^{-1}Z^{\prime }Z. \label{(Psi^ and Dz^ defn)} \end{equation} \bigskip \noindent \textbf{Assumption 3. 
}The prior distribution for $\left( \beta ,\pi \right) $ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^{k+1}$. Its density
\begin{equation*}
w(\beta ,\pi ,\widehat{D}_{Z})=w_{1}(\left. \pi \right\vert \beta ,\widehat{D}_{Z})\cdot w_{2}(\beta ,\widehat{D}_{Z})
\end{equation*}%
has full support and is a continuous function of $\pi $ and $\beta $.

\bigskip

Assumption 3 allows the density $w(\beta ,\pi ,\widehat{D}_{Z})$ to depend on the data through $\widehat{D}_{Z}$. This generalization allows us to cover all tests considered here; the density asymptotically behaves as $w(\beta ,\pi ,D_{Z})$ (and so we will omit the dependence of the weights on $\widehat{D}_{Z}$ out of convenience). Although the conditional density $w_{1}(\left. \pi \right\vert \beta )$ does not depend on $\beta $ for the MM1 tests, it does depend on $\beta $ for the MM2 tests. Assumption 3 also guarantees that the priors for $\beta $ and $\pi $ are not dogmatic, so that their influence vanishes asymptotically, as in the Bernstein-von Mises theorem. If we set the prior on $\mu $, then the associated prior on $\pi =\left( Z^{\prime }Z\right) ^{-1/2}\mu $ depends on the sample size. For example, the MM statistics introduced in (\ref{(h densities)}) use the prior $\mu \sim N\left( 0,\sigma ^{2}\Phi \right) $. For the associated prior on $\pi \sim N\left( 0,\left( \sigma ^{2}/n\right) \widehat{D}_{Z}^{-1/2}\Phi \widehat{D}_{Z}^{-1/2}\right) $ not to be sensitive to the sample size, the parameters $\sigma ^{2}$ and $\zeta $ present in the MM1 and MM2 statistics must eventually grow at the rate $n$. We make the dependence of $\Lambda \left( \beta ,\mu \right) $ on the sample size $n$ explicit and, hereinafter, use the notation $\Lambda _{n}$. We now analyze the asymptotic behavior of the WAP similar and WAP-SU tests.
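Before turning to asymptotics, we note that the feasible statistics in (\ref{(S^ and T^ defn)}) are simple to compute. A sketch, assuming the usual conventions $b_{0}=\left( 1,-\beta _{0}\right) ^{\prime }$ and $a_{0}=\left( \beta _{0},1\right) ^{\prime }$ (which are not restated in this section):

```python
import numpy as np

def inv_sqrtm(A):
    """Symmetric inverse square root via an eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(w ** -0.5) @ V.T

def feasible_S_T(Z, Y, Sigma_hat, beta0):
    """Feasible statistics S_hat and T_hat from the definitions in the text."""
    k = Z.shape[1]
    b0 = np.array([1.0, -beta0])                 # assumed convention
    a0 = np.array([beta0, 1.0])                  # assumed convention
    R = (inv_sqrtm(Z.T @ Z) @ Z.T @ Y).reshape(-1, order="F")  # vec[(Z'Z)^{-1/2} Z'Y]
    B = np.kron(b0, np.eye(k))                   # (b0' kron I_k)
    A = np.kron(a0, np.eye(k))                   # (a0' kron I_k)
    Sig_inv = np.linalg.inv(Sigma_hat)
    S = inv_sqrtm(B @ Sigma_hat @ B.T) @ (B @ R)
    T = inv_sqrtm(A @ Sig_inv @ A.T) @ (A @ Sig_inv @ R)
    return S, T

rng = np.random.default_rng(2)
Z, Y = rng.standard_normal((50, 4)), rng.standard_normal((50, 2))
S_hat, T_hat = feasible_S_T(Z, Y, np.eye(8), beta0=0.0)
```

With $\widehat{\Sigma }_{n}=I_{2k}$ and $\beta _{0}=0$, the statistics reduce to the two columns of $\left( Z^{\prime }Z\right) ^{-1/2}Z^{\prime }Y$, which provides a quick check on the sketch.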
Recall that both of these types of tests depend on the test statistic% \begin{equation} \frac{h_{\Lambda _{n}}\left( s,t\right) }{f_{\beta _{0}}^{S}\left( s\right) \cdot h_{\Lambda _{n}}^{T}\left( t\right) }. \label{(WAP original statistic)} \end{equation}% When instruments are weak, the numerator and denominator have the same order of magnitude. When instruments are strong, the integrands in the weighted densities $h_{\Lambda _{n}}\left( s,t\right) $ and $h_{\Lambda _{n}}^{T}\left( t\right) $ grow exponentially fast and we can apply the Laplace approximation. Because both densities involve $k+1$ integrals, the test statistic in (\ref{(WAP original statistic)}) is again well-behaved. The caveat is that a simple, closed-form approximation for $h_{\Lambda _{n}}^{T}\left( t\right) $ does not seem available under strong instruments. The WAP similar and WAP-SU tests, however, remain the same if we standardize (\ref{(WAP original statistic)}) by any function of $t$. We replace $% h_{\Lambda _{n}}^{T}\left( t\right) $ by $\left( 1+\left\Vert t\right\Vert \right) ^{-1}h_{\Lambda _{\beta _{0},n}}^{T}\left( t\right) $, where \begin{equation} h_{\Lambda _{\beta _{0},n}}^{T}\left( t\right) =\int f_{\beta _{0},\left( Z^{\prime }Z\right) ^{1/2}\pi }^{T}\left( t\right) w\left( \beta _{0},\pi \right) d\pi . \label{(weighted null density)} \end{equation} The WAP similar and WAP-SU tests reject the null when% \begin{equation} WAP=\frac{h_{\Lambda _{n}}\left( S,T\right) }{f_{\beta _{0}}^{S}\left( S\right) \cdot \left( 1+\left\Vert T\right\Vert \right) ^{-1}h_{\Lambda _{\beta _{0},n}}^{T}\left( T\right) } \label{(WAP statistic)} \end{equation}% is larger than $\kappa _{n}\left( t\right) $ and $\kappa _{n}\left( s,t\right) $, respectively\footnote{% The use of a Laplace approximation of the ratio of weighted average under the alternative and the null is standard under the usual asymptotics. 
What is perhaps not standard is the additional term to absorb different rates and unify nonstandard asymptotics. Indeed, if we were to replace $h_{\Lambda }^{T}\left( t\right) $ only by $h_{\Lambda _{\beta _{0},n}}^{T}\left( t\right) $, the numerator and denominator in (\ref{(WAP original statistic)}) would have different orders of magnitude under strong instruments.}. Whether the instruments are weak or strong, we are able to obtain an approximation to (\ref{(WAP statistic)}). Define%
\begin{eqnarray*}
n\cdot Q_{n}(\beta ,\pi ) &=&\frac{1}{2}\left\Vert \Sigma ^{-1/2}\left( \overline{R}-(a\otimes \left( Z^{\prime }Z\right) ^{1/2}\pi )\right) \right\Vert ^{2} \\
&=&\frac{1}{2}\left\Vert [S:T]-\left[ (\beta -\beta _{0})C_{\beta _{0}}:\text{ }D_{\beta }\right] (I_{2}\otimes \left( Z^{\prime }Z\right) ^{1/2}\pi )\right\Vert ^{2}.
\end{eqnarray*}%
Appendix B shows that the WAP statistic is asymptotically equivalent to%
\begin{equation}
\frac{\int \exp \left( -n\cdot Q_{n}\left( \beta ,\pi \left( \beta \right) \right) \right) w\left( \beta ,\pi \left( \beta \right) \right) \left\vert \left( a^{\prime }\otimes \widehat{D}_{Z}^{1/2}\right) \Sigma _{n}^{-1}\left( a\otimes \widehat{D}_{Z}^{1/2}\right) \right\vert ^{-1/2}d\beta }{\exp \left( -\frac{S^{\prime }S}{2}\right) \left[ 1+\left\Vert T\right\Vert \right] ^{-1}w\left( \beta _{0},\pi \left( \beta _{0}\right) \right) \left\vert \left( a_{0}^{\prime }\otimes \widehat{D}_{Z}^{1/2}\right) \Sigma _{n}^{-1}\left( a_{0}\otimes \widehat{D}_{Z}^{1/2}\right) \right\vert ^{-1/2}},  \label{(WAP 1st Laplace)}
\end{equation}%
where the constrained maximum likelihood estimator (MLE) for $\pi $ is%
\begin{eqnarray}
\pi \left( \beta \right) &=&\left( Z^{\prime }Z\right) ^{-1/2}\left[ (a^{\prime }\otimes I_{k})\Sigma _{n}^{-1}(a\otimes I_{k})\right] ^{-1}(a^{\prime }\otimes I_{k})\Sigma _{n}^{-1}\overline{R}\text{ and} \\
\overline{R} &=&\Sigma _{n}^{1/2}\left[
\begin{array}{c}
\left[ \left( b_{0}^{\prime }\otimes I_{k}\right) \Sigma _{n}\left( b_{0}\otimes I_{k}\right) \right] ^{-1/2}\left( b_{0}^{\prime }\otimes I_{k}\right) \Sigma _{n}^{1/2} \\
\left[ \left( a_{0}^{\prime }\otimes I_{k}\right) \Sigma _{n}^{-1}\left( a_{0}\otimes I_{k}\right) \right] ^{-1/2}\left( a_{0}^{\prime }\otimes I_{k}\right) \Sigma _{n}^{-1/2}%
\end{array}%
\right] ^{\prime }\left[
\begin{array}{c}
S \\
T%
\end{array}%
\right] .  \notag
\end{eqnarray}
The same approximation (\ref{(WAP 1st Laplace)}) holds for the $\widehat{WAP}$ statistic, where we replace $S$, $T$, and $\Sigma $ by their feasible versions given in (\ref{(S^ and T^ defn)}). The resulting approximation to the $\widehat{WAP}$ statistic is a function of $\widehat{S}_{n}$, $\widehat{T}_{n}$, $\Sigma _{n}$, and $\widehat{D}_{Z}$. The critical values for the WAP conditional tests and the WAP-SU tests, respectively $\kappa _{n}\left( t\right) $ and $\kappa _{n}\left( s,t\right) $, are computed under the assumption that the $k$-dimensional vector $\widehat{S}_{n}$ has a standard normal distribution (in practice, these critical values are also functions of the consistent estimators $\widehat{\Sigma }_{n}$ and $\widehat{D}_{Z}$, but we omit this dependence out of convenience). For example, for a given weight density $w\left( \beta ,\pi \right) $, the critical value function $\kappa _{n}\left( t\right) $ is simply the $1-\alpha $ quantile of (\ref{(WAP 1st Laplace)}) given $T=t$. We now find the asymptotic distribution for the WAP\ tests under the WIV asymptotics. We make the following assumption.

\bigskip

\noindent \textbf{Assumption WIV-FA}. (a) $\pi =C/n^{1/2}$ for some non-stochastic vector $C$. (b) $\beta $ is a fixed constant for all $n\geq 1.$ (c) $k$ is a fixed positive integer that does not depend on $n.$\smallskip

\bigskip

Under WIV, $\pi \left( \beta \right) $ is $o_{p}\left( 1\right) $ and the WAP statistics behave the same as if the weights were simply $w\left( \beta ,0\right) $.
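The constrained MLE $\pi \left( \beta \right) $ above is a GLS projection of $\overline{R}$ onto the column space of $a\otimes I_{k}$, rescaled by $\left( Z^{\prime }Z\right) ^{-1/2}$; a sketch, assuming the convention $a=\left( \beta ,1\right) ^{\prime }$:

```python
import numpy as np

def inv_sqrtm(M):
    """Symmetric inverse square root via an eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** -0.5) @ V.T

def pi_mle(beta, Z, R, Sigma):
    """Constrained MLE pi(beta): a GLS coefficient on (a kron I_k)."""
    k = Z.shape[1]
    a = np.array([beta, 1.0])                    # assumed convention a = (beta, 1)'
    A = np.kron(a, np.eye(k))                    # (a' kron I_k)
    Sig_inv = np.linalg.inv(Sigma)
    coef = np.linalg.solve(A @ Sig_inv @ A.T, A @ Sig_inv @ R)
    return inv_sqrtm(Z.T @ Z) @ coef

rng = np.random.default_rng(4)
Z, R, beta = rng.standard_normal((40, 3)), rng.standard_normal(6), 0.5
pi_hat = pi_mle(beta, Z, R, np.eye(6))
# With Sigma = I, GLS reduces to OLS: pi(beta) = (Z'Z)^{-1/2} (A R) / (1 + beta^2).
M = R.reshape(3, 2, order="F")
expected = inv_sqrtm(Z.T @ Z) @ (M @ np.array([beta, 1.0])) / (1.0 + beta ** 2)
```

The closed-form check at $\Sigma =I_{2k}$ confirms the GLS interpretation of the formula.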
As $n\rightarrow \infty $, the finite-sample critical value functions $\kappa _{n}\left( t\right) $ and $\kappa _{n}\left( s,t\right) $ respectively converge to their asymptotic counterparts $\kappa _{\infty }\left( t\right) $ and $\kappa _{\infty }\left( s,t\right) $, which are based on (\ref{(WAP 1st Laplace)}) with $w\left( \beta ,\pi \left( \beta \right) \right) $ replaced by $w\left( \beta ,0\right) $. We then obtain the following convergence by the continuous mapping theorem and the joint distribution% \begin{eqnarray} \left[ \begin{array}{c} S_{\infty } \\ T_{\infty }% \end{array}% \right] &\sim &N\left( \left[ \begin{array}{c} \left( \beta -\beta _{0}\right) C_{\beta _{0},\infty } \\ D_{\beta _{0},\infty }% \end{array}% \right] \left( D_{Z}\right) ^{1/2}C,I_{2k}\right) \text{, where} \label{(S and T WIV-FA)} \\ C_{\beta _{0},\infty } &=&\left[ \left( b_{0}^{\prime }\otimes I_{k}\right) \Sigma _{\infty }\left( b_{0}\otimes I_{k}\right) \right] ^{-1/2}\text{ and} \notag \\ D_{\beta _{0},\infty } &=&\left[ \left( a_{0}^{\prime }\otimes I_{k}\right) \Sigma _{\infty }^{-1}\left( a_{0}\otimes I_{k}\right) \right] ^{-1/2}\left( a_{0}^{\prime }\otimes I_{k}\right) \Sigma _{\infty }^{-1}\left( a\otimes I_{k}\right) . 
\notag \end{eqnarray} \bigskip \begin{theorem} \label{Weak IV Thm} Under Assumptions \emph{W\emph{IV-FA} }and \emph{1-3}:% \newline \emph{(i)} $\left( \widehat{S}_{n},\widehat{T}_{n}\right) \rightarrow _{d}\left( S_{\infty },T_{\infty }\right) ;$\newline \emph{(ii)} $P\left( WAP\left( \widehat{S}_{n},\widehat{T}_{n}\right) >\kappa _{n}\left( \widehat{T}_{n}\right) \right) \rightarrow P\left( WAP\left( S_{\infty },T_{\infty }\right) >\kappa _{\infty }\left( T_{\infty }\right) \right) ;$ and\newline \emph{(iii)} $P\left( WAP\left( \widehat{S}_{n},\widehat{T}_{n}\right) >\kappa _{n}\left( \widehat{S}_{n},\widehat{T}_{n}\right) \right) \rightarrow P\left( WAP\left( S_{\infty },T_{\infty }\right) >\kappa _{\infty }\left( S_{\infty },T_{\infty }\right) \right) .$ \end{theorem} \bigskip Both WAP conditional and WAP-SU tests have asymptotic null rejection probabilities being equal to $\alpha $. The asymptotic power of the WAP tests has a complicated form under WIV asymptotics. We can, of course, rely on numerical simulations to compare their performance with other available tests. In Section \ref{Application Sec}, we present power plots for testing the intertemporal elasticity of substitution based on the designs of % \citet{Yogo04}. For strong instruments with local alternatives (SIV-LA), we consider the Pitman drift where $\beta $ is local to the null value $\beta _{0}$ as $% n\rightarrow \infty $. \bigskip \noindent \textbf{Assumption SIV-LA. }(a)\textbf{\ }$\beta =\beta _{0}+B/n^{1/2}$ for some constant $B\in \mathbb{R}.$ (b) $\pi $ is a fixed non-zero $k$-vector for all $n\geq 1.$ (c) $k$ is a fixed positive integer that does not depend on $n$.$\medskip $ \bigskip Under the SIV-LA asymptotics, the WAP statistics are shown to be increasing transformations of the $LR$ statistic. This result is general and holds for any prior which satisfies Assumption 3. \bigskip \begin{theorem} \label{Asy Eff Strong IV Thm} Suppose Assumptions \emph{\emph{SIV-LA} }and \emph{1-3 }hold. 
The long-run variance $\Sigma _{\infty }$ is known, or unknown but consistently estimable by $\widehat{\Sigma }_{n}$. Then the WAP similar and WAP-SU tests are asymptotically equivalent to the LR test given in (\ref{(LR stat)}).
\end{theorem}

\textbf{Comments:} \textbf{1. }In the proof, we apply the Laplace approximation twice, first with respect to the integral for $\pi $ and then for $\beta $. For the MM1 and MM2 statistics, we can alternatively find a simple expression after integrating out the prior for the instruments' coefficients with $\sigma ^{2}$ or $\zeta $ growing at rate $n$ and then applying the Laplace approximation for $\beta $. Both approaches coincide.

\textbf{2. }The SIV-LA behavior of the ECS (HAC-IV) test appears to be just a special case of our theory using Laplace approximations.

\textbf{3. }For higher-order expansions, we can use Watson's lemma; for references, we recommend \citet{Olver97} for deterministic functions and \citeauthor{OnatskiMoreiraHallin14a} (\citeyear{OnatskiMoreiraHallin14a}, \citeyear{OnatskiMoreiraHallin14b}) for random functions.

\textbf{4.} Because $T_{n}/n^{1/2}\rightarrow _{p}D_{\beta _{0}}D_{Z}^{1/2}\pi $ under SIV-LA, $\left\Vert T_{n}\right\Vert $ diverges to infinity w.p.a.1 (with probability approaching one). The critical value functions for both the WAP conditional and WAP-SU tests then collapse to the $1-\alpha $ asymptotic (unconditional) quantile. As a result, the WAP conditional and WAP-SU tests are asymptotically similar and efficient under the SIV asymptotics.

\bigskip

The null rejection probability of WAP tests is $\alpha $ under WIV and SIV asymptotics. Pointwise convergence of the null rejection probability, of course, does not necessarily imply that the size is asymptotically $\alpha $ (in a uniform sense). \citet[p. 1037]{Moreira03} suggests using \citet{Parzen54} and \citet{Andrews86} to assure that size is uniformly controlled.
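The Laplace approximation invoked in the comments above is easy to see in one dimension: $\int \exp \left( -nq\left( \beta \right) \right) w\left( \beta \right) d\beta \approx \exp \left( -nq\left( \beta ^{\ast }\right) \right) w\left( \beta ^{\ast }\right) \sqrt{2\pi /\left( nq^{\prime \prime }\left( \beta ^{\ast }\right) \right) }$ at the minimizer $\beta ^{\ast }$. A sketch on a toy $q$ and $w$, chosen purely for illustration:

```python
import numpy as np
from scipy.integrate import quad

def laplace_approx(q, qpp, w, b_star, n):
    """Laplace approximation to the integral of exp(-n q(b)) w(b)."""
    return np.exp(-n * q(b_star)) * w(b_star) * np.sqrt(2.0 * np.pi / (n * qpp(b_star)))

q = lambda b: 0.5 * (b - 1.0) ** 2               # minimized at b* = 1
qpp = lambda b: 1.0                              # q''(b)
w = lambda b: 1.0 / (1.0 + b ** 2)               # smooth positive weight
n = 400
exact, _ = quad(lambda b: np.exp(-n * q(b)) * w(b), 0.0, 2.0)
approx = laplace_approx(q, qpp, w, 1.0, n)       # relative error is O(1/n)
```

The relative error vanishes at rate $1/n$, which is why the exponentially growing integrands in the weighted densities remain manageable under strong instruments.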
A series of papers, including \citet{AndrewsChengGuggenberger11} and % \citet{AndrewsGuggenberger14a}, develop several powerful methods to check uniform size control and have been applied to many econometric models; see % \citet{AndrewsGuggenberger10}, \citet{AndrewsGuggenberger14a}, and % \citet{MillsMoreiraVilela14b}, among others. Conceivably, we can apply those methods to the WAP statistics coupled with the critical value functions $% \kappa _{n}\left( t\right) $ and $\kappa _{n}\left( s,t\right) $. This line of research will be considered in a separate paper. We can also analyze the WAP tests under strong instruments with fixed alternatives (SIV-FA). We follow \citet{MillsMoreiraVilela14} and make the following assumption. \bigskip \noindent \textbf{Assumption SIV-FA. }(a)\textbf{\ }$\beta =\beta _{0}+B$ for some nonzero $B\in \mathbb{R}.$ (b) $\pi $ is a fixed non-zero $k$-vector for all $n\geq 1.$ (c) $k$ is a fixed positive integer that does not depend on $n$.$\medskip $ \bigskip It is natural to expect that the power converges to one if the parameter $% \beta $ is fixed. However, not all tests have this property even in the IV model with homoskedastic errors; see \citet{AndrewsMoreiraStock04} and % \citet{MillsMoreiraVilela14} for examples. Hence, it is important to establish consistency for the WAP tests. If the parameter $\beta $ is fixed, the WAP statistics are proportional to the exponential of $LR$. Because $LR/n$ converges to a non-zero constant, the WAP tests are consistent. The next theorem formalizes this result. \bigskip \begin{theorem} \label{Consistency Thm} Suppose Assumptions \emph{\emph{SIV-FA} }and \emph{% 1-3 }hold. The long-run variance $\Sigma _{\infty }$ is known, or unknown but consistently estimable by $\widehat{\Sigma }_{n}$. 
Then the following hold:\newline
\emph{(i)} $2\left( \log \widehat{WAP}\right) /n=\widehat{LR}/n+o_{p}\left( 1\right) ;$ and\newline
\emph{(ii)} $\widehat{LR}/n=LR/n+o_{p}\left( 1\right) \rightarrow \gamma >0.$
\end{theorem}

\textbf{Comment:} If $D_{\beta }\neq 0$, the functions $\kappa _{n}\left( t\right) $ and $\kappa _{n}\left( s,t\right) $ converge to a constant obtained under SIV-FA. If $D_{\beta }=0$, the critical value functions do not converge. However, they are bounded, and so the WAP tests are consistent.

\section{Power Comparison \label{Application Sec}}

In this section, we follow I. \citet{Andrews15}, who calibrates designs for power comparison based on the work of \citet{Yogo04} on the elasticity of intertemporal substitution in eleven developed countries. \citet{Yogo04} tests the effect of interest rates on the level of aggregate demand in an IV model. He considers a linear regression in which the asset return affects consumption growth, and the reverse form of this regression. In both equations, the endogenous variable (consumption or asset return) can be correlated with the error (innovation). To remedy this problem, he chooses four instruments: lagged values of the nominal interest rate, inflation, consumption growth, and the log dividend-price ratio. I. \citet{Andrews15} selects the real interest rate (\emph{rf} in \citeauthor{Yogo04}'s (\citeyear{Yogo04}) notation) as the endogenous variable. Several tests perform well in his design, including the MM2-SU, PI-CLC, and (WAP similar) ECS tests. In fact, only in a few countries do these tests have slightly different performance; see Section 7.2.1 of I. \citet{Andrews15}. The difficulty in assessing the relative performance of each test arises because the instruments are not particularly weak in this design. Indeed, the first-stage F-statistic reported by \citet{Yogo04} (see his Table I) is below 10 in only four countries (Japan, Switzerland, the United Kingdom, and the United States).
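The first-stage F-statistics quoted above come from the regression of the endogenous variable on the instruments. A sketch of the homoskedastic version on simulated data (Yogo's data are not reproduced here):

```python
import numpy as np

def first_stage_F(Z, x):
    """F-statistic for H0: pi = 0 in x = Z pi + u (no intercept, homoskedastic)."""
    n, k = Z.shape
    pi_hat, *_ = np.linalg.lstsq(Z, x, rcond=None)
    resid = x - Z @ pi_hat
    rss = resid @ resid
    ess = x @ x - rss                            # explained sum of squares
    return (ess / k) / (rss / (n - k))

rng = np.random.default_rng(3)
n, k = 200, 4
Z = rng.standard_normal((n, k))
x = Z @ np.full(k, 0.3) + rng.standard_normal(n) # moderately informative instruments
F = first_stage_F(Z, x)
```

Larger values of $\pi $ (or of $n$) push the noncentrality, and hence the F-statistic, upward; values below roughly 10 are the conventional flag for weak instruments.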
We instead join \citet{deCastro15} in choosing the real stock return (\emph{re} in \citeauthor{Yogo04}'s (\citeyear{Yogo04}) notation) as the endogenous variable. The instruments are considerably weaker in this design: the F-statistic is smaller than 4.18 in all countries and is always less than the F-statistic for the interest rate. Our decision to use stock returns aims to highlight the differences between the tests proposed for the HAC-IV model. Apart from using stock returns instead of interest rates, our design is akin to that of I. \citet{Andrews15}. We use the Newey-West estimator with three lags, and the resulting power curves are based on 5,000 Monte Carlo simulations. In line with our asymptotic theory, we set the tuning parameters $\sigma ^{2}$ (for the MM1 statistic) and $\zeta $ (for the MM2 statistic) equal to one-tenth of the sample size. Figure 3 plots power curves for the two-sided power envelope, Anderson-Rubin (AR), score (LM), WAP similar MM1, WAP similar MM2, and ECS (HAC-IV) tests. Although the AR and LM tests are unbiased, the MM1, MM2, and ECS tests perform unreliably. To illustrate the problem, we mention three countries. For Australia, the MM1 and ECS tests have low power in parts of the parameter space, while the MM2 test behaves more like a two-sided test. For France, the ECS test performs well, while both the MM1 and MM2 tests can have low power. For the USA, the ECS test has power near zero and behaves more like a one-sided test, while the MM1 and MM2 tests are nearly unbiased. In some countries, these three tests have power even lower than that of the Anderson-Rubin test (e.g., the ECS test for Germany and Italy).
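The Newey-West estimator with three lags used above can be sketched in a few lines: a Bartlett-kernel long-run covariance of the moment series, here applied to generic demeaned rows.

```python
import numpy as np

def newey_west(U, lags=3):
    """Bartlett-kernel (Newey-West) long-run covariance of the rows of U."""
    n = U.shape[0]
    U = U - U.mean(axis=0)
    S = U.T @ U / n
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1.0)               # Bartlett weights 3/4, 2/4, 1/4
        G = U[j:].T @ U[:-j] / n                 # j-th sample autocovariance
        S += w * (G + G.T)
    return S

rng = np.random.default_rng(5)
U = rng.standard_normal((20_000, 2))             # serially uncorrelated case
S_hat = newey_west(U, lags=3)                    # should sit near the identity
```

The Bartlett weights guarantee a positive semidefinite estimate; in the application, $U$ would collect the rows $\widehat{v}_{i}\otimes z_{i}$.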
\begin{figure}[tbh]
\caption{Power Comparison (WAP similar tests)}%
\begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{AUS1.pdf} \end{subfigure}%
\begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{CAN1.pdf} \end{subfigure}
\begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{FR1.pdf} \end{subfigure}%
\begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{GER1.pdf} \end{subfigure}
\begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{ITA1.pdf} \end{subfigure}%
\begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{JAP1.pdf} \end{subfigure}
\end{figure}

\clearpage

\begin{figure}[tbh]
\ContinuedFloat
\begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{NTH1.pdf} \end{subfigure}%
\begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{SWD1.pdf} \end{subfigure}
\begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{SWT1.pdf} \end{subfigure}%
\begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{UK1.pdf} \end{subfigure}
\begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{USA1.pdf} \end{subfigure}
\end{figure}

We then compare power among the two-sided tests which have arguably better performance. Figure 4 plots power curves for the two-sided power envelope, MM1-SU, MM2-SU, CQLR, CQLR-kron, and PI-CLC tests. All tests are adequate for two-sided hypothesis testing. The PI-CLC and CQLR-kron tests show some improvements over the CQLR test for some, but not all, countries.
The MM1-SU test performs similarly to the MM2-SU test for several countries, but it has considerably lower power for Japan and the United States\footnote{Conceivably, this power loss is due to numerical integration over the whole real line. Power may be improved by transforming the parameter $\beta $ to the quantity $\theta =\tan ^{-1}\left( d_{\beta }/c_{\beta }\right) $. This improvement is left for future work.}. The MM2-SU test outperforms these tests, and when it occasionally has lower power, the loss is small. This application based on real data supports our theoretical contribution and the use of the MM2-SU test in practice.

\begin{figure}[tbh]
\caption{Power Comparison (two-sided tests)}%
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{AUS2.pdf}
\end{subfigure}%
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{CAN2.pdf}
\end{subfigure}
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{FR2.pdf}
\end{subfigure}%
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{GER2.pdf}
\end{subfigure}
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{ITA2.pdf}
\end{subfigure}%
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{JAP2.pdf}
\end{subfigure}
\end{figure}
\clearpage
\begin{figure}[tbh]
\ContinuedFloat
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{NTH2.pdf}
\end{subfigure}%
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{SWD2.pdf}
\end{subfigure}
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{SWT2.pdf}
\end{subfigure}%
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{UK2.pdf}
\end{subfigure}
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[trim = 45mm 80mm 45mm 80mm, clip, width=5.5cm]{USA2.pdf}
\end{subfigure}
\end{figure}

\section{Concluding Remarks \label{Conclusion Sec}}

In this paper, we study the instrumental variable (IV) model with one endogenous regressor and heteroskedastic and autocorrelated (HAC) errors. The HAC-IV model with a known variance matrix is simpler than the model with an unknown but consistently estimable long-run variance; however, inference in both models is approximately the same whether or not the instruments are weakly correlated with the endogenous variable. This simplification allows us to develop a theory of optimal two-sided tests when the error stochastic process is of unknown form. We find that a test that has correct size and is optimal under standard asymptotics may still have unacceptably low power in finite samples. This issue appears in several econometric models. For the HAC-IV model, we solve this problem by finding weighted-average power tests satisfying additional two-sided conditions. We consider two possibilities: the locally unbiased (LU) and strongly unbiased (SU) conditions. While the local condition yields admissible tests, the stronger condition is easier to implement. Better yet, the MM1-SU and MM2-SU tests have power numerically very close to their LU versions. Numerical simulations also show that the MM2-SU test outperforms other tests proposed for the HAC-IV model. The only other paper that satisfactorily addresses optimality of two-sided tests in the HAC-IV model is that of I. \citet{Andrews15}. He explores linear combinations of the Anderson-Rubin and score statistics, with weights dependent on the conditioning statistic $T$.
A class of these conditional linear combination (CLC) tests is unbiased and admissible in the conditional problem. By proposing a minimax regret criterion, he delivers a test which plugs in a nuisance-parameter estimator. There is some power gained by broadening the focus beyond those three statistics. On the other hand, we impose $k$ additional constraints which are related to the SU condition. It would be interesting to reduce the required computational time while maintaining the power gains of the MM2-SU test by reducing the number of boundary conditions when finding a WAP test. Finally, the asymptotic theory based on Laplace approximations, developed in this paper, is easily adaptable to other econometric models. For the HAC-IV model, it relies on priors for the parameters $\beta $ and $\pi $ being insensitive to the sample size. For the MM1 and MM2 weights, this implies that the tuning parameters $\sigma ^{2}$ and $\zeta $ (used in the prior for $\mu =\left( Z^{\prime }Z\right) ^{1/2}\pi $) eventually grow with the sample size $n$. Some power gains with weak instruments may be possible when the tuning parameters are held constant. An alternative is to find an automatic rate for $\sigma ^{2}$ and $\zeta $ using a plug-in method. For example, we could let these parameters be proportional to either $\left\Vert T\right\Vert ^{2}$ or $n\cdot \left\Vert \pi \left( \beta _{0}\right) \right\Vert ^{2}$. These quantities are stochastically bounded under weak instruments and grow at the rate $n$ under strong instruments (which assures asymptotic optimality).
Since the constrained MLE $\pi \left( \beta _{0}\right) $ is a one-to-one transformation of $T$, these modifications of WAP-SU tests are still similar and uncorrelated with the pivotal statistic $S$ (hence, they satisfy the SU condition)\footnote{See \citeauthor{Moreira01} (\citeyear{Moreira01}, \citeyear{Moreira09a}) for selecting among similar tests without creating size distortions; the argument uses completeness of $T$ and is applicable to the SU condition as well.}. We will consider this possibility in future work.

\bibliographystyle{econometrica}
\section{Introduction} Phase-field approximations provide a convenient way of treating curvature energies numerically. Typically, the phase-field problem is more stable numerically than the potentially highly non-linear original problem. A classical example of a curvature energy is the Willmore functional \[ {\mathcal W}(\Sigma) = \int_\Sigma H^2\d\H^{n-1} \] where $\Sigma\subset\R^{n}$ is a hypersurface, $H$ denotes its mean curvature and $\H^k$ the $k$-dimensional Hausdorff measure. The same functional on plane curves is also sometimes referred to as Euler's elastica. There are several distinct phase-field approximations of Willmore's energy \cite{bretin2013phase}. The model we will use in the following is due to Bellettini and Paolini \cite{bellettini:1993vg}, based on a functional proposed by De Giorgi \cite[Conjecture 4]{degiorgi:1991jc}. Let $\Omega\Subset\R^n$ and $W$ be the double-well potential $W(u) = \frac14\,(u^2-1)^2$. Then we consider the Modica-Mortola energy \cite{modica:1987us,MR0445362} \begin{equation*} S_\varepsilon\colon L^1(\Omega)\to \R, \quad S_\varepsilon(u) = \begin{cases}\frac1{c_0}\int_\Omega \frac\eps2\, |\nabla u|^2 + \frac1\varepsilon\,W(u)\,\mathrm{d}x &u\in W^{1,2}(\Omega)\\ +\infty&\text{else}\end{cases} \end{equation*} as an approximation of the perimeter functional and \begin{equation*} {\mathcal W}_\varepsilon\colon L^1(\Omega)\to\R, \quad {\mathcal W}_\varepsilon(u) = \begin{cases}\frac1{c_0\,\varepsilon}\int_\Omega\,\left(\varepsilon\,\Delta u - \frac1\varepsilon\,W'(u)\right)^2\,\mathrm{d}x&u\in W^{2,2}(\Omega)\\ +\infty &\text{else}\end{cases} \end{equation*} as an approximation of Willmore's energy, where $c_0 = \int_{-1}^1\sqrt{2\,W(s)}\:ds = 2\sqrt{2}/3$ is a normalising constant. 
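As a quick numerical sanity check (not part of the analysis above), the normalising constant and the one-dimensional optimal profile can be verified directly: for this double-well $W$, the standard heteroclinic solution of the stationary equation $u'' = W'(u)$ is $u(t)=\tanh(t/\sqrt{2})$.

```python
import math

# Double-well potential W(u) = (u^2 - 1)^2 / 4 and its derivative.
W  = lambda u: 0.25 * (u * u - 1.0) ** 2
dW = lambda u: u ** 3 - u

# c0 = int_{-1}^{1} sqrt(2 W(s)) ds, computed with the midpoint rule.
N  = 20_000
c0 = sum(math.sqrt(2.0 * W(-1.0 + 2.0 * (i + 0.5) / N)) * (2.0 / N)
         for i in range(N))
print(c0, 2.0 * math.sqrt(2.0) / 3.0)  # both ~ 0.9428

# The optimal 1d transition profile u(t) = tanh(t / sqrt(2)) solves
# the stationary Allen-Cahn equation u'' = W'(u):
for t in (-2.0, -0.3, 0.0, 0.7, 1.5):
    u   = math.tanh(t / math.sqrt(2.0))
    ddu = -u * (1.0 - u * u)  # u'' computed from u' = (1 - u^2)/sqrt(2)
    assert abs(ddu - dW(u)) < 1e-12
```

The closed form $\int_{-1}^1\sqrt{2W(s)}\:ds = \frac{1}{\sqrt2}\int_{-1}^1(1-s^2)\,ds = 2\sqrt2/3$ agrees with the quadrature.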
As proved in \cite{roger:2006ta}, the sum of the functionals satisfies \begin{equation*} \left[\Gamma(L^1(\Omega))-\lim_{\varepsilon\to 0}\,({\mathcal W}_\varepsilon + \Lambda\,S_\varepsilon)\right]\,(\chi_E - \chi_{\Omega\setminus E})\: =\: {\mathcal W}(\partial E) + \Lambda\,\H^{n-1}(\partial E) \end{equation*} for any $\Lambda>0$ if $E\Subset \Omega$ and $\partial E\in C^2$, in the low dimensions $n=2,3$. Consider a general sequence $u_\varepsilon$ such that \[ \limsup_{\varepsilon\to 0}(S_\varepsilon + {\mathcal W}_\varepsilon)(u_\varepsilon) < \infty. \] Then the diffuse area measures \[ \mu_\varepsilon := \frac1{c_0}\left(\frac\eps2 \,|\nabla u_\varepsilon|^2 + \frac1\varepsilon \,W(u_\varepsilon)\right)\cdot\L^n \] which localise the diffuse perimeter functional $S_\varepsilon$ and the diffuse Willmore measures \[ \alpha_\varepsilon := \frac1{c_0\,\varepsilon }\left(\varepsilon\,\Delta u_\varepsilon - \frac{W'(u_\varepsilon)}\varepsilon\right)^2\cdot\L^n \] which localise the functionals ${\mathcal W}_\varepsilon$ have weak limits $\mu$ and $\alpha$ in the sense of Radon measures, at least along a suitable subsequence. Due to \cite{roger:2006ta}, $\mu$ is the mass measure of an integral $(n-1)$-varifold $V$ in $\Omega$ with square integrable mean curvature and \begin{equation}\label{eq willmore alpha} |H_\mu|^2\cdot\mu \leq \alpha. \end{equation} In this article, we will show, among other things, that the relationship \eqref{eq willmore alpha} is only valid {\em inside} $\Omega$ and that $\mu$ may be very irregular on $\partial\Omega$ if the boundary values of the phase-fields $u_\varepsilon$ are not controlled. The choice of boundary values corresponds to a modelling assumption.
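To illustrate how $\mu_\varepsilon$ and $\alpha_\varepsilon$ behave on a single flat interface, the following one-dimensional sketch (purely illustrative; the value $\varepsilon = 0.05$ is an arbitrary choice) evaluates the diffuse perimeter and Willmore energy of the optimal profile $u_\varepsilon(x) = \tanh(x/(\sqrt2\,\varepsilon))$ on $(-1,1)$: one interface carries approximately unit diffuse area, while the diffuse Willmore energy of the flat profile vanishes.

```python
import math

eps = 0.05                      # arbitrary small parameter for illustration
c0  = 2.0 * math.sqrt(2.0) / 3.0
W   = lambda u: 0.25 * (u * u - 1.0) ** 2
dW  = lambda u: u ** 3 - u

# Diffuse area S_eps and Willmore energy W_eps of the optimal interface
# u_eps(x) = tanh(x / (sqrt(2) eps)) on (-1, 1), via the midpoint rule.
N, a, b = 100_000, -1.0, 1.0
h = (b - a) / N
S_eps, W_eps = 0.0, 0.0
for i in range(N):
    x   = a + (i + 0.5) * h
    u   = math.tanh(x / (math.sqrt(2.0) * eps))
    du  = (1.0 - u * u) / (math.sqrt(2.0) * eps)    # u'
    ddu = -u * (1.0 - u * u) / eps ** 2             # u''
    S_eps += (0.5 * eps * du ** 2 + W(u) / eps) * h / c0
    W_eps += (eps * ddu - dW(u) / eps) ** 2 * h / (c0 * eps)

print(S_eps)  # ~ 1: one interface carries unit diffuse perimeter
print(W_eps)  # ~ 0: the flat profile has vanishing diffuse Willmore energy
```

The computation also shows the equipartition of the two terms in $\mu_\varepsilon$ for the optimal profile, since $\frac\varepsilon2\,|u_\varepsilon'|^2 = \frac1\varepsilon W(u_\varepsilon)$ pointwise.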
In \cite{MR3590663}, we have investigated thin elastic structures in a bounded container, where the natural boundary condition is \begin{equation}\label{eq strict boundary} u_\varepsilon \equiv -1, \quad\partial_\nu u_\varepsilon \equiv 0\quad\text{on }\partial\Omega\qquad\text{ or in simpler terms}\quad u_\varepsilon \in -1 + W^{2,2}_0(\Omega) \end{equation} to express that the structures are confined to $\Omega$ and only touch the boundary tangentially. Another interesting boundary condition is \begin{equation}\label{eq neumann boundary} \partial_\nu u_\varepsilon \equiv 0 \quad\text{ on }\partial \Omega \end{equation} which expresses that the level sets of $u_\varepsilon$ can only meet $\partial\Omega$ at a right angle. This approximates the minimisation problem explored in \cite{alessandroni2014local}. Another possible boundary condition is \begin{equation}\label{eq weak boundary} u_\varepsilon \equiv 1 \text{ on }\Gamma_+, \qquad u_\varepsilon \equiv -1 \text{ on } \Gamma_-, \qquad u_\varepsilon \text{ free on }\partial\Omega\setminus (\Gamma_+ \cup \Gamma_-) \end{equation} which prescribes a phase transition inside $\Omega$ but leaves the particular nature of the transition free. It is clear that any regularity result for $\mu$ or the functions $u_\varepsilon$ inside $\Omega$ can be extended to $\overline \Omega$ under the boundary conditions \eqref{eq strict boundary}, since $u_\varepsilon$ can be extended to the whole space $\R^n$ as a constant function without changing the energy \[ {\mathcal E}_\varepsilon(u_\varepsilon) := ({\mathcal W}_\varepsilon + S_\varepsilon)(u_\varepsilon). \] On the other hand, the regularity of $u_\varepsilon$ and $\mu$ under the boundary values \eqref{eq neumann boundary} or \eqref{eq weak boundary} is less obvious. Furthermore, not specifying boundary values can simplify proofs significantly when local results are considered, see for example \cite[Corollary 2.15]{DW_conv}.
In this article, we extend regularity results for the phase-fields $u_\varepsilon$ from \cite{DW_conv,MR3590663}. Our main results are the following. \begin{theorem}\label{thm:main1} Let $\Omega \Subset\R^n$ for $n=2,3$. Then the following hold true. \begin{enumerate} \item Assume that $u_\varepsilon \in C^0(\overline\Omega)$ is uniformly bounded in $L^\infty(\partial\Omega)$. Then $u_\varepsilon$ is uniformly bounded in $L^\infty(\Omega)$ if $n=2$ and in $L^p(\Omega)$ for all $p<\infty$ if $n=3$. \item Assume that $\partial\Omega\in C^2$ and $\partial_\nu u_\varepsilon \equiv 0$ on $\partial\Omega$ for all $\varepsilon>0$. Then $u_\varepsilon$ is uniformly bounded in $L^\infty(\Omega)$ and \[ |u_\varepsilon(x) - u_\varepsilon(y)| \leq \frac{C}{\varepsilon^\gamma}\,|x-y|^\gamma \qquad \forall\ x\in \Omega, y\in B_\varepsilon(x)\cap \Omega \] with $\gamma <1$ if $n=2$ and $\gamma\leq 1/2$ if $n=3$. The constant $C$ depends on $n,\gamma, \Omega$ and $\limsup_{\varepsilon\to 0}{\mathcal E}_\varepsilon(u_\varepsilon)$. \item If either of these conditions holds and $u_\varepsilon\to u$ in $L^1(\Omega)$, then $u_\varepsilon\to u$ in $L^p(\Omega)$ for all $1\leq p <\infty$. \end{enumerate} \end{theorem} Further results can be found in the main text. The proof is split over Lemmas \ref{third regularity lemma}, \ref{fourth regularity lemma} and \ref{second regularity lemma}. On the other hand, we have the following results on situations where phase fields fail to be regular at the boundary. \begin{theorem} \label{thm:main2} Let $\partial\Omega\in C^2$. Then the following hold true. \begin{enumerate} \item There exists a sequence $u_\varepsilon\in W^{2,2}(\Omega)$ such that $({\mathcal W}_\varepsilon + S_\varepsilon)(u_\varepsilon)\to 0$, but $u_\varepsilon$ is not bounded in $L^\infty(\Omega)$.
\item There exists a sequence $u_\varepsilon$ such that $\alpha = 0$, $\mu = 0$ but the Hausdorff limit \[ K\coloneqq \lim_{\varepsilon\to 0} u_\varepsilon^{-1}(I)\qquad \emptyset \neq I \Subset (-1,1) \] of level sets or their unions contains an open subset of $\partial\Omega$. Similar constructions give $K = \{x_0\}$ or $K=\gamma$ for a point $x_0\in\partial\Omega$ and a closed curve $\gamma\subset\partial\Omega$. \item Let $S>0$ and $\emptyset \neq I \Subset (-1,1)$. Then there exists a point $x_0\in \partial\Omega$ and a sequence $u_\varepsilon\in W^{2,2}(\Omega)$ such that $|u_\varepsilon|\leq 1$ in $\overline\Omega$, ${\mathcal W}_\varepsilon(u_\varepsilon) \equiv 0$, $\mu_\varepsilon(\Omega)\equiv S$, $K=\emptyset$ and $\mu = S\cdot\delta_{x_0}$. \end{enumerate} If $\Omega$ is convex, any point $x_0$ or closed curve $\gamma$ in $\partial\Omega$ can be chosen and $u_\varepsilon$ may be such that it is not uniformly bounded in $\Omega\cap U$ for all open sets $U$ with $U\cap \partial\Omega\neq\emptyset$. \end{theorem} This shows that, for example, the minimisation problem for \[ {\mathcal F}_\varepsilon = {\mathcal W}_\varepsilon + \varepsilon^{-\sigma}(S_\varepsilon -S)^2 \] is not physically meaningful without boundary conditions or with partly free boundary conditions \eqref{eq weak boundary} if $\partial_{\text{free}}\Omega := \partial\Omega\setminus (\Gamma_+\cup \Gamma_-) \neq \emptyset$. A minimising sequence is given by the superposition of a phase-field making an optimal transition along a minimal surface spanning a suitable boundary curve inside $\partial_{\text{free}}\Omega$ and a second phase-field creating an atom of $\mu$ of the correct size at a single point $x\in \partial_{\text{free}}\Omega$. This can be realised with energy ${\mathcal W}_\varepsilon(u_\varepsilon) \to 0$ as $\varepsilon\to 0$.
The question under which boundary conditions other than \eqref{eq strict boundary} the measure $\mu$ can be expected to be regular at the boundary for either finite energy sequences or minimising sequences remains open. \section{Positive Results on Boundary Regularity} In this section, we describe partial regularity results for weakly controlled boundary values. \begin{lemma}\label{third regularity lemma} Assume that $u_\varepsilon$ is continuous on $\overline \Omega$ and there is $\theta\geq 1$ such that $|u_\varepsilon|\leq\theta$ on $\partial\Omega$ for all $\varepsilon>0$. Then the following hold true. \begin{enumerate} \item There exists $C>0$ such that $\mu_\varepsilon(\{|u_\varepsilon|\geq \theta\})\leq C\,\varepsilon^2$. \item For the set $\tilde\Omega_\varepsilon = \{x\in \Omega\:|\: B_{2\varepsilon}(x)\subset\Omega\}$ we can show that there exists $C$ depending only on $\bar\alpha, \gamma$ and $\theta$ such that \[ ||u_\varepsilon||_{\infty,\tilde\Omega_\varepsilon}\leq C, \qquad |u_\varepsilon(y) - u_\varepsilon(z)|\leq \frac{C_{\bar\alpha, \theta,\gamma}}{\varepsilon^{\gamma}}\,|y-z|^{\gamma} \] if there is $x\in\tilde\Omega_\varepsilon$ such that $y, z\in B_\varepsilon(x)$ and $\gamma\leq 1/2$ if $n=3$, $\gamma<1$ if $n=2$. \end{enumerate} \end{lemma} \begin{proof} This proof is an adaptation of the proof of \cite[Lemma 3.1]{MR3590663} with a modified argument in the first step. We observe that for the proof of \cite[Lemma 3.1]{MR3590663} to work, we needed that $B_{2\varepsilon}(x)\subset\Omega$ to employ the elliptic inequality \[ ||\tilde u_\varepsilon||_{2,2, B_1(0)}\leq C\,\left(||\tilde u_\varepsilon||_{2,B_2(0)} + ||\Delta \tilde u_\varepsilon||_{2, B_2(0)}\right) \] and an estimate of $\int_{B_{2\varepsilon}(x)}\frac1{\varepsilon^n}W'(u_\varepsilon)^2\,\mathrm{d}x$.
The first is provided directly by the choice of $\Omega_\varepsilon^\beta$ or $\tilde \Omega_\varepsilon$; the second can be obtained through integration by parts \begin{align*} c_0\,\alpha_\varepsilon(\{&|u_\varepsilon|>\theta'\}) = \int_{\{|u_\varepsilon|>\theta'\}} \frac1\varepsilon \left(\varepsilon\,\Delta u_\varepsilon - \frac1\varepsilon\,W'(u_\varepsilon)\right)^2\,\mathrm{d}x\\ &= - \frac2\varepsilon \int_{\partial\{|u_\varepsilon|>\theta'\}}W'(u_\varepsilon)\,\partial_\nu u_\varepsilon\d\H^{n-1}\\ &\qquad + \int_{\{|u_\varepsilon|>\theta'\}} \varepsilon\,(\Delta u_\varepsilon)^2 + \frac2\varepsilon\,W''(u_\varepsilon)\,|\nabla u_\varepsilon|^2 + \frac1{\varepsilon^3}\,W'(u_\varepsilon)^2\,\mathrm{d}x\\ &\geq \int_{\{|u_\varepsilon|>\theta'\}} \varepsilon\,(\Delta u_\varepsilon)^2 + \frac4\varepsilon\,|\nabla u_\varepsilon|^2 + \frac1{\varepsilon^3}\,W'(u_\varepsilon)^2\,\mathrm{d}x \end{align*} for $\theta'>\theta$ when $\{|u_\varepsilon|>\theta'\}$ is a Caccioppoli set (i.e.\ for almost all $\theta'>\theta$); in the last step we used that $W''(u_\varepsilon) = 3u_\varepsilon^2 - 1 \geq 2$ on $\{|u_\varepsilon|>\theta'\}$. If $|u_\varepsilon|<\theta'$ on $\partial\Omega$, the set $\{u_\varepsilon>\theta'\}$ does not touch the boundary $\partial\Omega$, so $\partial\{u_\varepsilon>\theta'\}\subset \{u_\varepsilon = \theta'\} \subset\Omega$. Because $W'(\theta')>0$ and $\partial_\nu u_\varepsilon\leq 0$ for the outer normal on $\partial\{u_\varepsilon>\theta'\}$, the integral $\int_{\partial\{u_\varepsilon>\theta'\}}W'(u_\varepsilon)\,\partial_\nu u_\varepsilon\d\H^{n-1}$ is non-positive, so the corresponding boundary term above is non-negative. The rest of the argument goes through as before. Additionally, taking $\theta'\to \theta$ establishes the first claim. \end{proof} \begin{remark} The same bound holds for example on $\widetilde \Omega_{\varepsilon^{1/2}} = \{x\in \Omega\:|\:B_{\varepsilon^{1/2}}(x)\subset \Omega\}$ without boundary values. In that situation, we employ the estimate from \cite[Proposition 3.6]{roger:2006ta} to bound \[ \frac1{\varepsilon^3}\int_{\{|u_\varepsilon|>1\}} W'(u_\varepsilon)^2\,\mathrm{d}x \leq C.
\] \end{remark} Another situation with a similar improvement is that of prescribed Neumann boundary data. \begin{lemma}\label{fourth regularity lemma} Assume that $\Omega$ is a Caccioppoli set and $\partial_\nu u_\varepsilon = 0$ almost everywhere on $\partial \Omega$ with respect to the boundary measure $|D\chi_\Omega|$. Then the following hold true. \begin{enumerate} \item There exists $C>0$ such that $\mu_\varepsilon(\{|u_\varepsilon|\geq 1\})\leq C\,\varepsilon^2$. \item For the set $\tilde\Omega_\varepsilon = \{x\in \Omega\:|\: B_{2\varepsilon}(x)\subset\Omega\}$ we can show that there exists $C$ depending only on $\bar\alpha$ and $\gamma$ such that \[ ||u_\varepsilon||_{\infty,\tilde\Omega_\varepsilon}\leq C, \qquad |u_\varepsilon(y) - u_\varepsilon(z)|\leq \frac{C}{\varepsilon^{\gamma}}\,|y-z|^{\gamma} \] if there is $x\in\tilde\Omega_\varepsilon$ such that $y, z\in B_\varepsilon(x)$. Here $\gamma\leq 1/2$ if $n=3$, $\gamma<1$ if $n=2$. \end{enumerate} If $\partial\Omega \in C^2$ and $\partial_\nu u_\varepsilon = 0$ almost everywhere on $\partial \Omega$, then the second statement can be sharpened as follows: \begin{enumerate} \item[2'.] For all $x\in \overline\Omega$ there exists a constant $C$ depending only on $\bar\alpha,\bar \mu,\gamma$ and $\partial\Omega$ such that \[ |u_\varepsilon(x)| \leq C, \qquad |u_\varepsilon(y) - u_\varepsilon(z)|\leq \frac{C}{\varepsilon^\gamma}\,|y-z|^\gamma\qquad\forall\ y,z\in B_\varepsilon(x)\cap \overline\Omega. \] The dependence of $C$ on $\partial \Omega$ vanishes in the limit $\varepsilon\to 0$. \end{enumerate} In particular, for regular boundaries, the Neumann condition implies the boundedness of the phase fields, also on the boundary.
\begin{proof} As before, we obtain \begin{align*} c_0\,\alpha_\varepsilon(\{&|u_\varepsilon|>\theta'\}) = \int_{\{|u_\varepsilon|>\theta'\}} \frac1\varepsilon \left(\varepsilon\,\Delta u_\varepsilon - \frac1\varepsilon\,W'(u_\varepsilon)\right)^2\,\mathrm{d}x\\ &= - \frac2\varepsilon \int_{\partial\Omega\cap \partial\{|u_\varepsilon|>\theta'\}}W'(u_\varepsilon)\,\partial_\nu u_\varepsilon\d\H^{n-1} - \frac2\varepsilon \int_{\partial\{|u_\varepsilon|>\theta'\}\cap \Omega}W'(u_\varepsilon)\,\partial_\nu u_\varepsilon\d\H^{n-1}\\ &\qquad + \int_{\{|u_\varepsilon|>\theta'\}} \varepsilon\,(\Delta u_\varepsilon)^2 + \frac2\varepsilon\,W''(u_\varepsilon)\,|\nabla u_\varepsilon|^2 + \frac1{\varepsilon^3}\,W'(u_\varepsilon)^2\,\mathrm{d}x\\ &\geq \int_{\{|u_\varepsilon|>\theta'\}} \varepsilon\,(\Delta u_\varepsilon)^2 + \frac4\varepsilon\,|\nabla u_\varepsilon|^2 + \frac1{\varepsilon^3}\,W'(u_\varepsilon)^2\,\mathrm{d}x \end{align*} for any $\theta'>1$ such that $\{|u_\varepsilon|>\theta'\}$ is a Caccioppoli set. Here the boundary integral splits into two parts: the part on $\partial\Omega$ vanishes due to the Neumann condition, while the part inside $\Omega$ has a favourable sign as before. This implies the boundedness on $\tilde\Omega_\varepsilon$ and the bound on the mass measures $\mu_\varepsilon(\{|u_\varepsilon|>\theta'\})$ as before. We can take $\theta'\to 1$ to prove the first part of the Lemma. Now assume that $\partial\Omega\in C^2$ and pick $x\in\partial\Omega$. The rest of the argument is a fairly standard `straightening the boundary' argument with the feature that the rescaled boundary becomes flatter as $\varepsilon\to 0$. Without loss of generality, we assume that $x=0$. We may now blow up to \[ \tilde u_\varepsilon : B_{2}(0)\cap (\Omega/\varepsilon)\to \R, \qquad \tilde u_\varepsilon(y) = u_\varepsilon(\varepsilon y).
\] We pick a $C^2$-diffeomorphism $\phi_\varepsilon:B_2(0)\to B_2(0)$ such that \begin{enumerate} \item $\phi_\varepsilon (\Omega/\varepsilon \cap B_2(0))= B_2^+(0)$, \item $\phi_\varepsilon \to \mathrm{id}_{B_2(0)}$ in $C^2(B_2(0), B_2(0))$ as the domain becomes increasingly flat, \item under $\phi_\varepsilon$, the normal to $\partial\Omega/\varepsilon$ gets mapped to $e_n$ on the boundary, i.e.\ the orthogonality condition is preserved. \end{enumerate} With this we obtain a function \[ {\tilde w_\varepsilon}:B_2^+(0)\to\R, \qquad \tilde w_\varepsilon(y) = \tilde u_\varepsilon(\phi_\varepsilon^{-1}(y)) \] in flattened coordinates. Since $\phi_\varepsilon$ is $C^2$-smooth, it preserves $W^{2,2}$-functions and it is easy to calculate \begin{align*} \partial_i\tilde u_\varepsilon &= \partial_i (\tilde w_\varepsilon\circ\phi_\varepsilon)\\ &= \partial_i(\phi_{\varepsilon})_j\,\left((\partial_j\tilde w_\varepsilon) \circ \phi_\varepsilon\right)\\ \partial_{ij}\, \tilde u_\varepsilon &=\partial_{ij}(\phi_\varepsilon)_k\,\left((\partial_k\tilde w_\varepsilon) \circ \phi_\varepsilon\right) + \partial_i(\phi_\varepsilon)_k \,\partial_j(\phi_\varepsilon)_l\,\left((\partial_{kl}\tilde w_\varepsilon)\circ \phi_\varepsilon\right). \end{align*} In shorter notation, this means that \[ \nabla \tilde u_\varepsilon = D\phi_\varepsilon\cdot\nabla \tilde w_\varepsilon, \qquad \Delta \tilde u_\varepsilon = a^{ij}_\varepsilon\,\partial_{ij}\tilde w_\varepsilon + \langle \Delta\phi_\varepsilon,\nabla \tilde w_\varepsilon \rangle \] with \[ a^{ij}_\varepsilon = \langle \partial_i\phi_\varepsilon,\partial_j\phi_\varepsilon\rangle. \] The coefficients are $C^1$-differentiable -- so the associated operator $A_\varepsilon$ can be equivalently written in divergence form -- and $C^1$-close to $\delta_{ij}$.
We observe that \[ \left(\Delta \tilde u_\varepsilon - W'(\tilde u_\varepsilon)\right)(\phi_\varepsilon^{-1}(y)) = \left(\partial_i\left(a_\varepsilon^{ij}\,\partial_j\tilde w_\varepsilon\right) - (\partial_i\,a_\varepsilon^{ij})\partial_j\tilde w_\varepsilon + \langle \Delta\phi_\varepsilon,\nabla\tilde w_\varepsilon\rangle - W'(\tilde w_\varepsilon) \right)(y). \] We extend $\tilde w_\varepsilon$ by even reflection to the whole ball $B_2(0)$, which preserves the $W^{2,2}$-smoothness since we preserved the property that $\partial_\nu\tilde u_\varepsilon =0$ on the boundary when straightening the boundary. We observe that \[ \partial_i\,(a_\varepsilon^{ij}\partial_j\tilde w_\varepsilon) - \langle \div A_\varepsilon - \Delta \phi_\varepsilon, \nabla \tilde w_\varepsilon\rangle =: f_\varepsilon \in L^2(B_2(0)) \] since \begin{align*} \int_{B_2(0)}W'(\tilde w_\varepsilon)^2\,\mathrm{d}y &= 2\int_{B_2^+(0)}W'(\tilde w_\varepsilon)^2(y)\,\mathrm{d}y\\ &= 2\int_{\Omega/\varepsilon\cap B_2(0)} W'(\tilde u_\varepsilon(z))^2\,\det(D\phi^{-1}_\varepsilon)(z)\,\mathrm{d}z\\ &\leq 2(1+c_\varepsilon) \int_{B_{2\varepsilon}(x)}\frac1{\varepsilon^n}W'(u_\varepsilon)^2\,\mathrm{d}z\\ &\leq C \end{align*} as shown above. The constants $c_\varepsilon$ vanish as $\varepsilon\to 0$ and $\phi_\varepsilon\to \mathrm{id}$. The coefficients $a^{ij}_\varepsilon$ are uniformly elliptic and approach $\delta_{ij}$ uniformly as $\varepsilon\to 0$, so we can employ the elliptic estimate \begin{align*} ||\nabla \tilde w_\varepsilon||_{L^2(B_{3/2})} &\leq C\left\{||\tilde w_\varepsilon||_{L^2(B_2)} + ||f_\varepsilon + \langle \div A_\varepsilon - \Delta \phi_\varepsilon, \nabla \tilde w_\varepsilon\rangle||_{L^2(B_2)}\right\}\\ &\leq C\left\{||\tilde w_\varepsilon||_{L^2(B_2)} + ||f_\varepsilon||_{L^2(B_2)} + ||\div A_\varepsilon - \Delta \phi_\varepsilon||_{L^\infty(B_2)}\, ||\nabla \tilde w_\varepsilon||_{L^2(B_2)}\right\}.
\end{align*} The constant is uniform in $\varepsilon$ and $||\div A_\varepsilon - \Delta \phi_\varepsilon||_{L^\infty(B_2)}\to 0$ as $\varepsilon\to 0$, so we can bring the term to the other side and obtain a uniform $W^{1,2}$-bound for all sufficiently small $\varepsilon$, where the necessary smallness depends only on ${\mathcal W}_\varepsilon(u_\varepsilon)$ and $\partial\Omega$. In a second step, this gives us a uniform bound on $||\tilde w_\varepsilon||_{W^{2,2}(B_1(0))}$, which in turn yields a uniform bound on $||\tilde u_\varepsilon||_{W^{2,2}(B_{3/2}(0)\cap \Omega/\varepsilon)}$ after transforming back. The rest follows by Sobolev embeddings as in \cite[Lemma 3.1]{MR3590663}. \end{proof} \begin{remark} The case that $\Omega$ has finite perimeter and $\partial_\nu u_\varepsilon=0$ almost everywhere on the reduced boundary is a generalisation of the situation in which $\partial\Omega\in C^2$ and the level sets of $u_\varepsilon$ meet $\partial\Omega$ at a ninety-degree angle. Such conditions arise naturally when we search for surfaces of minimal perimeter bounding a prescribed volume and may be useful also for models containing Willmore's energy \cite{alessandroni2014local}. \end{remark} We give an improvement of the $L^\infty$-bound up to the boundary which implies $L^p$-convergence for all finite $p$. \begin{lemma}\label{second regularity lemma} Assume that there is $\theta\geq 1$ such that $|u_\varepsilon|\leq\theta$ on $\partial\Omega$ for all $\varepsilon>0$. Then the following hold true. \begin{enumerate} \item If $n=2$, $\partial\Omega\in C^{1,1}$ and $\theta>1$, then for every $\beta<1$ there exists a constant $C$ depending only on $\bar\alpha, \theta, \Omega$ and $\beta$ such that $\sup_{x\in \Omega} |u_\varepsilon(x)|\leq \theta+ C\varepsilon^\beta$ for all $\varepsilon>0$.
If $\theta =1$, then for every $\beta<1/2$ there exists a constant $C$ depending only on $\bar\alpha, \Omega$ and $\beta$ such that $\sup_{x\in \Omega} |u_\varepsilon(x)|\leq 1+ C\varepsilon^\beta$ for all $\varepsilon>0$. \item If $n=3$ and $\partial\Omega\in C^{1,1}$, then for every $p<\infty$ there exists $C$ depending only on $\bar\mu,\bar\alpha, \theta, p$ and $\Omega$ such that $||u_\varepsilon||_{p,\Omega}\leq C$. Furthermore, for every $\sigma>0$ there exists $C$ depending only on $\bar\alpha, \theta, \Omega$ and $\sigma$ such that $||u_\varepsilon||_{\infty,\Omega}\leq C\,\varepsilon^{-\sigma}$. \end{enumerate} \end{lemma} We expect that, also in three dimensions, uniformly bounded boundary values lead to uniform interior bounds. \begin{proof} The proof is a modified version of that of \cite[Proposition 3.6]{roger:2006ta}. We follow that proof closely, but use a different maximum principle. Let $\theta'>\theta\geq 1$ such that $\{|u_\varepsilon|>\theta'\}$ has finite perimeter and define $w_\varepsilon\coloneqq (u_\varepsilon- \theta')_+$. Then $w_\varepsilon\in W^{1,2}_0(\Omega)$ and from the same integration by parts as before we obtain that \[ ||w_\varepsilon||_{1,2,\Omega}^2 \leq \int_{\{u_\varepsilon>\theta'\}} W'(u_\varepsilon)^2 + |\nabla u_\varepsilon|^2 \,\mathrm{d}x \leq \alpha_\varepsilon(\Omega)\,\varepsilon.
\] The function satisfies \begin{align*} \int_\Omega w_\varepsilon\,(-\Delta\phi)\,\mathrm{d}x &= \int_{\{u_\varepsilon>\theta'\}} (u_\varepsilon-\theta')\,(-\Delta\phi)\,\mathrm{d}x \\ &= -\int_{\partial\{u_\varepsilon>\theta'\}} (u_\varepsilon-\theta')\,\partial_\nu\phi \d\H^{n-1} + \int_{\{u_\varepsilon>\theta'\}}\langle \nabla\phi, \nabla u_\varepsilon\rangle\,\mathrm{d}x\\ &= \int_{\partial \{u_\varepsilon>\theta'\}}\phi\,\partial_\nu u_\varepsilon - (u_\varepsilon-\theta')\,\partial_\nu\phi\d\H^{n-1} + \int_{\{u_\varepsilon>\theta'\}} \phi\,(-\Delta u_\varepsilon)\,\mathrm{d}x\\ &\leq \int_{\{u_\varepsilon>\theta'\}} \phi\,(-\Delta u_\varepsilon)\,\mathrm{d}x \end{align*} for $\phi\geq 0$. Again, this holds true because $\partial\{u_\varepsilon>\theta'\}\subset\{u_\varepsilon=\theta'\}$. Obviously \begin{align*} \int_{\{u_\varepsilon>\theta'\}} \phi\,(-\Delta u_\varepsilon)\,\mathrm{d}x &= \int_{\{u_\varepsilon>\theta'\}} \left(-\Delta u_\varepsilon + \frac1{\varepsilon^2}W'(u_\varepsilon) - \frac1{\varepsilon^2}\,W'(u_\varepsilon)\right)\,\phi\,\mathrm{d}x\\ &\leq \int_{\{u_\varepsilon>\theta'\}}\frac1\varepsilon\,\left(h_\varepsilon - \frac1\varepsilon\,W'(\theta')\right)_+\phi\,\mathrm{d}x, \end{align*} so $-\Delta w_\varepsilon \leq \frac1\varepsilon\,\chi_{\{u_\varepsilon>\theta'\}}\left(h_\varepsilon - \frac1\varepsilon\,W'(\theta')\right)_+ $ in the distributional sense. When we consider the solution $\psi_\varepsilon \in W^{1,2}_0(\Omega)$ of the problem \[ - \Delta\psi_\varepsilon = \frac1\varepsilon\,\left(h_\varepsilon - \frac1\varepsilon\,W'(\theta')\right)_+\chi_{\{u_\varepsilon>\theta'\}}, \] the weak maximum principle \cite[Theorem 8.1]{gilbarg:2001vb} applied to $w_\varepsilon - \psi_\varepsilon$ implies that \begin{equation}\label{equation estimate u psi} u_\varepsilon \leq \theta' + w_\varepsilon \leq \theta' + \psi_\varepsilon.
\end{equation} We proceed to estimate \begin{align*} ||\Delta\psi_\varepsilon||_{q,\Omega}^q &= \varepsilon^{-q} \int_{\{u_\varepsilon>\theta'\}}\left(h_\varepsilon - \frac1\varepsilon\,W'(\theta')\right)_+^q\,\mathrm{d}x \\ & \leq \varepsilon^{-q} \left(\int_{\{u_\varepsilon>\theta'\}} 1\,\mathrm{d}x\right)^{1-q/2}\left(\int_\Omega\,h_\varepsilon^2\,\mathrm{d}x\right)^{q/2}\\ &\leq \varepsilon^{-q} \left(\frac{\varepsilon^3}{W'(\theta')^2}\int_{\{u_\varepsilon>\theta'\}} \frac1{\varepsilon^3}\,W'(u_\varepsilon)^2\,\mathrm{d}x\right)^{1-q/2}\left(\varepsilon\int_{\Omega}\frac1\varepsilon\,h_\varepsilon^2\,\mathrm{d}x\right)^{q/2}\\ &\leq c_{\bar\alpha,q}\,(W'(\theta'))^{q-2}\,\varepsilon^{-q + 3(1-q/2) + q/2}\\ &= c_{\bar\alpha,q} \,(W'(\theta'))^{q-2}\,\varepsilon^{3 - 2q} \end{align*} for $1\leq q<2$. Thus $||\Delta\psi_\varepsilon||_{q,\Omega}\leq C_{\bar\alpha,q}\,(W'(\theta'))^{1-2/q}\,\varepsilon^{3/q -2}$, and by the elliptic estimate \cite[Lemma 9.17]{gilbarg:2001vb}, we have \[ ||\psi_\varepsilon||_{2,q,\Omega} \leq c_{\Omega,\bar\alpha,q}\,(W'(\theta'))^{1-2/q}\,\varepsilon^{3/q-2}. \] Let us insert this estimate into \eqref{equation estimate u psi}. If $n=3$, we take $q=3/2$ and use that $W^{2,3/2}(\Omega)$ embeds into $L^p(\Omega)$ for all finite $p$. Thus (taking some $\theta'>1$ if $\theta=1$), we see that $u_\varepsilon\leq \theta' + \psi_\varepsilon$ where $\psi_\varepsilon$ is uniformly bounded in $L^p(\Omega)$. We may use the same argument on the negative part of $u_\varepsilon$, so in total $u_\varepsilon$ is uniformly bounded in $L^p(\Omega)$ for all $1\leq p<\infty$ by domination through $\psi_\varepsilon$. Taking $q=3/(2-\sigma)>3/2$ proves the $L^\infty$-estimate by the same comparison. If $n=2$, we have a Sobolev embedding $W^{2,q} (\Omega) \to L^\infty(\Omega)$ for all $q>1$. 
Assuming that $\theta>1$ and $\beta<1$ we take $\theta'\to \theta$ to obtain \[ u_\varepsilon \leq \theta + w_\varepsilon \leq \theta + \psi_\varepsilon \leq \theta + C_{\Omega,\bar\alpha,q}\,(W'(\theta))^{1-2/q}\, \varepsilon^{3/q-2}. \] For $q = 3/(2+\beta)$, this gives $u_\varepsilon\leq 1 + C\,\varepsilon^\beta$. Here $q\in (1,2)$ is admissible since $\beta\in(0,1)$. If $\theta=1$, we may take $0 < \beta<1/2$, $q=(3-2\beta)/2 \in(1,2)$ and $1 + \varepsilon^\beta \leq \theta' \leq 1 + 2\varepsilon^\beta$ to obtain \[ |u_\varepsilon|\leq 1 + C_{\Omega, \bar\alpha, q} \varepsilon^{\beta(1-2/q) + (3/q-2)} = 1 + C_{\Omega, \bar\alpha, q}\varepsilon^\beta \] with the approximation $W'(\theta') = O(\varepsilon^\beta)$. \end{proof} \begin{corollary} If $u_\varepsilon \to u$ in $L^1(\Omega)$ and either \begin{enumerate} \item $u_\varepsilon\in C^0(\overline\Omega)$ and there exists $\theta\geq1$ such that $|u_\varepsilon|\leq\theta$ on $\partial\Omega$ for all $\varepsilon>0$ or \item $\partial\Omega\in C^2$ and $\partial_\nu u_\varepsilon = 0$ a.e.\ on $\partial\Omega$, \end{enumerate} then $u_\varepsilon\to u$ in $L^p(\Omega)$ for all $1\leq p<\infty$. \end{corollary} \begin{proof} The sequence $u_\varepsilon$ converges to $u$ in $L^1(\Omega)$ and is bounded in $L^q(\Omega)$ for all $q<\infty$ (or even $L^\infty(\Omega)$). H\"older's inequality implies $L^p$-convergence. \end{proof} \begin{remark} If $n=2$, $\beta<1/2$ and $|u_\varepsilon|\leq 1+\varepsilon^\beta$ on $\partial\Omega$, then the proof still shows that \[ \sup_{\Omega} |u_\varepsilon|\leq 1+ C\,\varepsilon^\beta \] for this particular $\beta$. The case $\beta = 1/2$ is still open at the boundary. \end{remark} For a counterexample to uniform boundedness on $\Omega$ without boundary conditions, see Example \ref{counterexample 1}. 
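As a purely illustrative sanity check of the exponent arithmetic in the case $\theta=1$ above (not part of the proof), one can verify numerically that $\beta\,(1-2/q) + (3/q-2) = \beta$ for the choice $q=(3-2\beta)/2$:

```python
# Sanity check (illustrative only): for theta = 1 with q = (3 - 2*beta)/2,
# the combined exponent beta*(1 - 2/q) + (3/q - 2) collapses to beta,
# as used in the proof above.
def combined_exponent(beta):
    q = (3 - 2 * beta) / 2
    return beta * (1 - 2 / q) + (3 / q - 2)

for beta in [0.05, 0.1, 0.25, 0.4, 0.49]:
    assert abs(combined_exponent(beta) - beta) < 1e-12
```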
Even with boundary values satisfying $|u_\varepsilon|\leq 1$ on $\partial\Omega\in C^2$, we shall construct a sequence $u_\varepsilon$ for which uniform H\"older continuity fails at the boundary in Example \ref{counterexample 2}. \section{Counterexamples to Boundary Regularity}\label{section counterexamples} The idea here is simple: the energy ${\mathcal W}_\varepsilon$ can be seen to control the $W^{2,2}$-norm of blow-ups of phase fields at $\varepsilon$-scale, since those are asymptotic to bounded entire solutions of the stationary Allen-Cahn equation $-\Delta \tilde u + W'(\tilde u) = 0$ at (almost all) points away from the boundary. At the boundary, on the other hand, the asymptotic behaviour corresponds to solutions of the same equation on half space, which are essentially governed by their boundary values. To make this precise, take $h\in C_c^\infty(\R^n)$ and $H\coloneqq \{x_n>0\}$. The energy \[ {\mathcal F}\colon W^{1,2}_{loc}(H)\to \R\cup\{\infty\}, \quad {\mathcal F}(u) = \int_H\frac12\,|\nabla u|^2 + W(u)\,\mathrm{d}x \] has a minimiser $\tilde u$ in the affine space $(1+h) + W^{1,2}_0(H)$ by the direct method of the calculus of variations. Namely, take a sequence $u_k$ such that $\lim_{k\to\infty}{\mathcal F}(u_k) = \inf{\mathcal F}(u) \leq {\mathcal F}(1+h) < \infty$. Then \[ ||\nabla u_k||_{L^2(H)}\leq C, \qquad\text{and}\quad (u_k-1)^2(x) \leq (u_k-1)^2(u_k+1)^2(x) = 4\,W(u_k(x)) \] at all points $x\in H$ such that $u_k(x)\geq 0$. Using the boundary values, also the negative part of $u_k$ is uniformly controlled in $L^2(H)$ by the $H^1$-seminorm. Thus the sequence $u_k$ is bounded in $W^{1,2}(H)$ and there exists $\tilde u$ such that $u_k\stackrel*\rightharpoonup \tilde u$ (up to a subsequence). Since the affine space is convex and strongly closed, it is weakly (equivalently, weakly*) closed and $\tilde u\in 1+h+W_0^{1,2}(H)$.
For any $R>0$, we can use the compact embedding $W^{1,2}(B_R^+)\to L^4(B_R^+)$ to deduce that \[ \int_{B_R^+}\frac12\,|\nabla \tilde u|^2 + W(\tilde u)\,\mathrm{d}x \leq \liminf_{k\to\infty} \int_{B_R^+}\frac12\,|\nabla u_k|^2 + W(u_k)\,\mathrm{d}x \leq \liminf_{k\to\infty} \int_{H}\frac12\,|\nabla u_k|^2 + W(u_k)\,\mathrm{d}x. \] Letting $R\to\infty$ shows that $\tilde u$ is in fact a minimiser of ${\mathcal F}$. If $h\geq 0$, then \[ 1 + (\tilde u-1)_+\in 1 + h + W^{1,2}_0(H), \qquad {\mathcal F}\left( 1 + (\tilde u-1)_+ \right) \leq {\mathcal F}(\tilde u) \] with strict inequality unless $\tilde u = 1 + (\tilde u-1)_+$. Since we assume $\tilde u$ to be a minimiser, we find that $\tilde u\geq 1$ almost everywhere. The same argument shows that $\tilde u\leq 1 + ||h||_\infty$ almost everywhere. Calculating the Euler-Lagrange equation of ${\mathcal F}$, we see that $\tilde u$ is a weak solution of \[ -\Delta\tilde u + W'(\tilde u) = 0. \] On the convex set \[ C_h\coloneqq \{ u\in W^{1,2}(H) \:|\: u = 1+h\text{ on }\partial H, u\geq 1\} \] the operator \[ A\colon C_h\to W^{-1,2}(H), \quad A(u) = -\Delta u + W'(u) \] is well-defined (since $n\leq 3$ and $W'$ has cubic growth) and strongly monotone, so the equation $Au=0$ has a unique solution $\tilde u\in C_h$ which coincides with the minimiser $\tilde u$ of ${\mathcal F}$ in $1+h+ W_0^{1,2}(H)$. A bootstrapping argument via elliptic regularity theory shows that $\tilde u\in C^\infty_{loc}(\overline H)$. By trace theory we have that \[ ||h||_{2, \partial H}^2 \: = \: ||\tilde u - 1||_{2,\partial H}^2 \:\leq \: ||\tilde u-1 ||_{1,2,H}^2 /2 \:\leq \:{\mathcal F}(\tilde u)\: \leq \: {\mathcal F}(1+h) . \] In this way, we can fully control the mass density $\tilde\mu = \frac12\,|\nabla \tilde u|^2 + W(\tilde u)$ created by $\tilde u$ in terms of its boundary values. For later purposes, we have to obtain suitable decay estimates for the functions $\tilde u$ depending on $h$.
In a first step, we show that the limit $\lim_{|x|\to\infty} \tilde u(x) = 1$ exists. Assume the contrary. Then there exist $\theta>1$ and a sequence $x_k\in H$ such that \[ |x_k|\to\infty, \qquad \tilde u(x_k) \geq \theta. \] Taking a suitable subsequence, we may assume that the balls $B_1(x_k)$ are disjoint and $|x_k|\geq R+2$ is so large that $h$ is supported in $B_R(0)$. If $B_2(x_k)\subset H$, we may proceed as in Lemma \ref{regularity lemma} to deduce uniform H\"older continuity on the balls $B_1(x_k)$ from the $L^\infty$-bound on $\tilde u$ and the fact that $\tilde u$ solves $\Delta\tilde u = W'(\tilde u)$. This means that there exists $r>0$ such that $\tilde u \geq (1+\theta)/2$ on $B_r(x_k)$. Otherwise, the same argument still goes through after extending $\tilde u$ by a standard reflection principle, using that the boundary values are constant on $\partial H\cap B_2(x_k)$. The geometry of $H$ gives us $\L^n(B_r(x_k)\cap H)\geq \omega_n\,r^n/2$. So we deduce that \[ {\mathcal F}(\tilde u) \geq \sum_{k=0}^\infty \int_{B_r(x_k)} W(\,(1+\theta)/2)\,\mathrm{d}x \geq \sum_{k=0}^\infty \,W((1+\theta)/2) \,\omega_n\,r^n/2= \infty \] in contradiction to the definition of $\tilde u$. Now we can estimate the decay of $\tilde u$ in a more precise fashion. Since $h\in C_c(\partial H)$, there is $C_h>0$ such that $h\leq C_h\,e^{-|x|}$ on $\partial H$. To simplify the following calculations, we assume that $C_h=1$. Then we claim that $1\leq \tilde u\leq 1+ e^{-|x|}$ for all $x\in \overline H$. Assume the contrary and observe that $\psi(x) = 1+ e^{-|x|}$ satisfies \[ \Delta\psi(x) = \left(1 + \frac{1-n}{|x|}\right)\,e^{-|x|},\qquad W'(\psi(x)) = \left(2 + 3\,e^{-|x|} + e^{-2\,|x|}\right)\,e^{-|x|}, \] so in particular $\Delta \psi(x) \leq W'(\psi(x))$ for all $x\neq 0$.
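The supersolution inequality $\Delta\psi\leq W'(\psi)$ can also be confirmed by a quick numerical check (purely illustrative, not part of the argument), using $W(s)=(s^2-1)^2/4$ and the radial formulas above for $n\in\{2,3\}$:

```python
import math

# Illustrative check: psi(x) = 1 + exp(-|x|) is a supersolution,
# i.e. Laplacian(psi) <= W'(psi), where W(s) = (s^2 - 1)^2 / 4.
def laplacian_psi(r, n):
    # radial Laplacian of 1 + exp(-r) in n dimensions, r > 0
    return (1 + (1 - n) / r) * math.exp(-r)

def W_prime_psi(r):
    t = math.exp(-r)
    return (2 + 3 * t + t * t) * t  # equals W'(1 + t) with W'(s) = s^3 - s

for n in (2, 3):
    for k in range(1, 200):
        r = 0.05 * k
        assert laplacian_psi(r, n) <= W_prime_psi(r)
```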
Since $\tilde u=1+h \leq \psi$ on $\partial H$ by assumption and $\lim_{|x|\to \infty}\tilde u(x) = 1$, there must be a point $x_0\in H$ such that \[ (\psi - \tilde u)(x_0) = \min_{H} (\psi-\tilde u) < 0, \] but then \[ \Delta(\psi - \tilde u)(x_0) \leq W'(\psi(x_0)) - W'(\tilde u(x_0)) < 0 \] since $W'$ is increasing on $[1,\infty)$ and $\psi(x_0)<\tilde u(x_0)$, so $\psi - \tilde u$ cannot be minimal at $x_0$. This proves the claim. It follows that \[ \int_{H\setminus B_R^+}W(\tilde u) \,\mathrm{d}x \leq 2\,\int_R^\infty e^{-2r}\,r^{n-1}\d r = P_n(R)\,e^{-2R} \] where $P_n$ is a polynomial of degree $n-1$ depending on the dimension. To estimate the second part of the energy functional, we use the gradient bound \[ |\nabla u(x)| \leq n\,\sqrt{n}\,\sup_{\partial Q} |u| + \frac12\,\sup_Q |\Delta u| \] from \cite[Section 3.4]{gilbarg:2001vb} where $Q$ is a cube of side length $d=1$ with a corner at $x$. Applied to our problem, for $x\in \partial B_R^+$ we can find a cube $Q$ satisfying $\bar Q \cap \bar {B_R^+} = \{x\}$ such that \[ |\nabla \tilde u(x)| \: =\: |\nabla (\tilde u-1)|(x) \leq n\,\sqrt{n}\,\sup_{\partial Q} |\tilde u -1 | + \frac12\,\sup_Q |W'(\tilde u)| \:\leq\: (n\,\sqrt{n} + 5/2)\,e^{-|x|}. \] Thus we also have \[ \int_{H\setminus B_R^+}\frac12\,|\nabla \tilde u|^2\,\mathrm{d}x \:\leq\: \left(n\,\sqrt{n} + 5/2\right)^2\,P_n(R)\,e^{-2R}. \] Finally, we remark that the same type of estimate obviously holds for $\Delta\tilde u = W'(\tilde u) \in L^2(H)$. Having given the general construction for suitable functions of zero ${\mathcal W}_1$ curvature energy, we are finally ready to apply these results to obtain counterexamples. For simplicity, we construct the counterexamples first on the half space $H$ and transfer them to bounded $\Omega$ later on. \begin{example}[Counterexample to Boundedness]\label{counterexample 1} Fix $h\in C_c^\infty(\R^n)$ such that $0\leq h\leq e^{-|x|}$, $h\not\equiv 0$ and set $h_\theta = \theta\,h$. Every function of this type induces a minimiser $\tilde u_\theta$.
We may take a sequence $\theta_\varepsilon\to \infty$ such that $\varepsilon^{n-1}\,\theta_\varepsilon^{\,4} \to 0$ and set $u_\varepsilon(x) = \tilde u_{\theta_\varepsilon}(x/\varepsilon)$. Clearly, $u_\varepsilon$ becomes unbounded as $\varepsilon\to 0$, but \begin{enumerate} \item ${\mathcal W}_\varepsilon(u_\varepsilon)\equiv 0$ and \item $S_\varepsilon(u_\varepsilon) = \varepsilon^{n-1}\,{\mathcal F}(\tilde u_{\theta_\varepsilon}) \leq C\,\varepsilon^{n-1}\,{\mathcal F}(1+h_{\theta_\varepsilon})\to 0$. \end{enumerate} So the sequence $u_\varepsilon$ induces limiting measures $\mu = \alpha = 0$, but fails to be uniformly bounded. \end{example} The next example is a technically more demanding version of this one where the energy scaling is chosen so that we create an atom of size $S>0$ at the origin. \begin{example}[Counterexample to Boundary Regularity of $\mu$] Take $h_\theta, \tilde u_\theta$ as above. Then the map \[ f\colon [0,\infty) \to \R, \quad f(\theta) = {\mathcal F}(\tilde u_{\theta}) = \inf \{{\mathcal F}(u)\:|\:u\in 1+ h_{\theta} + W^{1,2}_0(H)\} \] is continuous. To see this, take pairs $\theta_1$, $\theta_2$ and the corresponding minimisers $\tilde u_1$, $\tilde u_2$ and observe that \[ \tilde u_{1,2} = \frac{\theta_2}{\theta_1}\, \left[\tilde u_1 - 1\right] + 1 \:\:\in \:1 + h_{\theta_2} + W^{1,2}_0(H). \] Since \[ W(1+\alpha u) = ((1+\alpha u)^2 -1)^2 /4 = (2\alpha u + \alpha^2 u^2)^2/4 \leq \max\{\alpha^2, \alpha^4\} W(1+u) \] for $u\geq 0$, we have \[ f(\theta_2) = {\mathcal F}(\tilde u_2) \leq {\mathcal F}(\tilde u_{1,2}) \leq \max\left\{\left(\frac{\theta_2}{\theta_1}\right)^2, \:\left(\frac{\theta_2}{\theta_1}\right)^4\right\}\,{\mathcal F}(\tilde u_1) = \max\left\{\left(\frac{\theta_2}{\theta_1}\right)^2, \:\left(\frac{\theta_2}{\theta_1}\right)^4\right\}\,f(\theta_1). \] Reversing the roles of $\theta_1$ and $\theta_2$ shows that $f$ is continuous. Now let $S>0$.
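The elementary inequality $W(1+\alpha u)\leq\max\{\alpha^2,\alpha^4\}\,W(1+u)$ for $u\geq 0$ used in the continuity argument above can be spot-checked numerically (illustrative only, not part of the argument):

```python
# Illustrative spot check of W(1 + a*u) <= max(a^2, a^4) * W(1 + u)
# for u >= 0 and a > 0, with the double-well potential W(s) = (s^2 - 1)^2 / 4.
def W(s):
    return (s * s - 1) ** 2 / 4

for a in [0.1, 0.5, 1.0, 2.0, 10.0]:
    for k in range(0, 100):
        u = 0.1 * k
        # small additive guard for floating-point round-off
        assert W(1 + a * u) <= max(a**2, a**4) * W(1 + u) + 1e-9
```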
Due to the continuity of $f$ in $\theta$ and the trace inequality \[ \theta^2||h||_{2,\partial H}^2 = ||h_\theta||_{2,\partial H}^2 \leq {\mathcal F}(\tilde u_\theta) \] we can pick a sequence $\theta_\varepsilon\to\infty$, growing at most polynomially in $1/\varepsilon$, such that ${\mathcal F}(\tilde u_{\theta_\varepsilon}) = S\,\varepsilon^{1-n}$. As before, set $u_\varepsilon(x) = \tilde u_{\theta_\varepsilon}(x/\varepsilon)$ and observe that ${\mathcal W}_\varepsilon(u_\varepsilon)\equiv 0$, $S_\varepsilon(u_\varepsilon)\equiv S$. It remains to show that $\mu = S\,\delta_0$, i.e.\ that the limiting measure is concentrated in one point. The functions $\tilde u_{\theta}$ actually tend to shift more of their mass towards the origin as $\theta\to \infty$ since the steepness (and overall height) is best concentrated on a ball of small radius for a low energy. The same application of the maximum principle as before shows that $\tilde u_\theta \leq \tilde w_\theta \coloneqq 1 + \theta(\tilde u_1 -1)$ since \[ \Delta(\tilde w_\theta - \tilde u_\theta) = \theta\,\Delta \tilde u_1 - \Delta \tilde u_\theta = \theta\,W'(\tilde u_1) - W'(\tilde u_\theta) \leq W'(\tilde w_\theta ) - W'(\tilde u_\theta), \] where the right-hand side is monotone in $\tilde w_\theta$ and $\tilde u_\theta$, and the boundary values satisfy $\tilde u_\theta = \tilde w_\theta$ on $\partial H$ and $\lim_{|x|\to\infty}\tilde u_\theta = \lim_{|x|\to\infty} \tilde w_\theta = 1$. Like above, we now obtain that \[ \int_{H\setminus B_R^+}\frac12\,|\nabla \tilde u_{\theta_\varepsilon}|^2 + W(\tilde u_{\theta_\varepsilon})\,\mathrm{d}x \leq \max\{\theta_\varepsilon^2, \theta_\varepsilon^4\}\,P_n(R)\,e^{-2R}. \] Thus we can choose a sequence $R_\varepsilon\to \infty$ such that $\theta_\varepsilon^4\, P_n(R_\varepsilon)\,e^{-2R_\varepsilon}\to 0$ and $\varepsilon\,R_\varepsilon\to 0$ since $\theta_\varepsilon$ grows only polynomially in $1/\varepsilon$ and the exponential term dominates (take e.g.\ $R_\varepsilon = \varepsilon^{-1/2}$).
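To illustrate the choice $R_\varepsilon=\varepsilon^{-1/2}$, the sketch below evaluates the tail bound for the hypothetical choices $\theta_\varepsilon=\varepsilon^{-2}$ and $P_n(R)=R^n$ (both purely illustrative, not taken from the construction above):

```python
import math

# Illustrative check of R_eps = eps^(-1/2): with theta_eps growing
# polynomially in 1/eps (hypothetical example theta_eps = eps^(-2)) and a
# stand-in polynomial P_n(R) = R^n, the tail bound
# theta^4 * P_n(R) * exp(-2R) tends to zero as eps -> 0.
def tail_bound(eps, n=3):
    theta = eps ** (-2.0)
    R = eps ** (-0.5)
    return theta ** 4 * R ** n * math.exp(-2 * R)

vals = [tail_bound(10.0 ** (-k)) for k in range(1, 6)]
assert all(v >= 0 for v in vals)
assert vals[-1] < vals[0]      # the exponential eventually dominates
assert vals[-1] < 1e-6
```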
Thus for all $R>0$ \[ \mu_\varepsilon(B_R(0)) = \varepsilon^{1-n}\int_{B_{R/\varepsilon}^+} \frac12\,|\nabla\tilde u_{\theta_\varepsilon}|^2 + W(\tilde u_{\theta_\varepsilon})\,\mathrm{d}x \geq \varepsilon^{1-n} \int_{B_{R_\varepsilon}^+} \frac12\,|\nabla\tilde u_{\theta_\varepsilon}|^2 + W(\tilde u_{\theta_\varepsilon})\,\mathrm{d}x \to S \] and hence $\mu(B_R(0))\geq S$. Taking $R\to 0$ shows that $\mu(\{0\}) = \mu(\overline H) = S$, i.e.\ $\mu = S\,\delta_0$. \end{example} Functions as described above can appear as minimisers of functionals like ${\mathcal W}_\varepsilon + \varepsilon^{-1}\,(S_\varepsilon - S)^2$ which are used to search for minimisers of Willmore's energy with prescribed surface area -- even as functions with energy zero. The same is true for functionals including the topological penalisation term discussed below. By construction, the previous example shows that the inclusion $\operatorname{spt}(\mu)\subset \lim_{\varepsilon\to 0}u_\varepsilon^{-1}(I)$ need not be true for any $I\Subset (-1,1)$ since $u_\varepsilon\geq 1$ and thus $K=\emptyset$. We use a similar construction to demonstrate that the reverse inclusion need not hold, either. \begin{example}[Counterexample to Hausdorff Convergence]\label{counterexample 2} Using the same arguments as above, if $0\leq h\leq 2$, we can find a solution $\tilde u \in \left((1-h) + W^{1,2}_0(H)\right) \cap C^\infty_{loc}(\overline H)$ of \[ -\Delta \tilde u + W'(\tilde u) = 0\quad\text{in }H, \qquad \tilde u = 1-h\quad\text{on }\partial H \] satisfying $-1\leq \tilde u\leq 1$, $\lim_{|x|\to\infty} \tilde u(x) = 1$ and ${\mathcal F}(\tilde u) \leq {\mathcal F}(1+h)<\infty$. Decay estimates are harder to obtain here since $W'$ is not monotone inside $[-1,1]$, but we will not need them, either. If we take $h$ such that $h(0) =2$, $h\in C_c^\infty(B_1)$, we can use continuity up to the boundary to deduce that $\tilde u^{-1}(\rho)\cap B_1^+ \neq \emptyset$ for all $\rho\in(-1,1)$.
So when we set $u_\varepsilon(x) = \tilde u(x/\varepsilon)$, we see that \begin{enumerate} \item $\mu_\varepsilon(H) = \varepsilon^{n-1}\,\tilde \mu(H) = \varepsilon^{n-1}\,{\mathcal F}(\tilde u)\to 0$, \item ${\mathcal W}_\varepsilon(u_\varepsilon)\equiv 0$ and \item $0\in \lim_{\varepsilon\to 0}u_\varepsilon^{-1}(I)$ in the Hausdorff sense for all $\emptyset\neq I\Subset (-1,1)$. \end{enumerate} \end{example} \begin{example}[Counterexample to Uniform H\"older Continuity]\label{counterexample 3} If we take $h$ like in the previous example and replace it by $h^\omega(x)= h(\omega x)$ we observe that the associated minimisers satisfy \[ {\mathcal F}(\tilde u^\omega) \leq {\mathcal F}(1-h^\omega) \leq {\mathcal F}(1-h) \] for all $\omega\geq 1$ since the gradient term stays invariant in two dimensions and decreases in three, while the integral of the double well potential decreases in both cases for any fixed $h$. Thus, if we take any sequence $\omega_\varepsilon\to \infty$ and define $u_\varepsilon(x) = \tilde u^{\omega_\varepsilon}(x/\varepsilon)$, we get the same results as before. Since the functions become steeper and steeper at the boundary on scales finer than $\varepsilon$, uniform H\"older continuity up to the boundary cannot hold, even for uniformly bounded boundary values. \end{example} \begin{example}[Counterexample to Boundary Regularity of $\mu$ with $-1<u_\varepsilon<1$] We can refine the examples to show that growth of $u_\varepsilon$ on $\partial\Omega$ is not the only reason that $\mu$ might develop atoms on $\partial\Omega$, but that this is in fact possible with $|u_\varepsilon|\leq 1$. This happens when we prescribe highly oscillatory boundary values on $\partial H$. Let $h\in C_c^\infty(\partial H)$; then for any $u\in H^1(H)$ with $u|_{\partial H}=h$ we have \[ \int_H|\nabla u|^2\,\mathrm{d}x \geq [h]_{H^{1/2}(\partial H)}^2 = c_{n-1} \int_{\partial H\times \partial H}\frac{|h(x)-h(y)|^2}{|x-y|^{n}}\,\mathrm{d}x\,\mathrm{d}y.
\] for a constant depending on the dimension $n-1\in\{1,2\}$. For any $S'>0$ and $\delta>0$ we can construct $h\in C_c^\infty(\partial H)$ such that \begin{enumerate} \item $0\leq h\leq\delta$, \item $\mathrm{supp}(h) \subset B_1(0)$ and \item $[h]_{H^{1/2}}^2 \geq S'$. \end{enumerate} We construct a solution of the stationary Allen-Cahn equation with the boundary values $1-h$ as before, but for a modified potential \[ \overline W(s) = \begin{cases}W(1-2\delta) &s\leq 1-2\delta\\ W(s) &s\geq 1-2\delta\end{cases}. \] An energy minimiser for the modified potential never dips below $1-2\delta$, and in fact never below $1-\delta$ by the maximum principle, provided $\delta$ is chosen so small that $W'$ is monotone on $[1-2\delta, \infty)$. The rest of the proof goes through as before with suitable scaling of $h$ to get the right energy since $W'$ behaves correctly just below $1$, as it does slightly above $1$. We will not repeat the details. The boundary values need to be constructed with slightly more care since we cannot just have vertical growth and the $H^{1/2}$-norm behaves badly under spatial scaling. This is compensated in the boundary construction by having a larger number of faster oscillations. When we have constructed $h$ with a large enough half-norm, we can always reduce it by scaling with a constant $<1$. \end{example} For the sake of simplicity, we chose to construct the examples on half space due to its scaling invariance. Let us sketch how they can be transferred to $C^2$-domains. If $\Omega\Subset\R^n$ and $\partial\Omega\in C^2$ there exists $x_0\in \partial\Omega$ such that $|x_0| = \max_{x\in\partial\Omega}|x|$. At $x_0$, the principal curvatures of $\partial\Omega$ are strictly positive, so in a ball around $x_0$, up to a rigid motion we may write \[ \Omega\cap B_r(x_0) = \{x\in B_r(x_0)\:|\:x_n>\phi(\hat x)\} \] where $\hat x = (x^1,\dots,x^{n-1})$ and $\phi$ is a strictly convex $C^2$-function satisfying $\phi(0)=0$, $\nabla \phi(0) = 0$ and $\Omega\subset H$.
If $\Omega$ is convex in the first place, this is possible at every point $x_0\in \partial\Omega$. Thus, the function $u_\varepsilon(x) = \tilde u(x/\varepsilon)$ is well-defined on $\Omega$ for any of the functions $\tilde u$ constructed above. If $\varepsilon$ is chosen small enough, the difference between $H$ and $\Omega/\varepsilon$ becomes negligible for any given $\tilde u$ and we can still construct counterexamples to boundedness, local H\"older-continuity, relationship between $\operatorname{spt}(\mu)$ and the Hausdorff limit of the level sets and to the regularity of $\mu$ this way. Using the exponential decay (or modifying functions to become constant for larger arguments) it is also possible to create singular behaviour for example along curves in the convex portion of the boundary by placing singular solutions of the stationary Allen-Cahn equation at an increasing number of points distributed along the curve. We restricted our analysis to convex boundary points since then $u_\varepsilon = \tilde u_\theta(x/\varepsilon)$ is well-defined for all small $\varepsilon>0$, whereas at other points, half space does not provide enough information to fill an entire neighbourhood of $x_0$. We believe that the same pathologies can arise at general boundary points.
\section{Introduction.} A recent account of the baryon distribution in the local Universe concluded that about half the baryons synthesized in the Big Bang have yet to be identified (Shull, Smith \& Danforth 2012), confirming an earlier deficit of baryons found by Fukugita, Hogan \& Peebles (1998). Numerical simulations (Cen \& Ostriker 1999; Dav\'e et al 1999, 2001; Cen \& Ostriker 2006; Smith et al 2011) indicated that only 10-20\% of all baryons are in collapsed objects. Baryons in the Intergalactic Medium (IGM) exist in a wide range of densities and temperatures. Penton, Stocke \& Shull (2004) and Lehner et al (2007) concluded that another $\sim$30\% resides in low redshift Ly$\alpha$ absorption systems, while the rest could reside in the shock-heated IGM with temperatures $10^{5-7}$K and overdensities $\xi\le 100$. This unbound gas is usually known as the Warm-Hot Intergalactic Medium (WHIM). Identification of the WHIM phase and its spatial distribution at low redshift is an on-going theoretical and observational effort (for a review, see McQuinn 2016). The low density makes it difficult to detect the WHIM in emission (Soltan 2006); a more promising approach is to use absorption lines from the far ultraviolet to the soft X-ray range, but some earlier detections remain controversial (Shull et al 2012). Cappelluti et al (2012) and Roncarelli et al (2012) have searched for the contribution of the WHIM to the diffuse X-ray emission but failed to find a statistically significant result. Since the WHIM is highly ionized, there has been an extensive search for the thermal and kinematic Sunyaev-Zel'dovich CMB temperature anisotropies (hereafter tSZ and kSZ; Sunyaev \& Zeldovich 1970; Sunyaev \& Zeldovich 1972) generated by this baryon component (Atrio-Barandela \& M\"ucket 2006; Atrio-Barandela et al 2008).
Cross-correlation of CMB temperature data from {\it WMAP} or {\it Planck} with matter templates produced only marginal evidence of tSZ anisotropies due to the WHIM (Suarez-Vel\'asquez et al 2013b, G\'enova-Santos et al 2013, G\'enova-Santos et al 2015). Combining X-ray and tSZ observations could be a promising tool to study the WHIM (Ursino, Galeazzi \& Huffenberger 2014). The first evidence of warm-hot gas beyond the virial radius of clusters was presented in Planck Collaboration (2013), who detected a filamentary structure between the cluster pair A399-A401. The distribution of gas in a cosmic web has also been confirmed by XMM-Newton observations of the cluster Abell 2744 by Eckert et al (2015), who found filamentary structures of gas at temperature $10^7$K, coherent over a scale of 8 Mpc. At those temperatures and densities, the kSZ effect can have a contribution of similar amplitude to the tSZ effect. The kSZ effect has been used to trace large scale peculiar velocity fields (Kashlinsky et al 2008, Atrio-Barandela et al 2015) and the anisotropies due to the pair-wise velocity dispersion of clusters and galaxies have been measured (Hand et al 2012, Soergel et al 2016, Schaan et al 2016, de Bernardis et al 2017). These latter observations probe baryons on cluster and galaxy scales but have not yet provided a measurement of the fraction of free electrons. A search for the kSZ anisotropies due to the WHIM found no statistically significant evidence in {\it WMAP} data (G\'enova-Santos et al 2009). Only recently, Hern\'andez-Monteagudo et al (2015) and Planck Collaboration (2016) have presented evidence of the peculiar motion of extended gas on Mpc scales with a statistical significance at the $3-3.7\sigma$ level.
Hill et al (2016) measured the kSZ effect correlating {\it WMAP} and {\it Planck} data with a galaxy sample from the Wide-field Infrared Survey Explorer (WISE), verifying that baryons approximately trace the Dark Matter (DM) distribution down to $\sim$Mpc scales. The cross-correlation of gravitational lensing maps with tSZ anisotropies is another potential probe of the relation between the hot, ionized gas and the matter density field. Hill \& Spergel (2014) determined the cross-power spectrum of weak lensing of the CMB with the tSZ anisotropies measured by {\it Planck} at the $6\sigma$ confidence level, obtaining a constraint on the bias between the hydrostatic mass and the true mass of clusters and groups at redshifts $z\le 2.5$. These authors interpreted their signal as being produced by baryons in halos. In parallel, Van Waerbeke, Hinshaw \& Murray (2014) reported a detection of the cross-correlation between the tSZ signal from {\it Planck} and the galaxy lensing convergence from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) with the same level of significance. Originally, the data were interpreted as the signal from warm and diffuse baryons. Since the distribution of galaxies in the survey peaks at $z=0.37$, this result suggested that a large fraction of the missing baryon population had been identified. New studies and numerical simulations demonstrated that the majority of the signal came from a small fraction of baryons within halos (Ma et al 2015, Hojjati et al 2015, Battaglia, Hill \& Murray 2015). On large angular scales the simulations showed a correlation slightly above that of the halo model prediction, pointing to a $10-15\%$ contribution from unbound gas. The latter contribution is degenerate with respect to cosmological and physical parameters and the data did not permit a robust inference (Battaglia et al 2015).
Hojjati et al (2016) improved the statistical significance of the lensing -- tSZ cross-correlation by using a larger weak lensing map derived from the Red Sequence Cluster Lensing Survey (RCSLenS) and found that their signal was best interpreted if AGN feedback removed a large quantity of hot gas from galaxy groups. Estimating the contribution of unbound gas to the tSZ--lensing cross-correlation results described above requires an analytical model that correctly predicts the amplitude and shape of the expected signal. In Atrio-Barandela \& M\"ucket (2006) we described the unbound gas in the weakly non-linear filaments by means of the log-normal Probability Distribution Function (PDF). In this article we use this description of the unbound gas to predict the cross-correlation of the lensing convergence due to the large scale matter distribution and the tSZ temperature anisotropies. The outline of this paper is as follows: In Section~2 we describe the model and compute the tSZ--convergence cross-correlation; the derived expressions are solved numerically and the results are presented in Section~3; finally, our conclusions are summarized in Section~4. \section{Lensing-tSZ correlation in the filament model.} The WHIM generates temperature anisotropies on the CMB via the tSZ effect. If $n_e$ and $T_e$ are the electron density and temperature along the line of sight, then the anisotropy generated by the free electrons residing in the potential wells of the WHIM filaments, in units of the current CMB black-body temperature $T_0$, is $\Delta T_{tSZ}/T_0=Y_CG(\nu)$.
The comptonization parameter measures the integrated electron pressure along the line of sight, $Y_C=(k_B\sigma_T/m_ec^2)\int n_eT_e\,a\,dw$, with $a$ the scale factor, $w$ the comoving radial distance, $m_ec^2$ the electron rest energy, $k_B$ the Boltzmann constant and $\sigma_T$ the Thomson cross section; $G(\nu)=(x\coth(x/2)-4)$ gives the frequency dependence of the tSZ effect, where $x=h\nu/k_BT_0$ is the reduced frequency, with $h$ the Planck constant and $\nu$ the frequency of observation. This frequency dependence is different from that of any other known foreground, making it possible to distinguish the tSZ anisotropy from other CMB anisotropies given sufficient multi-frequency coverage. The data are usually expressed in terms of the comptonization parameter $Y_C$ instead of the temperature anisotropy. The intrinsic CMB temperature anisotropies are lensed by the large scale structure traced by galaxy catalogs. The tSZ anisotropies are themselves generated by the ionized gas within the same large scale structure. The two-point correlation function of the lenses and the spatial variations of the electron pressure along the line of sight is the weighted average of the lensing kernel $\Delta\kappa_{eff}$ due to the large scale structure traced by galaxy catalogs and the anisotropies generated by the tSZ effect of the ionized gas. The weight is given by the probability that the gravitational fields that lens the primary CMB anisotropies contain the electrons that generate the tSZ anisotropies.
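As an aside, the spectral function $G(\nu)$ changes sign at the well-known tSZ null frequency near 217 GHz; the short numerical sketch below (purely illustrative, using standard values of the physical constants) locates this zero:

```python
import math

# Illustrative: the tSZ spectral function G(nu) = x*coth(x/2) - 4, with
# x = h*nu/(k_B*T0), changes sign (the tSZ null) near 217 GHz.
h_P = 6.62607015e-34   # Planck constant [J s]
k_B = 1.380649e-23     # Boltzmann constant [J/K]
T0 = 2.725             # CMB black-body temperature [K]

def G_of_x(x):
    return x / math.tanh(x / 2) - 4.0   # coth(x/2) = 1/tanh(x/2)

# bisection for the zero crossing of G (G(1) < 0 < G(10), G increasing)
lo, hi = 1.0, 10.0
for _ in range(80):
    mid = (lo + hi) / 2
    if G_of_x(mid) < 0:
        lo = mid
    else:
        hi = mid
x_null = (lo + hi) / 2
nu_null_GHz = x_null * k_B * T0 / h_P / 1e9

assert abs(x_null - 3.830) < 0.01
assert 215 < nu_null_GHz < 220   # the familiar ~217 GHz tSZ null
```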
If the ionized gas generates a comptonization parameter $\Delta Y_C=(k_B\sigma_T/m_ec^2)n_eT_ea(dw/dz)dz$ at redshift $z_1$ in the direction $\hat{x}_1$ and the lenses are located at $z_2$ in direction $\hat{x}_2$ then their correlation is \begin{equation} C(\theta)\equiv \langle\kappa_{eff} Y_C\rangle(\theta)= \int_0^{z_H}\int_0^{z_H}\langle \Delta Y_C (\hat{x}_1,w_1)\Delta\kappa_{eff}(\hat{x}_2,w_2)\rangle dz_1 dz_2 , \label{eq:cfull} \end{equation} where $\theta$ is the angle between the directions $\hat{x}_1$ and $\hat{x}_2$, i.e., $\cos\theta=\hat{x}_1\cdot\hat{x}_2$ and $w_1,w_2$ are the comoving radial distances (for notation and definitions, see Bartelmann \& Schneider 2001). The integration extends out to the redshift of the surface of the last scattering, $z_H$. The average $\langle\cdots\rangle$ takes into account the distribution of the WHIM filaments and of the lenses and their correlation. Let us briefly summarize our WHIM model and the effect of a population of lenses before discussing their statistics. \subsection{The log-normal distribution of WHIM filaments.} Numerical simulations have shown that at redshifts $z>1$ and at small scales the IGM forms filaments of mildly non-linear overdensities, giving rise to the observed Ly$\alpha$ forest. At $z<1$ most of the IGM baryons resides in shock-heated regions of low density gas at temperatures $0.01-1$KeV (Shull et al 2012) and sizes larger than 1 Mpc (Cen \& Ostriker, 2006). We model the distribution of this unbound IGM gas as a log-normal random field evolving with time. The log-normal PDF was introduced in Cosmology by Coles \& Jones (1991) to describe the non-linear distribution of matter in the Universe when the peculiar velocity field was still in the linear regime. Based on the improved Wiener density reconstruction from the Sloan Digital Sky Survey, Kitaura et al (2009) found that this distribution describes the statistics of the matter inhomogeneities on scales larger than $7h^{-1}$Mpc. 
In the log-normal approximation, the number density of baryons at $\vec{x}$, located at redshift $z$ and at a proper distance $|\vec{x}(z)|$, is $n_B(\vec{x},z)=n_0(z)\xi$, where $\xi$ is a log-normally distributed random variable normalized to have unit mean, $\langle\xi\rangle=1$, and $n_0(z)=f_e\rho_B(1+z)^3/\mu_B m_p$ is the mean baryon number density, $\rho_B$ the baryon density, $f_e$ the fraction of baryons in the WHIM, $m_p$ the proton mass, $\mu_B=4/(8-5Y)$ the mean molecular weight of the IGM and $Y$ the He fraction by weight, which we fix to $Y=0.24$. The non-linear baryon density contrast $\xi$ in units of the baryon mean density should not be confused with $\delta$ or $\delta_B$, respectively the matter and IGM baryon overdensities in the {\it linear regime}; $\xi$ is given in terms of the Gaussian-distributed variable $\delta_B$ (Choudhury et al 2001, Atrio-Barandela \& M\"ucket 2006) \begin{equation} \xi={\rm e}^{\delta_B(\vec{x},z)-\sigma_B^2(z)/2} , \label{eq:logn} \end{equation} where $\sigma_B^2(z) = \langle\delta_B^2(\vec{x},z)\rangle$ is the variance of the zero-mean linear IGM baryon density field. The number density of electrons in the IGM, $n_e$, is obtained by assuming that recombination is in equilibrium with photo-ionization and collisional ionization. For the conditions of the IGM, temperatures in the range $10^5-10^7$K and density contrasts $\xi \le 100$, the gas can be considered fully ionized, so $n_e\approx n_B$. The spectrum of density fluctuations of the baryons in the IGM is related to the DM density contrast $\delta_{DM}$ by (Fang et al 1993) \begin{equation} \delta_B(k,z)=\frac{\delta_{\rm DM}(k,z)}{[1+k^2L_{0}^2]} . \label{pk} \end{equation} The cut-off length $L_{0}$ corresponds to the scale below which baryon density perturbations are smoothed due to physical processes.
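The normalization in eq.~(\ref{eq:logn}) guarantees $\langle\xi\rangle=1$, since $\langle e^{\delta_B}\rangle = e^{\sigma_B^2/2}$ for a zero-mean Gaussian variable; a deterministic numerical check (purely illustrative) is:

```python
import math

# Illustrative check that xi = exp(delta_B - sigma^2/2), with delta_B a
# zero-mean Gaussian of variance sigma^2, has unit mean <xi> = 1.
def mean_xi(sigma, half_width=12.0, steps=20000):
    # trapezoidal quadrature of exp(d - sigma^2/2) against the Gaussian density
    a, b = -half_width * sigma, half_width * sigma
    dx = (b - a) / steps
    total = 0.0
    for i in range(steps + 1):
        d = a + i * dx
        w = 1.0 if 0 < i < steps else 0.5
        gauss = math.exp(-d * d / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
        total += w * math.exp(d - sigma**2 / 2) * gauss * dx
    return total

for sigma in (0.3, 1.0, 2.0):
    assert abs(mean_xi(sigma) - 1.0) < 1e-4
```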
The variance of the baryon density field is given by \begin{equation} \sigma_B^2(z)=\frac{D_+^2(z)}{2\pi^2}\int \frac{P_{DM}(k)}{[1+L_0^2(z)k^2]^2}k^2dk , \end{equation} where $D_+(z)$ is the linear growth factor of matter density perturbations. \subsubsection{Baryon damping scales.} At redshifts $z\le 1$ small scale baryon perturbations are erased by shock-heating (Klar \& M\"ucket 2010). If $T_{\mathrm{IGM}}$ is the mean IGM temperature, the comoving cut-off scale $L_0$ is determined by the condition that the linear velocity perturbation $\vec{v}(\vec{x},z)$ averaged on a scale $L_{0}$ is equal to or larger than the IGM sound speed $c_s=(k_BT_{\mathrm{IGM}}(z)/m_p)^{1/2}$. The IGM temperature is determined by the evolution of the UV background. At redshifts $z\le 3$, the temperature varies within the range $T_{\mathrm IGM}=[10^{3.6}-10^4]$K and it is weakly dependent on redshift (Tittley \& Meiksin, 2007). For $T_{IGM}=10^4$K the sound speed is $c_s\simeq 10$km/s; in our subsequent analyses we will fix the sound speed to this value at all redshifts. In the linear regime and in comoving coordinates, $\dot{\delta}=-(1+z)\nabla\vec{v}$ and $\dot{\delta}=Hf\delta$ with $f(z)=d\ln\delta/d\ln a$. In Fourier space, the peculiar velocity $\vec{v}(k)$ on a scale $k=2\pi/L_0$ is $|\vec{v}(k,z)|\sim(L_0/2\pi)Hf(z)\delta(k,z)$. From the condition $|\vec{v}|\ge c_s$ and expressing $\delta(k,z)=\delta_0(k)D_+(z)$, with $\delta_0(k)$ the current amplitude of the density contrast at wavenumber $k=2\pi/L_0$, we obtain $L_0\ge[2\pi c_s](1+z)/[Hf(z)\delta_0(k)D_+(z)]$. This condition is valid only in the linear regime, hence the lower bound is obtained by imposing $\delta_0\simeq 1$.
Finally \begin{equation} L_0(z)=\frac{2\pi(1+z)c_sH_0^{-1}}{(\Omega_{\Lambda}+\Omega_m(1+z)^3)^{1/2} f(z)D_+(z)} , \label{eq:shock} \end{equation} where $H_0$ is the Hubble constant and $\Omega_\Lambda$, $\Omega_m$ are the energy density of the cosmological constant and matter density in units of the critical density. In our numerical estimates, we fixed $\Omega_m$, $\Omega_\Lambda$ to their concordance values. At $z=0$ the comoving damping scale is $L_0\approx 1.7h^{-1}$Mpc. At redshifts $z \ge 1.0$, shock heating is no longer so effective and the damping scale $L_{0}$ corresponds to the comoving Jeans length at the conditions of the photo-ionized IGM \begin{equation} L_0(z)=H_0^{-1}\left[\frac{2\gamma k_{\rm B}T_b(z)}{3\mu m_p\Omega_m(1+z)}\right]^{1/2} , \label{eq:jeans} \end{equation} where $\gamma$ is the polytropic index and $T_b$ is the average background temperature of the IGM. This last parameter is poorly constrained by observations; Schaye et al (2000) argue that at $z\simeq 3$ HeII re-ionization requires $T_b$ to be larger than $5\times 10^4$K while Viel \& Haehnelt (2006) gave an upper bound of $T\simeq 2\times 10^5$K. To simplify, we fix the average background temperature to the constant value $T_b=10^5$K, within the interval allowed by observations. \subsubsection{The IGM temperature.}\label{sec:temperature} To describe the IGM distribution at all redshifts we will consider two limiting cases: At $z>1$ the cut-off scale is the Jeans length given by eq.~(\ref{eq:jeans}) and at $z\le 1$ the shock-heated scale $L_0$ of eq.~(\ref{eq:shock}). To compute the tSZ contribution to CMB temperature anisotropies due to the IGM, we need to specify its temperature at each position and redshift. For the Jeans cut-off length scale we assume the temperature follows a polytropic equation of state $T(\hat{x},z)=T_0(z)\xi^{(\gamma-1)}$.
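The two damping scales of eqs.~(\ref{eq:shock}) and (\ref{eq:jeans}) are straightforward to evaluate. The sketch below is our own illustration, not part of the original calculation: it adopts $\Omega_m=0.3$, $c_s=10$ km/s, $T_b=10^5$ K, the common approximation $f(z)\simeq\Omega_m(z)^{0.55}$, and the standard integral expression for $D_+(z)$; with these choices the shock-heating scale at $z=0$ comes out of order $1h^{-1}$Mpc, comparable to the value quoted above.

```python
import math

Om, Ol = 0.3, 0.7                  # illustrative concordance values
E = lambda z: math.sqrt(Ol + Om * (1 + z)**3)

def growth(z, zmax=500.0, n=20000):
    # Unnormalized growth factor D_+(z) ~ E(z) * int_z^inf (1+x)/E(x)^3 dx
    h = (zmax - z) / n
    s = sum((1 + z + (i + 0.5) * h) / E(z + (i + 0.5) * h)**3
            for i in range(n)) * h
    return E(z) * s

D0 = growth(0.0)                   # normalize so D_+(0) = 1

def L0_shock(z, cs=10.0):
    # eq. (shock): comoving cut-off in h^-1 Mpc (cs in km/s, H0 = 100h km/s/Mpc)
    f = (Om * (1 + z)**3 / E(z)**2)**0.55    # f(z) ~ Omega_m(z)^0.55 (assumption)
    return 2 * math.pi * (1 + z) * (cs / 100.0) / (E(z) * f * growth(z) / D0)

def L0_jeans(z, Tb=1e5, gamma=1.5, Y=0.24):
    # eq. (jeans): comoving Jeans-like cut-off in h^-1 Mpc
    kB, mp = 1.380649e-23, 1.67262e-27       # SI values
    mu = 4.0 / (8.0 - 5.0 * Y)
    v = math.sqrt(2 * gamma * kB * Tb / (3 * mu * mp * Om * (1 + z)))  # m/s
    return (v / 1e3) / 100.0                 # km/s -> h^-1 Mpc

print(L0_shock(0.0), L0_jeans(1.0))
```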
We take $T_0(z)=1.4\times 10^4(1+z)^\beta$K in agreement with the values obtained by Hui \& Haiman (2003), with a weak dependence on redshift ($\beta\approx 0$). We chose $\gamma=1.5$ and $\beta=1$ as a conservative upper limit to WHIM tSZ anisotropies at $z\ge 1$. At $z\le 1$, the shock heated IGM has a complex distribution of densities and temperatures. Kang et al (2005), hereafter K05, compute phase-space diagrams that can be fitted by the following equation of state: $\log_{10}(T_e(\xi)/10^8K)=-2/\log_{10}(4+\xi^{\alpha+1/\xi})$, valid for overdensities $\xi\le 100$. Alternatively, Cen \& Ostriker (2006), hereafter C06, find lower IGM temperatures; their phase-space diagram approximately corresponds to the equation of state $\log_{10}(T_e(\xi)/10^8K)=-2.5/\log_{10}(4.0+\xi^{0.9})$. We have considered all equations of state to be independent of redshift except the polytropic one. These models are represented in Fig.~\ref{fig:fig1}a; solid (black), dashed (blue) and dot-dashed (red) lines correspond to K05 with $\alpha=(3,1.5,1)$, respectively. The triple-dot dashed (green) line corresponds to C06 and the dotted (gold) line corresponds to the polytropic model at $z=1$. The overall amplitude of the cross-correlation function is proportional to the fraction of baryons in the WHIM and to the mean temperature of the electron gas. In our numerical estimates we have assumed that this baryon fraction is the same at all redshifts and equal to $f_e=0.5$. The overdensity weighted temperature average $\bar{T}_e\equiv\langle T_e\xi\rangle/\langle\xi\rangle$ depends on the temperature model. For the K05 and C06 models this average in the overdensity interval $\xi=[1,100]$ is $\bar{T}_e\approx[20,7,3,0.7]\times 10^6$K, weakly dependent on redshift; for the polytropic model, whose equation of state varies with redshift, the mean temperature is in the range $\bar{T}_e=[0.4-1.7]\times 10^6$K.
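The K05 and C06 equations of state quoted above are simple closed-form fits and can be evaluated directly. The snippet below (our own check, coding the expressions exactly as written) illustrates how steeply the K05 temperatures rise with overdensity compared with C06:

```python
import math

def Te_K05(xi, alpha):
    # K05 fit: log10(Te / 1e8 K) = -2 / log10(4 + xi^(alpha + 1/xi)), xi <= 100
    return 1e8 * 10.0 ** (-2.0 / math.log10(4.0 + xi ** (alpha + 1.0 / xi)))

def Te_C06(xi):
    # C06 fit: log10(Te / 1e8 K) = -2.5 / log10(4.0 + xi^0.9)
    return 1e8 * 10.0 ** (-2.5 / math.log10(4.0 + xi ** 0.9))

for xi in (1.0, 10.0, 100.0):
    print(f"xi={xi:6.1f}  K05(a=3)={Te_K05(xi, 3.0):9.3g} K  "
          f"K05(a=1.5)={Te_K05(xi, 1.5):9.3g} K  C06={Te_C06(xi):9.3g} K")
```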
Any constraint on the amplitude of the cross-correlation will translate into an upper limit on the product $f_e\bar{T}_e$ and, if $f_e$ is independently measured, into a constraint on the mean temperature of the IGM, offering a direct probe of the physical state of the WHIM. \subsection{Lensing kernel} The gravitational field generated by weak density perturbations lenses the radiation propagating in the Universe. The deflection angle of the weakly deflected rays can be related to an effective surface-mass density $\kappa_{eff}$, known as convergence, closely related to the mass distribution (Bartelmann \& Schneider 2001). The convergence due to a population of lenses distributed as $G(w)dw=p(z)dz$ along the line of sight is \begin{equation} \kappa_{eff}(\hat{x}) = \frac{3H_0^2\Omega_m}{2c^2}\int_0^{w_H} W(w) f_K(w)\frac{\delta[f_K(w)\hat{x},w]}{a(w)}dw , \label{eq:convergence} \end{equation} where $\hat{x}$ is the direction in the sky into which the light ray starts to propagate, $f_K(w)$ is the comoving angular diameter distance at $w$, $\delta$ the matter density contrast along the unperturbed light ray and $c$ the speed of light. The kernel weights the relative contribution of lenses along the line of sight \begin{equation} W(w)\equiv\int_w^{w_H}dw^\prime G(w^\prime)\frac{f_K(w^\prime-w)}{f_K(w^\prime)} . \label{eq:weight} \end{equation} The redshift distribution of galaxies is modeled as $p(z)=A(z/z_0)^2\exp[-(z/z_0)^{3/2}]$, where $z_0$ is the effective depth of the lens population, related to the mean redshift of the distribution as $z_m=1.412z_0$ (Smail et al 1995); the normalization constant is fixed by setting $\int p(z)dz=1$. The distribution of galaxies in the CFHTLenS peaks at $z=0.37$ (van Waerbeke et al 2014), which corresponds to $z_0=0.3$. These lens distributions are represented in Fig.~\ref{fig:fig1}b; dashed (blue), dot-dashed (red) and triple dot-dashed (green) lines correspond to $z_0=0.1,0.3,0.5$ respectively.
The more recent analysis by Hojjati et al (2016) uses the deeper RCSLenS catalog that includes all galaxies with $mag_r>18$. These authors provide a numerical fit of their lens distribution, plotted in Fig.~\ref{fig:fig1}b with a solid (black) line. From eq.~(\ref{eq:convergence}), the contribution from lenses on a thin shell of width $dz$ at comoving distance $w=w(z)$ and direction $\hat{x}$ is \begin{equation} \Delta\kappa_{eff}=\frac{3H_0^2\Omega_m}{2c^2} W(z) f_K(z)\frac{\delta[f_K(z)\hat{x},z]}{a(z)}\frac{dw}{dz}dz . \label{eq:delta_convergence} \end{equation} In Fig.~\ref{fig:fig1}c we plot the convergence of eq.~(\ref{eq:delta_convergence}) for $\delta(f_K(w)\hat{x},w)=1$ as a function of redshift for the four lens distributions given in Fig.~\ref{fig:fig1}b, with lines following the same conventions. The integration range of eq.~(\ref{eq:convergence}) must extend up to the horizon $w_H$ or up to a redshift $z^{up}$ high enough to include the effect of all possible lenses. We took $z^{up}=5$, and no significant differences were found when taking $z^{up}=10$. This is expected since the lensing kernel drops exponentially following the distribution of the lensing sources (see Figs.~\ref{fig:fig1}b and ~\ref{fig:fig1}c). Eq.~(\ref{eq:delta_convergence}) was derived in the thin lens approximation and only terms linear in the density contrast were retained. Higher order terms contain products of the density field; while the density contrast can be large for a density perturbation crossed by a given ray, the average overdensity is $\delta\ll 1$ for most rays, so higher order terms can be safely neglected (Bartelmann \& Schneider 2001). Within this approximation the PDF of the lenses is that of the linear density field and, consequently, is well described by a gaussian distribution.
\subsection{Lensing -- TSZ cross-correlation} To compute the correlation function of lenses and WHIM sources of tSZ anisotropies given by eq.~(\ref{eq:cfull}), the average $\langle\cdots\rangle$ has to account for the probability distribution of the WHIM filaments and of the lenses. Let $dP(\xi,\delta)=F(\xi,\delta)d\xi d\delta$ be the probability that a filament with overdensity $\xi$ is located at $(\hat{x}_1,z_1)$ when an overdensity $\delta$ is at $(\hat{x}_2,z_2)$, with $F(\xi,\delta)$ the associated PDF. Then, the average in eq.~(\ref{eq:cfull}) can be written as \begin{equation} \langle \Delta Y_C(\hat{x}_1,z_1)\Delta\kappa_{eff}(\hat{x}_2,z_2)\rangle(\theta)= \int_0^{z_1^{up}}dz_1\int_0^{z_2^{up}}dz_2\int_1^{100}d\xi\int_{-\infty}^\infty d\delta \Delta Y_C(\hat{x}_1,z_1)\Delta\kappa_{eff}(\hat{x}_2,z_2) F(\xi,\delta) , \label{eq:corr} \end{equation} with $\cos\theta=\hat{x}_1\cdot\hat{x}_2$ and $z_1^{up}$ and $z_2^{up}$ the highest redshifts beyond which WHIM and lenses do not generate a significant cross-correlation. To complete our model we need to specify the bivariate PDF of the lens-filament distribution, $F(\xi,\delta)$. As discussed above, lensing is dominated by the large scale structure and the lensing overdensities $\delta$ are well described by a gaussian PDF, but the non-linear overdensities $\xi$ of the IGM filaments are distributed according to a log-normal PDF. 
Since $\xi=Ae^{\delta_B}$ is log-normal distributed, $\log(\xi)$ follows a gaussian distribution with mean $\mu_\xi=-\sigma_B^2/2$ and variance $\sigma_B^2$; in terms of this variable the probability can be written as $dP={\cal G}(\log(\xi),\delta) d\log(\xi)d\delta$ where ${\cal G}$ is a bivariate gaussian and \begin{equation} F(\xi,\delta)=\frac{1}{2\pi\xi\sigma_B\sigma_\delta(1-\rho_c^2)^{1/2}} \exp\left[-\frac{1}{2(1-\rho_c^2)} \left(\frac{(\log\xi+\sigma_B^2/2)^2}{\sigma_B^2}- 2\rho_c\frac{(\log\xi+\sigma^2_B/2)(\delta-\mu_\delta)}{\sigma_B\sigma_\delta} +\frac{(\delta-\mu_\delta)^2}{\sigma_\delta^2}\right)\right] . \label{eq:pdf} \end{equation} In this expression $\mu_\delta$ is the mean of the matter density contrast, in this case $\mu_\delta=0$. The variance of the matter density field is $\sigma_\delta^2=(D_+^2(z)/2\pi^2)\int P_{DM}(k)k^2dk$. At small scales, $P_{DM}(k)\propto k^{-3}$ and the integral is logarithmically divergent. Therefore, we remove small scale perturbations by filtering the density field with a {\it top-hat} window of radius $R_{cut}=0.5h^{-1}$Mpc. Physically this corresponds to removing from the lensing kernel the contribution from galaxies, groups and clusters. Then \begin{equation} \sigma_\delta^2=\frac{D_+^2(z)}{2\pi^2}\int P_{DM}(k)W^2_{th}(kR_{cut})k^2dk , \end{equation} where $W_{th}(kR_{cut})$ is the Fourier transform of the {\it top-hat} filter. Changing the cut-off scale to $R_{cut}=1h^{-1}$Mpc reduces $\sigma_\delta$ by a factor 0.85.
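As a consistency check, a bivariate density of this type — a correlated gaussian in $\log\xi$ and $\delta$, written below with the standard sign conventions for a bivariate gaussian and the $1/\xi$ Jacobian — integrates to unity over $\xi\in(0,\infty)$ and $\delta\in(-\infty,\infty)$. The variances and correlation in this sketch are illustrative, not values from our model:

```python
import math

sB, sd, rho = 0.9, 0.5, 0.4        # illustrative values, not fitted

def F(xi, delta):
    # Bivariate PDF of (xi, delta): log(xi) ~ N(-sB^2/2, sB^2), delta ~ N(0, sd^2),
    # correlation rho between log(xi) and delta; 1/xi is the log-normal Jacobian.
    u = (math.log(xi) + sB**2 / 2) / sB
    v = delta / sd
    q = (u * u - 2 * rho * u * v + v * v) / (1 - rho**2)
    return math.exp(-q / 2) / (2 * math.pi * xi * sB * sd * math.sqrt(1 - rho**2))

# midpoint-rule integration on a wide grid; substitute u = log(xi), d xi = xi du
nu, nv = 400, 400
du, dv = 12 * sB / nu, 12 * sd / nv
total = 0.0
for i in range(nu):
    u = -sB**2 / 2 - 6 * sB + (i + 0.5) * du   # centered on the mean of log(xi)
    xi = math.exp(u)
    for j in range(nv):
        delta = -6 * sd + (j + 0.5) * dv
        total += F(xi, delta) * xi * du * dv
print(total)                       # ~ 1
```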
The coefficient $\rho_c=\langle\log\xi\,\delta\rangle/\sigma_B\sigma_\delta$ is the correlation between the two gaussian variables \begin{equation} \rho_c(r)=\frac{D_+(z_1)D_+(z_2)}{2\pi^2\sigma_B\sigma_\delta} \int \frac{P_{DM}(k)}{1+L_0^2(z_1)k^2}W_{th}(kR_{cut})j_0(k|\vec{x}_1-\vec{x}_2|)k^2dk , \label{eq:rc} \end{equation} where $j_0$ is the zeroth-order spherical Bessel function and $r=|\vec{x}_1-\vec{x}_2|$ is the comoving distance between a filament in the IGM at $\vec{x}_1$ and a lens at $\vec{x}_2$, corresponding to redshifts $z_1$ and $z_2$, and separated by an angle $\theta$. As in Suarez-Vel\'asquez, M\"ucket \& Atrio-Barandela (2013a) we use the flat sky approximation and \begin{equation} r\equiv|\vec{x}_1-\vec{x}_2|\approx \sqrt{l_{\perp}(\theta,z_1)^2 + [w(z_1)-w(z_2)]^2} , \label{eq:flat_sky} \end{equation} with $l_\perp(\theta,z_1)$ being the transverse distance between two points located at the same redshift. Notice that $\rho_c(0)\ne 1$, since the two distributions, IGM and lenses, are not fully correlated. In Fig.~\ref{fig:fig1}d we represent the absolute value of the correlation coefficient for different cosmological parameters. We assume a flat Universe, i.e., $\Omega_m+\Omega_\Lambda=1$. The matter power spectrum is normalized to $\sigma_8=0.8$. We verified that varying parameters within the ranges given in Fig.~\ref{fig:fig1}d has an effect on the comptonization-convergence cross-correlation that is small compared with the differences in the lens distribution or the equation of state of the IGM, so we will not discuss further variations of cosmological parameters and their effect on our results. \section{Results and discussion.} We compute the comptonization-convergence cross-correlation using eq.~(\ref{eq:corr}). The integration over the lensing part extends up to the redshift of the last scattering surface.
However, as Fig.~\ref{fig:fig1}c indicates, the lensing kernel drops exponentially following the distribution of the lensing sources and, effectively, we can stop the integration at $z_2^{up}=1,2.6,4.6$ for lens distributions with $z_0=0.1,0.3,0.5$, respectively, when the kernel has decreased by a factor $10^{-15}$ from its maximum value. For RCSLenS sources the integration stops at $z_2^{up}=4.6$, when a similar drop factor has been reached. We verified that, as expected, extending the integration further does not increase the cross-correlation. It is more delicate to decide out to what redshift our model of the IGM remains valid. At $z\ge 1$ shock-heating stops being dynamically important. In Fig.~\ref{fig:fig2}a we compute the amplitude of the effective lensing-comptonization cross-correlation at zero lag, $C(0)=\langle\kappa_{eff}Y_C\rangle(0)$, as a function of the upper limit of integration $z_1^{up}$. The results, from top to bottom, correspond to K05 with $\alpha=3,1.5,1$ (black solid, dashed blue and dot-dashed red lines) and to the C06 model (triple dot-dashed green line). In Fig.~\ref{fig:fig2}b we plot the differential contribution. This figure indicates that most of the cross-correlation originates from $z\le 1$. Since the cross-correlation scales with the fraction of electrons in the IGM as $\langle\kappa_{eff}Y_C\rangle\propto(f_e/0.5)$, we need to know the fraction of electrons in the WHIM to translate constraints on $C(0)$ into constraints on the mean temperature of the gas. Although 80\% of all baryons reside in Ly$\alpha$ systems at redshift $z\simeq 2$ and $f_e\le 0.2$ at that redshift (Fukugita et al 1998), numerical simulations indicate that $f_e\ge 0.4$ out to $z\simeq 1$ (C06), the range in redshift space that dominates the cross-correlation. Therefore, by taking $f_e=0.5$ and constant, our constraints on the mean WHIM temperature will be reasonably accurate.
The contribution of the IGM comptonization parameter to the cross-correlation from $z\ge 1$ is less than 10\%. In fact, this correction is overestimated. First, the fraction of baryons in the WHIM drops with redshift. Second, as mentioned in Sec~\ref{sec:temperature}, the IGM behaves as a polytrope and, on average, its temperature is smaller than in the K05 shock-heated models and similar to C06 (see also Fig~\ref{fig:fig1}a). In the interval $z\ge 1$ the cross-correlation with the polytropic equation of state and the damping scale of eq.~(\ref{eq:jeans}) is $\langle\kappa_{eff}Y_C\rangle\sim(1$--$10)\times 10^{-12}$ for the different lens distributions. This is a very small contribution: essentially, we could have stopped our calculation at $z=1$ or extended the shock model out to $z=3$, since this would have introduced an error smaller than 10\%. We adopted this latter option; by not including the Jeans cut-off scale (eq.~\ref{eq:jeans}) and its corresponding polytropic equation of state, we simplify the parameter space of our model and the physical interpretation of our results. Figs.~\ref{fig:fig3} and ~\ref{fig:fig4} constitute our main result. In Fig.~\ref{fig:fig3} we plot the $\langle\kappa_{eff}Y_C\rangle$ cross-correlation for the three lens distributions with $z_0=0.1,0.3,0.5$ (upper panels) and their corresponding power spectra (lower panels). The power spectrum is computed from the correlation function by evaluating the quadrature \begin{equation} C_\ell=2\pi\int\langle \kappa_{eff}Y_C\rangle P_\ell(\cos\theta)d\cos\theta , \end{equation} with $P_\ell$ the $\ell$-th Legendre polynomial. Hence, we are required to compute the correlation function over the range $\theta=[0,\pi]$rad. To simplify our calculation we have assumed the sky to be flat (eq.~\ref{eq:flat_sky}) and, although this approximation limits the accuracy of the low-$\ell$ multipoles, it should be accurate for the multipoles where data are available, $\ell\ge 100$.
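The quadrature above can be implemented with a simple Legendre recurrence. The sketch below (our own illustration, not the production code used for the figures) validates the implementation against the orthogonality relation $2\pi\int P_m P_\ell\, d\cos\theta = 4\pi\,\delta_{m\ell}/(2\ell+1)$:

```python
import math

def legendre(l, x):
    # Bonnet recurrence: (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def cl_from_corr(corr, l, n=20000):
    # C_l = 2*pi * int_{-1}^{1} C(theta) P_l(cos theta) d cos theta, midpoint rule
    h = 2.0 / n
    return 2 * math.pi * h * sum(
        corr(math.acos(-1 + (i + 0.5) * h)) * legendre(l, -1 + (i + 0.5) * h)
        for i in range(n))

# orthogonality check: C(theta) = P_3(cos theta) gives C_3 = 4*pi/7 and C_2 = 0
corr = lambda theta: legendre(3, math.cos(theta))
cl3 = cl_from_corr(corr, 3)
cl2 = cl_from_corr(corr, 2)
print(cl3, 4 * math.pi / 7, cl2)
```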
In the panels of Fig.~\ref{fig:fig3}, and from top to bottom, the solid (black), dashed (blue) and dot-dashed (red) lines correspond to K05 with $\alpha=3,1.5,1$ and the triple dot-dashed (green) line corresponds to C06. The distribution of the CFHTLenS is well approximated by $z_0=0.3$; thus, in Fig.~\ref{fig:fig3}b we also plot the data from van Waerbeke et al (2014) and their respective error bars. In Fig.~\ref{fig:fig4} we plot the correlation function and the power spectrum for the same shock-heating temperature models but for the RCSLenS sources with $mag_r>18$. Lines follow the same conventions as in Fig.~\ref{fig:fig3}. The data are taken from Hojjati et al (2016). To analyze the contribution of the different IGM overdensities we divide the integration of eq.~(\ref{eq:corr}) into four intervals with equal logarithmic spacing: $\xi=([1-3.3], [3.3-10],[10-33],[33-100])$. We computed the contribution in each interval for K05 with $\alpha=1.5$ and for lens distributions $z_0=0.3$ (Fig.~\ref{fig:fig3}b) and RCSLenS sources (Fig.~\ref{fig:fig4}a). The fractional contribution to the correlation at the origin, $\langle\kappa Y_C\rangle(0)$, was $(0.08,0.45,0.41,0.06)$ in the first case and $(0.1,0.39,0.46,0.05)$ in the second. Similar results occur for other lens distributions and temperature models: most of the correlation comes from overdensities in the range $\xi\approx[3-33]$. The numerical simulations of Dav\'e et al (2001) found that this is the density range where most of the WHIM is stored. In this respect, even if our log-normal model were accurate only at these intermediate overdensities, integration of eq.~(\ref{eq:corr}) would still provide a very accurate result. The comparison of the measured data with the theoretical predictions already offers some insights into the nature of the IGM. For the CFHTLenS sources shown in Fig~\ref{fig:fig3}b, all temperature models are allowed by the data.
As indicated in Sec.~\ref{sec:temperature}, the shock-heating model with $\alpha=3$ corresponds to an average temperature of $\bar{T}_e=2\times 10^7$K and is still compatible with the measured correlation. However, our results do not include the contribution due to clusters and galaxy groups. Since at most 15\% of the measured signal comes from unbound gas, if we restrict the overall IGM contribution to be that fraction of the total signal, then only models with $\alpha\le 1.5$ are compatible with the data. In other words, the mean temperature of the IGM free electron gas would be $\bar{T}_e\le 7\times 10^6$K. The data from Hojjati et al (2016) shown in Fig~\ref{fig:fig4}a are even more restrictive. These authors compared their measurements against the predictions of the halo model and of numerical simulations that included diffuse gas. The simulations showed a very good agreement with the observed cross-correlation from RCSLenS galaxies, with about 10-15\% of the contribution coming from unbound gas. The amplitude of the correlation (Fig~\ref{fig:fig4}a) in the range $\theta=[40-120]$arcmin is that of the $\alpha=1.5$ model. In that interval, only the C06 temperature model predicts an amplitude that is 10\% of the measured correlation. That would imply that the average temperature of the IGM is $\bar{T}_e\sim 10^6$K, a stricter bound than that derived from the van Waerbeke et al (2014) data. There is a caveat when translating the results on the cross-correlation into an upper limit on the average temperature of the IGM. Hydro-simulations consistently show that unbound gas is not well characterized by a single equation of state; more accurately, the gas coexists in different phases and there is a large spread in temperature within regions with the same overdensity.
Since our temperature models fail to encode the full complexity of the temperature-density phase diagram, our upper bounds on the average temperature must be understood as order-of-magnitude estimates, not as strict upper limits. The constraints that can be derived from the measured power spectrum shown in Fig.~\ref{fig:fig4}b are not as tight as those derived from the correlation function. Only the measurement at $\ell\sim 1800$ is well below the prediction for the K05 models. What is more relevant is that the overall shape is very different. Hojjati et al (2016) found that the shape of the correlation function and power spectrum was strongly dependent on physical processes undergone by baryons in halos such as radiative cooling, star formation, supernova winds and AGN feedback. For instance, AGNs expel gas to large distances from the center of halos, lowering the signal at small scales. The properties of the hot gas in our model are rather simplified. No effects of specific physical processes are considered and only the density and temperature distributions are important. More realistic models would require detailed numerical simulations including the most relevant processes in low density regions. Physical effects could remove power at $\ell\ge 1000$, modifying the overall shape of the power spectrum and bringing it in closer agreement with the data. While a detailed discussion of this point is beyond the scope of the current paper, if the shape were independent of the physics of baryons the power spectrum could be a useful discriminant between halo and unbound gas contributions. An alternative approach to detect the WHIM contribution would be to remove known galaxies down to a given magnitude to eliminate the contribution of their halos to the comptonization-convergence correlation. When removing fainter galaxies does not produce a further decrement of the cross-correlation, we have reached the level at which the signal is due to gas outside halos.
A similar approach has been used by Kashlinsky et al (2005) and Helgason et al (2015) to isolate Cosmic Infrared Background fluctuations due to first stars at the epoch of reionization from those of known galaxy populations in deep Spitzer data. \section{Conclusions.} Models of galaxy formation predict that a significant fraction, close to half the total number of baryons, could be stored in the WHIM. The low densities and temperatures $10^{5-7}$K of this medium make it difficult to detect. Searches for absorption lines and SZ contributions have provided preliminary evidence of its existence. The $\langle\kappa Y_C\rangle$ cross-correlation measured by van Waerbeke et al (2014) and Hojjati et al (2016) probes the fluctuations in the electron pressure along the line of sight and its distribution, but it is not yet a detection of the missing baryon component. As indicated by Hojjati et al (2015), about 50\% of the signal comes from the small fraction of baryons within massive halos; at most, 15\% of the cross-correlation power at $\ell\sim 500$ could come from unbound gas. In this article we have shown that the contribution from the unbound gas in filaments could be of this order of magnitude, depending on model parameters. In particular, if the unbound gas is well described by a log-normal distribution and the gas is shock heated out to a mean temperature $\bar{T}_e\sim 10^6$K, then about half the baryons in the Universe could be stored in the WHIM producing a signal that is at least one order of magnitude smaller than the measured amplitude. We have considered two different baryon cut-off lengths: the Jeans length given by eq.~(\ref{eq:jeans}), which would describe better the physical state of the IGM at $z>1$, and the shock-heated cut-off scale given by eq.~(\ref{eq:shock}), which provides a better description at $z<1$.
We have shown that $\sim 90$\% of the contribution to the $\kappa_{eff}-Y_C$ cross-correlation and to its power spectrum originates at $z\le 1$ and at overdensities in the range $\xi\sim[3-33]$. The overall amplitude depends on the depth of the source catalog probing the convergence due to the large scale structure and on the average electron temperature, and is proportional to the fraction of baryons in the IGM. The tSZ-lensing cross-correlation could be a potentially powerful technique to trace the distribution of baryons at large scales. The shape of the measured comptonization-convergence power spectrum and the theoretical prediction for IGM gas show maxima at different scales. The difference could be due to not having included the physical effects that are relevant to the evolution of the IGM gas; but if the differences in shape are real, they could be used to separate the contribution of unbound gas from that of gas in halos. In real space, the cross-correlation is also dominated by halos. Detecting the contribution due to the WHIM would require masking galaxy populations of increasing magnitude down to the level at which further masking does not reduce the residual correlation. This would require extending the measurement to larger areas and to deeper lens surveys, as in Hojjati et al (2016), to increase the signal-to-noise by a factor 5-10. Then, masking the halo contribution down to 10\% of its original amplitude would still leave a statistically significant signal. \vspace*{1cm} {\bf Acknowledgments} F. A.-B. acknowledges financial support from the grant FIS2015-65140-P (MINECO/FEDER). He also thanks the hospitality of the Leibniz-Institut f\"ur Astrophysik Potsdam where part of this work was done.
\section{Introduction} \label{s:intro} This paper has to do with determining information about the internal structure of a finite group $\Gamma$ from the knowledge of the universal deformation rings $R(\Gamma,V)$ associated to absolutely irreducible $\mathbb{F}_p\Gamma$-modules $V$. The kind of internal structure we will consider is the fusion of certain subgroups $N$ in $\Gamma$. A pair of elements of $N$ are said to be fused in $\Gamma$ if they are conjugate in $\Gamma$, but not in $N$. By determining the fusion of $N$ in $\Gamma$, we mean listing all such pairs. The universal deformation ring $R(\Gamma,V)$ is characterized by the property that the isomorphism class of every lift of $V$ over a complete local commutative Noetherian ring $R$ with residue field $\mathbb{F}_p$ arises from a unique local ring homomorphism $\alpha: R(\Gamma,V)\to R$. Our main goal is thus to determine how to transfer information about the universal deformation rings to information about the structure of groups. It is natural to expect a connection with fusion because fusion plays a key role in the character theory of $\Gamma$, which in turn enters into finding universal deformation rings of representations. In this paper, we consider $\Gamma$ which are extensions of a group $G$ whose order is relatively prime to $p$ by an elementary abelian $p$-group $N$ of rank 2. We can now state our main result: \begin{theorem} Let $G$ be a dihedral group of order $2n \geq 6$ and let $p$ be an odd prime such that $p \equiv 1$ mod $n$. Fix an irreducible action of $G$ on $N = \mathbb{Z}/p\mathbb{Z} \times \mathbb{Z}/p\mathbb{Z}$, and let $\Gamma$ be the resulting semi-direct product of $G$ with $N$. \begin{enumerate} \item[a.] 
If the center of $G$ acts trivially on $N$, then one can determine the fusion of $N$ in $\Gamma$ from the absolutely irreducible $\mathbb{F}_p\Gamma$-modules $V$ of dimension 2 over $\mathbb{F}_p$ which have universal deformation ring $R(\Gamma,V)$ different from $\mathbb{Z}_p$. \item[b.] If the center of $G$ acts non-trivially on $N$, then $n$ is even and $R(\Gamma,V) \cong \mathbb{Z}_p$ for all absolutely irreducible $\mathbb{F}_p\Gamma$-modules $V$ of dimension 2 over $\mathbb{F}_p$. In this case, one can determine the fusion of $N$ in $\Gamma$ if and only if $n$ is either a power of $2$, or $n = 2 q$, for some odd prime $q$. \end{enumerate} \end{theorem} In section \ref{ss:ab} we prove a weaker result when $G$ is abelian. In the course of proving Theorem 1.1, we must calculate $\charg{i}{\Gamma}{ \mathrm{Hom}_{\,\mathbb{F}_p}(V,V)}$, for $i = 1, 2$, since these enter into the computation of $R(\Gamma,V)$. The paper is organized as follows. In section \ref{s:prelim}, we recall the definitions of deformations and deformation rings, including some basic results. In section 3, we concentrate on the case when $\Gamma$ is an extension of a finite group $G$ by an elementary abelian $p$-group of rank $\ell \geq 2$. We give an explicit formula for the cohomology groups $\mathrm{H}^i(\Gamma,\mathrm{Hom}_{\,\mathbb{F}_p}(V,V))$ for $i=1,2$ for all projective $\mathbb{F}_pG$-modules $V$ which are viewed as $\mathbb{F}_p\Gamma$-modules by inflation (see Theorem 3.1). In section 4, we prove our main results, Theorems \ref{th:no2} and \ref{th:no3}, on the connection between fusion and universal deformation rings, respectively cohomology groups, in the case when $G$ is a dihedral group. In section 4.6, we briefly discuss the case when $G$ is abelian and compare this case to the dihedral one. This paper is part of my dissertation at the University of Iowa under the supervision of Professor Frauke Bleher \cite{meyer}. I would like to thank her for all of her advice and guidance. 
\section{Preliminaries} \label{s:prelim} In this section, we give a brief introduction to universal deformation rings and deformations. For more background material, we refer the reader to \cite{mazur} and \cite{desmit-lenstra}. Let $p$ be an odd prime, $\mathbb{F}_p$ be the field with $p$ elements, and $\mathbb{Z}_p$ denote the ring of $p$-adic integers. Let $\hat{\mathcal{C}}$ be the category of all complete local commutative Noetherian rings with residue field $\mathbb{F}_p$. Note that all rings in $\hat{\mathcal{C}}$ have a natural $\mathbb{Z}_p$-algebra structure. The morphisms in $\hat{\mathcal{C}}$ are continuous $\mathbb{Z}_p$-algebra homomorphisms that induce the identity map on $\mathbb{F}_p$. Suppose $\Gamma$ is a finite group and $V$ is a finitely generated $\mathbb{F}_p\Gamma$-module. A lift of $V$ over an object $R$ in $\hat{\mathcal{C}}$ is a pair $(M,\phi)$ where $M$ is a finitely generated $R\Gamma$-module that is free over $R$, and $\phi:\mathbb{F}_p\otimes_R M\to V$ is an isomorphism of $\mathbb{F}_p\Gamma$-modules. Two lifts $(M,\phi)$ and $(M',\phi')$ of $V$ over $R$ are isomorphic if there is an isomorphism $\alpha:M\to M'$ with $\phi=\phi'\circ (\mathrm{id}_{\mathbb{F}_p}\otimes\alpha)$. The isomorphism class $[M,\phi]$ of a lift $(M,\phi)$ of $V$ over $R$ is called a deformation of $V$ over $R$, and the set of such deformations is denoted by $\mathrm{Def}_\Gamma(V,R)$. The deformation functor $$\hat{F}_V:\hat{\mathcal{C}} \to \mathrm{Sets}$$ sends an object $R$ in $\hat{\mathcal{C}}$ to $\mathrm{Def}_\Gamma(V,R)$ and a morphism $f:R\to R'$ in $\hat{\mathcal{C}}$ to the map $\mathrm{Def}_\Gamma(V,R) \to \mathrm{Def}_\Gamma(V,R')$ defined by $[M,\phi]\mapsto [R'\otimes_{R,f} M,\phi']$, where $\phi'=\phi$ after identifying $\mathbb{F}_p\otimes_{R'}(R'\otimes_{R,f} M)$ with $\mathbb{F}_p\otimes_R M$. 
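A standard fact from \cite{mazur}, recalled here only for orientation, makes the role of cohomology in what follows transparent: evaluating the deformation functor at the ring of dual numbers yields the tangent space $$\mathrm{Def}_\Gamma(V,\mathbb{F}_p[\epsilon]/(\epsilon^2))\cong \mathrm{H}^1(\Gamma,\mathrm{Hom}_{\mathbb{F}_p}(V,V)).$$ Indeed, a lift of $V$ over the dual numbers is given by a representation of the form $\rho(\gamma)=(1+\epsilon\, c(\gamma))\overline{\rho}(\gamma)$, where $\overline{\rho}$ is the representation affording $V$; the multiplicativity of $\rho$ forces $c$ to be a $1$-cocycle with values in $\mathrm{Hom}_{\mathbb{F}_p}(V,V)$, and two such lifts are isomorphic exactly when the corresponding cocycles differ by a coboundary.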
If there exists an object $R(\Gamma,V)$ in $\hat{\mathcal{C}}$ and a deformation $[U(\Gamma,V),\phi_U]$ of $V$ over $R(\Gamma,V)$ such that for each $R$ in $\hat{\mathcal{C}}$ and for each lift $(M,\phi)$ of $V$ over $R$ there is a unique morphism $\alpha:R(\Gamma,V)\to R$ in $\hat{\mathcal{C}}$ such that $\hat{F}_V(\alpha)([U(\Gamma,V),\phi_U])=[M,\phi]$, then we call $R(\Gamma,V)$ the universal deformation ring of $V$ and $[U(\Gamma,V),\phi_U]$ the universal deformation of $V$. In other words, $R(\Gamma,V)$ represents the functor $\hat{F}_V$ in the sense that $\hat{F}_V$ is naturally isomorphic to $\mathrm{Hom}_{\hat{\mathcal{C}}}(R(\Gamma,V),-)$. In the case when the morphism $\alpha:R(\Gamma,V)\to R$ relative to the lift $(M,\phi)$ of $V$ over $R$ is only known to be unique if $R$ is the ring of dual numbers over $\mathbb{F}_p$ but may not be unique for other $R$, $R(\Gamma,V)$ is called the versal deformation ring of $V$ and $[U(\Gamma,V),\phi_U]$ is called the versal deformation of $V$. By \cite{mazur}, every finitely generated $\mathbb{F}_p\Gamma$-module $V$ has a versal deformation ring $R(\Gamma,V)$. Moreover, if $V$ is an absolutely irreducible $\mathbb{F}_p\Gamma$-module, then $R(\Gamma,V)$ is universal. The following result shows the connection between $R(\Gamma,V)$ and certain first and second cohomology groups of $\Gamma$ that are related to $V$. \begin{theorem} {\rm (\cite[\S1.6]{mazur}, \cite[Thm. 2.4]{bockle})} \label{thm:udr} Suppose $V$ is an absolutely irreducible $\mathbb{F}_p\Gamma$-module, and let $d^i_V=\mathrm{dim}_{\mathbb{F}_p}\mathrm{H}^i(\Gamma,\mathrm{Hom}_{\mathbb{F}_p}(V,V))$ for $i=1,2$. Then $R(\Gamma,V)$ is isomorphic to a quotient algebra $\mathbb{Z}_p[[t_1,\ldots,t_r]]/J$, where $r=d^1_V$ and $d^2_V$ is an upper bound on the minimal number of generators of $J$.
\end{theorem} \section{Cohomology} \label{s:coh} \noindent Let $p$ be an odd prime, and consider a short exact sequence of groups $$0\rightarrow N\rightarrow\Gamma\rightarrow G\cong \Gamma/N\rightarrow 1$$ where $N$ is an elementary abelian $p$-group of rank $\ell \geq 2$ and $G$ is a finite group. We identify $G$ with $\Gamma/N$ in the following. Note that the action of $G = \Gamma/N$ on $N$ corresponds to an $\mathbb{F}_{p}$-representation of $G$ denoted by $\phi$. Let $V$ be a projective $\mathbb{F}_{p}G$-module, and view $V$ also as an $\mathbb{F}_{p}\Gamma$-module by inflation. Let $\tilde{\phi}$ be the contragredient of $\phi$ (i.e. $\tilde{\phi}$ is the dual representation of $\phi$). Let $V_{\tilde{\phi}}$ (resp. $V_{\tilde{\phi} \wedge \tilde{\phi}}$) denote the $\mathbb{F}_{p}\Gamma$-module associated to $\tilde{\phi}$ (resp. $\tilde{\phi} \wedge \tilde{\phi}$). If $X$ is a $\Gamma/N$-module, let $X^{\Gamma/N}$ denote the fixed points of the action of $\Gamma/N$. Let $\otimes$ stand for the tensor product over $\mathbb{F}_p$. We prove the following result. \vspace*{.05 in} \begin{theorem} \label{th:no1} Using the above notation, $$\charg{2}{\Gamma}{{\rm Hom}_{\mathbb{F}_{p}}(V,V)}\cong[(V_{\tilde{\phi}}\otimes V^{*}\otimes V)\oplus (V_{\tilde{\phi} \wedge \tilde{\phi}}\otimes V^{*}\otimes V)]^{\Gamma/N}.$$ \noindent If $N$ is elementary abelian of rank two, the representation $\tilde{\phi} \wedge \tilde{\phi}$ is the one-dimensional representation $\det\circ \tilde{\phi}$. \end{theorem} In the case when $\mathbb{F}_{p}G$ is semisimple, this result provides a way of using character theory to compute the first and second cohomology groups of $\Gamma$ with coefficients in $\mathrm{Hom}_{\mathbb{F}_{p}}(V,V)$. To prove Theorem \ref{th:no1} we need the following result. \vspace*{.05 in} \noindent \begin{proposition} \label{pr:no1} Let $A$ be a projective $\mathbb{F}_{p}G$-module.
Then for all $i \geq 1$, $$A \otimes {\rm H}^{i}(N,{\mathbb{F}}_{p}) \cong {\rm H}^{i}(N,A)$$ as $\mathbb{F}_{p}G$-modules, and $$\charg{i}{\Gamma}{A} \cong \charg{0}{\Gamma/N}{\charg{i}{N}{A}} \cong [\charg{i}{N}{A}]^{G}.$$ \end{proposition} \noindent \begin{proof} Let $i \geq 1$. We first show that $\charg{i}{N}{A} \cong A \otimes \charg{i}{N}{{\mathbb{F}}_p}$ as $\mathbb{F}_{p}G$-modules, where we identify $G$ with $\Gamma/N$ as before. Let $Z^{i}(N,A)$ denote the space of $i$-cocycles of $N$ with coefficients in $A$, and let $B^i(N,A)$ denote the space of $i$-coboundaries of $N$ with coefficients in $A$. Let $\{e_{j}\}$ be an ${\mathbb{F}}_{p}$-basis for $A$. Recall that $N$ acts trivially on $A$. Consider the maps $$\Phi: A \otimes Z^{i}(N,{\mathbb{F}}_{p}) \rightarrow Z^{i}(N,A), \qquad a\otimes c \mapsto {\Delta}_{c,a}, \quad \textrm{for all } (a,c) \textrm{ in } A \times Z^{i}(N,{\mathbb{F}}_{p}),$$ $$\Psi: Z^{i}(N,A) \rightarrow A \otimes Z^{i}(N,{\mathbb{F}}_{p}), \qquad d \mapsto \sum\limits_{j} e_{j}\otimes({e_{j}}^{*}\circ d), \quad \textrm{for all } d \textrm{ in } Z^{i}(N,A),$$ where ${\Delta}_{c,a}(n_{1},n_{2},\ldots,n_{i}) = c(n_{1},n_{2},\ldots,n_{i})\,a$ and ${e_{j}}^{*}$ is the dual basis element to $e_{j}$. Then $\Psi$ and $\Phi$ are $\mathbb{F}_{p}G$-module homomorphisms that are inverses of each other and that restrict to isomorphisms between $A \otimes B^i(N,{\mathbb{F}}_{p})$ and $B^i(N,A)$. Thus, ${\rm H}^{i}(N,A) \cong \dfrac{A \otimes Z^{i}(N,{\mathbb{F}}_{p})}{A \otimes B^i(N,{\mathbb{F}}_{p})}$ as $\mathbb{F}_pG$-modules.
Tensoring the short exact sequence of $\mathbb{F}_{p}G$-modules $$0\rightarrow B^{i}(N,{\mathbb{F}}_{p})\rightarrow Z^{i}(N,{\mathbb{F}}_{p})\rightarrow {\rm H}^{i}(N,{\mathbb{F}}_{p})\rightarrow 0$$ with $A$ over $\mathbb{F}_{p}$, we obtain $A \otimes {\rm H}^{i}(N,{\mathbb{F}}_{p}) \cong \dfrac{A \otimes Z^{i}(N,{\mathbb{F}}_{p})}{A \otimes B^{i}(N,{\mathbb{F}}_{p})}$ as $\mathbb{F}_{p}G$-modules. Therefore, $A \otimes {\rm H}^{i}(N,{\mathbb{F}}_{p}) \cong {\rm H}^{i}(N,A)$ as $\mathbb{F}_{p}G$-modules, which implies that, in particular, ${\rm H}^{i}(N,A)$ is a projective $\mathbb{F}_{p}G$-module. Next, consider the Lyndon-Hochschild-Serre spectral sequence $$\textrm{H}^{p_{0}}(\Gamma/N,\textrm{H}^{q_{0}}(N,A))\Rightarrow\textrm{H}^{p_{0}+q_{0}}(\Gamma,A).$$ Since ${\rm H}^{q_{0}}(N,A)$ is a projective $\mathbb{F}_{p}G$-module for all $q_{0} \geq 1$ by the above argument, and since ${\rm H}^{0}(N,A) \cong A^{N} \cong A$ is also projective, the terms corresponding to $(p_{0},q_{0}) = (1,i-1), (2,i-2),\ldots,(i,0)$ vanish for $i = p_{0} + q_{0} \geq 1$. Therefore, ${\rm H}^i(\Gamma,A) \cong {\rm H}^0(\Gamma/N,{\rm H}^i(N,A))$. \end{proof} \noindent We are now ready to show the main result of the section. \vspace*{.05 in} \noindent \textit{Proof of Theorem \ref{th:no1}}. Recall that $V$ is assumed to be a projective $\mathbb{F}_pG$-module, where we identify $G$ with $\Gamma/N$. By Proposition \ref{pr:no1}, $\charg{2}{N}{\textrm{Hom}_{\mathbb{F}_{p}}(V,V)}\cong \textrm{Hom}_{\mathbb{F}_{p}}(V,V)\otimes \charg{2}{N}{\mathbb{F}_p}$ as $\mathbb{F}_{p}(\Gamma/N)$-modules. Consider the Kummer sequence $1\rightarrow \mu_{p} \xrightarrow{\iota} \mathbb{C}^{*}\xrightarrow{p} \mathbb{C}^{*}\rightarrow 1$, where $\mathbb{C}^{*}\xrightarrow{p} \mathbb{C}^{*}$ denotes the map given by $z \mapsto z^p$. We consider this sequence as a sequence of $\mathbb{Z}N$-modules with trivial $N$-action.
Applying the functor $\textrm{Hom}_{\mathbb{Z}N}(\mathbb{Z},-)$ we obtain the long exact sequence $$\cdots \xrightarrow{\delta} \charg{1}{N}{\mu_p}\xrightarrow{\iota_{*}} \charg{1}{N}{\mathbb{C}^{*}}\xrightarrow{p_{*}}\charg{1}{N}{\mathbb{C}^{*}}\xrightarrow{\delta} \charg{2}{N}{\mu_p} \xrightarrow{\iota_{*}} \charg{2}{N}{\mathbb{C}^{*}} \xrightarrow{p_{*}} \charg{2}{N}{\mathbb{C}^{*}} \xrightarrow{\delta} \charg{3}{N}{\mu_p} \xrightarrow{\iota_{*}} \cdots$$ \noindent Since $N$ is elementary abelian, $p_{*}: \charg{i}{N}{\mathbb{C}^{*}} \rightarrow \charg{i}{N}{\mathbb{C}^{*}}$ is trivial for $i \geq 1$. Identifying $\mathbb{F}_p = \mu_p$, we get a short exact sequence of $\mathbb{F}_{p}(\Gamma/N)$-modules $$0\rightarrow \charg{1}{N}{\mathbb{C}^{*}}\xrightarrow{\delta} \charg{2}{N}{\mathbb{F}_p} \xrightarrow{\iota_{*}} \charg{2}{N}{\mathbb{C}^{*}} \rightarrow 0.$$ \noindent Applying the functor $\textrm{Hom}_{\mathbb{F}_{p}}(V,V)\otimes -$ and taking fixed points, we obtain, using Proposition \ref{pr:no1}, $$\charg{2}{\Gamma}{\textrm{Hom}_{\mathbb{F}_{p}}(V,V)}\cong [\charg{1}{N}{\mathbb{C}^{*}}\otimes \textrm{Hom}_{\mathbb{F}_{p}}(V,V)]^{\Gamma/N}\oplus [\charg{2}{N}{\mathbb{C}^{*}}\otimes \textrm{Hom}_{\mathbb{F}_{p}}(V,V)]^{\Gamma/N}.$$ \noindent Therefore, Theorem \ref{th:no1} follows once we show that $\charg{1}{N}{\mathbb{C}^{*}}\cong V_{\tilde{\phi}}$ and $\charg{2}{N}{\mathbb{C}^{*}} \cong V_{\tilde{\phi} \wedge \tilde{\phi}}$ as $\mathbb{F}_{p}(\Gamma/N)$-modules. Since $N$ is an elementary abelian $p$-group which acts trivially on $\mathbb{C}^{*}$, we have $\charg{1}{N}{\mathbb{C}^{*}}=\textrm{Hom}(N,\mathbb{C}^{*})\cong \textrm{Hom}_{\mathbb{F}_{p}}(N,\mathbb{F}_{p})$ as $\mathbb{F}_{p}G$-modules, which implies $\charg{1}{N}{\mathbb{C}^*} \cong V_{\tilde{\phi}}$.
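As a sanity check on dimensions (a remark, not needed for the proof): when $\ell = 2$ and $p$ is odd, the short exact sequence above is compatible with the known dimension count $$\dim_{\mathbb{F}_p}\charg{2}{N}{\mathbb{F}_p} = \dim_{\mathbb{F}_p}\charg{1}{N}{\mathbb{C}^{*}} + \dim_{\mathbb{F}_p}\charg{2}{N}{\mathbb{C}^{*}} = 2 + 1 = 3,$$ since $\charg{1}{N}{\mathbb{C}^{*}} \cong \mathrm{Hom}_{\mathbb{F}_p}(N,\mathbb{F}_p)$ has $\mathbb{F}_p$-dimension two, and $\charg{2}{N}{\mathbb{C}^{*}}$ is the Schur multiplier of $N \cong \mathbb{Z}/p \times \mathbb{Z}/p$, which is cyclic of order $p$.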
It remains to determine the $\Gamma/N$-module structure of $\charg{2}{N}{\mathbb{C}^{*}}$. Our result follows after a quick computation, using that $\charg{2}{N}{\mathbb{C}^{*}} \cong N \wedge N$. This completes the proof of Theorem \ref{th:no1}. \bigskip \noindent As a consequence of the proof of Theorem \ref{th:no1} we obtain the following result. \begin{corollary} \label{co:no1} Under the general hypothesis of Theorem \ref{th:no1}, we obtain: \begin{enumerate} \item[a.] $\charg{1}{\Gamma}{{\rm Hom}_{\mathbb{F}_{p}}(V,V)} \cong (V_{\tilde{\phi}}\otimes V^{*}\otimes V)^{\Gamma/N}$. \item[b.] $\charg{1}{\Gamma}{{\rm Hom}_{\mathbb{F}_{p}}(V,V)}$ is a summand of $\charg{2}{\Gamma}{{\rm Hom}_{\mathbb{F}_{p}}(V,V)}$. \item[c.] ${\rm dim}_{\mathbb{F}_{p}}\charg{1}{\Gamma}{{\rm Hom}_{\mathbb{F}_{p}}(V,V)} \leq {\rm dim}_{\mathbb{F}_{p}}\charg{2}{\Gamma}{{\rm Hom}_{\mathbb{F}_{p}}(V,V)}$. \end{enumerate} \end{corollary} \bigskip \noindent For the remainder of the paper, we consider the special case $$0\rightarrow N\rightarrow\Gamma\rightarrow G = \Gamma/N\rightarrow 1$$ where $\mathbb{F}_{p}G$ is semisimple, $\mathbb{F}_{p}$ is sufficiently large for $G$, and $V$ is an irreducible $\mathbb{F}_{p}G$-module. As before, let $\phi$ denote the action of $G$ on $N$. \begin{corollary} \label{co:center} Assume the notation of the previous paragraph, and let $\phi$ be irreducible. Suppose there exists an absolutely irreducible $\mathbb{F}_{p}\Gamma$-module $V_0$ with universal deformation ring $R(\Gamma,V_0) \ncong {\mathbb{Z}}_p$. Then, the restriction of $\phi$ to the center of $G$ is trivial. \end{corollary} \begin{proof} Let $V_0$ be as in the statement of the corollary. By Theorem \ref{thm:udr}, $$R(\Gamma,V_0) \cong \mathbb{Z}_p[[t_1,\ldots,t_r]]/J$$ where $r = d^1_{V_0}$, and $d^2_{V_0}$ is an upper bound on the minimal number of generators for $J$. Since $V_0$ is a projective $\mathbb{F}_{p}G$-module, it has a lift over $\mathbb{Z}_p$.
Because $R(\Gamma,V_0) \ncong \mathbb{Z}_p$, it follows that $d^1_{V_0} \geq 1$: indeed, if $d^1_{V_0} = 0$, then $R(\Gamma,V_0)$ is a quotient of $\mathbb{Z}_p$, and the lift of $V_0$ over $\mathbb{Z}_p$ yields a morphism $R(\Gamma,V_0) \to \mathbb{Z}_p$ in $\hat{\mathcal{C}}$, which would force $R(\Gamma,V_0) \cong \mathbb{Z}_p$. We now use Corollary \ref{co:no1} for $V = V_0$. Since we assume $\mathbb{F}_{p}G$ is semisimple, the $\mathbb{F}_{p}$-dimension of the $G$-fixed points of any $\mathbb{F}_{p}G$-module is the multiplicity of the trivial simple $\mathbb{F}_{p}G$-module as a summand. Recall that we identify $G = \Gamma/N$. By Corollary \ref{co:no1}, $d^1_{V_0} \geq 1$ implies that $V_{\phi}$ occurs as a summand of the module $V_0^* \otimes V_0$ with the adjoint $G$-action. Since $V_0$ is absolutely irreducible, the action of an element $z \in Z(G)$ on $V_0^* \otimes V_0$ is given by conjugation with a scalar matrix. Hence $z$ acts trivially on $V_{\phi}$. \end{proof} Our main goal is to relate the universal deformation rings $R(\Gamma,V)$ and the cohomology groups $\charg{i}{\Gamma}{\textrm{Hom}_{\mathbb{F}_{p}}(V,V)}$ for $i = 1,2$, to the fusion of $N$ in $\Gamma$. \vspace*{.1 in} \noindent We need the following definitions. \begin{definition} \label{df:no1} Let $N, \Gamma, G, \phi$ be as above. \begin{enumerate} \item[a.] For every irreducible $\mathbb{F}_{p}G$-module $V$, let $d_{V}^i = {\rm dim}_{\mathbb{F}_{p}}\charg{i}{\Gamma}{{\rm Hom}_{\mathbb{F}_{p}}(V,V)}$ for $i=1,2$. Note that this number depends on $\phi$. We say an irreducible $\mathbb{F}_{p}G$-module $V_0$ is cohomologically maximal for $\phi$ if $d_{V_0}^2$ is maximal among all $d_V^2$. We say an irreducible representation $\rho$ of $G$ over $\mathbb{F}_p$ is cohomologically maximal for $\phi$ if $\rho$ corresponds to an $\mathbb{F}_{p}G$-module with this property. \item[b.] We call the orbits of the action $\phi$ of $G$ on $N$ the fusion orbits of $\phi$. For all $m\geq 1$, let $F_{\phi,m}$ be the number of fusion orbits of $\phi$ with cardinality $m$. Then, the sequence $\{F_{\phi,m}\}_{m\geq 1}$ is called the fusion numbers of $\phi$.
\end{enumerate} \end{definition} Note that the fusion of $N$ in $\Gamma$ is uniquely determined by the fusion orbits of $\phi$, since two elements in $N$ are conjugate in $\Gamma$ if and only if they lie in the same fusion orbit of $\phi$. \section{Dihedral Groups} \label{s:dih} \subsection{Main Results} \label{ss:main} In this section, we consider the case when $\ell = 2$, $n \geq 3$ and $\Gamma/N = G$ is the dihedral group $D_{2n}$ of order $2n$. That is, we have a short exact sequence of groups $$0\rightarrow N\rightarrow\Gamma\rightarrow G = \Gamma/N\rightarrow 1$$ where $G$ is dihedral and $N$ is an elementary abelian $p$-group of rank two. Moreover, we assume $\mathbb{F}_{p}G$ is semisimple and $\mathbb{F}_p$ is sufficiently large for $G$. Again, we let $\phi$ denote the action of $G$ on $N$, and we assume $\phi$ is irreducible. Our main results, Theorems \ref{th:no2} and \ref{th:no3}, show how the first and second cohomology groups, respectively the universal deformation rings, associated to certain $\mathbb{F}_p\Gamma$-modules $V$ can detect the fusion of $N$ in $\Gamma$, i.e. the fusion of $\phi$. In particular, we will prove Theorem 1.1. Since $N$ is a $p$-group, every irreducible $\mathbb{F}_p\Gamma$-module is inflated from an irreducible $\mathbb{F}_pG$-module. Let $\textrm{Rep}_2(G)$ be a complete set of representatives of isomorphism classes of all 2-dimensional representations of $G$ over $\mathbb{F}_p$. Let $\textrm{Irr}_2(G) \subset \textrm{Rep}_2(G)$ be the subset of isomorphism classes of irreducible 2-dimensional representations. For $\rho$ in $\textrm{Irr}_2(G)$, let $V_\rho$ be an irreducible ${\mathbb{F}}_{p}G$-module with representation $\rho$. We consider the standard presentation for $G = D_{2n}$, given by $\langle r,s \mid r^n, s^2, srs^{-1}r \rangle$. Moreover, we assume $p \equiv 1 \pmod{n}$.
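For instance (a minimal illustration of these hypotheses), if $n = 3$, then the smallest prime with $p \equiv 1 \pmod{3}$ is $p = 7$; in $\mathbb{F}_7^*$ the element $2$ is a primitive third root of unity, since $$2^3 = 8 \equiv 1 \pmod{7} \qquad \textrm{and} \qquad 2 \neq 1,$$ and $\mathbb{F}_7 D_{6}$ is semisimple because $|D_{6}| = 6$ is invertible in $\mathbb{F}_7$.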
Recall that all isomorphism classes of 2-dimensional irreducible representations of $G$ over $\mathbb{F}_p$ are represented by: $$r\xrightarrow{\theta_{i}} \begin{pmatrix} \omega^{i}&0\\ 0&\omega^{-i} \end{pmatrix} \hspace*{.2 in} s\xrightarrow{\theta_{i}} \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix}$$ \noindent for $1 \leq i < \frac{n}{2}$, and $\omega$ a primitive $n$th root of unity in $\mathbb{F}_p^*$. Note that $\theta_i = \mathrm{Ind}_{\langle r \rangle}^G(\chi_i)$, where $\chi_i$ is the one-dimensional representation of $\langle r \rangle$ with $\chi_i(r) = \omega^i$. For our discussion on dihedral groups $G$, we fix the basis corresponding to the matrices above. \vspace*{.1 in} \noindent \begin{definition} \label{df:no2} Define the set map $T: \mathrm{Irr}_{2}(G) \rightarrow \mathrm{Rep}_{2}(G)$ by $T(\theta_i) = T(\mathrm{Ind}_{\langle r \rangle}^G(\chi_i)) = \mathrm{Ind}_{\langle r \rangle}^G({\chi}_{i}^2)$. If $n$ is odd, let $\Omega = \mathrm{Irr}_2(G) = T(\mathrm{Irr}_2(G))$. If $n$ is even, let $\Omega = \mathrm{Irr}_2(G) \cap T(\mathrm{Irr}_2(G))$. In the latter case, $|T^{-1}(\psi)| = 2$ for all $\psi$ in $\Omega$. Note that $\Omega$ consists precisely of those representations in $\mathrm{Irr}_2(G)$ whose restriction to the center of $G$ is trivial. \end{definition} \begin{theorem} \label{th:no2} If $\phi \in \Omega$, then the fusion of $\phi$ is uniquely determined by the set $\{\ker(\rho) : \rho \in \mathrm{Irr}_2(G) \textrm{ is cohomologically maximal for } \phi \} = \{\ker(\rho) : \rho \in \mathrm{Irr}_2(G) \textrm{ with } R(\Gamma,V_{\rho}) \ncong \mathbb{Z}_p\}$. \end{theorem} \begin{theorem} \label{th:no3} Let $G = D_{2n}$. Let $T$ and $\Omega$ be as above. \begin{enumerate} \item[a.] Let $n$ be arbitrary and let $\phi$ be in $\Omega$. Then, for any $\psi$ in $\mathrm{Irr}_2(G)$, $\psi$ is cohomologically maximal for $\phi$ if and only if $T(\psi) = \phi$. \item[b.]
Let $n$ be odd, and let $\phi_{1}, \phi_{2} \in \mathrm{Irr}_2(G) = \Omega$. Then $\phi_{1}$ and $\phi_{2}$ have the same fusion if and only if $T^{-1}(\phi_{1})$ and $T^{-1}(\phi_{2})$ have the same kernel. \item[c.] Let $n$ be even, $\phi_{1}, \phi_{2} \in \Omega$. Then $\phi_{1}$ and $\phi_{2}$ have the same fusion if and only if $\{\textrm{kernel of } \psi: \psi \in T^{-1}(\phi_{1})\}=\{\textrm{kernel of } \psi: \psi \in T^{-1}(\phi_{2})\}$. \end{enumerate} \end{theorem} Theorems \ref{th:no2} and \ref{th:no3} say that for $\phi$ in $\Omega$, the fusion of $N$ in $\Gamma$ can be detected by the cohomology groups, respectively the universal deformation rings, in the following sense. Given $\phi$ in $\Omega$, we may determine the irreducible representations $\psi$ such that $\psi$ is cohomologically maximal for $\phi$. Additionally, this assignment is reversible. That is, given a collection of irreducible representations that are cohomologically maximal for some $\phi$ in $\Omega$, we may determine $\phi$. Moreover, given only the fusion of $\phi$ in $\Omega$ we can determine the kernels of the representations that are cohomologically maximal for $\phi$. Analogously, this assignment is again reversible. In addition, since $\psi$ is cohomologically maximal for $\phi$ in $\Omega$ if and only if $R(\Gamma,V_{\psi}) \ncong \mathbb{Z}_{p}$, the fusion of $N$ in $\Gamma$ can also be determined by the knowledge of the universal deformation rings. Thus, for $\phi$ in $\Omega$ we have the following one-to-one correspondences: \vspace*{.1 in} \begin{align*} \phi &\leftrightsquigarrow \{\psi \in \mathrm{Irr}_{2}(G): \psi \textrm{ is cohomologically maximal for } \phi\} \\ \phi &\leftrightsquigarrow \{\psi \in \mathrm{Irr}_{2}(G): R(\Gamma,V_{\psi}) \ncong \mathbb{Z}_p \}.
\end{align*} \begin{align*} \textrm{Fusion of } \phi &\leftrightsquigarrow \{\ker(\psi) : \psi \in \mathrm{Irr}_{2}(G)\textrm{ is cohomologically maximal for } \phi\} \\ \textrm{Fusion of } \phi &\leftrightsquigarrow \{\ker(\psi) : \psi \in \mathrm{Irr}_{2}(G) \textrm{ and } R(\Gamma,V_{\psi}) \ncong \mathbb{Z}_p \}. \end{align*} Theorem 1.1 says that even if $\phi$ is not in $\Omega$, knowledge of all $R(\Gamma,V)$ may still be enough to determine the fusion of $N$ in $\Gamma$. For a generic choice of $n$, however, $\Omega$ is precisely the set of isomorphism classes of representations for which fusion may be determined. In subsections \ref{ss:cohfordih}--\ref{ss:proof}, we prove our main results. In subsection \ref{ss:ab}, we briefly discuss the case when $G = \Gamma/N$ is an abelian group and compare this case to the dihedral case. \subsection{Cohomology for $\bf{D_{2n}}$} \label{ss:cohfordih} \noindent In this subsection we determine $\charg{2}{\Gamma}{\textrm{Hom}_{\mathbb{F}_{p}}(V_{\psi},V_{\psi})}$ for $\phi$ in $\Omega$ and $\psi$ in $\textrm{Irr}_2(G)$. We make the same assumptions as before. In particular, $n \geq 3$ and $p \equiv 1 \pmod{n}$, which means that $\mathbb{F}_pG$ is semisimple and $\mathbb{F}_p$ is sufficiently large for $G$. Recall that $T: \textrm{Irr}_{2}(G) \rightarrow \textrm{Rep}_{2}(G)$ is given by $T(\theta_i) = T(\mathrm{Ind}_{\langle r \rangle}^G(\chi_i)) = \textrm{Ind}_{\langle r \rangle}^G({\chi}_{i}^2)$. Recall also that for $n$ odd, $T$ is a bijection from $\textrm{Irr}_2(G)$ to $\textrm{Irr}_2(G) = \Omega$. For $n$ even, $\Omega = \textrm{Irr}_2(G) \cap T(\textrm{Irr}_2(G))$, and each $\psi$ in $\Omega$ has exactly two preimages under $T$. \begin{proposition} \label{pr:no2} Let $G = D_{2n}$. Let $\Omega$ and $T$ be as above. \begin{enumerate} \item[a.] Let $n$ be odd, and let $\phi$ be an element of ${\rm Irr}_2(G) = \Omega$. Then, there exists a unique $\psi = T^{-1}(\phi)$ in ${\rm Irr}_2(G)$ with $d^2_{V_\psi} = 2$.
For all other $V$, $d^2_V = 1$. So $V_\psi$ is cohomologically maximal for $\phi$. \item[b.] Let $n$ be even, and let $\phi$ be an element of $\Omega$. Then, there exist exactly two $\psi$ in ${\rm Irr}_2(G)$ with $d^2_{V_\psi} = 2$. For all other $V$, $d^2_V = 1$. Thus, there are precisely two $\psi$ that are cohomologically maximal for $\phi$. These representations are exactly the elements of $T^{-1}(\{\phi\})$. \end{enumerate} \end{proposition} \noindent The proposition follows from the following two lemmas. \begin{lemma} \label{le:no1} Let $G = D_{2n}$, let $1 \leq i < \frac{n}{2}$, let $V = V_{\theta_i}$, and let $\phi = T(\theta_i)$. Then, $$V^{*} \otimes V \cong \mathbb{F}_p \oplus V_{\chi_1} \oplus V_{\phi},$$ as $\mathbb{F}_pG$-modules, where $\mathbb{F}_p$ is the trivial simple $\mathbb{F}_pG$-module and $\chi_1$ is the sign representation. More precisely, identifying $V^*\otimes V = {\textrm{Hom}}_{\mathbb{F}_p}(V,V) = M_2(\mathbb{F}_p)$ with the adjoint action of $\theta_i$, we obtain: \begin{enumerate} \item[a.] The $\mathbb{F}_p$-span of $\begin{pmatrix} 1&0\\ 0&1 \end{pmatrix}$ is isomorphic to the trivial simple $\mathbb{F}_p G$-module $\mathbb{F}_p$. \item[b.] The ${\mathbb{F}}_p$-span of $\begin{pmatrix} 1&0\\ 0&-1 \end{pmatrix}$ is isomorphic to $V_{\chi_1}$. \item[c.] The ${\mathbb{F}}_p$-span of $f = \begin{pmatrix} 0&1\\ 0&0 \end{pmatrix}$ and $g = \begin{pmatrix} 0&0\\ 1&0 \end{pmatrix}$ is isomorphic to $V_{\tilde{\phi}}$, which is isomorphic to $V_{\phi}$. \end{enumerate} \end{lemma} \begin{proof} The first two statements a. and b. are clear. Statement c. follows since $\theta_i(r) f \theta_i(r)^{-1} = \omega^{2i} f$, $\theta_i(r) g \theta_i(r)^{-1} = \omega^{-2i} g$, and $\theta_i(s) u \theta_i(s)^{-1} = v$ for $\{u, v \} = \{f, g \}$. \end{proof} \begin{lemma} \label{le:no2} Let $G = D_{2n}$, let $1 \leq i,j < \frac{n}{2}$, let $V = V_{{\theta}_i}$, and let $\phi = \theta_{j}$.
Then, $d_V^2 = d_V^1 + 1$ and $$d_V^1 = \begin{cases} 0, & \textrm{if } \theta_{j} \neq T(\theta_{i}),\\ 1, & \textrm{if } \theta_{j} = T(\theta_{i}). \end{cases}$$ \end{lemma} \begin{proof} Define $T(V) = V_{T(\theta_i)}$. By Lemma \ref{le:no1}, we have $V^{*} \otimes V \cong \mathbb{F}_p \oplus V_{\chi_1} \oplus T(V)$ as $\mathbb{F}_pG$-modules. Note that for any $\phi$ in $\textrm{Irr}_2(G)$, we have $\det\circ \tilde{\phi} = \chi_{1}$. Since we assume $\mathbb{F}_pG$ is semisimple, the $\mathbb{F}_p$-dimension of the $G$-fixed points of any $\mathbb{F}_pG$-module is the multiplicity of the trivial simple $\mathbb{F}_pG$-module as a summand. Recall that we identify $G = \Gamma/N$. By Theorem \ref{th:no1} and Corollary \ref{co:no1}, we have that $\charg{2}{\Gamma}{\textrm{Hom}_{\mathbb{F}_{p}}(V,V)}\cong[(V_{\tilde{\phi}}\otimes V^{*}\otimes V)\oplus (V_{\det\circ \tilde{\phi}}\otimes V^{*}\otimes V)]^{G}$ and $\charg{1}{\Gamma}{\textrm{Hom}_{\mathbb{F}_p}(V,V)} \cong (V_{\tilde{\phi}}\otimes V^{*}\otimes V)^{G}$. Hence, \begin{align*} \charg{2}{\Gamma}{\textrm{Hom}_{\mathbb{F}_{p}}(V,V)} &\cong [V_{\tilde{\phi}}\otimes (\mathbb{F}_p \oplus V_{\chi_1} \oplus T(V))]^G \oplus [V_{\chi_{1}}\otimes (\mathbb{F}_p \oplus V_{\chi_1} \oplus T(V))]^G \\ &\cong [V_{\tilde{\phi}}\otimes (\mathbb{F}_p \oplus V_{\chi_1} \oplus T(V))]^G \oplus [V_{\chi_1} \oplus \mathbb{F}_p \oplus T(V)]^G \\ &\cong [V_{\tilde{\phi}} \oplus V_{\tilde{\phi}} \oplus (V_{\tilde{\phi}} \otimes T(V))]^G \oplus [V_{\chi_1} \oplus \mathbb{F}_p \oplus T(V)]^G. \end{align*} It is clear that the trivial simple $\mathbb{F}_pG$-module appears as a summand of the second term with multiplicity 1. Additionally, the trivial simple $\mathbb{F}_pG$-module is a summand of the first term if and only if $V_{\phi} \cong V_{\tilde{\phi}} \cong T(V)$, i.e. $\phi = \theta_j = T(\theta_i)$.
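To make the criterion concrete (an illustration in the smallest even case): for $n = 6$ we have $\mathrm{Irr}_2(G) = \{\theta_1, \theta_2\}$ and $$T(\theta_1) = \theta_2 \qquad \textrm{and} \qquad T(\theta_2) = \theta_{6-4} = \theta_2,$$ so for $\phi = \theta_2$ both $V = V_{\theta_1}$ and $V = V_{\theta_2}$ satisfy $d^1_V = 1$ and $d^2_V = 2$, while for $\phi = \theta_1$ every such $V$ satisfies $d^1_V = 0$ and $d^2_V = 1$.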
\end{proof} Observe that we have shown that for all $\phi$ not in $\Omega$, $\charg{2}{\Gamma}{\textrm{Hom}_{\mathbb{F}_{p}}(V,V)}$ is one-dimensional for every two-dimensional irreducible $\mathbb{F}_pG$-module $V$. Hence, in this case, every $V$ in $\textrm{Irr}_2(G)$ is cohomologically maximal. By the argument in Corollary \ref{co:center}, we moreover have that for all such $V$, $R(\Gamma,V) \cong {\mathbb{Z}}_p$. In the following subsections, we will show that when this happens, the fusion of $N$ in $\Gamma$ cannot typically be detected by the knowledge of $R(\Gamma,V)$. For certain choices of $n$, however, the situation is actually better. More precisely, if $n$ is either a power of $2$, or $n = 2q$ for some odd prime $q$, then the fusion of $N$ in $\Gamma$ can always be determined by the knowledge of all $R(\Gamma,V)$. \subsection{Universal Deformation Rings} \label{ss:udr} \noindent In this subsection we determine the universal deformation ring $R(\Gamma,V)$ for every 2-dimensional irreducible $\mathbb{F}_pG$-module $V$, which we view as an $\mathbb{F}_p\Gamma$-module by inflation. We continue to assume that $\mathbb{F}_pG$ is semisimple and $\mathbb{F}_p$ is sufficiently large for $G$. We use a result from \cite[Thm. 3.1]{bleher-chinburg-desmit} to show that if $\charg{2}{\Gamma}{\textrm{Hom}_{\mathbb{F}_{p}}(V,V)}$ is two-dimensional, then $R(\Gamma,V) \cong {\mathbb{Z}}_{p}[[t]]/(t^2,pt)$. Recall that we have shown for $G = D_{2n}$ that $d^2_V = {\rm dim}_{{\mathbb{F}}_p}\charg{2}{\Gamma}{\mathrm{Hom}_{\mathbb{F}_{p}}(V,V)} = 2$ if and only if $d^1_V = 1$. Otherwise $d^2_V = 1$ and $d^1_V = 0$. In the latter case, $R(\Gamma,V)$ is a quotient of ${\mathbb{Z}}_{p}$. Since any such $V$ has a lift over $\mathbb{Z}_p$, it follows that in this case the universal deformation ring is ${\mathbb{Z}}_{p}$. \begin{proposition} \label{pr:no3} Let $G = D_{2n}$, let $\phi$ be in $\Omega$, and let $V$ be a 2-dimensional irreducible $\mathbb{F}_pG$-module.
Then, \vspace*{.1 in} $R(\Gamma,V) = \begin{cases} {\mathbb{Z}}_{p} & \textrm{if $V$ is not cohomologically maximal for } \phi,\\ {\mathbb{Z}}_{p}[[t]]/(t^2,pt) & \textrm{if $V$ is cohomologically maximal for } \phi. \end{cases}$ \vspace*{.1 in} \noindent Additionally, for any $\phi$ in ${\rm Irr}_2(G)$, $R(\Gamma,V) \cong {\mathbb{Z}}_{p}[[t]]/(t^2,pt)$ if and only if $d^2_V$ is equal to two. Thus, for $\phi$ not in $\Omega$, $R(\Gamma,V) \cong {\mathbb{Z}_p}$. \end{proposition} \noindent \begin{proof} By our comments before the statement of the proposition, we only need to consider the case when $d_V^2 = 2$. Following the proof of \cite[Thm. 3.1]{bleher-chinburg-desmit}, let $W = {\mathbb{Z}}_p$ and $R = W[[t]]/(pt, t^2)$. Since $d^2_V = 2$, it follows from Lemmas \ref{le:no1} and \ref{le:no2} that $V_{\phi}$ is a summand of $V^*\otimes V$. Identifying $N = \mathbb{F}_p \times \mathbb{F}_p$ and using Lemma \ref{le:no1}, we obtain an injective group homomorphism $\iota: N \rightarrow M_2(\mathbb{F}_p) \cong M_2(W/pW)$ given by $\iota((n_1,n_2)) = n_1f + n_2g = \begin{pmatrix} 0 & n_1\\ n_2 & 0\end{pmatrix}$. Hence, we have a commutative diagram $$\begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (m) [matrix of math nodes, row sep=3em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] { 0 & N & \Gamma & G & 1\\ 0 & M_{2}(W/pW) & GL_{2}(R) & GL_{2}(W) & 1\\ }; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {} (m-1-2) (m-1-2) edge node[auto] {} (m-1-3) edge node[auto] {$\iota$} (m-2-2) (m-1-3) edge node[auto] {} (m-1-4) edge node[auto] {$\rho_{R}$} (m-2-3) (m-1-4) edge node[auto] {} (m-1-5) edge node[auto] {$\rho_{W}$} (m-2-4) (m-2-1) edge node[auto] {} (m-2-2) (m-2-2) edge node[auto] {$d$} (m-2-3) (m-2-3) edge node[auto] {} (m-2-4) (m-2-4) edge node[auto] {} (m-2-5); \end{tikzpicture}$$ \vspace*{.1 in} \noindent where $d(X) = 1 + tX$ as in \cite[Thm. 3.1]{bleher-chinburg-desmit}.
We notice that all the arguments in the proof of \cite[Thm. 3.1]{bleher-chinburg-desmit} go through once we have proved that the image under $\iota$ of the group $N$ contains two elements which do not commute with each other under multiplication in $M_{2}(W/pW)$. Using the notation in Lemma \ref{le:no1}, we see that $f \cdot g \neq g \cdot f$. Thus, $R(\Gamma,V) \cong {\mathbb{Z}}_{p}[[t]]/(t^2, pt)$. \end{proof} \subsection{Fusion for Dihedral Groups} \label{ss:fus} \noindent In this subsection, we determine the fusion of $\phi \in \textrm{Irr}_2(G)$, which uniquely determines the fusion of $N$ in $\Gamma$ when the action of $G = \Gamma/N$ on $N$ is given by $\phi$ (see Definition \ref{df:no1}). \begin{proposition} Let $G = D_{2n}$, let $1 \leq i_0 < \frac{n}{2}$, and let $\phi = \theta_{{i}_{0}}$. Let $(i_0,n)$ denote the greatest common divisor of $i_0$ and $n$, and define $k = n/(i_0,n)$. Let $\omega \in {\mathbb{F}}_p^*$ be a primitive $n$-th root of unity. Writing each element in $N$ as $\left(\begin{array}{c}x\\y\end{array}\right)$ with respect to the fixed basis for the representation $\theta_{i_0}$, the fusion orbits are as follows: \begin{enumerate} \item ${\rm Orb}\left(\begin{array}{c}0\\0\end{array}\right) = \left\{ \left(\begin{array}{c}0\\0\end{array}\right) \right\}$. \item For $\left(\begin{array}{c}x\\y\end{array}\right) \in \mathbb{F}_{p}^{*} \times \mathbb{F}_{p}^{*}$ with $y/x \in \langle \omega^{i_{0}} \rangle$, we have $${\rm Orb}\left(\begin{array}{c}x\\y\end{array}\right) = \left\{ \left(\begin{array}{c}x\\y\end{array}\right), \left(\begin{array}{c}{\omega}^{i_0}x\\{\omega}^{-i_0}y\end{array}\right),\ldots,\left(\begin{array}{c}{\omega}^{(k - 1)i_0}x\\{\omega}^{-(k - 1)i_0}y\end{array}\right)\right\}.$$ \item For $\left(\begin{array}{c}x\\y\end{array}\right)$ not in 1.
or 2., we have $${\rm Orb}\left(\begin{array}{c}x\\y\end{array}\right) = \left\{ \left(\begin{array}{c}x\\y\end{array}\right), \left(\begin{array}{c}{\omega}^{i_0}x\\{\omega}^{-i_0}y\end{array}\right),\ldots,\left(\begin{array}{c}{\omega}^{(k - 1)i_0}x\\{\omega}^{-(k - 1)i_0}y\end{array}\right),\left(\begin{array}{c}y\\x\end{array}\right),\ldots,\left(\begin{array}{c}{\omega}^{-(k - 1)i_0}y\\{\omega}^{(k - 1)i_0}x\end{array}\right)\right\}.$$ \end{enumerate} \end{proposition} \begin{proof} As before, $G = \langle r,s \mid r^n, s^2, srs^{-1}r \rangle$. We have $r^{j} \cdot \left(\begin{array}{c}x\\y\end{array}\right) = \left(\begin{array}{c}x\\y\end{array}\right)$ if and only if $\omega^{i_{0}j} x = x$ and $\omega^{-i_{0}j} y = y$. Also, $sr^{j} \cdot \left(\begin{array}{c}x\\y\end{array}\right) = \left(\begin{array}{c}x\\y\end{array}\right)$ if and only if $x = \omega^{-i_{0}j} y$ and $y = \omega^{i_{0}j} x$. Therefore, for all $\left(\begin{array}{c}x\\y\end{array}\right) \neq \left(\begin{array}{c}0\\0\end{array}\right)$, the intersection of the stabilizer of $\left(\begin{array}{c}x\\y\end{array}\right)$ with $\langle r \rangle$ is $\langle r^{n/(i_0,n)} \rangle = \langle r^k \rangle$. In fact, for $\left(\begin{array}{c}x\\y\end{array}\right)$ as in 3., this is the full stabilizer. If $\left(\begin{array}{c}x\\y\end{array}\right)$ is as in 2., say $y/x = \omega^{i_0 j_0}$, then the full stabilizer is $\langle r^k, sr^{j_0} \rangle$. This implies that the fusion orbits are as stated in the proposition. \end{proof} \begin{corollary} \noindent With the same notation as in Proposition 4.8, the fusion of $\phi$ is uniquely determined by the greatest common divisor $(i_0,n)$. Moreover, the fusion numbers of $\phi$ (see Definition \ref{df:no1}) uniquely determine the fusion of $\phi$. \end{corollary} \begin{proof} The first statement follows from the stabilizer calculation in the proof of Proposition 4.8.
Moreover, the fusion numbers $F_{\phi,m}$ are as follows (letting $k = n/(i_0,n)$ as before):\\ $F_{\phi,1} = 1$\\ $F_{\phi,k} = p - 1$\\ $F_{\phi,2k} = \frac{(p - 1)(p + 1 - k)}{2k},$\\ and $F_{\phi,m} = 0$ for all other $m \geq 1$. \end{proof} In particular, two representations $\theta_i$, $\theta_{i_0}$ in $\mathrm{Irr}_{2}(G)$ have the same fusion if and only if $(i,n) = (i_0,n)$. \subsection{Proof of Main Results} \label{ss:proof} \noindent In view of the results proved in subsections \ref{ss:cohfordih}, \ref{ss:udr} and \ref{ss:fus}, to complete the proofs of Theorems \ref{th:no2} and \ref{th:no3}, it remains only to prove the one-to-one correspondence for $\phi \in \Omega$: \vspace*{.1 in} \noindent $$\textrm{Fusion of } \phi \leftrightsquigarrow \{\textrm{ker}(\psi) : \psi \in \mathrm{Irr}_{2}(G)\textrm{ is cohomologically maximal for } \phi\}.$$ \noindent We note that for any $1 \leq i < \frac{n}{2}$, the kernel of $\theta_i$ is uniquely determined by $(i, n)$. Moreover, we have $T(\theta_i) = \begin{cases} \theta_{2i} & \textrm{if } 2i < \frac{n}{2}\\ \theta_{n - 2i} & \textrm{otherwise} \end{cases}$ \vspace*{.1 in} \noindent Therefore, for $n$ odd, the result follows since $(i,n) = (i_0,n)$ when $T(\theta_i) = \theta_{i_0}$. In the case when $n$ is even, let $\theta_{i_0} \in \Omega$, i.e. $1 \leq i_0 \leq \frac{n}{2} - 1$ and $i_0 = 2d_0$ for some $d_0$. Moreover, $T^{-1}(\theta_{i_0}) = \{ \theta_{d_0}, \theta_{k - {d_0} } \}$ for $k = \frac{n}{2}$. Therefore, for $n$ even, the result follows from the following lemma. \begin{lemma} Let $n$ be even, $k = \frac{n}{2}$, and write $n = 2^{\lambda} \cdot m$, for some odd $m$. Let $\theta_{i_0} \in \Omega$ and write $i_0 = 2d_0$. Define $a_0 = (d_0, k)$. Then $\{(d_{0},n), (k-d_{0},n)\}$ = $\{(a_{0},n), (k-a_{0},n)\}$. Moreover, $(i_0,n) = 2a_0$, $(a_0,n) = a_0$, and $(k - a_0,n) \in \{a_0, 2a_0 \}$. \end{lemma} \begin{proof} Suppose first that $2^{\lambda} \nmid d_0$.
Then $(d_0, n) \mid k$, and hence $(d_0, n) = (d_0, k) = a_0$. If $2^{\lambda - 1} \nmid d_0$, then $(k - d_0, n) = (k - d_0, k) = a_0 = (k - a_0, k) = (k - a_0, n)$. If $2^{\lambda - 1} \mid d_0$, but $2^{\lambda} \nmid d_0$, then $k - d_0$ and $k - a_0$ are even, and so $(k - d_0, n) = 2(k - d_0, k) = 2a_0 = 2(k - a_0, k) = (k - a_0, n)$. On the other hand, if $2^{\lambda} \mid d_0$, then $2^{\lambda} \nmid (k - d_0)$ but $2^{\lambda - 1} \mid (k - d_0)$ and $2^{\lambda - 1} \mid (k - a_0)$. Hence we can use the above argument to obtain $(d_0, n) = (k - (k - d_0), n) = 2(k - (k - d_0), k) = 2(d_0, k) = 2a_0 = 2(k - a_0, k) = (k - a_0, n)$. \end{proof} Thus, Theorems 4.2 and 4.3 are established. In particular, this proves part a. of Theorem 1.1. Moreover, we have shown in Corollary 3.4 that for $\phi \notin \Omega$, $R(\Gamma,V) \cong \mathbb{Z}_p$, for all absolutely irreducible $V$. Therefore, to prove part b. of Theorem 1.1, we consider $D_{2n}$ for $n$ even. If $n$ is either a power of $2$ or equal to $2 q$ for some odd prime $q$, then $\phi \notin \Omega$ if and only if $\phi$ is faithful. Thus, if one knows that $R(\Gamma,V) \cong \mathbb{Z}_p$, for all absolutely irreducible $V$, then it must be the case that the fusion of $N$ in $\Gamma$ corresponds to $(1,n)$ in the sense of Corollary 4.9. On the other hand, if $n$ is even, but not as above, then there must exist some odd prime $v$ dividing $n$ with $v < \frac{n}{2}$, so that $\theta_v \in \mathrm{Irr}_{2}(G)$ and $\theta_v \notin \Omega$. Then $\theta_1$ and $\theta_v$ have different fusion, yet in both cases $R(\Gamma,V) \cong \mathbb{Z}_p$ for all irreducible $V$. This, together with Theorems 4.2 and 4.3, completes the proof of Theorem 1.1. \subsection{Abelian Groups} \label{ss:ab} In this subsection, we briefly discuss the case when $\Gamma/N$ = $G$ is an abelian group and compare this case to the dihedral case discussed in subsections 4.1-4.5.
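As a concrete illustration of the dihedral results just established, the fusion numbers of Corollary 4.9 can be verified by enumerating the orbits directly. The following Python sketch is purely illustrative; the parameters $p = 13$, $n = 6$, $i_0 = 1$ are arbitrary choices subject to $n \mid p - 1$:

```python
# Brute-force check of the fusion numbers of Corollary 4.9: enumerate the
# orbits of D_{2n} acting on N = F_p x F_p via theta_{i0}.  Illustrative
# parameters: p = 13, n = 6 (so that n | p - 1), i0 = 1, hence k = 6.
from math import gcd
from collections import Counter

p, n, i0 = 13, 6, 1
# a primitive n-th root of unity in F_p^*
omega = next(w for w in range(2, p)
             if pow(w, n, p) == 1 and all(pow(w, d, p) != 1 for d in range(1, n)))

def orbit(x, y):
    """Orbit of (x, y) under r: (x,y) -> (w^i0 x, w^-i0 y) and s: (x,y) -> (y, x)."""
    wi, winv = pow(omega, i0, p), pow(omega, -i0, p)
    seen, stack = set(), [(x, y)]
    while stack:
        a, b = stack.pop()
        if (a, b) not in seen:
            seen.add((a, b))
            stack.append(((wi * a) % p, (winv * b) % p))  # generator r
            stack.append((b, a))                          # generator s
    return frozenset(seen)

orbits = {orbit(x, y) for x in range(p) for y in range(p)}
sizes = Counter(len(o) for o in orbits)                   # orbit size -> count

k = n // gcd(i0, n)
predicted = {1: 1, k: p - 1, 2 * k: (p - 1) * (p + 1 - k) // (2 * k)}
print(dict(sizes) == predicted)  # True
```

The enumeration finds one singleton orbit, $p - 1 = 12$ orbits of size $k = 6$, and $(p-1)(p+1-k)/(2k) = 8$ orbits of size $2k = 12$, in agreement with Corollary 4.9.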
In other words, we consider a short exact sequence of groups $$0\rightarrow N\rightarrow\Gamma\rightarrow G = \Gamma/N\rightarrow 1$$ where $G$ is finite abelian, and $N$ is an elementary abelian $p$-group of rank two. As before, we assume $\mathbb{F}_{p}G$ is semisimple and $\mathbb{F}_p$ is sufficiently large for $G$. Let $V$ be an irreducible $\mathbb{F}_{p}G$-module viewed as an $\mathbb{F}_{p}\Gamma$-module via inflation. Let $\phi$ denote the action of $G$ on $N$. Since $G$ is abelian, $V$ is one-dimensional, and $\phi$ splits into a direct sum of two one-dimensional representations. Let $\phi$ = $(\theta_1,\theta_2)$, where $\theta_i : G \rightarrow \mathbb{F}_{p}^*$. We again analyze the extent to which the universal deformation ring $R(\Gamma,V)$ can see the fusion of $N$ in $\Gamma$. In contrast to the dihedral case, if $G$ is abelian, then $R(\Gamma,V)$ will only be able to detect some information about fusion. \vspace*{.1 in} \noindent \begin{proposition} \label{pr:ab0} Let $G$ be abelian, and let $V$ and $\phi$ be as above. Let ${\{F_{\phi,m}\}}_{m\geq1}$ be the fusion numbers of $\phi$. \begin{enumerate} \item[a.] The universal deformation ring $R(\Gamma,V) \cong \mathbb{Z}_p$ if and only if $F_{\phi,1} = 1$ if and only if both $\theta_1$ and $\theta_2$ are not trivial if and only if $d^1_V = 0$. \item[b.] The universal deformation ring $R(\Gamma,V) \cong \mathbb{Z}_p[\mathbb{Z}/p\mathbb{Z}]$ if and only if $F_{\phi,1} = p$ if and only if exactly one of $\theta_1, \theta_2$ is trivial if and only if $d^1_V = 1$. \item[c.] The universal deformation ring $R(\Gamma,V) \cong \mathbb{Z}_p[\mathbb{Z}/p\mathbb{Z} \times \mathbb{Z}/p\mathbb{Z}]$ if and only if $F_{\phi,1} = p^2$ if and only if both $\theta_1, \theta_2$ are trivial if and only if $d^1_V = 2$. \end{enumerate} \end{proposition} In the statement of the proposition, we have added brackets to the group rings for clarity.
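The singleton-orbit counts behind the proposition ($F_{\phi,1} = 1$, $p$, or $p^2$ according to how many of $\theta_1, \theta_2$ are trivial) are easy to check by brute force. A toy Python example; the parameters $p = 7$, $G = \mathbb{Z}/3$ and the character values are illustrative choices:

```python
# Brute-force count of F_{phi,1} for an abelian G acting on N = F_p x F_p
# through a pair of characters (theta_1, theta_2).  Toy parameters: p = 7,
# G = Z/3 realized inside F_7^* via the cube root of unity 2 (2^3 = 8 = 1 mod 7);
# theta_1 is trivial and theta_2 is faithful.
p = 7
theta1 = [1, 1, 1]            # theta_1(g) for g = 0, 1, 2
theta2 = [1, 2, 4]            # theta_2(g) = 2^g mod 7

# an element (x, y) is not fused iff it is fixed by every g in G
F_1 = sum(1 for x in range(p) for y in range(p)
          if all((t1 * x) % p == x and (t2 * y) % p == y
                 for t1, t2 in zip(theta1, theta2)))
j = (theta1.count(1) == len(theta1)) + (theta2.count(1) == len(theta2))
print(F_1, p ** j)            # 7 7: exactly one character is trivial, so F_{phi,1} = p
```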
The above proposition illustrates the extent to which fusion can be detected by universal deformation rings in the case when $G$ is abelian. Note that Corollary 3.4 is not applicable, as $\phi$ is reducible. In contrast to the dihedral case, we get no information by varying $V$, as both $R(\Gamma,V)$ and $d^i_V$ for $i = 1,2$ are constant with respect to $V$. In the abelian case, while some information about the fusion of $N$ in $\Gamma$ may be detected by the universal deformation ring (and indeed the cohomology), this information is simply too coarse to determine the full fusion (compare with Theorems 1.1, \ref{th:no2}, and \ref{th:no3}). Instead, for any absolutely irreducible $V$, $R(\Gamma,V)$ sees only the number of fusion orbits of size $1$, i.e. those elements of $N$ which are not fused. Additionally, unlike the dihedral case, the fusion numbers are not enough to determine the fusion. \bigskip \noindent \textit{Proof of Proposition \ref{pr:ab0}}. Let $G$ be abelian, and let $V$ and $\phi$ = $(\theta_1,\theta_2)$ be as above. We first determine the number of fusion orbits of size $1$, i.e. $F_{\phi,1}$. Considering the action of $\phi$ on $N = {\mathbb{F}}_p \times {\mathbb{F}}_p$, we see that $F_{\phi,1} = p^j$, where $j$ counts how many of $\theta_1$, $\theta_2$ are trivial. In particular, the fusion of $N$ in $\Gamma$ depends on more than just $F_{\phi,1}$. Next, we determine $d^i_V$, $i = 1, 2$. By Theorem \ref{th:no1} and Corollary \ref{co:no1}, we need to calculate $(V_{\tilde{\phi}}\otimes V^{*}\otimes V)^G$ and $(V_{\rm{det}\circ (\tilde{\phi})}\otimes V^{*}\otimes V)^G$. Since $V$ is one-dimensional, $V^{*}\otimes V$ is trivial, thus $d^i_V$ is independent of $V$ for $i = 1,2$. Since $\phi$ = $(\theta_1,\theta_2)$, $d^1_V$ counts how many of $\theta_1$, $\theta_2$ are trivial. Also, $d^2_V - d^1_V$ is $1$ if $\theta_2 = {\theta_1}^{-1}$, and is $0$ otherwise. Finally, we determine $R(\Gamma,V)$.
Since $G$ is abelian, it follows by \cite[\S1.4]{mazur} that $R(\Gamma,V) = \mathbb{Z}_p[\Gamma^{ab, p}]$, where $\Gamma^{ab, p}$ denotes the maximal abelian $p$-quotient of $\Gamma$. Since the order of $G$ is relatively prime to $p$, $\Gamma^{ab, p}$ can only be the trivial group, $\mathbb{Z}/p\mathbb{Z}$, or $\mathbb{Z}/p\mathbb{Z} \times \mathbb{Z}/p\mathbb{Z}$. Since $d := d^1_V$ is minimal such that $R(\Gamma,V)$ is a quotient of $\mathbb{Z}_p[[t_1, t_2, ..., t_d]]$, it follows that: \begin{enumerate} \item[a.] $d^1_V = 0$ if and only if $R(\Gamma,V) = \mathbb{Z}_p$, \item[b.] $d^1_V = 1$ if and only if $R(\Gamma,V) = \mathbb{Z}_p[\mathbb{Z}/p\mathbb{Z}]$, \item[c.] $d^1_V = 2$ if and only if $R(\Gamma,V) = \mathbb{Z}_p[\mathbb{Z}/p\mathbb{Z} \times \mathbb{Z}/p\mathbb{Z}].$ \end{enumerate} \noindent This completes the proof of Proposition \ref{pr:ab0}. \bibliographystyle{amsplain}
\section{Introduction} Symmetry considerations make it seem obvious that an ideal cubical die lands on all six faces with an identical probability of $\frac{1}{6}$. However, what happens when non-fair dice are tossed? In particular, what are the face-probabilities of a homogeneous \emph{cuboid}, i.e.\ a rectangular six-sided polyhedron with pairwise parallel faces but different side-lengths? This is a surprisingly challenging problem. This paper offers a robust answer to the question above. It begins with a brief historical account (Section \ref{section_history}) and presents a control experiment with a single cuboid (Section \ref{section_experiment}) that is used as a benchmark for the theoretical modelling. Section \ref{section_gibbs} then introduces a new model based on a Gibbs distribution, which is found to be consistent with the control experiment. This model naturally contains a free parameter, which characterizes the physical conditions of the experiment. The new model is then compared against experimental data with differently sized cuboids drawn from the literature (Section \ref{section_classical}) and extended to non-cuboidal dice via the example of U-shaped dice (Section \ref{section_extension}). Section \ref{section_summary} summarizes the findings of this paper. \section{Brief history}\label{section_history} Isaac Newton already mentioned the problem. In a private writing dated between 1664 and 1666, and published in 1967 (Newton, 1967, p.~60--61), he wrote on the face-probabilities of a tossed cuboid: ``if a die bee not a Regular body but a Parallelepipedon or otherwise unequally sided, it may bee found how much one cast is more easily gotten then another.'' It remains unclear whether Newton really tried to solve this problem. In 1692, the problem appeared again in a paper by John Arbuthnot: ``In a Parallelopipedon, whose Sides are to another in the Ratio of $a$,$b$,$c$: to find at how many Throws any one may undertake that any given Plane, viz.
$ab$, may arise'' (quoted from Hyk\v{s}ov\'a et al., 2012). Arbuthnot wrote that he left ``the solution to those who think it merits their pains.'' Fifty years later, Thomas Simpson (1740) used a simple geometrical idea to model the face-probabilities of a tossed cuboid. He assumed the probability of each face to be proportional to the surface area of the corresponding spherical quadrilateral, i.e.\ to the solid angle spanned by the face when seen from the centre of the cuboid. However, subsequent experimental investigations (e.g.~Singmaster, 1981) clearly rejected Simpson's model. Budden (1980) and Heilbronner (1985) also experimented with series of cuboids. Although Budden and Heilbronner did not find a formula for the face-probabilities, their data again disqualifies the Simpson model, and so do modern computer simulations of tossed cuboids (Obreschkow, 2006). Despite the clear insufficiency of the Simpson model, a recent paper by Hyk\v{s}ov\'a et al.~(2012) still refers to this model without criticism. Because of this discrepancy, this paper will first reemphasize the insufficiency of the Simpson model (Section \ref{section_experiment}), before introducing a much more accurate model (Section \ref{section_gibbs}). \section{Control experiment}\label{section_experiment} The control experiment is performed with a wooden ($13\times20\times23\rm~mm^3$)-cuboid. When this cuboid was tossed, it became clear that the face-probabilities significantly depend on the physical conditions, such as the tossing technique, the height of free fall, the shape of the cuboid's edges, and the elasticity of the surface on which the cuboid lands. For example, a rough or elastic surface generally increases the face-probabilities of the two largest faces. The same qualitative change is observed when the cuboid is tossed from an arm-length above the table rather than using a dice cup. To account for the importance of the physical conditions, two experimental runs were performed.
In experiment I, the cuboid was tossed $N=2,700$ times on a wooden table using a leather dice cup. In experiment II, the cuboid was dropped $N=1,000$ times onto a polished steel surface from an initial height of 1~m. Table \ref{table_control} lists the observed frequencies $f_i=n_i/N$, where $n_i$ is the number of times that face $i$ $(i=1,...,6)$ showed up. As expected, the measured frequencies differ significantly between experiments I and II, thus demonstrating that the shape of the cuboid alone does not determine the face-probabilities. Several physical reasons might be responsible for the different outcome probabilities in the two experiments. For example, the dice cup might have a stabilizing function when the cuboid lands on one of its small faces. In turn, dropping the cuboid from a height of 1~m implies that the cuboid bounces off the floor many times before it comes to rest. Multiple bounces tend to result in a fast rolling around the longest axis of the cuboid, which implies that the cuboid is very unlikely to land on one of the two smallest faces. Qualitatively this suggests that the face-probabilities of the largest faces increase with the initial energy of the tossing process. The differences in the observed frequencies of opposite faces (e.g.~faces 3 and 4) give a rough estimate of the deviation between measured frequencies and underlying probabilities. Those deviations would disappear as $N\rightarrow\infty$. For comparison, Table \ref{table_control} shows the face-probabilities predicted by the Simpson model (explained in Section \ref{section_history}). This model fits neither of the two experiments. In comparison to both experiments, it clearly overpredicts the probabilities of the smallest faces and underpredicts the probabilities of the largest faces. A much more accurate description is offered by the Gibbs models in Table \ref{table_control}, whose free parameter has been fitted to experiment I and II, respectively.
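The Simpson values quoted in Table \ref{table_control} follow from elementary solid-angle geometry. A short Python sketch, using the standard closed-form expression for the solid angle subtended by an $a\times b$ rectangle seen from a point at distance $d$ on its central axis:

```python
# Simpson's model: face-probability proportional to the solid angle of the
# face seen from the cuboid's centre.  Sketch for the 13 x 20 x 23 mm cuboid,
# using the standard identity Omega = 4*atan(a*b / (2d*sqrt(4d^2 + a^2 + b^2)))
# for an a x b rectangle viewed from distance d on its central axis.
from math import atan, sqrt, pi

s = (13.0, 20.0, 23.0)                        # side-lengths [mm]

def solid_angle(a, b, d):
    return 4 * atan(a * b / (2 * d * sqrt(4 * d * d + a * a + b * b)))

# one representative face per pair of opposite faces
omega = [solid_angle(s[0], s[2], s[1] / 2),   # 13 x 23 faces (1 and 6)
         solid_angle(s[0], s[1], s[2] / 2),   # 13 x 20 faces (2 and 5)
         solid_angle(s[1], s[2], s[0] / 2)]   # 20 x 23 faces (3 and 4)

p = [w / (4 * pi) for w in omega]             # doubled, these sum to 1
print([round(100 * x, 1) for x in p])         # [13.5, 10.5, 26.0]
```

These are exactly the Simpson percentages listed in Table \ref{table_control}.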
This new model is now explained in Section \ref{section_gibbs}. \begin{table}[t] \centering \begin{tabular}{|l|c|c|c|c|c|c|} \hline \bf{Face $i$} & \bf{1} & \bf{2} & \bf{3} & \bf{4} & \bf{5} & \bf{6} \\ \hline Surface area [mm$^2$] & 299 & 260 & 460 & 460 & 260 & 299 \\ Half-height $h_i$ [mm] & 10 & 11.5 & 6.5 & 6.5 & 11.5 & 10 \\ \hline $f_{i}$ experiment I ($N=2,700$) [\%] & 10.3 & 7.7 & 30.9 & 32.7 & 7.6 & 10.9\\ $f_{i}$ experiment II ($N=1,000$) [\%] & 5.5 & 1.5 & 43.5 & 42.5 & 2.6 & 4.1\\ \hline $p_{i}$ Simpson model [\%] & 13.5 & 10.5 & 26.0 & 26.0 & 10.5 & 13.5\\ $p_{i}$ Gibbs model ($\beta=4.90$) [\%] & 11.2 & 7.2 & 31.6 & 31.6 & 7.2 & 11.2\\ $p_{i}$ Gibbs model ($\beta=10.2$) [\%] & 5.0 & 2.0 & 43.0 & 43.0 & 2.0 & 5.0 \\ \hline \end{tabular} \caption{Control experiment with a homogeneous ($13\times20\times23\rm~mm^3$)-cuboid. Faces 1 and 6: $13\times23$~mm; faces 2 and 5: $13\times20$~mm; faces 3 and 4: $20\times23$~mm. See Section \ref{section_experiment} for details.} \label{table_control} \end{table} \section{Gibbs distribution}\label{section_gibbs} Following independent ideas of Riemer (1991) and Obreschkow (2006), this section uses Gibbs distributions to model the face-probabilities of tossed cuboids. Gibbs distributions are probability distributions that are commonly used in many fields of probability theory, mathematical statistics, as well as statistical mechanics, from where they originate. The philosophy of this paper is to adopt Gibbs distributions in a heuristic way, that is without deriving them from a set of physical assumptions. The model then gains its validity \emph{a posteriori} through verification against experimental data -- a common approach in statistics. A Gibbs distribution can be summarized as follows: consider a system with $k$ states, where each state $i=1,...,k$ has a positive energy $E_i$.
If the Gibbs theory applies, the system is found in state $i$ with probability \begin{equation} \label{Gib} p_i(\beta) = Z(\beta)^{-1} \exp(-\beta E_i), \end{equation} where $\beta$ is a positive parameter, called inverse temperature (because it is proportional to $T^{-1}$ in thermodynamics), and $Z(\beta)\equiv\sum_i\exp(-\beta E_i)$ is a normalization factor, called the partition function. The parameter $\beta$ controls the character of the Gibbs distribution: if $\beta=0$ the distribution is uniform with equal probabilities for all states $i\in\{1,...,k\}$; as $\beta\rightarrow \infty$ the distribution becomes peaked with the minimal energy state(s) having a probability equal to 1; for any intermediate $\beta\in(0,\infty)$, the probability of a state increases monotonically with decreasing energy. In modeling the tossing experiments, the states are the faces that end up lying on top, i.e.~the cuboid is said to be in state $i$ if it comes to rest with face $i$ on top (thus $k=6$). The energy of state $i$ is taken proportional to the potential energy, i.e.~to the height $h_i$ of the center of gravity in state $i$. Note that in this way inhomogeneities in the mass distribution of the cuboid are accounted for, as illustrated in Section \ref{section_extension}. If the cuboid is homogeneous, $h_i=s_i/2$ where $s_i$ is the vertical side-length of the cuboid in state $i$. To eliminate physical units, the energy $E_i\propto h_i$ is normalized to the half-diagonal, \begin{equation}\label{Ei} E_i \equiv \frac {h_i}{(\sum_{j=1}^3h_j^2)^{1/2}} = \frac {s_i}{(\sum_{j=1}^3s_j^2)^{1/2}}, \end{equation} where the sums run over the three distinct half-heights and side-lengths, respectively. Given this definition of the energies $E_i$, $\beta$ is the only free parameter in the Gibbs distribution of Eq.~(\ref{Gib}). This parameter can be fitted to experimental data, for example using maximum likelihood estimation (MLE).
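For concreteness, Eqs.~(\ref{Gib}) and (\ref{Ei}) can be evaluated in a few lines. The following Python sketch uses the half-heights of the control cuboid and the fitted value $\beta = 4.90$ quoted in Table \ref{table_control}:

```python
# Gibbs probabilities of Eq. (1) for the control cuboid of Table 1
# (half-heights 10, 11.5 and 6.5 mm; each value occurs twice by symmetry),
# with energies normalized to the half-diagonal and the fitted beta = 4.90.
from math import exp, sqrt

h = [10.0, 11.5, 6.5, 6.5, 11.5, 10.0]        # half-heights h_i [mm]
half_diag = sqrt(sum(x * x for x in h[:3]))   # half-diagonal of the cuboid
E = [x / half_diag for x in h]                # dimensionless energies E_i

def gibbs(E, beta):
    w = [exp(-beta * e) for e in E]
    Z = sum(w)                                # partition function
    return [x / Z for x in w]

p = gibbs(E, 4.90)
print([round(100 * x, 1) for x in p])         # [11.2, 7.2, 31.6, 31.6, 7.2, 11.2]
```

The output reproduces the Gibbs row for experiment I in Table \ref{table_control}.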
This method consists in maximizing the likelihood function \begin{equation}\label{ML} L(\beta)\equiv \prod_{i=1}^6 [p_i(\beta)]^{n_i} = Z(\beta)^{-N} \prod_{i=1}^6 \exp(-\beta E_i n_i), \end{equation} where $n_i$ is the number of observations of state $i$ and $N\equiv\sum_{i=1}^6n_i$. In practice, $L(\beta)$ can easily be maximized by minimizing $\ln Z(\beta)+\beta\sum_{i=1}^6E_i f_i$. Different values of $\beta$ can maximize $L(\beta)$ in different experimental conditions. As explained above, small values of $\beta$ are expected in experimental conditions where all six faces appear frequently, while higher values of $\beta$ are expected if the smallest faces appear rarely. Explicitly, smaller values of $\beta$ are expected in control experiment I (tossing with a dice cup) than in control experiment II (free fall from 1~m height). In fact, the MLE yields $\beta=4.90$ and $\beta=10.2$ for experiments I and II, respectively. The corresponding probabilities of the Gibbs model are displayed in the bottom rows of Table \ref{table_control}. These probabilities (e.g.~43.0\% for faces 3 and 4 in experiment II) often lie between the measured frequencies of the corresponding faces (e.g.\ 43.5\% and 42.5\%), thus suggesting that the model sufficiently describes the data. An explicit $\chi^2$ goodness-of-fit test shows that the predictions of the Gibbs model are indeed statistically consistent with the experimental data. This test will be used again and explained in more detail in the following section. \section{Two classical experiments revisited}\label{section_classical} Section \ref{section_gibbs} revealed that the Gibbs model offers a good approximation of the data gathered in the control experiment. The control experiment was based on a single cuboid with three different side-lengths. This section confronts the Gibbs model with other experimental data drawn from the literature.
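Before turning to these datasets, the MLE of Eq.~(\ref{ML}) can be illustrated on the control data. A brute-force Python sketch, fitting $\beta$ to the (rounded) frequencies of experiment I in Table \ref{table_control}:

```python
# Sketch of the MLE fit: minimizing ln Z(beta) + beta * sum_i E_i f_i for
# the frequencies of control experiment I (Table 1).  The frequencies are
# the rounded values quoted in the table, so the recovered beta is approximate.
from math import exp, log, sqrt

h = [10.0, 11.5, 6.5, 6.5, 11.5, 10.0]       # half-heights h_i [mm]
d = sqrt(sum(x * x for x in h[:3]))          # half-diagonal
E = [x / d for x in h]                       # energies of Eq. (2)
f = [0.103, 0.077, 0.309, 0.327, 0.076, 0.109]   # f_i of experiment I

def neg_log_like(beta):                      # -ln L(beta) / N, Eq. (3)
    Z = sum(exp(-beta * e) for e in E)
    return log(Z) + beta * sum(e * fi for e, fi in zip(E, f))

# crude one-dimensional scan; any standard optimizer would do
beta = min((b / 100 for b in range(1, 2000)), key=neg_log_like)
print(beta)                                  # close to the quoted beta = 4.90
```

The small offset from $\beta = 4.90$ is expected, since the sketch uses the rounded frequencies of Table \ref{table_control} rather than the raw counts.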
The main purpose of this comparison is to test whether the Gibbs model with a constant parameter $\beta$ can describe a variety of differently shaped cuboids, tossed in similar experimental conditions. The experiments considered here are summarized in Table \ref{table_experiments}. They were performed by Budden (1980) and Heilbronner (1985), respectively. Both authors used families of $xxy$-cuboids, i.e.~cuboids with equal side-lengths $s_x$ in two orthogonal directions and a different side-length $s_y$ in the third direction. Each author experimented with a family of $m$ cuboids $j=1,...,m$ ($m=15$ for Budden and $m=7$ for Heilbronner) with identical side-lengths $s_x$ ($s_x=15\rm~mm$ for Budden and $s_x=25\rm~mm$ for Heilbronner), and varying side-lengths $s_{y,j}$. Budden used cuboids ``cut from a mild steel bar whose cross-section was a square of side 15~mm. These were distributed to a class of boys who tossed and rolled them while recording the results.'' By contrast, Heilbronner used cuboids from polyvinylchloride of density $\approx1,500\rm~kg~m^{-3}$. They were tossed ``in the usual manner, i.e. rolled manually or from a shaker on cloth covered surfaces as well as on linoleum in a ratio of approximately one to one for each set of dice.'' The vast differences in the material and tossing techniques between Budden and Heilbronner suggest that these two datasets are described by different values $\beta$ in the Gibbs model. However, the question to be investigated is whether within each dataset (Budden or Heilbronner) the $m$ cuboids can be described by a constant parameter $\beta$. \begin{table}[b!]
\centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline & $s_x$ [mm] & $s_y$ [mm] & $N$ & $n_{xx}$ & $f_{xx}$ [\%] & $p_{xx}$ [\%] \\ \hline \multirow{15}{*}{\rotatebox{90}{\mbox{Budden}}} & 15 & 7.1 & 332 & 304 & 91.6 & 91.0 \\ & 15 & 9.5 & 840 & 620 & 73.8 & 77.0 \\ & 15 & 11.2 & 799 & 438 & 54.8 & 63.5 \\ & 15 & 12.15 & 740 & 367 & 49.6 & 55.4 \\ & 15 & 13.95 & 516 & 206 & 39.9 & 40.8 \\ & 15 & 14.5 & 530 & 204 & 38.5 & 36.8 \\ & 15 & 17.4 & 1011 & 150 & 14.8 & 20.2 \\ & 15 & 18.45 & 532 & 82 & 15.4 & 16.1 \\ & 15 & 21.6 & 654 & 34 & 5.2 & 8.1 \\ & 15 & 23.25 & 606 & 24 & 4.0 & 5.7 \\ & 15 & 24 & 702 & 12 & 1.7 & 4.8 \\ & 15 & 25.6 & 609 & 19 & 3.1 & 3.5 \\ & 15 & 28 & 680 & 6 & 0.9 & 2.1 \\ & 15 & 31.75 & 275 & 2 & 0.7 & 1.0\\ & 15 & 39.7 & 503 & 3 & 0.6 & 0.2 \\ \hline \multirow{7}{*}{\rotatebox{90}{\mbox{Heilbronner}}} & 25 & 5 & 2145 & 2089 & 97.4 & 98.4 \\ & 25 & 10 & 2184 & 1929 & 88.3 & 89.8 \\ & 25 & 15 & 2103 & 1559 & 74.1 & 72.7 \\ & 25 & 20 & 2238 & 1244 & 55.6 & 51.7 \\ & 25 & 30 & 2202 & 421 & 19.1 & 20.5 \\ & 25 & 35 & 2259 & 239 & 10.6 & 12.4 \\ & 25 & 40 & 2250 & 162 & 7.2 & 7.6 \\ \hline \end{tabular} \caption{Experimental datasets obtained by Budden and Heilbronner using different $xxy$-cuboids. The last column is the prediction of the Gibbs model using $\beta=4.46$ (Budden) and $\beta=3.53$ (Heilbronner).} \label{table_experiments} \end{table} To model the face-probabilities of a single $xxy$-cuboid, the formalism of Section \ref{section_gibbs} can be simplified, since $xxy$-cuboids only exhibit two macro-states: the $xx$-state, showing one of the two square faces, and the $xy$-state showing one of the four rectangular faces. Given $N$ tosses and $n_{xx}$ observations of the $xx$-state, the corresponding frequency is $f_{xx}=n_{xx}/N$. 
To model the face-probability $p_{xx}$, the Gibbs model of Eq.~(\ref{Gib}) can be rewritten as \begin{equation}\label{px} p_{xx} = Z(\beta)^{-1}\exp(-\beta E_y) \end{equation} where $E_x=s_x/(s_xs_xs_y)^{1/3}$, $E_y=s_y/(s_xs_xs_y)^{1/3}$, and $Z(\beta)=2\exp(-\beta E_x)+\exp(-\beta E_y)$. Using Eq.~(\ref{ML}) and $p_{xy}=1-p_{xx}$, the likelihood function for $\beta$ becomes \begin{equation} L(\beta) = p_{xx}(\beta)^{n_{xx}}(1-p_{xx}(\beta))^{N-n_{xx}}. \end{equation} Given a set of $m$ differently sized $xxy$-cuboids $j=1,...,m$ (with respective variables $p_{xx,j}$, $N_j$, $n_{xx,j}$, etc.), the best fitting constant parameter $\beta$ for all cuboids in the same dataset can be obtained by maximizing the global likelihood function \begin{equation}\label{GML} \mathcal{L}(\beta) = \prod_{j=1}^m L_j(\beta) = \prod_{j=1}^m p_{xx,j}(\beta)^{n_{xx,j}}(1-p_{xx,j}(\beta))^{N_j-n_{xx,j}}. \end{equation} In practice, $\mathcal{L}(\beta)$ can be maximized more easily by minimizing the function $\sum_{j=1}^m[N_j\ln Z_j(\beta)+\beta(n_{xx,j}E_{y,j}+(N_j-n_{xx,j})E_{x,j})]$. This results in $\beta=4.46$ for the experiments performed by Budden and $\beta=3.53$ for those performed by Heilbronner. The corresponding face-probabilities predicted by the Gibbs model are listed in the last column of Table \ref{table_experiments} and plotted against $s_y/s_x$ in Figure \ref{fig_experiments}. Qualitatively, there seems to be a good agreement between the experimental data and the model. \begin{figure}[t] \centerline{\includegraphics[width=9cm]{fig_experiments.jpg}} \caption{Measured frequencies $f_{xx}$ and fitted Gibbs probabilities $p_{xx}$ with a constant $\beta$, as a function of the side-ratio $s_y/s_x$. The experimental values are those listed in Table \ref{table_experiments}.
Vertical error bars represent standard deviations of $f_{xx}$ approximated as $\sqrt{f_{xx}/N}$, and horizontal error bars are standard deviations associated with 5\% manufacturing errors for the side-lengths.} \label{fig_experiments} \end{figure} The rest of this section investigates whether the data is indeed statistically consistent -- in a quantitative way -- with the Gibbs model using a constant $\beta$ per dataset. To do so, a $\chi^2$ test inspired by Gibbons and Chakraborti (2003) is used. The expected number of appearances of the $xx$-state is $N_jp_{xx,j}$ with a variance of $N_jp_{xx,j}$. Therefore, the squared deviation between experimental counts and model prediction, normalized to the model variance, reads $(N_jp_{xx,j}-n_{xx,j})^2/(N_jp_{xx,j})$. Applying an analogous reasoning to the $xy$-state and summing over all the different cuboids yields \begin{equation}\label{chi} \chi^2 = \sum\limits_{j=1}^m \left[\frac{(N_jp_{xx,j}-n_{xx,j})^2}{N_jp_{xx,j}}+\frac{(N_j(1-p_{xx,j})-(N_j-n_{xx,j}))^2}{N_j(1-p_{xx,j})} \right]. \end{equation} If $\chi^2/m\leq1$, then the Gibbs model with a constant $\beta$ per dataset is consistent with the experimental data; if $\chi^2/m>1$, the experimental data is not sufficiently matched by the model. Explicit calculations yield $\chi^2/m=6.2$ (Budden) and $\chi^2/m=6.6$ (Heilbronner), thus rejecting the hypothesis of a Gibbs model with a constant $\beta$ per dataset at 2.5 standard deviations. In other words, this hypothesis seems to be rejected with a certainty of nearly 99\%. However, this $\chi^2$ test ignores experimental uncertainties of various kinds, which shall now be addressed approximately. Potentially, there are various sources of systematic uncertainties in the experimental data. For example, the tossing techniques might have been different for every $xxy$-cuboid. This is particularly plausible in Budden's experiment, where different cuboids were tossed by different children.
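The statistic of Eq.~(\ref{chi}) can be reproduced directly from the published counts. A Python sketch for the Budden rows of Table \ref{table_experiments}, using the two-state model of Eq.~(\ref{px}) with the quoted $\beta = 4.46$:

```python
# Reproducing the chi^2 test of Eq. (7) for Budden's dataset (Table 2),
# using the two-state Gibbs model of Eq. (4) with the quoted beta = 4.46.
from math import exp

beta, sx = 4.46, 15.0
# (s_y [mm], N tosses, n_xx observations) for Budden's 15 cuboids
rows = [(7.1, 332, 304), (9.5, 840, 620), (11.2, 799, 438), (12.15, 740, 367),
        (13.95, 516, 206), (14.5, 530, 204), (17.4, 1011, 150),
        (18.45, 532, 82), (21.6, 654, 34), (23.25, 606, 24), (24.0, 702, 12),
        (25.6, 609, 19), (28.0, 680, 6), (31.75, 275, 2), (39.7, 503, 3)]

def p_xx(sy):
    g = (sx * sx * sy) ** (1.0 / 3.0)            # normalization of Eq. (4)
    ex, ey = sx / g, sy / g
    return exp(-beta * ey) / (2 * exp(-beta * ex) + exp(-beta * ey))

chi2 = sum((N * p_xx(sy) - n) ** 2 / (N * p_xx(sy))
           + (N * (1 - p_xx(sy)) - (N - n)) ** 2 / (N * (1 - p_xx(sy)))
           for sy, N, n in rows)
print(chi2 / len(rows))                          # about 6.2, as quoted above
```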
Further, the cuboids are not perfect due to material and manufacturing errors. To estimate the effect of manufacturing errors on the consistency between the data and the Gibbs model, an extended $\chi^2$ test is performed, which explicitly accounts for uncertainties in the side-lengths $s_x$ and $s_{y,j}$. It is assumed that these side-lengths are only known up to Gaussian errors with standard deviations $\epsilon s_x$ and $\epsilon s_{y,j}$. Hence, $\epsilon$ represents the relative uncertainty of the side-lengths. The hypothesis that the $m$ empirical $f_{xx,j}$ belong to Gibbs distributions with a constant $\beta$ is then tested using a parametric bootstrap test based on 999 independent iterations. In each iteration the following steps are executed. \begin{itemize} \item For every cuboid $j=1,...,m$, \begin{itemize} \item choose side-lengths $s^\ast_x=G(s_x,\epsilon s_x)$ and $s^\ast_{y,j}=G(s_{y,j},\epsilon s_{y,j})$, where $G(x,\sigma)$ denotes a random number from a normal distribution with mean $x$ and standard deviation $\sigma$, \item calculate the probability $p^\ast_{xx,j}$ of the $xx$-state using Eq.~(\ref{px}) with the original $\beta$ (4.46 for Budden, 3.53 for Heilbronner) and the new side-lengths $s^\ast_x$ and $s^\ast_{y,j}$, \item simulate $N_j$ tossing events, in which the $xx$-state appears with probability $p^\ast_{xx,j}$, and count the number $n^\ast_{xx,j}$, \item calculate the corresponding frequencies $f^\ast_{xx,j}=n^\ast_{xx,j}/N_j$. \end{itemize} \item Use the $m$ values of $f^\ast_{xx,j}$ to estimate the best parameter $\beta^\ast$ via Eq.~(\ref{GML}). \item Calculate the new probabilities $\tilde{p}_{xx,j}$ using Eq.~(\ref{px}) with $\beta^\ast$, $s_x$ and $s_{y,j}$. \item Calculate the value $\tilde{\chi}^2$ using Eq.~(\ref{chi}) with the probabilities $\tilde{p}_{xx,j}$.
\end{itemize} \begin{table}[b] \centering \begin{tabular}{|c|c|c|} \hline $\epsilon$ & Budden & Heilbronner \\ \hline 0.03 & 0.000 & 0.003 \\ 0.04 & 0.006 & 0.021 \\ 0.05 & 0.067 & 0.090 \\ 0.06 & 0.187 & 0.206 \\ \hline \end{tabular} \caption{$p$-values of the measured $\chi^2$ within the simulated $\tilde{\chi}^2$-distribution to test the hypothesis of the Gibbs model with a constant $\beta$ per dataset (Budden or Heilbronner).} \label{table_pvalues} \end{table} If the original $\chi^2$ is large in comparison to the 999 simulated values of $\tilde{\chi}^2$, the hypothesis of a constant $\beta$ must be rejected. However, the $p$-values listed in Table \ref{table_pvalues} show that values around $\epsilon=0.05$, i.e.~manufacturing errors of 5\%, already make the experimental data compatible with the hypothesis of a constant $\beta$ for all $m$ cuboids. Measurements of the masses of machine-manufactured wood cuboids similar to those of Budden revealed mass deviations around 7\% between `identical' cuboids, roughly in line with side-length variations of 5\%. In summary, the Gibbs model with a constant $\beta$ for all the cuboids in a dataset (Budden or Heilbronner) is consistent with the experimental data as long as plausible manufacturing errors are accounted for. Figuratively speaking, the data points in Figure~\ref{fig_experiments} are consistent with the models, as long as the horizontal error bars are included. \section{Extension to non-cuboidal dice}\label{section_extension} As shown so far, the Gibbs model fully describes the face-probabilities of tossed cuboids within the uncertainties of currently available experimental data. This motivates the idea that the Gibbs model could be extended to more complex dice geometries and inhomogeneous cuboids. A full investigation of this idea lies beyond the scope of this paper, but to provide an illustration the U-shaped die shown in Figure \ref{fig_ushape} is considered. Two experimental runs were performed with this die.
In experiment I, the die was tossed $N=1,950$ times onto a hard surface; in experiment II it was dropped $N=150$ times onto a wool carpet. The measured frequencies of the different faces are listed in Table \ref{table_ushape}. Unlike the cuboid, the U-shaped die has no symmetry between the faces 3 and 4. However, the Gibbs model as given in Eq.~(\ref{Gib}) can still be applied using the heights $h_i$ of the center of gravity listed in Table \ref{table_ushape}. To calculate the corresponding energies $E_i$, the $h_i$ are normalized to the half-diagonal of 16.45~mm. The maximum likelihood method yields $\beta=5.11$ (experiment I) and $\beta=8.41$ (experiment II). The higher $\beta$ of the second experiment is clearly related to the wool carpet's softness, which tends to destabilize positions with a high center of gravity, thus making the probabilities more skewed towards the most stable positions. The probabilities of the Gibbs model are consistent with the data in terms of the $\chi^2$ test discussed in Section \ref{section_classical}, hence demonstrating that the Gibbs model extends to non-cuboidal dice. \begin{figure}[h] \centerline{\includegraphics[width=6cm]{fig_ushape.jpg}} \caption{Image of the U-shaped die. The digits are the indices of the visible faces.
Face 1 (opposite face 6) is hidden on the left, face 2 (opposite face 5) is hidden at the back, and face 3 (opposite face 4) is hidden at the bottom.} \label{fig_ushape} \end{figure} \begin{table}[t] \centering \begin{tabular}{|l|c|c|c|c|c|c|} \hline \bf{Face $i$} & \bf{1} & \bf{2} & \bf{3} & \bf{4} & \bf{5} & \bf{6} \\ \hline Heights of center of gravity $h_i$ [mm] & 10.0 & 11.5 & 7.61 & 5.39 & 11.5 & 10.0 \\ \hline $f_{i}$ experiment I ($N=1,950$) [\%] & 10.6 & 6.9 & 23.9 & 42.5 & 6.8 & 9.3 \\ $f_{i}$ experiment II ($N=150$) [\%] & 4.7 & 2.0 & 28.0 & 57.3 & 1.3 & 6.7 \\ \hline $p_{i}$ Gibbs model ($\beta=5.11$) [\%] & 10.4 & 7.3 & 21.9 & 43.6 & 6.5 & 10.4 \\ $p_{i}$ Gibbs model ($\beta=8.41$) [\%] & 5.9 & 3.3 & 20.0 & 62.2 & 2.7 & 5.9 \\ \hline \end{tabular} \caption{Results of tossing the U-shaped die shown in Figure \ref{fig_ushape}. Note that experiment II has a very small number $N$ and hence very large statistical uncertainties on the values $f_i$.} \label{table_ushape} \end{table} \section{Summary}\label{section_summary} This paper has shown that the face-probabilities of a tossed cuboid are well described by the Gibbs model defined via Eqs.~(\ref{Gib}) and (\ref{Ei}). These face-probabilities depend heavily on the tossing conditions -- an effect that can be accounted for by the Gibbs model by adjusting the free parameter $\beta$. Good fits of $\beta$ can be obtained via the maximum likelihood method of Eq.~(\ref{ML}). Typical values of $\beta$ range between 3 and 10. If differently shaped cuboids are all tossed using similar conditions (material, technique, etc.), then the face-probabilities of all these cuboids can be well approximated using a constant parameter $\beta$, estimated via the global maximum likelihood method of Eq.~(\ref{GML}). \begin{acknowledgements} The authors thank Robert Allin for valuable discussions about an earlier version of this paper. D.O.~acknowledges the discussions with Nick Jones. \end{acknowledgements}
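As a concrete illustration of the summary above, the Gibbs probabilities and the maximum likelihood fit of $\beta$ can be reproduced in a few lines. The sketch below is our own (not the authors' code) and uses only the center-of-gravity heights and experiment-I frequencies of Table \ref{table_ushape}; it exploits the fact that the likelihood equation reduces to matching the model's mean energy to the empirical one, which is monotone in $\beta$ and hence solvable by bisection.

```python
import math

# Center-of-gravity heights [mm] for faces 1..6 of the U-shaped die;
# energies are the heights normalized to the half-diagonal of 16.45 mm.
h = [10.0, 11.5, 7.61, 5.39, 11.5, 10.0]
E = [hi / 16.45 for hi in h]

def gibbs_probs(beta):
    """Face-probabilities p_i proportional to exp(-beta * E_i)."""
    w = [math.exp(-beta * Ei) for Ei in E]
    Z = sum(w)
    return [wi / Z for wi in w]

def fit_beta(freqs, lo=0.0, hi=50.0, iters=100):
    """Maximum likelihood fit of beta: the stationarity condition of the
    log-likelihood is <E>_model(beta) = <E>_data, and the model's mean
    energy decreases monotonically in beta, so a bisection suffices."""
    mean_data = sum(f * Ei for f, Ei in zip(freqs, E))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        mean_model = sum(p * Ei for p, Ei in zip(gibbs_probs(mid), E))
        if mean_model > mean_data:
            lo = mid            # model mean too high: beta must grow
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Measured frequencies of experiment I (N = 1,950 tosses).
f_exp1 = [0.106, 0.069, 0.239, 0.425, 0.068, 0.093]
beta_hat = fit_beta(f_exp1)
p_hat = gibbs_probs(beta_hat)
```

With these rounded inputs the fit lands close to the value $\beta=5.11$ quoted above, and face 4 (the lowest center of gravity) comes out most probable, as in Table \ref{table_ushape}.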
\section{Introduction} Light transport simulation can be notoriously hard. The main problem is that forming an image requires evaluating millions of infinite dimensional integrals, whose integrands, while correlated, may contain an infinity of singularities and different modes at disparate frequencies. Many approaches have been proposed to solve the rendering equation, though most of them rely on variants of Monte Carlo integration. One of the most robust algorithms, Metropolis light transport (MLT), was proposed by Veach and Guibas in 1997 \cite{Veach:1997:MLT} and has since been extended in many different ways. One of the most commonly used variants is primary sample space MLT \cite{Kelemen:2002}, partly because in some scenarios it is more efficient (though not always), partly because it is generally considered simpler to implement. However, both variants are still considered relatively complex compared to other algorithms that are not based on Markov chain Monte Carlo (MCMC) methods, or that employ a simplified target distribution \cite{Hachisuka:2011}. In this paper we show that the original primary sample space MLT uses a suboptimal target distribution, and that fixing the problem makes the algorithm more efficient while also greatly simplifying it. Inspired by this simpler formulation, we then propose a novel family of general Markov chain Monte Carlo algorithms called \emph{charted Metropolis-Hastings} (CMH). The core idea is to extend the concept of primary sample spaces into that of \emph{sampling charts} of the target space, extending the domain of the desired target distribution and introducing novel mutation types that swap charts and perform coordinate changes (analogous to those found in regular tensor calculus) in order to craft better proposals.
We then apply the new MCMC algorithm to light transport simulation, obtaining a class of algorithms called \emph{charted Metropolis light transport} (CMLT), which considers all local path sampling methods as parameterizations of the path space manifold, and employs stochastic path inversion as a way to perform coordinate transformations between charts. Our algorithm is made practical by avoiding the requirement to use fully invertible path sampling methods - a property we believe fundamental - and only requiring stochastic right inverses. This new type of algorithms can be seen as fundamentally bridging the difference between the original formulation of path space MLT and the primary sample space version, allowing the two to be easily combined. Finally, we briefly propose a novel scheme to integrate density estimation inside MCMC frameworks that exploits its robustness with respect to sampling near-singular and singular paths while maintaining overall simplicity and efficiency of implementation. \section{Main contribution} The main contribution of our paper is extending primary sample space MLT \cite{Kelemen:2002} by introducing mutations that allow us to \emph{swap} bidirectional sampling techniques at any time \emph{while preserving the underlying path}. Alternatively, adopting a different viewpoint, we could say our main contribution is the ability to freely apply all types of primary space mutations to any given path. The key strength, missing from the original primary space formulation, is the ability to break the path in the middle at any arbitrary point along it and to mutate the two resulting subpaths using the corresponding primary space perturbations, bringing back the flexibility of path space MLT, combined with primary space BSDF importance sampling.
This is achieved in two ways: the first is realizing that the single primary sample space defined in the original work of Kelemen et al \shortcite{Kelemen:2002} can be more flexibly thought of as a \emph{collection} of different primary sample spaces stitched together through Russian Roulette, with each space corresponding to a specific bidirectional sampling technique. The second is realizing that each primary sample space is nothing more than a parameterization of path space, and that if we could \emph{invert} them we could effectively transform this set of parameterizations into a proper atlas, where each primary space is a chart. Once this is achieved, crafting mutations that jump between the charts while not changing the represented path is just a matter of applying proper transformations and following the rules for maintaining detailed balance. However, this second step is made complicated by the fact that the parameterizations typically used in bidirectional path tracers are not always classically invertible, making it impossible to unambiguously recover the primary space coordinates of a given path. In fact, in the presence of layered materials, sampling the BSDF, which is at the core of any local path sampling technique, is often based on the use of non-injective maps from primary coordinates to the sphere of outgoing directions: for example, if a diffuse and a glossy layer are present, each outgoing direction might be sampled by both layers. In these cases the local primary sample space corresponding to each scattering event is typically divided into two or more strata, each of which maps to the entire sphere (or hemisphere) of directions.
Since this means we cannot employ the notion of charts used in standard manifold geometry, which requires the parameterizations to be invertible, we introduce the notion of \emph{sampling charts}, which, unlike their deterministic counterparts, do not rely on classical inverses but only require stochastic right inverses. This new definition allows moving freely between different primary sample spaces even in cases of ambiguity, employing the probability densities associated with these stochastic inverses to compute the transition probabilities needed to satisfy detailed balance. The rest of the paper is dedicated to explaining our framework in detail. In particular, the following sections are organized as follows: section 3 introduces some preliminaries required to properly frame the problem, as well as a simpler reformulation of primary sample space MLT in which all the primary spaces are kept explicitly separate; section 4 introduces our new framework in a very abstract and general mathematical setting; finally, section 5 details its application to light transport simulation, and sections 6 and 7 are dedicated to describing our massively parallel implementation of the algorithms and providing test results. This paper is a preprint of a SIGGRAPH publication \cite{SelfSiggraph:2017}. Concurrently to our work, Otsu et al \shortcite{Anon:0462} have developed a novel set of mutations relying on an inverse mapping from path space to primary sample space: while proposing different solutions and mathematical methods, our algorithms share a similar underlying idea.
\section{Preliminaries} Veach \shortcite{Veach:PHD} showed that light transport simulation can be expressed as the solution of per-pixel integrals of the form: \begin{equation} I_j = \int_{\Omega} f_j({\bf x}) d\mu({\bf x}) \end{equation} where $\Omega = \bigcup_{k=1}^{\infty} \Omega_k$ represents the space of light paths of all finite lengths $k$, $\mu$ is the area measure, and $j$ is the pixel index. For a path ${\bf x} = x_0 \rightarrow x_1 \dots \rightarrow x_k$, the integrand is defined by the \emph{measurement contribution function}: \begin{eqnarray} f_j({\bf x}) &=& L_e(x_0 \rightarrow x_1) \, G(x_0 \leftrightarrow x_1) \nonumber \\ &\cdot& \prod_{i=1}^{k-1} \big[ f_s(x_{i-1} \rightarrow x_i \rightarrow x_{i+1}) G(x_i \leftrightarrow x_{i+1}) \big] \nonumber \\ &\cdot& W_e^j(x_{k-1} \rightarrow x_{k}) \end{eqnarray} where $L_e$ is the surface emission, $W_e^j$ is the pixel response (or emitted importance), $f_s$ denotes the local BSDF and $G$ is the geometric term. To simplify notation, in the following we will simply omit the pixel index and write $f = f_j$ and $I = I_j$.
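To make the product structure of the measurement contribution function concrete, here is a toy numeric evaluation (a sketch of our own; all quantities are made-up scalars standing in for the emission, BSDF, geometry and response terms):

```python
# Toy evaluation of the measurement contribution function f(x) for a
# path x0..xk: emission, a product of BSDF and geometry terms over the
# interior vertices, and the pixel response. The numbers are arbitrary,
# purely to illustrate the product structure of the equation above.
def measurement_contribution(Le, fs, G, We):
    """Le: emission at x0->x1; fs[i]: BSDF at interior vertex x_{i+1};
    G[i]: geometry term of edge x_i<->x_{i+1}; We: pixel response."""
    assert len(G) == len(fs) + 1       # k edges, k-1 interior vertices
    val = Le * G[0]
    for f_s, g in zip(fs, G[1:]):
        val *= f_s * g
    return val * We

# A length-3 path (4 vertices, hence 2 interior vertices):
f_x = measurement_contribution(Le=5.0, fs=[0.3, 0.25],
                               G=[0.8, 1.2, 0.5], We=1.0)
```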
Veach further showed that if one employs a family $\mathcal{F}_k = \{s,t : s+t-1 = k\}$ of \emph{local path sampling} techniques to sample subpaths ${\bf y} = {y_0 \dots y_{s-1}}$ and ${\bf z} = {z_0 \dots z_{t-1}}$ from the light and the eye respectively, and build the joined path ${\bf x} = y_0 \dots y_{s-1} z_{t-1} \dots z_0$, an unbiased estimator of $I$ can be obtained as a \emph{multiple importance sampling} combination: \begin{equation} F = \sum_{s,t} C_{s,t}({\bf x}) \end{equation} with the following definitions: \begin{equation} C_{s,t}({\bf x}) = w_{s,t}C^*_{s,t} \end{equation} \begin{equation} C^*_{s,t}({\bf x}) = \frac{f({\bf x})} {p_{s,t}({\bf x})} \end{equation} \begin{equation} p_{s,t}({\bf x}) = p_s({\bf x}) p_t({\bf x}) \end{equation} \begin{equation} w_{s,t} = \frac{p_{s,t}({\bf x})} { \sum_{(i,j) \in \mathcal{F}_k} p_{i,j}({\bf x})} \end{equation} While a complete analysis of the above formulas is beyond the scope of this paper (we refer the reader to \cite{Veach:PHD}), we feel it is important to make the following: \paragraph{Remark:} if importance sampling is used, the connection term $C^*_{s,t}$ effectively contains only the parts of $f$ which have not been importance sampled; particularly, if $p_s$ and $p_t$ importance sample all terms of the measurement contribution function up to the $s$-th and $t$-th light and eye vertex respectively, $C^*_{s,t}$ will be proportional to the BSDFs at the connecting vertices times the geometric term $G(y_{s-1},z_{t-1})$. This is the only remaining singularity, which gets eventually suppressed in $C_{s,t}$ by the multiple importance sampling weight $w_{s,t}$. 
In fact, simplifying equation (4), one gets: \begin{equation} C_{s,t}({\bf x}) = \frac{f({\bf x})}{ \sum_{(i,j) \in \mathcal{F}_k} p_{i,j}({\bf x}) } \nonumber \end{equation} \subsection{The Metropolis-Hastings Algorithm} The Metropolis-Hastings algorithm is a Markov chain Monte Carlo method that, given an arbitrary target distribution $\pi(x)$, builds a chain of samples $X_1, X_2, \dots$ that admits $\pi$ as its stationary distribution, i.e. such that the distribution of $X_n$ converges to $\pi$ as $n \rightarrow \infty$. The algorithm is based on two simple steps: \paragraph{proposal:} a new sample $Y$ is obtained from $X=X_i$ by means of a \emph{transition kernel} $K(Y|X)$; \paragraph{acceptance-rejection:} $X_{i+1}$ is set to $Y$ with probability: \begin{equation} A(Y|X) = \min \left( 1, \frac{\pi(Y)K(X|Y)}{\pi(X)K(Y|X)} \right) \end{equation} and to $X_i$ otherwise. Importantly, note that $\pi$ can be defined up to a constant. In other words, if $\int \pi(x) dx = c$, the algorithm will simply admit $\pi/c$ as its stationary distribution. Finally, it is also possible to use mutations in which the proposal $K(Y|X) = K(Y)$ depends only on $Y$: in this case, the mutation type is called an independence sampler \cite{Tierney:1994}. \subsection{Primary sample space Metropolis light transport, revisited} Kelemen et al \shortcite{Kelemen:2002} showed that if one considers the transformation $T : U \rightarrow \Omega$ that is typically used to map random numbers to paths when performing forward and backward path tracing (i.e. when sampling eye and light subpaths), one can apply the Metropolis-Hastings algorithm on the unit hypercube $U$ instead of working in the more complex path space.
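The Metropolis-Hastings iteration of section 3.1 takes only a few lines of code. As a toy illustration (a sketch of our own, unrelated to any renderer), the chain below samples the unnormalized target $\pi(u) = u$ on the unit interval using a symmetric reflected-Gaussian random-walk proposal; the normalized target has mean $2/3$, which the chain average should approach:

```python
import random

def reflect(u):
    """Reflect into [0,1]; keeps the Gaussian random-walk proposal symmetric."""
    u = u % 2.0
    return 2.0 - u if u > 1.0 else u

def metropolis(target, n_steps, sigma=0.1, seed=0):
    rng = random.Random(seed)
    x = 0.5
    fx = target(x)
    samples = []
    for _ in range(n_steps):
        y = reflect(x + rng.gauss(0.0, sigma))   # proposal step
        fy = target(y)
        # acceptance-rejection: A = min(1, pi(y)/pi(x)) for a symmetric K
        if rng.random() < fy / fx:
            x, fx = y, fy
        samples.append(x)
    return samples

# Unnormalized target pi(u) = u: its normalized mean is 2/3.
samples = metropolis(lambda u: u, 200_000)
mean = sum(samples) / len(samples)
```

Note that the target only ever appears in the ratio $\pi(y)/\pi(x)$, which is why it can be defined up to a constant.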
The advantage is that crafting mutations in $U$ is much easier - a simple Gaussian kernel will do - and will often lead to better mutations, since they will naturally follow the local BSDFs.\footnote{This can, however, be detrimental in cases of complex occlusion, where the original path space MLT is generally superior. The reason is that the BSDF parameterizations might squeeze unoccluded, off-specular directions into vanishingly small regions of the primary sample space.} The only requirement is pulling back the desired measure from $\Omega$ to $U$, which is easily achieved by multiplying by the Jacobian of the transformation $T$, which is nothing more than the reciprocal of the path probability: \begin{equation} I = \int_U f(T(u)) \left|\frac{dT(u)}{du}\right| du = \int_U \frac{ f(T(u)) } {p(T(u))} du \end{equation} We now provide a novel formulation that improves the choice of mapping and target distributions compared to the ones employed by Kelemen et al \shortcite{Kelemen:2002}. In fact, what was done in the original work was to consider a mapping from the product of two infinite-dimensional unit hypercubes,\footnote{A formulation which, technically, poses some definition challenges, as infinite dimensional spaces do not possess a Lebesgue measure.} to the product space of light and eye subpaths sampled using Russian Roulette terminated path tracing. Furthermore, instead of simply considering the single path obtained by joining the two endpoints of the respective subpaths, and using the measurement contribution function as the target distribution, they considered the sum of the MIS weighted contributions from all paths obtained by joining any two vertices of the light and eye subpaths. The reason this was done is understandable: it was the historical way to perform bidirectional path tracing. In order not to \emph{waste} any vertex, one would \emph{reuse} all of them at the expense of some added correlation and some added shadow rays.
However, this is undesirable for several reasons: \paragraph{1.} by joining all vertices in the generated subpaths, and summing up all the weighted contributions from the obtained paths (which are in fact truly different paths, except for the fact they share their light and eye prefixes), they were using \emph{a target distribution which was no longer proportional to path throughput} (or, more precisely, the measurement contribution function we are finally interested in). In other words, the obtained paths have a \emph{skewed} distribution which is not necessarily optimal.\footnote{One can consider their technique to generate \emph{path bundles} and in this sense their target distribution is optimal for the constructed bundles, relative to the overall bundle contribution, but not for the individual paths.} \paragraph{2.} dealing with the infinite dimensional unit hypercubes introduces some unnecessary algorithmic complications, including the need for lazy coordinate evaluations. \paragraph{3.} by joining all vertices in the generated subpaths, we are introducing some additional sample correlation that might not necessarily improve the per-sample efficiency. In some situations, for example in the presence of incoherent transport or complex occlusion, it will in fact reduce it. \vspace{2mm} In light of these problems, we now propose a much simpler variant. Let's for the moment consider the space of paths of length $k$, and a single technique $i \in \mathcal{F}_k$ to generate them, where $i$ defines the number of light vertices and the number of eye vertices is given as $j = k+1-i$. If sampling $n$ vertices through path tracing requires $m \cdot n$ random numbers, we will consider the following definition of the primary sample space: \begin{equation} U_i = [0,1]^{m \cdot i} \times [0,1]^{m \cdot (k+1-i)}. 
\end{equation} The transformation $T = T_i : U_i \rightarrow \Omega_k$ will have the following Jacobian: \begin{equation} \left| \frac{dT(u)}{du} \right| = \frac{1}{ p_{i}(T(u)) }. \end{equation} We now have two options for the choice of our target distribution. The simplest is to set: \vspace{2mm} {\bf Definition}: \emph{Importance sampled distributions} \begin{equation} \pi_i(u) = \frac{ f(T(u)) }{ p_{i}(T(u)) }. \label{eqn:ISDistributions} \end{equation} This choice keeps the corresponding path space distribution invariant relative to the area measure $\mu$, as we have: \begin{eqnarray} \pi_i(u) du &=& \pi_i(u) p_i(T(u)) |d\mu(T(u))/du|du \nonumber \\ &=& \pi_i(u) p_i(T(u)) d\mu(T(u)) \nonumber \\ &=& f(T(u)) d\mu(T(u)) \nonumber \\ &=& \bar{\pi}(T(u)) d\mu(T(u)). \end{eqnarray} In other words, it ensures that the distributions $\pi_i(u)$ defined on the primary spaces $U_i$ all push forward to the same distribution $\bar{\pi}({\bf x}) = f({\bf x})$ in path space. \noindent The second choice is to use the following: \vspace{2mm} {\bf Definition}: \emph{Weighted distributions} \begin{equation} \pi_i(u) = w_i(T(u)) \frac{ f(T(u)) }{ p_i(T(u)) }, \label{eqn:WDistributions-1} \end{equation} exploiting the fact that, while now the corresponding path space distributions $\bar{\pi}_i({\bf x}) = w_i({\bf x})f({\bf x})$ are biased,\footnote{In practice instead of sampling $f$, they are sampling a version downscaled locally according to the efficiency of $p_i$.} our desired path space distribution $f$ is obtained as their sum: \begin{equation} \sum_{i\in\mathcal{F}_k}\bar{\pi}_i({\bf x}) = \sum_{i\in\mathcal{F}_k} w_i({\bf x}) f({\bf x}) = f({\bf x}). \end{equation} This definition leads to some interesting properties.
First and foremost, we have the following simplification: \begin{equation} \pi_i(u) = \frac{ f(T(u)) } {\sum_{j \in \mathcal{F}_k} p_j(T(u))} \label{eqn:SimplifiedWDistributions-1} \end{equation} Second, in each primary sample space the target distribution depends only on the path ${\bf x} = T(u)$, but not on the particular choice of technique $i$ used to generate it. In other words, if $u^i \in U_i$ and $u^j \in U_j$ map to the same path ${\bf x} = T_i(u^i) = T_j(u^j)$, we have: \begin{equation} \pi_i(u^i) = \pi_j(u^j) \end{equation} In particular, the target distribution depends only on how well the \emph{sum}\footnote{Equivalently, their average, since $\pi$ is here defined up to a constant.} of the individual pdfs $p_i$ approximates $f$. This is an interesting result, as we will see later on. Third, notice that if all bidirectional techniques are included in $\mathcal{F}_k$, the target distribution does not contain any of the weak singularities induced by the geometric terms. This is the case because each pdf includes all but one of the geometric terms: thus their sum will contain all of them, and counterbalance those in the numerator of (\ref{eqn:SimplifiedWDistributions-1}). In particular, this means there will be no singular concentration of paths near geometric corners.\footnote{The only remaining sources of singularities are Diracs in unsampled specular BSDFs in SDS paths (not containing any DD edge).} Notice that this would not have been the case if we simply adopted $\pi = f / p_i$, omitting the multiple importance sampling weight. \vspace{2mm} \subsection{Auxiliary Distributions} \v{S}ik et al \shortcite{Sik:2016} proposed using an auxiliary distribution in conjunction with replica exchange \cite{Swendsen:1986} to help the primary MLT chain escape from local maxima. The auxiliary distribution is designed to be easier to sample, and hence favor exploration.
Given they were working in the context of the original PSSMLT formulation where all connections are performed, they proposed using an auxiliary distribution with a target defined as 1 if any of the paths formed provides a non-zero contribution, and 0 otherwise. With our new primary sample space formulation, a similar but even easier objective can be achieved by simply dropping all connection terms except for visibility, i.e. the only terms which are not sampled by the $i$-th local path sampling technique, giving: \begin{equation} \pi_i'(u) = V(x_{i-1} \leftrightarrow x_{i}) \end{equation} which in path space becomes: \begin{equation} \bar{\pi_i}'(x) = V(x_{i-1} \leftrightarrow x_{i}) p_i(x) \end{equation} Notice that due to our use of primary sample space mutations, this function is very easy to sample, as our base sampling technique already generates samples distributed according to $p_i$. Importantly, we might not even need Metropolis at all, as we could simply use our path generation technique as an independence sampler, akin to the \emph{large steps} in the original work of Kelemen et al \shortcite{Kelemen:2002}. However, using Metropolis with local perturbations might still help in regions of difficult visibility. \vspace{2mm} \subsection{Handling color} In the above we treated $f$ as a scalar, though in practice it is actually a color represented either in RGB or with some other spectral sampling. While handling spectral rendering in all generality can require custom techniques \cite{Wilkie:2014:HWS} and is beyond the scope of this paper, for RGB (and even in many cases of spectral transport) it is sufficient to use the maximum of the components $f^* = \max_i\{(f)_i\}$ when constructing the target distribution, and weighting the resulting color samples accordingly before final image accumulation. 
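The maximum-component scalarization just described can be mimicked on a toy discrete ``path space'' (a sketch of our own; the RGB triples are arbitrary): indices are sampled in proportion to $f^* = \max_i\{(f)_i\}$ and each color sample is weighted by $f/f^*$ times the normalization constant, which recovers every channel in expectation.

```python
import random

# Toy discrete "paths" with RGB contributions; we sample an index with
# probability proportional to the scalar target f* = max component and
# weight each color sample by f/f* (times the normalization sum(f*)),
# mirroring the scalarization described above. Purely illustrative.
paths = [(0.9, 0.1, 0.0), (0.2, 0.8, 0.1), (0.05, 0.05, 0.6)]
f_star = [max(c) for c in paths]
b = sum(f_star)                        # normalization constant
cdf, acc = [], 0.0
for w in f_star:
    acc += w
    cdf.append(acc / b)

def sample_index(u):
    """Invert the discrete CDF of p_i = f*_i / b."""
    for i, c in enumerate(cdf):
        if u <= c:
            return i
    return len(cdf) - 1

rng = random.Random(1)
n = 200_000
est = [0.0, 0.0, 0.0]
for _ in range(n):
    i = sample_index(rng.random())
    for ch in range(3):
        est[ch] += paths[i][ch] / f_star[i] * b   # color weight f/f* times b
est = [e / n for e in est]

# Per-channel ground truth: sum of contributions over all "paths".
true = [sum(c[ch] for c in paths) for ch in range(3)]
```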
\vspace{10mm} \section{Charted Metropolis-Hastings} Before presenting our light transport algorithm, we introduce a novel family of general Markov chain Monte Carlo algorithms inspired by the primary sample space MLT formulation we just described. The idea is that we want to allow \emph{jumping} between different primary sample spaces, as this will allow the chain to escape more freely from local maxima in situations in which the current parameterization is not the best fit for the target distribution. Suppose in all generality that we have an arbitrary \emph{target space} $(\Omega,\mu)$, a function $f:\Omega \rightarrow \mathbb{R}$ we are interested in sampling, and a parametric family $\mathcal{F} = (U_i,T_i,R_i)_{i = 0,\dots,n-1}$, such that: \begin{description} \item $U_i$ is a measured \emph{primary sample space}; \item $T_i$, the \emph{forward map}, is a function $T_i:U_i \rightarrow \Omega$; \item $R_i$, the \emph{reverse map}, is a right-inverse of $T_i$, i.e. $R_i:\Omega \rightarrow U_i$ with: \begin{equation} T_i(R_i(x)) = x \quad \forall x \in \Omega; \end{equation} \end{description} Let's also consider the density $p_i:\Omega \rightarrow \mathbb{R}$ defined as the pdf of the transformation $T_i(U)$ of a uniform random variable\footnote{More precisely, $p_i$ is uniquely defined almost everywhere as the function that satisfies the equation: $P(T_i(U) \in A) = \int_{A} p_i(x) d\mu(x)$, for any measurable subset $A \subseteq \Omega$ and $U \sim \mathrm{Uniform}(U_i)$.}, and the function $r_i:U_i \rightarrow \mathbb{R}$ defined as its reciprocal: \begin{equation} r_i(u) = \frac{1}{p_i(T_i(u))}.
\nonumber \end{equation} Now, consider again the weighted distributions defined by: \begin{equation} \pi_i(u) = \frac{ f(T_i(u)) }{ \sum_j p_j(T_i(u)) } \label{eqn:WDistributions-2} \end{equation} The idea is that we could use the reverse maps $R_i$, which can be interpreted as inverse sampling functions, to perform the desired jumps between primary sample spaces, e.g.\ performing swaps in the context of a replica exchange framework where we run $n$ chains, each sampled according to a different $\pi_i$. We now show how to achieve this. \begin{figure} \fbox{\includegraphics[width=82.0mm]{cmh-fig}} \caption{Charted Metropolis-Hastings allows performing coordinate changes between the target space $\Omega$ and its sampling charts. When multiple points of a given sampling domain map to a single point in $\Omega$, it's sufficient for the right inversion mappings to return one of them (as for the case of $u_0$), or return one picked at random inside the set (as for the case of $u_3$) with the help of an additional sampling domain ($V_3$, light violet box). } \label{CMH} \end{figure} Given two states, $u_1^i$, generated by the $i$-chain, and $u_2^j$, generated by the $j$-chain, consider their target space mappings: \begin{eqnarray} {x_1} := T_i(u_1^i) \nonumber \\ {x_2} := T_j(u_2^j) \nonumber \end{eqnarray} and their \emph{reverse} mappings: \begin{eqnarray} {u_1^j} := R_j(x_1) \nonumber \\ {u_2^i} := R_i(x_2) \nonumber \end{eqnarray} If we now want to perform a swap, preserving detailed balance between the chains requires accepting the swap with probability: \begin{equation} A = \min \left( 1, \frac{ \pi_i(u_2^i) \pi_j(u_1^j) r_i(u_1^i) r_j(u_2^j) } { \pi_i(u_1^i) \pi_j(u_2^j) r_i(u_2^i) r_j(u_1^j) } \right) \label{eqn:PTAcceptanceRatio} \end{equation} This can be proven by looking at the two chains as an ensemble in the space $U_i \times U_j$, with target distribution $\pi_i \cdot \pi_j$.
Equation (\ref{eqn:PTAcceptanceRatio}) is then obtained from equation (8) following the usual Metropolis-Hastings rule described in section 3.1, viewing $(u_1^i,u_2^j)$ as the current state and $(u_2^i,u_1^j)$ as the proposal. In the previous section we saw that our target distributions $\pi_i$ assume the same value on the same points of $\Omega$, independently of the underlying technique $i$ used to generate it. Now since $R_i$ has been defined as a right inverse of $T_i$, if $u^j = R_j(T_i(u^i))$, we would again have: \begin{equation} \pi_j(u^j) = \pi_i(u^i). \end{equation} This property is essentially stating that our target distribution is invariant under a change of charts of the target space. Hence, equation (\ref{eqn:PTAcceptanceRatio}) simplifies to: \begin{equation} A = \min \left( 1, \frac{ r_i(u_1^i) r_j(u_2^j) } { r_i(u_2^i) r_j(u_1^j) } \right) \end{equation} without requiring any evaluation of the target distributions. Notice that we didn't require the transformations $T_i$ to be fully invertible: if the fiber of $x$ under $T_i$, i.e. the set $T^\leftarrow_i(x) = \{u | T_i(u) = x\}$, contains several points, it's sufficient that $R_i$ returns one of them. This approach is very general, as such a function can always be constructed. However, it can be made even more general by \emph{randomizing} the selection of the point in the fiber. We do so by extending the domains in which the functions $R_i$ operate. \paragraph{{\bf Definition}:} Sampling Atlas. We call \emph{sampling atlas} a family $\mathcal{F} = (U_i,V_i,T_i,R_i)_{i=0,...,n-1}$ where $U_i$ and $T_i$ are defined as before, but: \begin{description} \item $V_i$ is a measured \emph{reverse sampling space}, and \item $R_i$ is an \emph{extended right-inversion map}, $R_i:\Omega \times V_i \rightarrow U_i$, such that: \begin{equation} T_i(R_i(x,v)) = x \quad \forall x \in \Omega \quad \textrm{and} \quad \forall v \in V_i. 
\nonumber \end{equation} \end{description} Each tuple $(U_i,V_i,T_i,R_i)$ is called a \emph{sampling chart}. \vspace{2mm} With these definitions, we can draw two uniform random variables $v_1 \in V_j$ and $v_2 \in V_i$, and replace the reverse mappings $u_1^j$ and $u_2^i$ with: \begin{eqnarray} {u_1^j} := R_j(x_1,v_1) \nonumber \\ {u_2^i} := R_i(x_2,v_2) \nonumber \end{eqnarray} which can now be tested for acceptance with the same acceptance ratio: \begin{equation} A = \min \left( 1, \frac{ r_i(u_1^i) r_j(u_2^j) } { r_i(u_2^i) r_j(u_1^j) } \right). \nonumber \end{equation} This construction is depicted in Figure~\ref{CMH}, where: a. the chart $U_0$ contains two points, $u_0$ and $u'_0$, that map to the same point ${\bf x} \in \Omega$, but $R_{0}({\bf x})$ selects just one of them, in this case $u_0$; b. the chart $U_3$ contains an entire set that maps to ${\bf x}$, but its points are identified by means of points of the reverse sampling domain $V_3$. \vspace{2mm} A similar mathematical framework can be used in the context of serial (or simulated) tempering \cite{Marinari:1992}. In this context, one could run a single chain $u^i = (u,i)$ in an extended state space $U \times \mathcal{F}$, where $i$ denotes the technique used to map the chain to the target space. Drawing a uniform random variable $v \in V_j$ and swapping from $i$ to $j$ through the transformation: \begin{equation} u^j = R_j(T_i(u),v) \nonumber \end{equation} would then require accepting the swap with probability: \begin{equation} \min \left( 1, \frac{ r_i(u^i) } { r_j(u^j) } \right) \label{eqn:STAcceptanceRatio} \end{equation} and rejecting it otherwise. Once again, no evaluation of the target distributions is required. We call both this and the above mutations \emph{chart swaps} or \emph{coordinate changes}.
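As a numerical sanity check of the chart swap acceptance ratio, the toy below (a sketch of our own) builds two sampling charts of the target space $\Omega=(0,1)$, the identity and $u \mapsto u^2$, and verifies both the chart invariance of the weighted target and the fact that the full acceptance ratio collapses to a ratio of the $r$ terms alone:

```python
import math, random

# Toy sampling atlas over the target space Omega = (0,1):
#   chart 0: T0(u) = u      with density p0(x) = 1
#   chart 1: T1(u) = u**2   with density p1(x) = 1/(2*sqrt(x))
# Both maps are classically invertible here, so the right inverses need
# no auxiliary domain V. f is an arbitrary positive integrand on Omega.
T = [lambda u: u, lambda u: u * u]
R = [lambda x: x, lambda x: math.sqrt(x)]
p = [lambda x: 1.0, lambda x: 0.5 / math.sqrt(x)]

def f(x):
    return x * (1.0 - x) + 0.1

def pi(i, u):
    """Weighted distribution pi_i(u) = f(T_i(u)) / sum_j p_j(T_i(u))."""
    x = T[i](u)
    return f(x) / (p[0](x) + p[1](x))

def r(i, u):
    """r_i(u) = 1 / p_i(T_i(u))."""
    return 1.0 / p[i](T[i](u))

rng = random.Random(7)
u1_i, u2_j = rng.random(), rng.random()      # states of chains i=0, j=1
x1, x2 = T[0](u1_i), T[1](u2_j)              # target space mappings
u1_j, u2_i = R[1](x1), R[0](x2)              # reverse mappings

# Full swap acceptance ratio, and its simplification: since the weighted
# targets agree on the same point of Omega, the pi terms cancel out.
A_full = (pi(0, u2_i) * pi(1, u1_j) * r(0, u1_i) * r(1, u2_j)) / \
         (pi(0, u1_i) * pi(1, u2_j) * r(0, u2_i) * r(1, u1_j))
A_simple = (r(0, u1_i) * r(1, u2_j)) / (r(0, u2_i) * r(1, u1_j))
```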
Notice that if there is a way to craft mutations in the target space itself, it is always possible to add the identity chart to $\mathcal{F}$: \begin{description} \item $U_n = \Omega$, $V_n = \emptyset $ \item $T_n(x) = R_n(x) = x$; \end{description} care must only be taken in adding the probability $p_n = 1$ to the denominator of all the distributions $\pi_i$ in equation (\ref{eqn:WDistributions-2}). Finally, we consider another type of mutation, \emph{inverse primary space perturbations}, which can be in a sense considered the dual of the above. Suppose we are now running a chain in the target space $\Omega$, distributed according to $\pi({\bf x})$. We can then use inversion to momentarily parameterize the target space through a given technique $i$ and take a detour or \emph{move down} from $\Omega$ to $U_i$ to perform a symmetric primary sample space perturbation there, before finally getting back to $\Omega$. With this scheme, given a state ${\bf x}$ and a uniform random variable $v \in V_i$, applying the transformation $R_i$ to obtain $u = R_i({\bf x},v)$ and the perturbation kernel $K$ to obtain the proposal $u' = K(u)$ and ${\bf y} = T_i(u')$, would result in the following acceptance ratio: \begin{equation} A({\bf y}|{\bf x}) = \min \left( 1, \frac{ \pi({\bf y})K(u|u')r_i(u') } { \pi({\bf x})K(u'|u)r_i(u) } \right) \end{equation} which simplifies to the standard primary sample space formula if $K$ is symmetric: \begin{equation} A({\bf y}|{\bf x}) = \min \left( 1, \frac{ \pi({\bf y}) } { p_i({\bf y}) } \cdot \frac{ p_i({\bf x}) } { \pi({\bf x}) } \right). \end{equation} We call this family of MCMC algorithms that jump between charts of the target space \emph{charted Metropolis-Hastings}, or CMH. 
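The inverse primary space perturbation can likewise be sketched on a toy problem (our own illustration, with made-up target and chart): a chain living in the target space $\Omega = (0,1)$ with target $\pi(x) \propto x(1-x)$ is perturbed by moving down through the chart $T(u) = u^2$, for which $p(x) = 1/(2\sqrt{x})$, applying a symmetric kernel there, and mapping back:

```python
import math, random

# Toy inverse primary space perturbation: a target-space chain on
# Omega = (0,1) with unnormalized target pi(x) = x(1-x), perturbed
# through the chart T(u) = u**2, whose density is p(x) = 1/(2*sqrt(x)).
# All names and constants are illustrative only.
def T(u):  return u * u
def R(x):  return math.sqrt(x)        # deterministic right inverse
def p(x):  return 0.5 / math.sqrt(x)
def pi(x): return x * (1.0 - x)

def reflect(u):
    """Reflect into [0,1] to keep the primary space kernel symmetric."""
    u = u % 2.0
    return 2.0 - u if u > 1.0 else u

rng = random.Random(3)
x = 0.5
total = 0.0
n = 200_000
for _ in range(n):
    u = R(x)                                  # move down to the chart
    u_new = reflect(u + rng.gauss(0.0, 0.1))  # symmetric kernel K in U
    y = T(u_new)                              # back up to target space
    # A = min(1, (pi(y)/p(y)) * (p(x)/pi(x))) for a symmetric K
    a = (pi(y) / p(y)) * (p(x) / pi(x))
    if rng.random() < a:
        x = y
    total += x
mean = total / n
```

The $r$ factors hidden in the acceptance ratio exactly undo the warp of the chart, so the chain remains distributed according to $\pi$ in the target space; for this symmetric target the chain average should approach $1/2$.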
\begin{figure*} \fbox{\includegraphics[width=170.0mm]{charted-mlt}} \caption{A visualization of two path space charts, where one of the bidirectional sampling techniques, in this case $T_{3,2}$, maps multiple points to the same selected path, while $T_{2,3}$ is locally invertible. Notice how a naive transfer of coordinates such as that employed in MMLT (dashed gray lines) could result in a very different path. } \label{CMLT-fig} \end{figure*} \section{Charted Metropolis Light Transport} It should now be clear how the above algorithms can be applied to light transport simulation. If we consider the framework for primary sample space MLT outlined in section 3.2, it is sufficient to add functions for \emph{path sampling inversion} to be able to apply our new charted Metropolis-Hastings replica exchange or serial tempering mutations in conjunction with the standard set of primary sample space perturbations. The advantage of these mutations is that they allow the chain to escape more easily from local maxima when the current sampling technique is not locally the best fit for $f$. The mutations are relatively cheap, as they don't require any expensive evaluations of the target distribution. Moreover, and very importantly, the algorithm is made practical by not requiring the path sampling functions $T_i$ to be classically invertible. In the context of light transport simulation this property is crucial, as BSDF sampling is seldom invertible: in fact, with layered materials a random decision is often taken to decide which layer to sample, but the resulting output directions could be equally sampled (with different probabilities) by more than one layer. Our framework requires returning just one of them, but it also allows selecting one at random with a proper probability. All that is needed is the ability to compute the density of the resulting transformation.
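The stochastic right inverse just described can be captured by a small toy (a sketch of our own, with made-up one-dimensional ``lobes'' in place of real BSDF layers): the forward map picks a lobe via a stratified selection coordinate and warps a local coordinate through that lobe's inverse CDF; the inverse picks a lobe with probability proportional to its contribution to the marginal density and inverts the corresponding CDF.

```python
import math, random

# Toy two-lobe sampler on (0,1), mimicking a layered material: the
# forward map T uses its first primary coordinate u_sel to pick a lobe
# (each with weight 1/2), then warps u_loc through that lobe's inverse
# CDF. Lobe 0 is uniform, lobe 1 has density q1(x) = 2x. The map is not
# injective: both strata can produce the same x. Names are our own.
W = 0.5                                      # lobe selection weight

def T(u_sel, u_loc):
    return u_loc if u_sel < W else math.sqrt(u_loc)

def q(x):
    return (1.0, 2.0 * x)                    # per-lobe densities at x

def p(x):
    q0, q1 = q(x)
    return W * q0 + W * q1                   # marginal sampling density

def R(x, v):
    """Stochastic right inverse: pick a lobe with probability
    proportional to its contribution to p(x), then invert its CDF.
    v is a uniform sample from the reverse sampling space V."""
    q0, q1 = q(x)
    if v < W * q0 / p(x):
        return (0.25, x)                     # any point of stratum 0
    return (0.75, x * x)                     # invert the CDF of lobe 1

# Right-inverse property: T(R(x, v)) == x for every x and v.
rng = random.Random(11)
for _ in range(1000):
    x = rng.random()
    u_sel, u_loc = R(x, rng.random())
    assert abs(T(u_sel, u_loc) - x) < 1e-12
```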
This construction is illustrated in Figure~\ref{CMLT-fig}, which shows how the same path ${\bf x}$ can be represented both in the chart corresponding to the bidirectional technique $(2,3)$ and the one corresponding to the technique $(3,2)$, where the latter contains two distinct points, $u_{3,2}$ and $u'_{3,2}$, that map to ${\bf x}$. In the picture $R_{3,2}({\bf x})$ selects just one of them, in this case $u'_{3,2}$. Further on, by adding the identity target space chart, we can also add the original path space mutations proposed by Veach and Guibas \shortcite{Veach:1997:MLT}, potentially coupled with the new inverse primary space perturbations. We call the family of such algorithms \emph{charted Metropolis light transport}, or CMLT. \subsection{Connection to path space MLT} The new algorithms can be considered as a bridge between primary sample space MLT and the original path space MLT proposed by Veach and Guibas \shortcite{Veach:1997:MLT}. In fact, one of the advantages of the original formulation over Kelemen's variant \shortcite{Kelemen:2002} was its ability to \emph{break the path in the middle} and resample the given path segment with any arbitrary bidirectional technique. This ability was entirely lost in primary sample space, as the bidirectional sampling technique was implicitly determined by the sample coordinates (or needed to be chosen ahead of time in the version we outlined in section 3.2). While Multiplexed Metropolis Light Transport (MMLT) \cite{Hachisuka:2014} added the ability to change technique over time, as the coordinates $u$ were kept fixed such a scheme led to swap proposals that sample unrelated paths: in fact, two techniques $i$ and $j$ map the same coordinates $u$ to different paths $T_i(u) \neq T_j(u)$ that share only a portion of their prefixes (in other words, the two resulting paths are \emph{spuriously} correlated by the algorithm, whereas in fact there is no reason for them to be - see Figure~\ref{CMLT-fig}).
Our coordinate changes, in contrast, \emph{preserve} the path while changing its parameterization, so that it can simply be perturbed later on with a different bidirectional sampler. Adding the identity path space chart and inverse primary space perturbations makes the connection even tighter, allowing the original bidirectional mutations and perturbations to be smoothly integrated with an entirely new set of primary sample space perturbations. Notice that while inverse primary space perturbations could also be applied to a single path space chain, the advantage of also incorporating primary space chains in a replica exchange or serial tempering context is that the target distributions (defined by equation~\ref{eqn:WDistributions-2}) become generally smoother due to the implicit use of the multiple importance sampling weight, raising the acceptance rate. \subsection{Alternative parameterizations} While the original primary sample space Metropolis light transport algorithm used path space parameterizations based on plain BSDF sampling, it is also possible to use other parameterizations that can provide further advantages: for example the half vector space parameterizations that have been recently explored \cite{Kaplanyan:2014:HSLT,Hanika:2015:IHLST}. \subsection{Density estimation} So far, we have concentrated on standard bidirectional path tracing with vertex connections. However, all the above extends naturally to density estimation methods, using the framework outlined in \cite{Hachisuka:2012}. The only major difference is the computation of the subpath probabilities. Here, however, we suggest an alternative approach. Instead of using density estimation as an additional technique, applying multiple importance sampling to combine it into a unique estimator, we can use it only to craft additional proposals. In other words, we can use density estimation as another independence sampler.
Suppose we are running an MCMC simulation in $\Omega_k$, and at some point in time our chain is in the state $u^i$, with $s = i$, and $t = k - s + 1$. We can then try to build a candidate path through density estimation with the $(s+1,t)$-technique and, if the resulting path has non-zero contribution, we can drop one light vertex (and the corresponding primary sample space coordinates) and consider it as a new proposal $u^i_{de}$. Notice that in doing so, we have to adjust the acceptance ratio for the actual proposal distribution. For clarity, we will now omit the superscript $i$, and obtain: \begin{equation} A(u_{de}|u) = \min \left( 1, \frac{ \pi(u_{de})p_{de}(T(u)) } { \pi(u)p_{de}(T(u_{de})) } \right) \end{equation} where $p_{de}(x)$ is the probability of sampling the path $x$ by density estimation (which can be approximated at the cost of some bias as described in \cite{Hachisuka:2012} or estimated unbiasedly as in \cite{Qin:2015:UPG}). If we want to further raise the acceptance rate, we can also mix this proposal scheme with an independence sampler based on bidirectional connections and combine the two, calculating the total expected probability to make both more robust: \begin{equation} A(u'|u) = \min \left( 1, \frac{ \pi(u')(p_{de}(T(u)) + p_{bc}(T(u))) } { \pi(u)(p_{de}(T(u')) + p_{bc}(T(u'))) } \right). \end{equation} Notice that this formula is now agnostic of how the samples were generated in the first place, i.e. whether the candidate $u'$ was proposed by density estimation or bidirectional connections: this is a positive side-effect of using expectations.\footnote{While this looks similar to multiple importance sampling, it is not quite the same: multiple importance sampling is a more general technique used to combine estimators, whereas here we are just interested in computing an expected probability density, using so-called state-independent mixing \cite{Geyer:2011}.
However, multiple importance sampling using the balance heuristic is equivalent to using an estimator based on the average of the probabilities, which is exactly the expected probability we need: hence the similarity. This approach is the same as that used in the original MLT to compute the expected probability of bidirectional mutations.} \begin{figure} \fbox{\includegraphics[width=82.0mm]{pipeline}} \caption{A schematic visualization of the basic bidirectional path tracing pipeline, showing the different shading and tracing kernels. Notice that while they are shown here side by side, light path tracing and eye path tracing happen in subsequent phases of the algorithm. } \label{PT} \end{figure} \vspace{8mm} \subsection{Designing a complete algorithm} So far we have only constructed a theoretical background to build novel algorithms, but we have not yet prescribed practical recipes. The way we combine all the above techniques into an actual algorithm is described here. First of all, we start by estimating the total image brightness with a simplified version of bidirectional path tracing. The algorithm first traces $N_{init}$ light subpaths in parallel and stores all generated light vertices. It then proceeds to trace $N_{init}$ eye subpaths, and connects each eye vertex to a single light vertex chosen at random among the ones we previously stored. At the same time, the emission distribution function at each eye vertex is considered, forming pure path tracing estimators from light subpaths with zero vertices. All evaluated connections (both implicit and explicit) with non-zero contribution (which represent entire paths, each with a different number of light and eye vertices $s$ and $t$) are stored in an unordered list. Second, in order to remove startup bias, we resample a population of $N$ seed paths for a corresponding number of chains.
In order to do this, we build the cumulative distribution over the scalar contributions of the previously stored paths, and resample $N$ of them randomly. Notice that the $N$ seed paths will be distributed according to their contribution to the image. In particular, the number of paths sampled with technique $i$ will be proportional to the overall contribution of that technique, and similarly for path length. At this point, though not crucial for the algorithm, we sort the seeds by path length $k$ so as to improve execution coherence in the next stages. In practice, sorting divides the $N$ seeds into groups of $N_k$ paths each, such that $\sum_k N_k = N$. Finally, we run the $N$ Markov chains in parallel using both classic primary sample space perturbations and the novel simulated tempering or replica exchange mutations described in sections 3 and 4. As the new mutations have a low cost compared to performing actual perturbations, they can be mixed in rather frequently (with very low overhead, up to once every four iterations). \RestyleAlgo{boxruled} \begin{algorithm} \KwData{x, $\omega_i$, $\omega_o$} \KwResult{u (primary space coordinates)} { probs[] $\leftarrow$ layerSamplingProbabilities(x,$\omega_i$)\; prob\_sum $\leftarrow$ probs[diffuse] + probs[glossy]\; v $\leftarrow$ random() * prob\_sum\; \eIf{v $<$ probs[diffuse]}{ u $\leftarrow$ (v, invertLambert(x,$\omega_i$,$\omega_o$))\; }{ u $\leftarrow$ (v, invertGGX(x,$\omega_i$,$\omega_o$))\; } } \caption{Inversion of a composite BSDF containing a diffuse and a glossy layer} \end{algorithm} \section{Implementation} We implemented our algorithm, together with MMLT, PSSMLT and bidirectional path tracing (BPT) in CUDA C++, exposing massive parallelism at every single stage, including ray tracing, shading, CDF construction (prefix sum), resampling and sorting (radix sort).
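In serial form, the seeding stage of section 5.4 (CDF construction, resampling proportional to contribution, and the path-length sort) can be sketched as follows; this is a stand-in for the parallel prefix-sum and radix-sort version, and the record layout is illustrative:

```python
import bisect
import random

def resample_seeds(paths, n, rng):
    """paths: a list of (contribution, path_length, payload) records stored
    during the initial bidirectional path tracing pass. Draws n seeds with
    probability proportional to contribution via the cumulative
    distribution, then sorts them by path length k for execution
    coherence."""
    cdf, acc = [], 0.0
    for contribution, _, _ in paths:
        acc += contribution
        cdf.append(acc)
    seeds = [paths[bisect.bisect_left(cdf, rng.random() * acc)]
             for _ in range(n)]
    seeds.sort(key=lambda record: record[1])  # group chains by path length k
    return seeds

# Hypothetical recorded connections: (contribution, number of vertices k, id).
recorded = [(5.0, 2, 'a'), (1.0, 3, 'b'), (4.0, 2, 'c')]
rng = random.Random(3)
seeds = resample_seeds(recorded, 10_000, rng)
frac_a = sum(1 for s in seeds if s[2] == 'a') / len(seeds)  # expected near 0.5
```

Because each seed is drawn in proportion to its contribution, the resulting chain population automatically reflects the energy split among techniques and path lengths.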
The basic bidirectional path tracing algorithm is constructed as a pipeline of kernels (also known as wavefront tracing \cite{Laine:2013:MCH}), and relies on the OptiX Prime library for ray tracing. We ran all tests on an NVIDIA Maxwell Titan X GPU. The basic bidirectional path tracing pipeline, composed of seven shading and tracing stages, is shown schematically in Figure~\ref{PT}. This pipeline is further extended in all the MCMC rendering algorithms by additional stages performing primary sample space coordinate generation (applying both perturbations and chart swaps in the case of CMLT), and the final acceptance-rejection step. All the pipeline stages communicate through global memory work queues. In order to keep storage and bandwidth consumption to a minimum, only minimal information is stored for each path vertex (including instance id, primitive id and uv coordinates), using on-the-fly vertex attribute interpolation where needed (such as during path inversion). For $256K$ paths, of a maximum of 10 vertices each, this requires about 64MB of storage. Both our CMLT and MMLT implementations run several thousand chains in parallel, using the seeding algorithm described in section 5.4. Besides being strictly necessary to scale to massively parallel hardware, we found this to produce some additional image stratification, as discussed in the Results section. The CMLT implementation is based on the serial tempering formulation. Our framework employs a layered material system that combines a diffuse BSDF (Lambertian) and rough glossy reflection and transmission BSDFs (GGX) using a Fresnel weighting. Sampling of the glossy component is implemented using the distribution of visible normals \cite{heitz:hal-2014}, and selection between the diffuse and glossy components is performed based on Fresnel weights.
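The Fresnel-based layer selection just mentioned can be sketched as follows; the Schlick approximation and the albedo values below are illustrative stand-ins, not the exact weighting used in our implementation:

```python
def schlick_fresnel(cos_theta, f0):
    """Schlick's approximation to the Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def layer_sampling_probabilities(cos_theta, f0=0.04,
                                 diffuse_albedo=0.8, glossy_albedo=1.0):
    """Forward probabilities of picking the diffuse or the glossy layer;
    the same probabilities later drive the layer choice during inversion
    (Algorithm 1). Weighting each layer by Fresnel times albedo is one
    plausible choice among several."""
    f = schlick_fresnel(cos_theta, f0)
    w_glossy = f * glossy_albedo
    w_diffuse = (1.0 - f) * diffuse_albedo
    total = w_diffuse + w_glossy
    return w_diffuse / total, w_glossy / total
```

Note that the probabilities depend only on the incident direction and the local surface data, which is what makes it possible to recompute them exactly during inversion.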
Clearly, this path sampling scheme is not invertible, as both the diffuse and glossy components can map different primary sample space values to the same outgoing directions. Hence, we used the machinery described in section 4 to enable randomized inversion. \subsection{Chart swaps and path inversion} Given a bidirectional path generated by the technique $(s,t)$ using coordinates $u$, in order to perform a chart swap we propose a new pair $(s',t')$ with $s' + t' = s + t$ distributed according to the total energy of the techniques (i.e. the normalization constants of the target distributions). After the candidate is sampled, path inversion needs to be performed using the transformation $u' = R_{s',t'}(T_{s,t}(u))$. This transformation can be greatly optimized by noticing that there are only two cases: \begin{description} \item $s' > s$: in this case it is only necessary to invert the coordinates of the light subpath vertices $\{y_s, ..., y_{s' - 1}\}$. \item $t' > t$: in this case it is only necessary to invert the coordinates of the eye subpath vertices $\{z_t, ..., z_{t' - 1}\}$. \end{description} Computing the inverse pdf $r_{s,t}$ can be optimized analogously. In each of these cases, we start the stochastic inversion from the end of the selected subpath, and proceed backwards. At each vertex, we consider the local composite BSDF, and compute the forward probabilities originally used to select which layer to sample (for example, based on their Fresnel-weighted albedos). Using these, we draw a single random number $v$ to select which of the layers to use for inversion. Pseudocode for a material with a diffuse and a glossy layer is provided in Algorithm 1. Pseudocode for a serial version of the overall CMLT algorithm is given in Algorithm 2. The Appendix provides further details and pseudocode for the inversion of typical BSDFs. \section{Results} \input{simple-test.txt} We performed two sets of tests.
The first is aimed at testing the many possible algorithmic variations of CMLT on a simplified light transport problem. The second, using full light transport simulation, compares a single CMLT variant against MMLT, which could be currently considered state-of-the-art in primary sample space MLT. \subsection{Simplified light transport tests} This test consists of rendering an orthographic projection of the $XY$ plane directly lit by two area light sources. The first light is a unit square on the plane $Y = 0$, with a spatially varying emission distribution function changing color and increasing in intensity along the $X$ axis. The light source is partially blocked by a thin black vertical strip near its area of strongest emission. The second light is another unit square on the plane $Y = 1$, with uniform green emission properties. This light is completely blocked except for a tiny hole. In this case, our path space consists of two three-dimensional points: the first on the ground plane, the second on the light source. As charts, we used two different parameterizations: \paragraph{{\bf 1.}} generating a point uniformly on the visible portion of the ground plane and a point on the light sources distributed according to their spatial emission kernels - corresponding to path tracing with next-event estimation, i.e. the bidirectional path tracing technique $(s,t) = (1,1)$; \paragraph{{\bf 2.}} generating a point uniformly on the visible portion of the ground plane, sampling a cosine distributed direction, and intersecting the resulting ray with the scene geometry to obtain the second point - corresponding to pure path tracing, i.e. the bidirectional path tracing technique $(s,t) = (0,2)$. \vspace{2mm} Both charts have a four dimensional domain, and in both cases we used exact inverses of the sampling functions. 
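For reference, the direction-sampling half of chart 2 and its exact inverse can be sketched as follows; this is a plain Python transcription of the standard cosine-weighted hemisphere mapping, matching the Lambertian formulas given in the Appendix (the function names are ours):

```python
import math
import random

TWO_PI = 2.0 * math.pi

def sample_cosine_direction(u, v):
    """Map (u, v) in [0,1)^2 to a cosine-distributed direction on the
    hemisphere: theta = arccos(sqrt(v)), phi = 2 pi u."""
    theta, phi = math.acos(math.sqrt(v)), TWO_PI * u
    sin_t = math.sin(theta)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), math.cos(theta))

def invert_cosine_direction(d):
    """Exact inverse: recover (u, v) from the direction d."""
    phi = math.atan2(d[1], d[0]) % TWO_PI  # wrap atan2's (-pi, pi] to [0, 2 pi)
    return phi / TWO_PI, d[2] * d[2]       # u = phi / (2 pi), v = cos^2(theta)

# Round trip: the inverse recovers the original primary sample exactly
# (up to floating point precision).
rng = random.Random(0)
for _ in range(1000):
    u, v = rng.random(), rng.random()
    u2, v2 = invert_cosine_direction(sample_cosine_direction(u, v))
    assert abs(u - u2) < 1e-9 and abs(v - v2) < 1e-9
```

Because both charts admit such exact inverses, the inverse pdfs $r_i$ in this test are simple Jacobian factors and introduce no extra randomness.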
We tested six different MCMC algorithms: \begin{description} \item {\bf PSSMLT-1}: a single PSSMLT chain using the first parameterization; \item {\bf PSSMLT-2}: a single PSSMLT chain using the second parameterization; \item {\bf PSSMLT-AVG}: two PSSMLT chains using both the first and the second parameterization, both using the importance-sampled distributions (equation~\ref{eqn:ISDistributions}), where the accumulated image samples are weighted (i.e. averaged) through multiple importance sampling with the balance heuristic; \item {\bf PSSMLT-MIX}: two PSSMLT chains using both the first and the second parameterization, with the weighted distributions (equation~\ref{eqn:WDistributions-1}); \item {\bf CMLT-IPSM}: a single CMLT chain in path space alternating inverse primary space mutations using the first and the second parameterizations; \item {\bf CMLT}: two CMLT chains using both the first and the second parameterization as charts, coupled with replica-exchange swaps performed every four iterations; \end{description} Results are shown in Figure~\ref{CMH-tests}, while their root mean square error (RMSE) is reported in Table 1. All images except for the reference were produced using the same total number of samples $n = 16 \cdot 10^6$: PSSMLT-1, PSSMLT-2 and CMLT-IPSM ran a single chain of length $n$, whereas PSSMLT-AVG, PSSMLT-MIX and CMLT ran two chains of length $n/2$. In Table 1 we further report RMSE values for $n = 128 \cdot 10^6$. The reference image was generated by plain Monte Carlo sampling. As can be noticed, our PSSMLT-MIX formulation using the distributions defined by equation (\ref{eqn:WDistributions-1}) is superior to simply averaging two PSSMLT chains using multiple importance sampling (PSSMLT-AVG), which is in fact worse than PSSMLT using a single chain according to the second distribution (PSSMLT-2).
\input{simple-test-table.txt} CMLT-IPSM produces results that are just slightly worse than PSSMLT-MIX, but still superior to all other PSSMLT variants. The reason why CMLT-IPSM is inferior to PSSMLT-MIX is that while the target distribution for CMLT-IPSM is proportional to $f$, the target distributions of the chains in PSSMLT-MIX are smoother due to the embedded multiple importance sampling weights, and contain no singularities. Finally, CMLT produces the best results among all algorithms. \subsection{Full light transport tests} For these tests we compared the CMLT implementation described in section 5 against our own implementation of MMLT. We provide five test scenes representative of different transport phenomena: \begin{description} \item {\bf The Gray \& White Room}: a scene from Bitterli's repository \shortcite{resources16}. \item {\bf Escher's Room}: an M.C. Escher themed adaptation of the above scene, featuring multi-layer materials with variable surface properties. This scene contains many light sources of different size: the large back wall, with a variable Lambertian emission distribution displaying a famous painting by the artist, a smaller area light on the ceiling, and the external lighting coming from the windows. The smaller light is partially blocked by a rough glossy reflector, which causes a blurry caustic on the partially glossy ceiling. Notice how all the above elements contribute to forming an \emph{all-frequency} lighting scenario. \item {\bf Escher's Glossy Room}: a variation of the above scene in which all surfaces are glossy (with no diffuse component), with variable roughness (with GGX exponents ranging between 5 and 100). Notice that this scene contains a variety of caustics of all frequencies (in a sense, all lighting is due to caustics). This scene stresses the advantages of chart swaps in the presence of near-specular transport, where there are many narrow modes and there is often no single best sampling technique. 
\item {\bf Wall Ajar}: another variation of the above scene mimicking Eric Veach's famous scene \emph{the door ajar}. Most of the lighting in the scene comes from a narrow opening in the \emph{sliding} back wall, which covers an equally large but completely hidden emissive wall. Hence, the room is almost entirely indirectly lit, except for the blue light coming from the windows. The ceiling area light source is also considerably smaller, casting a sharper caustic, and most surfaces are now about half diffuse half glossy. The sofa also features some rough transmission. \item {\bf Salle de bain}: another scene from Bitterli's repository \shortcite{resources16}. While in terms of light transport this scene is considerably simpler than any of the others, we chose it as representative of some typical architectural lighting situations. \end{description} It is important to note that while the first four scenes look superficially similar, they stress entirely different transport phenomena. Moreover, all of them, while relatively simple in terms of geometric complexity, are very hard in terms of pure light transport, requiring between $16 \cdot 10^3$ and $64 \cdot 10^3$ samples per pixel (spp) for bidirectional path tracing to converge. Figure~\ref{CMLT-same-time} shows equal-time comparisons of MMLT and CMLT on all scenes. Except for the last row, both the MMLT and CMLT renders were generated using 256 spp, taking roughly the same computation time, whereas the reference images have been rendered with bidirectional path tracing using $16 \cdot 10^3$ spp. The images in the last row used 512 spp for MMLT and CMLT, and $64 \cdot 10^3$ spp for the reference image. CMLT produces considerably less noise on all test scenes. In particular, it is very effective in cases of complex glossy reflections and reflections of caustics, where there is no clear winner among all bidirectional sampling techniques. 
Figure~\ref{CMLT-convergence} shows the convergence of MMLT and CMLT on the Salle de bain scene. Notice how MMLT needs more than twice as many samples as CMLT to get approximately the same RMSE. In the early stages, MMLT is not capable of finding many important light paths, leading to an apparently darker image (due to energy being concentrated on a subset of the pixels); the difference vanishes at higher sampling rates. Figure~\ref{CMLT-convergence-2} shows a similar graph comparing also to PSSMLT. Since each PSSMLT sample requires both more shadow rays and BSDF evaluations, in our implementations PSSMLT can perform roughly one half the mutations as CMLT in the same time. Finally, Figure~\ref{CMLT-stratification} shows the effect of varying the number of chains run in parallel, trading it against chain length to keep the total number of samples fixed. The images in the top row are obtained running $32 \cdot 10^3$ chains in parallel, whereas the ones in the bottom row are obtained using $256 \cdot 10^3$ chains. It can be seen that using more, shorter chains generally improves stratification and greatly reduces the spotty appearance typical of Metropolis autocorrelation. The exception is the caustic on the ceiling that benefits from the higher adaptation of the longer chains. Note that the additional stratification is similar to the one obtained by ERPT \cite{Cline:2005}, which, however, used a different, per-pixel chain distribution strategy (as opposed to our global resampling stage), and was not specifically targeted at introducing massive parallelism. While Cline et al.\ \shortcite{Cline:2005} discussed only the stratification benefits, we believe it is worth documenting what seems an intrinsic tradeoff between local exploration and better stratification: running more, shorter chains generally helps image stratification, while necessarily losing some exploration capabilities in narrow regions of path space.
In all cases, for CMLT we used one chart swap proposal every 16 mutations, resulting in negligible overhead. \input{parallel-convergence.txt} \input{main-tests.txt} \input{progressive-tests.txt} \subsection{Performance analysis} On our system, the 1024 spp CMLT and MMLT images take roughly 80s to render at a resolution of $1600 \times 900$ using $256 \cdot 10^3$ chains. Figure~\ref{PerfBreakdown} shows a performance breakdown on Salle de bain: roughly 50\% of the time is spent in ray tracing, with shading taking 45\%, and the initial path sampling and path inversion taking roughly 2.5\% each. If we substantially reduce the number of chains we start to notice a slowdown due to underutilization of the hardware resources, mostly caused by insufficient parallelism in the late stages of the bidirectional path tracing pipeline needed to process longer-than-average paths. This could likely be mitigated by better scheduling policies, for example not requiring all chains to be processed in sync (currently we finish applying a mutation to all paths before starting to process the next). \section{Discussion} We proposed a novel family of MCMC algorithms that use sampling charts to extend the sampling domain and allow better exploration. We applied the new scheme to propose a new type of light transport simulation algorithms that bridge primary sample space and path space MLT. We also showed that the new algorithms arising from this framework require implementing only a new set of relatively cheap mutations that can be constructed using simple, stochastic right inverses of the path sampling functions: in particular, the fact that our framework requires only this type of probabilistic inversion is what makes the algorithm practical, as classical BSDF inversion with layered material models is generally impossible. We believe this to be a major strength of our work.
We implemented both the old and new methods exposing massive parallelism at all levels, and showed how increasing the number of chains that run in parallel can increase stratification. Finally, we suggested a novel, simpler method to integrate path density estimation into MCMC light transport algorithms as a mechanism to craft independent proposals. \subsection{Future work} There are multiple avenues along which this work could be extended. The first is testing all possible variants of our new algorithmic family more thoroughly. In such a context, it will be particularly interesting to test the combination with the original path space MLT mutations, which might provide some advantages in regions with complex visibility. Similarly, it would be interesting to test the new technique for including path density estimation as an independence sampler. Another potential avenue is to consider \emph{dimension jumps} to switch between the charts underlying different path spaces $\Omega_k$ and $\Omega_{k'}$. This could be achieved using the \emph{Metropolis-Hastings-Green with Jacobians} algorithm as described by Geyer \shortcite{Geyer:2011}. Finally, it would be interesting to integrate half vector space light transport \cite{Kaplanyan:2014:HSLT,Hanika:2015:IHLST} as yet another path space chart. \paragraph{{\bf Acknowledgements}} We would like to thank Cem Cebenoyan at NVIDIA for constantly supporting our work; Luca Fascione and Marc Droske at Weta Digital for early reviews and continuous feedback; Matthias Raab at NVIDIA for helping us with modern layered material sampling methods; Nicholas Hull and Nir Benty at NVIDIA for their precious help with the setup and import of the original Gray \& White Room and Salle de bain scenes; and Thomas Iuliano for providing beautiful artwork that ought to have been included in this paper, and was not for mere lack of time.
Finally, we would like to thank the anonymous SIGGRAPH reviewers, particularly \#30, for their detailed comments which led to significant improvements in the exposition of the paper. \section{Appendix} We here describe how to invert the sampling functions for typical BSDF layers as needed to implement chart swaps. The key insight is that most common BSDF sampling methods can be seen as bijective functions $S(\omega_i)$ from the unit square to the hemisphere of directions: \begin{eqnarray} S(\omega_i):[0,1]^2 &\rightarrow& H \\ (u,v) &\mapsto& \omega_o \nonumber \end{eqnarray} where the notation $S(\omega_i)$ denotes the potential dependence on the incident direction $\omega_i$. Hence, in order to perform BSDF inversion, we need to simply compute the inverse $S^\leftarrow(\omega_i): H \rightarrow [0,1]^2$. \subsection{Lambertian distribution} Lambertian BSDFs are typically importance sampled using the mapping: \begin{equation} S: (u,v) \mapsto (\theta,\phi) = (\arccos(\sqrt{v}), u \cdot 2 \pi) \end{equation} where $(\theta, \phi)$ represent spherical coordinates relative to the surface normal. Inverting this mapping can hence be done very easily: \begin{equation} S^\leftarrow: (\theta,\phi) \mapsto (u,v) = \left( \frac{\phi}{2 \pi}, \cos^2(\theta) \right) \end{equation} \subsection{GGX distribution} Sampling the GGX distribution is slightly more involved as it is the composition of two functions: $S(\omega_i) = R(\omega_i) \circ F_m$, where the function $F_m:[0,1]^2 \rightarrow H$ samples a microfacet according to the roughness parameter $m$, and $R(\omega_i) : H \rightarrow H$ returns the input direction $\omega_i$ reflected about the sampled microfacet normal. Its inverse can hence be obtained as $S^\leftarrow(\omega_i) = F_m^\leftarrow \circ R^\leftarrow(\omega_i)$.
Finding the microfacet normal given the incident and outgoing directions $\omega_i$ and $\omega_o$ is trivial, as the normal can be simply computed using the half vector formula: \begin{eqnarray} R^\leftarrow(\omega_i) : H &\rightarrow& H \\ \omega_o &\mapsto& \frac{\omega_i + \omega_o}{|\omega_i + \omega_o|}. \nonumber \end{eqnarray} The forward mapping for sampling a microfacet is instead given by the following expression: \begin{equation} F_m: (u,v) \mapsto (\theta,\phi) = \left( \arccos\left(\frac{1}{\sqrt{1 + t(v)}}\right), u \cdot 2 \pi \right) \end{equation} with: \begin{equation} t(v) = \frac{v}{ (1 - v) \cdot m^2 }. \end{equation} The inverse can hence be computed as: \begin{equation} F_m^\leftarrow: (\theta,\phi) \mapsto (u,v) = \left( \frac{\phi}{2 \pi}, \frac{q(\theta) }{1 + q(\theta) } \right) \end{equation} with: \begin{equation} q(\theta) = m^2 \cdot (1 / \cos^2(\theta) - 1). \end{equation} The composition of the two can now be obtained considering the polar coordinates $(\theta_h, \phi_h)$ of the vector: \begin{equation} {\bf h} = R^\leftarrow(\omega_i,\omega_o), \end{equation} and finally computing: \begin{equation} (u,v) = F_m^\leftarrow(\theta_h,\phi_h). \end{equation} \subsection{Specular scattering} Specular scattering introduces singularities in the transformations $T_i$, which manifest as Dirac deltas in the respective pdfs $p_i$. While we did not explicitly study how to handle these in this work, we believe it would be possible to include them in our chart swaps, as long as the scattering mode at specular vertices is not altered. In fact, altering the mode from specular to diffuse would simply result in a zero acceptance rate: this can be verified looking at equation (\ref{eqn:STAcceptanceRatio}), and considering the fact that the numerator, equal to the reciprocal of the density of the current (specular) pdf $p_i$, would be zero.
Conversely, if the mode was not altered, the implicit Dirac deltas in the numerator and denominator would cancel out. \input{cmlt-algorithm.txt} \input{cmlt-mmlt-pssmlt.txt} \begin{figure} \fbox{\includegraphics[width=82.0mm]{perf-breakdown}} \caption{Performance breakdown for running 256K chains of length 350 (equivalent to about 64 spp at a resolution of $1600 \times 900$). All timings are in milliseconds. } \label{PerfBreakdown} \end{figure} \bibliographystyle{acmsiggraph}
\section{INTRODUCTION} Extremely low-mass ($\la 0.3$ M$_\odot$) white dwarfs (ELM WDs) are the remnants of stars that cannot burn helium (He) in their cores. Because the Universe does not have enough time to produce them by single-star evolution, the He-core ELM WDs are thought to be formed by considerable mass loss at the red giant branch phase of stellar binaries before He burning (Marsh, Dhillon \& Duck 1995; Kilic, Stanek \& Pinsonneault 2007). In this case, they can provide information about the past evolution of the precursor binaries. We have been performing time-series spectroscopy on eclipsing binaries (EBs) containing an ELM WD precursor (pre-ELM WD) as possible (Lee et al. 2020). The main purpose of the observations is to measure the fundamental parameters of the interesting rare objects, combining existing or new photometric data, and to present their evolution scenario. The promising targets for this subject are EL CVn-type binaries, which are post-mass transfer stars comprising an A/F main sequence and a pre-ELM WD companion (Maxted et al. 2014; van Roestel et al. 2018). Five out of $\sim70$ EL CVn stars are pulsating EBs that exhibit possible multiperiodic oscillations arising from both a $\delta$ Sct (or $\gamma$ Dor)-type primary and a pre-ELMV companion (Hong et al. 2021; Kim et al. 2021). Because the fundamental parameters such as masses and radii can be measured accurately and in detail, spectroscopic and eclipsing binaries provide us with an opportunity to test and refine the evolutionary models of stars (Hilditch 2001; Torres, Andersen \& Gim\'enez 2010). At the same time, pulsating stars help to probe and constrain their interior physics from core to surface through asteroseismology (Antoci et al. 2019; Aerts 2021). Thus, the stellar pulsations in EBs are of great value, because the synergy between the two kinds of variables can clearly enhance our understanding of stellar physics (Murphy 2018; Kurtz 2022). 
We focus on the EL CVn-type star 1SWASPJ181417.43+481117.0 (WASP 1814+48; TIC 420947520; Gaia EDR3 2122136709525411072), which was recognized by Maxted et al. (2014) to be an eclipsing variable with a period of 1.7994305 $\pm$ 0.0000005 days. From archival WASP photometry, they found that the binary target is a totally-eclipsing detached system with a mass ratio of $q$ = 0.134 $\pm$ 0.015, relative radii of $r_1$ = 0.2565 $\pm$ 0.0025 and $r_2$ = 0.0247 $\pm$ 0.0004, and surface brightness and luminosity ratios of $J$ = 3.000 $\pm$ 0.064 and $L_2/L_1$ = 0.0277 $\pm$ 0.0002, respectively. Further, the effective temperatures of the brighter, more massive primary star and its companion were obtained to be $8000\pm300$ K and $12,500\pm1800$ K, respectively, by comparing the observed and synthetic flux distributions. This work is the fourth in a paper series on the pre-ELM WDs in EBs (Lee et al. 2020; Lee, Hong \& Park 2022; Hong et al. 2021). We analyze in detail our high-resolution spectra and the TESS photometric data of WASP 1814+48, and report the discovery of multiple types of pulsations originating from the EB system. \section{TESS PHOTOMETRY AND ECLIPSE TIMINGS} Highly precise photometry of WASP 1814+48 has been performed by the TESS mission (Ricker et al. 2015) from 2019 July 18. We downloaded the 2-min cadence data taken during Sectors 14, 25$-$26, and 40$-$41 from MAST\footnote{https://archive.stsci.edu/} and used the simple aperture photometry (\texttt{SAP$_-$FLUX}) data in this study. The flux measurements were detrended and normalized by fitting a second-order polynomial to the out-of-eclipse portion of each sector's light curve, and they were converted to magnitude units. The resultant observations are displayed in the top panel of Figure 1. A total of 91,240 individual points were obtained for the five sectors. The crowdedness factor CROWDSAP reported in the TESS data is the ratio of the target flux to the total flux in a photometric aperture.
This can be used to see if nearby stars are observed in the same pixel as the target, where a value of 1 means that there is no contamination. The CROWDSAP value for WASP 1814+48 is 0.9900$\pm$0.0024 on average for these sectors. The TESS eclipse times of WASP 1814+48 and their errors were determined using the Kwee \& van Woerden (1956) method. These are presented in Table 1, where Min I and Min II denote the primary and secondary eclipses at orbital phases 0.0 and 0.5, respectively. In order to obtain the updated linear ephemeris of the binary star and to phase its time-series data, we used the primary minimum epochs in the following least-squares solution: \begin{equation} \mbox{Min I} = \mbox{BJD}~ 2,459,010.506694(27) + 1.79943078(17)E. \end{equation} The 1$\sigma$-error values for each coefficient are given in the parentheses. The timing residuals from the linear ephemeris appear as $O-C$ in Table 1. The TESS light curve phased with Equation (1) is depicted in the middle panel of Figure 1. \section{GROUND-BASED SPECTROSCOPY AND DATA ANALYSIS} The spectroscopic observations of WASP 1814+48 were performed with the Bohyunsan Observatory Echelle Spectrograph (BOES; Kim et al. 2007), attached to the 1.8-m telescope in the Bohyunsan Optical Astronomy Observatory (BOAO), Korea. A total of 31 spectra were secured for seven nights between 2015 April and 2021 March. Their wavelength coverage ranges from 3600 to 10,200 $\rm \AA$ with a resolving power of $R=30,000$. The exposure time for each spectrum of our target star was 40 min, resulting in a signal-to-noise (S/N) ratio of approximately 20 around 4500 $\rm \AA$. This corresponds to 0.015 of the orbital period, so orbital smearing was not considered. All observed spectra were reduced by following the IRAF standard data-reduction procedures (Hong et al. 2015), which include flat fielding, de-biasing, extraction, and wavelength and flux calibration. Figure 2 shows the trailed spectra of WASP 1814+48 in the Mg II $\lambda$4481 region.
In this figure, the S-wave feature of the more massive primary component (WASP 1814+48 A) is clearly shown, while there is no sign of the hotter secondary star (WASP 1814+48 B). To measure the radial velocities (RVs) from the observed spectra, we used the cross-correlation function (CCF) method (Simkin 1974; Tonry \& Davis 1979) implemented in the RaveSpan software (Pilecki, Konorski \& Gorski 2012; Pilecki et al. 2017). In these spectra, most of the metallic and Balmer lines are difficult to measure, so we selected the spectral region of Mg II $\lambda$4481 that is useful for determining the RVs of the EL CVn-type stars (Lee et al. 2020 for WASP 0131+28; Hong et al. 2021 for WASP 0843-11; Lee, Hong \& Park 2022 for WASP 1625-04). Template spectra were taken from the BOSZ stellar atmosphere models\footnote{https://archive.stsci.edu/prepds/bosz} (Bohlin et al. 2017) matching the effective temperature ($T_{\rm eff,1}$) and surface gravity ($\log g_1$) of the primary component discussed below. We measured the RVs of WASP 1814+48 A and they are presented in Figure 3 and Table 2. The RV measurements were fitted to a sine wave (Lee et al. 2018) in order to obtain the spectroscopic orbit of WASP 1814+48 A. The results from this calculation are summarized in Table 3, where $\gamma$ is the systemic velocity, $K_1$ and $a_1 \sin i$ are the velocity semi-amplitude and semimajor axis of the primary component, and $f$(M) is the mass function. The orbital period $P$ = 1.79943078 days was taken from Eq. (1). To determine the rotational velocity ($v_1 \sin i$) and surface temperature ($T_{\rm eff,1}$) of WASP 1814+48 A, we selected five absorption lines, Ca II K $\lambda$3933, H$_{\rm \gamma}$ $\lambda$4340, Fe II $\lambda$4383, H$_{\rm \beta}$ $\lambda$4861, and H$_{\rm \alpha}$ $\lambda$6563, which are useful spectral indicators for A0$-$F0 type stars (Gray \& Corbally 2009). From the BOES spectra, each absorption region was combined using the FDB\textsc{inary} code (Iliji\'c et al.
2004) to obtain a better S/N spectrum. Maxted et al. (2014) estimated the surface temperature of WASP 1814+48 A to be $8000\pm300$ K, based on the surface brightness ratio and the observed flux distribution. Therefore, the reconstructed spectrum was compared to the synthetic spectra with a temperature range of 6500 K$-$9000 K (in steps of 10 K) and the projected rotational velocity range of 10$-$200 km s$^{-1}$ (in steps of 1 km s$^{-1}$). The synthetic spectra were interpolated using the stellar models from the BOSZ spectral library (Bohlin et al. 2017) by adopting the solar metallicity and the surface gravity of $\log g_1$ = 4.1 (cf. Section 4). We obtained both atmospheric parameters of WASP 1814+48 A by performing a $\chi^2$ grid search that minimizes the difference between the synthetic and reconstructed spectra and averaging the values found in each region (Hong et al. 2017; Lee, Hong \& Park 2022). As a consequence, we derived $T_{\rm eff,1}=7770 \pm 130$ K and $v_1 \sin i=47\pm6$ km s$^{-1}$. In Figure 4, the reconstructed spectrum from the FDB\textsc{inary} code is presented with the best-fit model. \section{BINARY MODELING} As with the archival WASP observations, the TESS light curve of WASP 1814+48 shows a box-shaped primary eclipse and an ellipsoidal variation. The shape resembles that of EL CVn, its class prototype, and there are no noticeable differences among the sectors. The observed depths for the primary and secondary eclipses are 0.031 mag and 0.016 mag, respectively, in the TESS photometry, and 0.037 mag and 0.017 mag in the WASP data. In both the WASP and TESS light curves, the phase difference between Min I and Min II implies that WASP 1814+48 is in a circular orbit.
To obtain improved binary parameters, we solved the TESS photometric data with our RV curve using the detached mode 2 of the Wilson-Devinney program (Wilson \& Devinney 1971; van Hamme \& Wilson 2007; hereafter W-D) and applied a mass ratio ($q$)-search method (e.g., Lee, Hong \& Kristiansen 2019). The binary modeling was done in a manner similar to the single-lined eclipsing system HW Vir (Lee et al. 2009) and the EL CVn-type star WASP 0131+28 (Lee et al. 2020). In the synthesis of the RV and light curves, the surface temperature of WASP 1814+48 A was set to $T_1$ = 7770 $\pm$ 130 K from our spectroscopic analysis, as discussed in the previous section. The albedos ($A$) and gravity-darkening exponents ($g$) for both components were all set to 1.0, as appropriate for their temperatures. The logarithmic limb-darkening (LD) law was adopted and its bolometric ($X$, $Y$) and monochromatic ($x$, $y$) coefficients were interpolated from the values of van Hamme (1993) incorporated into the W-D program. Because the TESS passband is not included in the binary code, we used the LD parameters for the Cousins $I_{\rm c}$ band. Furthermore, a circular orbit ($e$ = 0) and a synchronous rotation ($F_1 = F_2$ = 1.0) were applied. In this article, WASP 1814+48 A and B are denoted by subscripts 1 and 2, respectively. The mass ratio is considered the most important parameter in binary modeling. We can directly calculate the velocity semi-amplitudes ($K_1$ and $K_2$) and the mass ratio ($q=K_1/K_2$) from double-lined RV curves. However, WASP 1814+48 B was too faint to measure its RVs. Hence, we computed a series of binary models for assumed mass ratios below 0.3, the so-called $q$-search procedure. The sum of squares of residuals ($\sum W(O-C)^2$) showed a global minimum around $q$ = 0.104. The $q$ value was then adjusted to obtain the best-fit parameters of the eclipsing system.
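The $q$-search just described amounts to a one-dimensional grid scan over trial mass ratios, keeping the value that minimizes $\sum W(O-C)^2$ before refining $q$ as a free parameter. A minimal sketch of the idea, where a hypothetical convex stand-in objective replaces a full Wilson-Devinney light-curve fit (only the location of the minimum, $q = 0.104$, is taken from the text):

```python
import numpy as np

def residual_sum(q, q_true=0.104):
    """Hypothetical stand-in for the weighted sum of squared residuals,
    sum W*(O-C)^2, that a full Wilson-Devinney fit would return at fixed q."""
    return 1.0 + 50.0 * (q - q_true) ** 2

# Scan assumed mass ratios below 0.3 and keep the global minimum;
# this best q would then be adjusted in the final simultaneous fit.
q_grid = np.arange(0.01, 0.30, 0.001)
best_q = q_grid[np.argmin([residual_sum(q) for q in q_grid])]
```

With a real light-curve code the objective evaluation dominates the cost, so the grid is usually kept coarse and refined only around the global minimum.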
The final result is presented in Table 4, and the synthetic light and RV curves are displayed as solid curves in the middle panel of Figure 1 and the upper panel of Figure 3, respectively. The residuals between the observations and the binary model are plotted in the bottom panels of each figure. We can see that our model describes the light and RV curves very well. The parameter errors in Table 4 were obtained following the method introduced by Southworth et al. (2020) and applied by Lee, Hong \& Kim (2021) for the highly-precise TESS data. The fundamental stellar parameters of WASP 1814+48 A and B were first computed from our simultaneous light and velocity solution. The results are summarized in Table 5. We used the solar temperature and bolometric magnitude of $T_{\rm eff}$$_\odot$ = 5780 K and $M_{\rm bol}$$_\odot$ = +4.73, respectively. The bolometric corrections (BCs) were made using the empirical calibration between $\log T_{\rm eff}$ and BC presented in Flower (1996) and later corrected in Torres (2010). Using an apparent visual magnitude of $V$ = 10.673 $\pm$ 0.008 and the color excess of $E$($B-V$) = 0.028 $\pm$ 0.007 (Stassun et al. 2019), we determined the distance of the target star to be 536 $\pm$ 20 pc. Within the 3$\sigma$ errors, our distance agrees with 586 $\pm$ 5 pc inverted from the GAIA EDR3 parallax of 1.706 $\pm$ 0.013 mas (Gaia Collaboration et al. 2021) and 584 $\pm$ 4 pc estimated from the GAIA EDR3 measurements by Bailer-Jones et al. (2021). The synchronous rotation velocities of the component stars were calculated to be $v_{\rm 1,sync} = 54.70 \pm 0.77$ km s$^{-1}$ and $v_{\rm 2,sync} = 5.46 \pm 0.15$ km s$^{-1}$ from the binary period $P$ and their radii $R_{1,2}$. Since $v_{\rm 1,sync}$ and $v_1 \sin i$ agree within their error margins, we believe the primary star is currently in a state of synchronized rotation, or nearly so.
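Several of the derived quantities can be checked with back-of-the-envelope arithmetic. A small sketch verifying the Gaia-parallax distance, the primary's synchronous rotation velocity $v_{\rm 1,sync} = 2\pi R_1/P$, and, using the secondary's mass and radius quoted later in the conclusions, its surface gravity (nominal solar values are assumed):

```python
import math

R_sun_km = 6.957e5     # nominal solar radius [km]
R_sun_cm = 6.957e10    # nominal solar radius [cm]
M_sun_g = 1.989e33     # solar mass [g]
G = 6.674e-8           # gravitational constant [cgs]

# Distance from the Gaia EDR3 parallax: d [pc] = 1000 / parallax [mas]
d_gaia = 1000.0 / 1.706                                        # ~586 pc

# Synchronous rotation of the primary: v_sync = 2*pi*R1 / P
P_days, R1 = 1.79943078, 1.945
v1_sync = 2.0 * math.pi * R1 * R_sun_km / (P_days * 86400.0)   # ~54.7 km/s

# Surface gravity of the secondary from M2 = 0.172 M_sun, R2 = 0.194 R_sun
log_g2 = math.log10(G * 0.172 * M_sun_g / (0.194 * R_sun_cm) ** 2)  # ~5.098
```

All three values reproduce the numbers quoted in the text to within their stated uncertainties.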
\section{PULSATIONAL CHARACTERISTICS} The physical parameters of WASP 1814+48 in Table 5 indicate that the primary and secondary components are $\delta$ Sct and pre-ELMV candidates, respectively (Maxted et al. 2013; Wang, Zhang \& Dai 2020; Hong et al. 2021). To find the pulsation signatures present in the TESS observations, we applied a multifrequency analysis to the entire light curve residuals from our binary model. The analysis was conducted with the PERIOD04 software of Lenz \& Breger (2005) up to the Nyquist limit of about 360 day$^{-1}$. The periodogram for WASP 1814+48 is presented in Figure 5, where two dominant signals are clearly visible around 33 day$^{-1}$. In addition, as shown in the inset box, there are the orbital frequency ($f_{\rm orb}$ = 0.55573 day$^{-1}$) and its multiples ($Nf_{\rm orb}$, where $N$ is an integer) in the low-frequency domain. We performed an iterative pre-whitening process (Lee et al. 2014), and extracted 52 frequencies with an S/N amplitude ratio larger than about 4.0 (Breger et al. 1993). The results of this process are summarized in Table 6, and the uncertainties of each parameter were computed following the method proposed by Kallinger, Reegen \& Weiss (2008). The amplitude spectra after pre-whitening the first two frequencies and then 38 frequencies are illustrated in the middle and bottom panels of Figure 5, respectively. Of the extracted signals, possible orbital harmonics and combination frequencies were carefully examined using the frequency resolution of $\Delta f \simeq$ 0.002 day$^{-1}$ (Loumos \& Deeming 1978). They are marked in the last column of Table 6. Most frequencies lower than 24 day$^{-1}$ may be either alias effects or orbital harmonics up to $42f_{\rm orb}$. The integer multiples of the orbital frequency can arise from tidally excited pulsations by the companions in eccentric binary systems such as the heartbeat star KOI-54 (Welsh et al. 2011).
However, our binary model favors the conclusion that WASP 1814+48 is in a circular orbit. Thus, it is difficult to regard the observed multiples of $Nf_{\rm orb}$ as stellar oscillations excited by tidal interaction. We think the aliases and the orbital harmonics result from insufficient removal of the binary effects and the systematic trends in the TESS data. To cross-check the significant frequencies found from the entire TESS observations and to examine their variations with time, we independently analyzed the light curve residuals of each sector in the same way as before. The resultant frequencies were compared with those in Table 6. Only $f_1$, $f_2$, $f_6$, and $f_9$ were detected in all sectors, and $f_{11}$ in all but Sector 14. The five frequencies were stable within the standard deviation of $\sim$0.002 day$^{-1}$. The remaining frequencies except for some orbital harmonics ($f_{\rm orb}$, $2f_{\rm orb}$, $3f_{\rm orb}$) were not found in most sectors. On the other hand, if the high frequencies between 128 and 288 day$^{-1}$ result from WASP 1814+48 B with about 2 \% light contribution to the EB system, they are highly diluted by the light of WASP 1814+48 A. Then, these signals would have been too weak to detect in each sector with the frequency resolution of $\sim$0.055 day$^{-1}$. As the pre-He WD transits the larger and more massive primary star during the secondary eclipses, we analyzed the entire secondary-eclipse residuals (orbital phases 0.454$-$0.546). The primary-eclipse data were also analyzed for comparison. Figure 6 presents the amplitude spectra of both eclipse phases in the frequency region of 100$-$300 day$^{-1}$. Two high-frequency signals of 127.2627 and 135.5987 day$^{-1}$ were detected only in the secondary eclipse phase. This implies that the main source of the high frequencies is the secondary companion WASP 1814+48 B. \section{DISCUSSION AND CONCLUSIONS} It is known from archival WASP photometry (Maxted et al. 
2014) that WASP 1814+48 is a candidate EL CVn star. For the target star, we obtained the first spectroscopic observations using the BOES echelle spectrograph attached to the BOAO 1.8-m reflector. From the total 31 spectra, the RVs and atmospheric parameters of the cooler, more massive primary star were measured, and its surface temperature and rotation velocity were determined to be $T_{\rm eff,1}=7770 \pm 130$ K and $v_1$sin$i=47\pm6$ km s$^{-1}$, respectively. The spectroscopic measurements were solved with the high-precision photometric data from the TESS mission. The combined solution demonstrates that WASP 1814+48 is an EL CVn-type detached EB with masses of $M_1 = 1.659 \pm 0.048$ $M_\odot$ and $M_2 = 0.172 \pm 0.005$ $M_\odot$, radii of $R_1 = 1.945 \pm 0.027$ $R_\odot$ and $R_2 = 0.194 \pm 0.005$ $R_\odot$, and luminosities of $L_1 = 12.35 \pm 0.90$ $L_\odot$ and $L_2 = 0.69 \pm 0.07$ $L_\odot$. The component stars fill $f_1 = 49 \%$ and $f_2 = 36 \%$ of their inner Roche lobe, respectively. The surface gravity of the secondary companion can be calculated directly from the light and RV parameters without knowledge of its mass and radius (Southworth, Wheatley \& Sams 2007) and is represented as: \begin{equation} g_2 = {{G M_2} \over {R^2_2}} = {{2\pi} \over P} {{K_1(1-e^2)^{1/2}} \over {r^2_2\sin i}} \end{equation} The observable quantities of $P$, $r_2$, $i$ and $K_1$ were taken from Table 4. This calculation results in $\log g_2$ = 5.097$\pm$0.025, which is well-matched with the 5.098$\pm$0.026 obtained from the secondary's mass and radius presented in this article. Following the solar values\footnote{($X_\odot$, $Y_\odot$, $Z_\odot$) = ($-$8.0, 0.0, 0.0) kpc and ($U_\odot$, $V_\odot$, $W_\odot$) = (9.58, 10.52, 7.01) km s$^{-1}$} and the procedure applied by Lee et al. (2020), the Galactic space motion of WASP 1814+48 was computed from our system velocity ($\gamma$) and the GAIA EDR3 measurements (position, parallax, and proper motion). 
The obtained velocity components were $U = -18.4 \pm 0.3$ km s$^{-1}$, $V = 252.8 \pm 0.6$ km s$^{-1}$, and $W = 9.1 \pm 0.3$ km s$^{-1}$, which correspond to a total space velocity of 253.6 $\pm 0.8$ km s$^{-1}$. Also, the Galactic orbit's eccentricity and angular momentum in the $z$ direction were obtained to be $e_{\rm G}$ = 0.1813 $\pm$ 0.0003 and $J_{\rm z}$ = 1980 $\pm$ 17 kpc km s$^{-1}$, respectively. The position of WASP 1814+48 B in the $U-V$ and $J_{\rm z}-e_{\rm G}$ planes falls within the thin-disk population described by Pauli et al. (2006), indicating that our program target has the kinematics of thin-disk stars. Using the fundamental stellar parameters of WASP 1814+48, we studied the evolutionary history of the EB system in terms of the H-R and $\log T_{\rm eff}-\log g$ diagrams. The locations of the primary (A) and secondary (B) stars are presented as star symbols in Figure 7, while the oblique dash-dotted and dashed lines denote the instability strips of $\gamma$ Dor and $\delta$ Sct variables. WASP 1814+48 A resides inside the $\delta$ Sct region on the ZAMS, which implies that it is a $\delta$ Sct candidate. In both diagrams, the black dotted, dashed, and solid lines represent the evolutionary sequences of He-core WD stars with metallicities of $Z$ = 0.001, 0.01, and 0.02, respectively, for masses of $M$ = 0.182 $M_\odot$, 0.176 $M_\odot$, and 0.171 $M_\odot$ (Istrate et al. 2016). We can see that WASP 1814+48 B with 0.172 $\pm$ 0.005 $M_\odot$ is in good agreement with the 0.176 $M_\odot$ WD model for $Z$ = 0.01. The result matches well with the thin-disk population classified by our Galactic kinematics. The lifetime ($t$) of the binary star in a constant luminosity phase was estimated to be about 1.2 $\times$ 10$^{9}$ yr using the $M_{\rm WD}-t$ relation (Chen et al. 2017). The whole light residuals from our binary model were analyzed using the software Period04 to find multiperiodic frequencies in our target star. 
We found two dominant oscillations at $f_{1}$ = 33.70479 day$^{-1}$ and $f_{2}$ = 32.80546 day$^{-1}$, corresponding to periods of about 42.7 min and 43.9 min, respectively. As a consequence of the pre-whitening process, we extracted a total of 52 frequencies that satisfied our criterion of S/N $\ga$ 4.0. Most signals in the low-frequency region of $<$ 24 day$^{-1}$ may be sidelobes due to incomplete binary modeling and instrumental artifacts in the TESS observations, rather than tidally excited modes or $\gamma$ Dor pulsations. The five frequencies between 32 and 36 day$^{-1}$ originated from WASP 1814+48 A, located in the $\delta$ Sct instability domain, as shown in Figure 7. From the pulsation constant relation $Q_i$ = $P_i$$\sqrt{\rho / \rho_\odot}$, where $P_i = 1/f_i$ is the pulsation period, we determined their pulsation constants to be $Q_1$ = 0.014 days, $Q_2$ = 0.014 days, $Q_6$ = 0.015 days, $Q_9$ = 0.013 days, and $Q_{11}$ = 0.014 days, corresponding to pressure ($p$) modes of $\delta$ Sct type with $Q < 0.04$ days (Breger 2000; Antoci et al. 2019). Moreover, the period ratios of $P_{\rm pul}/P_{\rm orb} = 0.0155-0.0171$ are within the threshold of 0.09 $\pm$ 0.02 for $\delta$ Sct EBs pulsating in $p$ modes (Zhang, Luo \& Fu 2013). On the other hand, the frequency signals between 128 and 288 day$^{-1}$ may be pulsation modes related to WASP 1814+48 B in the pre-He WD instability strip (C\'orsico et al. 2019). The periods and pulsation constants for the five high frequencies are in the ranges of $5.0 < P_{\rm pul} < 11.2$ min and $0.017 \le Q \le 0.038$ days, respectively. These results make WASP 1814+48 a very promising target for asteroseismology, consisting of a $\delta$ Sct-type primary and a pulsating pre-He WD companion. \section*{Acknowledgments} We would like to thank the BOAO staff for assistance during our spectroscopy. This work includes data collected by the TESS mission, which were obtained from MAST. Funding for the TESS mission is provided by the NASA Explorer Program.
This research was supported by the KASI grant 2022-1-830-04. K.H. was supported by the grants 2019R1A2C2085965 and 2020R1A4A2002885 from the National Research Foundation (NRF) of Korea. \section*{DATA AVAILABILITY} The data underlying this article will be shared on reasonable request to the first author.
\section{Introduction} \label{sec:intro} The development of reinforcement learning (RL) methods has achieved much success over the last decade, since together with advances in computer vision \citep{krizhevsky2012imagenet,he2016deep}, it became possible to teach agents to solve various tasks and play computer games \citep{mnih2013playing} (see overview in \citep{RLgames2023}), even surpassing human players \citep{mnih2015humanlevel}. Nevertheless, even these single tasks require very long training times and a lot of computational resources. Coping with complex (continuous) environments such as the real world is still a challenge. There are several research opportunities, one of them being the search for more efficient learning methods. Another is hardware development, which attempts to adapt to the requirements of the neural networks currently used in the RL field. Complex environments with sparse rewards pose a special challenge for RL approaches. The most popular computational approach to make RL more efficient is based on the concept of {\it intrinsic motivation} (IM) \citep{baldassarre2014intrinsic}. IM has a strong biological basis \citep{Ryan00,Morris2022} since it is observed among higher animals, especially in humans, engaging them in various activities. Intrinsic motivations appear early in life and guide biological agents during their entire life. IM is considered one of the prerequisites for open-ended (or, life-long) learning. If we want to achieve this capacity with artificial agents \citep{Parisi2019}, we have to master this first step and equip them with an ability to generate their own goals and acquire new skills. Therefore, computational approaches concerned with IMs and open-ended development provide the potential in this direction, leading to more intelligent systems, in particular those capable of improving their own skills and knowledge autonomously and indefinitely \citep{baldassarre2014intrinsic,Baldassarre19}.
The concept of intrinsic (and extrinsic) motivation was first studied in psychology \citep{Ryan00}, and later it entered the RL literature \citep{barto2005intrinsic, singh2010intrinsically, Barto2013}. The first taxonomy of computational models appeared in \cite{oudeyer2009intrinsic}, where the concept of motivation is divided into external and internal, depending on the mechanism that generates motivation for the agent. \textit{External} motivation assumes the source of motivation coming from outside the agent and it is always associated with a particular goal in the environment. If the motivation is generated within the structures that make up the agent, this implies an \textit{internal} motivation. Another dimension for the differentiation, extrinsic or intrinsic, is less obvious (see also \citep{Morris2022}). \textit{Extrinsic} motivations pertain to behaviors where an activity is done in order to attain some separable outcome. Some variability exists in this context, since these behaviors can vary in the extent to which they represent self-determination (see the details in \citep{Ryan00}). On the other hand, \textit{intrinsic} motivation is defined as doing an activity for its inherent satisfaction rather than for some separable consequence (or instrumental value). It has been operationally defined in various ways, backed up by different psychological theories, which points to some uncertainty in what IM exactly means. Nevertheless, \citep{Baldassarre19} offers an operational definition of IMs as processes that can drive the acquisition of knowledge and skills in the absence of extrinsic motivations. Furthermore, the author proposes (and explains why) the new term \textit{epistemic motivations} as a suitable substitute for intrinsic motivations. Despite some uncertainty, intrinsic motivation has remained a well-established term in the literature.
Intrinsic motivation is a crucial factor that helps the agent not only to remain in open-ended learning hence solving different tasks \citep{Parisi2019}, but it also helps to solve single difficult tasks with extremely sparse rewards. In this paper, we focus on this case. There exists a variety of approaches aiming to use IM-based signal for agent learning. Information-theoretic view on IM is well represented in the literature, involving the concepts of novelty, surprise and skill-learning. The recent review \citep{Aubret2023} suggests that novelty and surprise can assist the building of a hierarchy of transferable skills which abstracts dynamics and makes the exploration process more robust. In this context, abstraction is a key feature of the agent's architecture where it makes sense to introduce learning mechanisms to enforce formation of proper internal representations that lead to improved agent's performance. Learning the proper internal representations from unlabelled input data (e.g.~images) for the purpose of solving various problems is in general a useful task in machine learning. This can be achieved in various ways (related methods are mentioned in Sec.~\ref{sec:related-work}), including supervised end-to-end (deep) learning, self-supervised autoencoders, unsupervised feature extractors (such as contrastive divergence learning) and RL-based approaches where a certain loss function is optimized. In our work, we focus on self-supervised approaches exploiting the novelty detection signal in feature domain to enhance RL agent's exploration. The paper is organized as follows. In the remaining part of Sec.~\ref{sec:intro} we present the original contribution of this work. Section~\ref{sec:related-work} contains related work. Section~\ref{sec:methods} describes the methods used in this work. Section~\ref{sec:exper} explains in detail all experiments performed. Section~\ref{sec:discussion} concludes the paper with the discussion. 
\subsection{The paper contribution} We introduce and test a class of motivational models based on the exploitation of the distillation error as a novelty-detection signal. The first such model was the Random Network Distillation model \cite{burda2018exploration}, which became the basis of our models. However, randomly distilled features are not the best representation, since they can lead to stalling or slow convergence. Instead of distilling random features, we propose to distil self-supervised latent space representations. \begin{figure*}[thb] \centering \includegraphics[width=0.8\textwidth]{fig/diagrams/cnd-overview.png} \caption{Self-supervised network distillation (SND) principle. The proposed method consists of two main parts. {\it Top:} Self-supervised learning of the suitable features for the target model. {\it Bottom:} Calculation of the intrinsic reward by target model distillation, using the squared Euclidean distance between the models' outputs.} \label{fig:cnd_overview} \end{figure*} The overall concept is shown in Fig.~\ref{fig:cnd_overview}. Our method uses two models: one providing target features $z_{t}^{\rm T}$, called the target model $\Phi^{\rm T}(s_t)$, and the learned model $\Phi^{\rm L}(s_t)$, providing features $z_{t}^{\rm L}$. Both models use the state (observation) $s_t$ at time step $t$ as their input. The learned model attempts to imitate the target model, and their difference is used as the intrinsic motivation signal, as in \cite{burda2018exploration}. We asked ourselves the following questions: Are the features provided by the original random model $\Phi^{\rm T}(s_t)$ sufficient? Can we provide better features? In the Random Network Distillation paper \citep{burda2018exploration}, the orthogonal weight initialization method was used to set $\Phi^{\rm T}(s_t)$ and spread out $z_{t}^{\rm T}$ across the state space, which provides sustainable intrinsic reward.
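The distillation mechanism just described can be sketched in a few lines. In this minimal sketch, simple random linear maps stand in for the CNN-based target and learned models (shapes and learning rate are illustrative, not the paper's settings); the intrinsic reward is the squared Euclidean distance between the two feature vectors, and it decays for states the learned model has already distilled:

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, feat_dim = 64, 16

# Stand-ins for the target model Phi^T and the learned model Phi^L
# (linear maps here; the actual models are CNN encoders).
W_target = rng.normal(size=(feat_dim, obs_dim))
W_learned = np.zeros((feat_dim, obs_dim))

def intrinsic_reward(s):
    """Squared Euclidean distance between target and learned features."""
    return float(np.sum((W_target @ s - W_learned @ s) ** 2))

def distill_step(s, lr=1e-3):
    """One gradient step on the MSE distillation loss 0.5*||z_T - z_L||^2."""
    global W_learned
    err = W_target @ s - W_learned @ s
    W_learned += lr * np.outer(err, s)

s = rng.normal(size=obs_dim)          # a "novel" state
r_novel = intrinsic_reward(s)         # large before distillation
for _ in range(2000):
    distill_step(s)
r_familiar = intrinsic_reward(s)      # decays as the state becomes familiar
```

This reproduces the key property exploited for exploration: frequently visited states yield small distillation error and hence small intrinsic reward, while unseen states remain rewarding.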
We used self-supervised regularization of $\Phi^{\rm T}$ to provide features with higher variance and better sensitivity for novelty detection. The states are randomly sampled from the buffer and used for learning $\Phi^{\rm T}$. For sampling the states, we used two approaches: \begin{enumerate} \item sample one state with two different augmentations, \item sample two consecutive states. \end{enumerate} Simultaneously, the distillation proceeds with the learned model $\Phi^{\rm L}$ using the MSE loss. All models are trained in the same loop as the policy and the value models. We experimented with different self-supervised losses (MSE, ST-DIM \cite{Anand2019} and VICReg \cite{Bardes2022}), covering both contrastive and non-contrastive approaches. With these methods, we were able to solve hard-exploration seeds in Procgen and the complete first level of the infamous Atari game Montezuma's Revenge. \section{Related work} \label{sec:related-work} According to the prevailing view, the approaches to IM can be divided into two main categories of adaptive motivations. The \textit{knowledge-based} approach is focused on the acquisition of knowledge of the world and it draws on the theory of drives, the theory of cognitive dissonance, and optimal incongruity theory. The \textit{competence-based} approach focuses on the acquisition of skills by motivating the agent to achieve a higher level of performance in the environment, which means acquiring desired actions to achieve self-generated goals. Its psychological basis includes the theory of effectance and the theory of flow. The knowledge-based category focusing on exploration can be divided into \textit{prediction-based}, \textit{novelty-based} and \textit{information-based} approaches \citep{aubret2019survey}. Prediction-based approaches use the prediction error as an intrinsic reward signal. The source of the error can be a forward model (e.g. \cite{stadie2015incentivizing,bellemare13arcade,Pathak2017}), a generative model \cite{yu2020intrinsic} (e.g.
based on a variational auto-encoder \cite{kingma2013auto}) or disagreement in a learned world model \citep{sekar2020planning}. Exploration with Mutual Information (EMI) \citep{kim2018emi} extracts predictive signals that can be used to guide exploration based on forward prediction in the representation space. Model-Based Active eXploration (MAX) \citep{shyam2019model} uses an ensemble of forward models to plan observing novel events. The novelty-based approaches monitor state novelty and the intrinsic signal is based on its value. The first models were based on a count-based approach \citep{tang2017exploration}. This method is impractical for large or continuous state spaces and it was extended by introducing pseudo-counts and neural density models \citep{ostrovski2017count,martin2017count,machado2018count}. A method similar to pseudo-counts was used by the random network distillation (RND) model \citep{burda2018exploration} with lower complexity. The Never-Give-Up framework \citep{badia2020never} learns intrinsic rewards composed of episodic and life-long state novelty (the latter detected by an RND model). Information-based approaches use quantities from information theory \citep{shannon1948mathematical}, such as information gain, mutual information, and entropy, and try to maximize the information obtained by the agent from the environment. Variational Information Maximizing Exploration (VIME) \citep{houthooft2016vime} approximates the environment dynamics and uses the information gain of the learned dynamics model as the intrinsic reward. Random Encoders for Efficient Exploration (RE3) \citep{seo2021state} is an exploration method that utilizes state entropy as an intrinsic reward. For more details, we recommend the surveys \cite{burda2018large, aubret2019survey, yuan2022intrinsically}. Self-supervised learning is a paradigm of machine learning in which the agent does not have any labeling of the data, but generates it on its own.
The goal is to use the information in the data itself and prepare the model to perform another task. Self-supervised learning has also started to be used in the field of state representation learning \citep{Timoth2018}; it is proving to be a suitable method for creating the feature space \citep{Anand2019} and has also found its use in reinforcement learning \citep{Srinivas2020, guo2022byol}. Contrastive learning \citep{Chopra2005} is a self-supervised method used to learn general features by teaching the model which data points are similar or different. Several different objective functions have been proposed, e.g. Noise Contrastive Estimation (NCE) \citep{Gutmann2010}, InfoNCE \citep{Oord2018}, or the multi-class $N$-pair loss \citep{Sohn2016}. Another branch of self-supervised learning is based on regularization (non-contrastive methods). In this case, the model sees only positive examples and generates a feature space based on various regularization losses, such as the invariance and covariance losses of the Barlow Twins model \citep{Zbontar2021}, the triple variance-invariance-covariance losses of the VICReg model \citep{Bardes2022}, or other priors like proportionality, variability, the slowness principle, or repeatability \citep{jonschkowski2015learning}. Bootstrap Your Own Latent (BYOL) \citep{grill2020bootstrap} relies on two neural networks, referred to as the online and target networks, that interact and learn from each other. \section{Methods} \label{sec:methods} The decision-making problem solved with RL is formalized as a Markov decision process, which consists of a state space $\mathcal{S}$, an action space $\mathcal{A}$, a transition function $\mathcal{T}_{s,a,s'} = p(s_{t+1} = s'|s_t = s, a_{t} = a)$, a reward function $\mathcal{R}_{s,a,s'}$ and a discount factor $\gamma$. The main goal of the agent is to maximize the discounted return $R_t = \sum_{k=0}^\infty \gamma^k r_{t+k}$ in each state, where $r_t$ is the immediate external reward at time $t$.
A stochastic policy is defined as a state-dependent probability function $\pi : \mathcal{S} \times \mathcal{A} \rightarrow [0, 1]$, such that $\pi_{t}(s,a) = p(a_t = a | s_t = s)$ and $\sum_{a \in \mathcal{A}} \pi(s,a) = 1$, while a deterministic policy $\pi: \mathcal{S}\rightarrow \mathcal{A}$ is defined as $\pi(s) = a$. An agent following the optimal policy $\pi^{*}$ maximizes the expected return $R$. The methods searching for the optimal policy can be divided into on-policy methods (the family of actor--critic algorithms), e.g.~\cite{schulman2017proximal}, and off-policy methods (the family of Q-learning algorithms), e.g.~\cite{mnih2013playing}. Actor--critic algorithms are based on two separate modules: an \textit{actor} generates actions following the agent's policy $\pi$, and a \textit{critic} estimates the state value function $V^{\pi}$ defined as $$ V^{\pi}(s) = \sum_a \pi(s,a) \sum_{s'} \mathcal{T}_{s,a,s'} \left[ \mathcal{R}_{s,a,s'} + \gamma V^{\pi}(s') \right] $$ or the state-action value function $Q^{\pi}$ defined as $$ Q^{\pi}(s,a) = \sum_{s'} \mathcal{T}_{s,a,s'} \left[ \mathcal{R}_{s,a,s'} + \gamma V^{\pi}(s') \right] $$ The actor then updates its policy to maximize the return $R$ based on the critic's value function estimates. In high-dimensional tasks, where the Bellman equations cannot be solved exactly, the common approach is to use function approximators (deep convolutional neural networks) to estimate the critic and the actor. \subsection{Intrinsic motivation and exploration} During the learning process, the agent must explore the environment to encounter an external reward and learn to maximize it. This can be ensured by adding noise to the actions if the policy is deterministic, or it is already an inherent property if the policy is stochastic. In both cases, we speak of uninformed environmental exploration strategies. The problem arises when the external reward is very sparse and the agent cannot use these strategies to find the sources of reward.
In such a case, it is advantageous to use informed strategies, which include the introduction of intrinsic motivation. In the context of RL, intrinsic motivation can be realized in various ways, but most often it takes the form of a new reward signal $r^{\rm intr}_t$, scaled by a parameter $\eta$, which is generated by the motivational part of the model (we refer to it as the motivational module) and is added to the external reward $r^{\rm ext}_t$ \begin{equation} \label{eq:rintr} r_t = r_t^{\rm ext} + \eta \, r_t^{\rm intr} \end{equation} The goal of introducing such a new reward signal is to provide the agent with a source of information that is absent from the environment when the reward is sparse, and thus to facilitate the exploration of the environment and the search for an external reward. \subsection{Intrinsic motivation based on distillation error} In this class of methods, the motivation module has two components: the target model $\Phi^{\rm T}$ that generates features (typically as a kind of feature extractor), and the learning network $\Phi^{\rm L}$ that tries to replicate them. This process is called knowledge distillation. Intrinsic motivation, expressed as an intrinsic reward, is computed as the distillation error \begin{equation} \label{eq:distill_error} r_{t}^{\rm intr} = \| \Phi^{\rm L}(s_t) - \Phi^{\rm T}(s_t) \| ^{2}. \end{equation} It is assumed that the learning network will replicate the feature vectors more easily for states it has seen multiple times, while new states will induce a large distillation error. The RND \citep{burda2018exploration} model shown in Fig.~\ref{fig:cnd_rnd} is a representative of this type of IM. It is simple and successful in environments with sparse rewards, but has two serious drawbacks: (1) it is necessary to properly initialize the random network; and (2) over time, the intrinsic motivation signal disappears due to sufficient adaptation of the learning network (a phenomenon that could be called generalization).
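The distillation mechanism above can be sketched in a few lines. The linear "networks", dimensions and learning rate below are illustrative placeholders, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of the distillation-error intrinsic reward: a frozen target
# network Phi_T and a trainable predictor Phi_L (both linear here purely
# for illustration), mapping a flattened state to a feature vector.
state_dim, feat_dim = 64, 16
W_target = rng.normal(size=(feat_dim, state_dim)) / np.sqrt(state_dim)  # Phi_T (frozen)
W_learn = rng.normal(size=(feat_dim, state_dim)) / np.sqrt(state_dim)   # Phi_L (trained)

def intrinsic_reward(s):
    """r_intr = || Phi_L(s) - Phi_T(s) ||^2, the distillation error."""
    return float(np.sum((W_learn @ s - W_target @ s) ** 2))

def distill_step(s, lr=0.005):
    """One gradient step of Phi_L towards the target features of s."""
    global W_learn
    err = W_learn @ s - W_target @ s        # error in feature space
    W_learn -= lr * 2.0 * np.outer(err, s)  # gradient of the squared error

s = rng.normal(size=state_dim)              # a state visited repeatedly
r_first = intrinsic_reward(s)               # large: s is novel
for _ in range(200):
    distill_step(s)
r_later = intrinsic_reward(s)               # small: s is familiar now
```

As the agent revisits `s`, the reward decays towards zero, which illustrates the vanishing-signal effect (drawback 2) noted above.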
\begin{figure*}[thb] \centering \includegraphics[width=9cm]{fig/diagrams/cnd-rnd.png} \caption{The basic principle of generating an exploration signal in random network distillation.} \label{fig:cnd_rnd} \end{figure*} \begin{figure*}[h] \centering \includegraphics[width=11cm]{fig/diagrams/cnd-cnd.png} \caption{The basic principle of generating an exploration signal in the regularized target model, followed by RND.} \label{fig:cnd_cnd} \end{figure*} \begin{figure*}[thb] \centering \includegraphics[width=8cm]{fig/diagrams/cnd-std.png} \caption{Training of the SND target model using two consecutive states and the self-supervised learning algorithm.} \label{fig:std_dim_idea} \end{figure*} \subsection{Self-supervised Network Distillation} We modified the concept of distilling a randomly initialized static network, as in RND \citep{burda2018exploration}, and instead distilled a network that learns continuously using self-supervised algorithms. We denote these methods SND. The architecture of such a model again consists of a target model $\Phi^{\rm T}$ and a learned model $\Phi^{\rm L}$, but with the essential difference that the network generating the target feature vectors (the target model) is trained. The schematic representation of the proposed approach is shown in Fig.~\ref{fig:cnd_cnd}. In order to be able to use the trained target model as a suitable source of target feature vectors for the learning network, it has to fulfill the following conditions: \begin{enumerate} \item Two identical states must map to the same feature vector. \item Two similar states (e.g. successive states) are mapped to two similar feature vectors, i.e. their $L_{2}$ distance is small. \item Two different states are mapped to two different feature vectors, i.e. their $L_{2}$ distance is large.
\end{enumerate} The feature space formed in this way can be distilled, and this process can then serve as a source of intrinsic motivation, because new states will have feature vectors different from those of the states seen by the agent so far. We introduce three methods for forming a feature space that satisfies the above conditions. All three methods are based on self-supervised learning. The \textbf{SND-V} method (vanilla SND) uses a contrastive approach. First, two augmented batches of states $S$, $S'$ are sampled and the corresponding feature batches $Z$, $Z'$ are computed as $z = \Phi^{\rm T}(s)$, where $s \in S, z \in Z$. With probability $p=0.5$ the sampling process pairs two copies of the same state. For the same states, we set the target distance to $\tau_i = 0$, and for different states we set it to $\tau_i = 1$. We experimented with three augmentation schemes: \begin{enumerate} \item uniform noise only, from the range $\langle -0.2, 0.2\rangle$, \item random tile masking + uniform noise, with tile sizes $2, 4, 8, 12, 16$, \item random convolution filter + random tile masking + uniform noise. \end{enumerate} Uniform noise was applied to each state pixel; the remaining augmentations were applied with $p=0.5$. The whole pipeline is shown in Fig.~\ref{fig:stdv_augmentations}. The idea of tile masking is supported by a recently proposed masked self-supervised loss \citep{assran2022maskednetworks}. Random convolution was successfully used in \citep{lee2020randomization} for a PPO agent in the Procgen environment. Finally, noise is a common augmentation technique. \begin{figure*} \centering \includegraphics[width=14cm]{fig/diagrams/cnd-augmentations.png} \caption{The scheme of the state augmentation pipeline.} \label{fig:stdv_augmentations} \end{figure*} The regularisation loss is defined as \begin{equation} \label{eq:sndv1} \mathcal{L} = \sum_{i}(\tau_i - \|Z_i - Z'_i\|^2_2)^2 \end{equation} where $\|\cdot\|^2_2$ is the squared Euclidean distance between the $i$-th feature vectors $Z_i$ and $Z'_i$.
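A hedged sketch of this loss, with a toy linear target model and a simplified batch construction standing in for the real CNN and augmentation pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the SND-V regularisation loss
#   L = sum_i (tau_i - ||Z_i - Z'_i||_2^2)^2
# with tau_i = 0 for pairs built from the same state and tau_i = 1 otherwise.
# The feature extractor and batch construction are illustrative placeholders.

def snd_v_loss(phi, batch_a, batch_b, tau):
    Z, Zp = phi(batch_a), phi(batch_b)
    sq_dist = np.sum((Z - Zp) ** 2, axis=1)   # ||Z_i - Z'_i||_2^2 per pair
    return float(np.sum((tau - sq_dist) ** 2))

def augment(states, noise=0.2):
    """Augmentation scheme 1: uniform pixel noise from <-0.2, 0.2>."""
    return states + rng.uniform(-noise, noise, size=states.shape)

W = rng.normal(size=(8, 32)) / np.sqrt(32)    # toy linear target model
phi = lambda s: s @ W.T

states = rng.normal(size=(16, 32))            # a batch of flattened states
# with p = 0.5 a pair reuses the same state (tau = 0), otherwise a shuffled one (tau = 1)
tau = (rng.uniform(size=16) < 0.5).astype(float)
partner = np.where(tau[:, None] == 1.0, states[rng.permutation(16)], states)
loss = snd_v_loss(phi, augment(states), augment(partner), tau)
```

Minimizing this loss pushes same-state pairs together (target distance 0) and different-state pairs apart (target distance 1), matching conditions 1 to 3 above.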
We also experimented with a loss function defined as \begin{equation} \label{eq:sndv2} \mathcal{L} = \begin{cases} \|Z_i - Z'_i\|^2_2 & \text{if $\tau_i = 0$} \\ \max(1 - \|Z_i - Z'_i\|^2_2; 0) & \text{otherwise} \end{cases} \end{equation} This loss function pulls the feature vectors of similar states (when $\tau_i = 0$) towards one another by penalizing their squared distance. Conversely, it pushes the feature vectors apart from one another, up to the limit where their squared distance exceeds one. However, we did not find any benefits of this loss. The \textbf{SND-STD} method uses the Spatio-Temporal DeepInfoMax (ST-DIM) algorithm \citep{Anand2019} (a simple diagram can be found in Fig.~\ref{fig:std_dim_idea}), leveraging multi-class $N$-pair losses \citep{Sohn2016}: \begin{equation} \label{eq:sndstd1} \mathcal{L}_{\rm GL} = - \sum_{i=1}^{I} \sum_{j=1}^{J} \log \frac{ \exp (g_{i,j}(s_t, s_{t+1}))} { \sum_{s_{t}^{*} \in S_{\rm next}} \exp (g_{i,j}(s_t, s_{t}^{*}))} \end{equation} \begin{equation} \label{eq:sndstd2} \mathcal{L}_{\rm LL} = - \sum_{i=1}^{I} \sum_{j=1}^{J} \log \frac{ \exp (f_{i,j}(s_t, s_{t+1}))} { \sum_{s_{t}^{*} \in S_{\rm next}} \exp (f_{i,j}(s_t, s_{t}^{*}))} \end{equation} where $f$ and $g$ are score functions for the local-local objective $\mathcal{L}_{\rm LL}$ and the global-local objective $\mathcal{L}_{\rm GL}$, respectively. The function $g_{i,j}$ is defined as the unnormalized cosine similarity between the transformed global features $\Phi^{\rm T}(s_t)$ and the local features $\Phi^{\rm T}_{(l,i,j)}(s_{t+1})$ of the intermediate layer $l$ in $\Phi^{\rm T}$, where $(i,j)$ is the spatial location. Analogously, $f_{i,j}$ is the unnormalized cosine similarity between the transformed local features $\Phi^{\rm T}_{(l,i,j)}(s_t)$ and $\Phi^{\rm T}_{(l,i,j)}(s_{t+1})$. The details of this algorithm are provided in \cite{Anand2019}.
$S_{\rm next}$ denotes the set of next states, $(s_t, s_{t+1})$ represents a pair of consecutive states, $(s_t, s_{t}^{*})$ represents a pair of non-consecutive states, and $I,J$ are the width and the height of the output of the intermediate convolutional layer of the target model. The resulting loss function is then defined as \begin{equation} \label{eq:sndstd3} \mathcal{L} = \frac{1}{IJ} (\mathcal{L}_{\rm GL} + \mathcal{L}_{\rm LL}) \end{equation} Following this objective function, the target model becomes a good feature extractor that adapts to new states discovered by the agent. However, after initial tests, we found that the feature space formed by such an objective function tends to grow exponentially from a certain point until it eventually explodes. We provide a more detailed analysis of this problem in Section~\ref{sec:discussion}. The solution to this problem was to find a suitable regularization to add to the existing loss function. We decided to minimize the $L_2$-norm of the logits represented by the functions $f$ and $g$: \begin{equation} \label{eq:sndstd4} \mathcal{L}_{n} = p_{\rm GL} + p_{\rm LL} = \sum_{i=1}^{I} \sum_{j=1}^J (\| f_{i,j} \| + \| g_{i,j} \|) \end{equation} Finally, we added one more regularization term $\mathcal{L}_v$ that maximizes the standard deviation $\sigma$ of the feature vector components and thus ensures that all dimensions of the feature space are used. The analysis section provides a more detailed justification for the introduction of this term: \begin{equation} \label{eq:sndstd5} \mathcal{L}_{v} = -\sigma(\Phi^{\rm T}(s_t)) \end{equation} The final objective function, with the scaling parameters $\beta_{1}=\beta_{2}=0.0001$ (found experimentally), is defined as \begin{equation} \label{eq:sndsd6} \mathcal{L} = \frac{1}{I J} (\mathcal{L}_{\rm GL} + \mathcal{L}_{\rm LL} + \beta_{1} \mathcal{L}_{n}) + \beta_{2} \mathcal{L}_{v} \end{equation} The \textbf{SND-VIC} method is based on the VICReg algorithm \citep{Bardes2022}.
The regularization function consists of three terms: the invariance term, which brings the feature vectors closer to each other; the variance term, which ensures that the feature vectors within one batch have different values; and the covariance term, which decorrelates the feature vector components and prevents informational collapse. The original method does not need any negative samples: it only takes the input, creates two augmented versions, and updates their feature vectors using the mentioned terms of the regularization function. Our version uses the state $s_t$ and its successor $s_{t+1}$ (the same as ST-DIM; a simple diagram can be found in Fig.~\ref{fig:std_dim_idea}) instead of two augmentations of the same state. The variance regularization term $\mathcal{L}_{v(Z)}$ is defined as a hinge function on the standard deviation of the features along the batch dimension \begin{equation} \label{eq:sndvic1} \mathcal{L}_{v(Z)} = \frac{1}{d} \sum_{j=1}^{d} \max(0; \tau - \sigma(Z_j)) \end{equation} where $d$ is the dimensionality of the feature space, $Z_{j}$ is the vector of the $j$-th feature components across the batch $Z$, $\sigma$ is the standard deviation, and $\tau = 1$ is a constant target value for the standard deviation.
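The variance term above can be sketched directly; the batch below is a random placeholder rather than real target-model features, and a small epsilon is added inside the square root for numerical stability (an implementation detail, not part of the equation):

```python
import numpy as np

# Minimal sketch of the variance regularization term: a hinge on the
# per-dimension standard deviation of the features along the batch axis,
# with target tau = 1.

def variance_term(Z, tau=1.0, eps=1e-8):
    """L_v(Z) = (1/d) * sum_j max(0, tau - sigma(Z_j))."""
    std = np.sqrt(Z.var(axis=0) + eps)           # sigma(Z_j) along the batch
    return float(np.mean(np.maximum(0.0, tau - std)))

rng = np.random.default_rng(0)
collapsed = np.ones((32, 8))                     # all feature vectors identical
spread = rng.normal(0.0, 2.0, size=(32, 8))      # well-spread features

loss_collapsed = variance_term(collapsed)        # close to tau: collapse is penalized
loss_spread = variance_term(spread)              # zero: enough variance already
```

A collapsed batch is penalized with a loss near $\tau$, while a batch whose per-dimension spread already exceeds $\tau$ contributes nothing, which is exactly the anti-collapse role of this term.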
The covariance regularization term $\mathcal{L}_{c(Z)}$ is defined as the sum of the squared off-diagonal coefficients of the covariance matrix $C(Z)$ \begin{equation} \label{eq:sndvic2} \mathcal{L}_{c(Z)} = \frac{1}{d} \sum_{i\neq j} [C(Z)]^{2}_{i,j} \end{equation} The invariance criterion $\mathcal{L}_{s(Z,Z')}$ between two batches $Z$ and $Z'$ is defined as the mean squared Euclidean distance between each pair of feature vectors \begin{equation} \label{eq:sndvic3} \mathcal{L}_{s(Z, Z')} = \frac{1}{d}\sum_{i = 1}^{d}\|Z_i - Z'_i\|^2_{2} \end{equation} The overall loss $\mathcal{L}$ then takes the form \begin{equation} \label{eq:sndvic4} \mathcal{L} = \lambda \mathcal{L}_{s(Z, Z')} + \mu \left[\mathcal{L}_{v(Z)} + \mathcal{L}_{v(Z')}\right] + \nu \left[\mathcal{L}_{c(Z)} + \mathcal{L}_{c(Z')}\right] \end{equation} where the scaling parameters are set to $\lambda=1$, $\mu=1$ and $\nu=1/25$. \section{Experiments} \label{sec:exper} Altogether, we tested our methods on 10 environments (Atari and Procgen) that are considered difficult for exploration. These include 6 Atari environments: Montezuma's Revenge, Gravitar, Venture, Private Eye, Pitfall and Solaris. The agent receives a reward of +1 for each increase in the score, regardless of its size; it does not receive any other reward or punishment. The state is represented by 4 consecutive greyscale frames, each with 96$\times$96 pixels and 256 intensity levels. The action space is discrete, consisting of 18 actions, of which only some are meaningful (depending on the environment); the other actions have no impact on the environment. We also tested 4 Procgen environments: Coinrun, Caveflyer, Jumper and Climber. Procgen is a set of procedurally generated environments designed primarily for testing an agent's generalisation \citep{cobbe2020procgen}. The paper reports several generalisation problems in RL, requiring special training and a huge number of samples.
For our purpose, interesting findings are provided in Appendix B.1 of \citep{cobbe2020procgen}: for several seeds, the baseline agent was not able to reach a non-zero score. These seeds lead to hard-exploration environments with only a single reward at the end. Together with the fast execution of these environments (thousands of FPS on a single CPU core), this makes Procgen a good candidate for our experiments. The state is represented by RGB color images with a size of 64$\times$64 pixels. The action space is discrete, consisting of 15 actions. Preliminary experiments were performed in \citep{pechac2022intrinsic}. \subsection{Training setup} We ran 9 simulations for each environment, taking 128M steps for Atari and 64M steps for Procgen games. Before the main training, we tried 3 hand-selected settings of hyperparameters for the individual motivational models (mainly the scaling of the motivational signal, or the regularization terms) and always chose the one with the best results. These short probes lasted 32M (Atari) or 16M (Procgen) steps and consisted of 2 to 3 simulations. All agents were trained with the PPO algorithm \citep{schulman2017proximal}, using the Adam algorithm \citep{kingma2015adam} to optimize the parameters of all modules. The basic agent consists of an actor and a critic, which are two multi-layer perceptrons sharing a common convolutional neural network (CNN) that processes the video input. The critic has two outputs (heads), one estimating the value function for the external reward and the other for the intrinsic reward. We used orthogonal weight initialisation with a magnitude of $\sqrt{2}$. The model architectures are presented in Figures \ref{img:ppo_arch} to \ref{img:cnd_learned_arch}. The motivational module consists of two CNNs (the target and the learning network), which receive a single frame as input. The learning network has two additional linear layers to provide increased capacity over the target model. \begin{figure*}[t!]
\centering \includegraphics[width=13cm]{fig/diagrams/cnd-ppo_model.png} \caption{The PPO agent model architecture.} \label{img:ppo_arch} \end{figure*} \begin{figure*}[t!] \centering \includegraphics[width=11cm]{fig/diagrams/cnd-target_arch.png} \caption{The target model architecture.} \label{img:cnd_target_arch} \end{figure*} \begin{figure*}[t!] \centering \includegraphics[width=13cm]{fig/diagrams/cnd-learned_arch.png} \caption{The learning model architecture.} \label{img:cnd_learned_arch} \end{figure*} We followed \citep{burda2018exploration} for setting the hyperparameters, to make our results more comparable. We ran 128 parallel environments. For Atari we used 1M samples for each environment (128M frames in total), for Procgen we used 0.5M samples for each environment (64M frames in total). In the Atari experiments we used observations of 4 stacked, greyscale, downsampled frames. For Procgen we used 2-frame stacking with fully RGB-colored observations. The intrinsic motivation modules used no frame stacking: a single greyscale image for the Atari environments and a single RGB image for Procgen. The summary of all environment hyperparameters is in Table \ref{tab:env_hyperparameters}. The discount factors were set to $\gamma^{\rm ext} = 0.998$ for the external reward and $\gamma^{\rm intr} = 0.99$ for the intrinsic reward. We found intrinsic reward scaling to be important; the best results were achieved for $\eta = 0.5$. The learning rate for all models was set to 0.0001 with the Adam optimizer. The actor and critic models used ReLU, and the motivation models worked best with the ELU activation function. We also found that a deeper model with 3$\times$3 convolutions works better than the standard Atari model using 8$\times$8 or 4$\times$4 convolutions \citep{mnih2013playing}. We retrained the RND models to obtain comparable results and observed faster convergence. The summary of the PPO agent's hyperparameters is in Table \ref{tab:agent_hyperparameters}.
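As a sketch of how the two critic heads can feed a single PPO update under the hyperparameters above (rollout 128, $\gamma^{\rm ext}=0.998$, $\gamma^{\rm intr}=0.99$, GAE $\lambda=0.95$, advantage coefficients 2.0 and 1.0): each reward stream gets its own GAE advantage and the two are combined linearly. The rollout data below are random placeholders, and the exact combination in the released code may differ in detail:

```python
import numpy as np

# Sketch: per-stream GAE advantages combined with the coefficients from
# the agent hyperparameter table. All rollout values are placeholders.

def gae(rewards, values, gamma, lam=0.95):
    """Generalized advantage estimation; values carries one extra bootstrap entry."""
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

rng = np.random.default_rng(3)
T = 128                                             # rollout length
r_ext = (rng.uniform(size=T) < 0.01).astype(float)  # sparse external reward
r_intr = 0.5 * rng.uniform(size=T)                  # scaled intrinsic reward (eta = 0.5)
v_ext = rng.normal(size=T + 1)                      # external-value head (+ bootstrap)
v_intr = rng.normal(size=T + 1)                     # intrinsic-value head (+ bootstrap)

a_ext = gae(r_ext, v_ext, gamma=0.998)
a_intr = gae(r_intr, v_intr, gamma=0.99)
advantage = 2.0 * a_ext + 1.0 * a_intr              # combined advantage for PPO
```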
More hyperparameters and further details of the learning process and the architectures of the modules can be found in our source codes. \begin{table}[thb] \scriptsize \centering \caption{Environment hyperparameters} \begin{tabular}{l|ll} Hyperparameter & Atari & Procgen \\ \hline\hline Observation downsampling & 96$\times$96 & 64$\times$64 \\ Frame stacking & 4 & 2 \\ State shape for PPO & 4$\times$96$\times$96 & 6$\times$64$\times$64 \\ State shape for IM modules & 1$\times$96$\times$96 & 3$\times$64$\times$64 \\ Parallel environments count & 128 & 128 \\ State normalisation & $s/255$ & $s/255$ \\ Samples per environment & 1M & 0.5M \\ \hline \end{tabular} \label{tab:env_hyperparameters} \end{table} \begin{table}[thb] \scriptsize \centering \caption{Agent's hyperparameters} \begin{tabular}{l|l} Hyperparameter & Value \\ \hline\hline PPO model learning rate & $0.0001$ \\ Target model $\Phi^{\rm T}$ learning rate & $0.0001$ \\ Learned model $\Phi^{\rm L}$ learning rate & $0.0001$ \\ Discount factor $\gamma^{\rm ext}$ & $0.998$ \\ Discount factor $\gamma^{\rm intr}$ & $0.99$ \\ Advantages ext coefficient & $2.0$ \\ Advantages intr coefficient & $1.0$ \\ Intrinsic reward scaling & $0.5$ \\ Rollout length & $128$ \\ Number of optimization epochs & $4$ \\ Entropy coefficient & $0.001$ \\ Epsilon clipping & $0.1$ \\ Gradient norm clipping & $0.5$ \\ GAE $\lambda$ coefficient & $0.95$ \\ Optimizer & Adam \\ Weight initialisation & orthogonal \\ \hline \end{tabular} \label{tab:agent_hyperparameters} \end{table} \subsection{State preprocessing} \label{sec:exp2} The state, before entering the motivation module of the SND model, can undergo preprocessing. We tested three preprocessing methods: \begin{enumerate} \item State normalization using the running mean and standard deviation, \item Subtraction of the running mean value from the state, \item No preprocessing.
\end{enumerate} We performed two training runs for each preprocessing method, with 32M steps in the Montezuma's Revenge environment. For testing we used the SND-STD model. Table~\ref{tab:res1} demonstrates that the state preprocessing did not have a significant effect on the agent's performance (maximum reward achieved), only on the speed of learning. This also agrees with our assumption that the network itself, trained using the self-supervised loss function, should be able to learn operations such as mean subtraction or normalization. Therefore it is not necessary for the designer to put them into the learning process explicitly. These conclusions still need to be confirmed by statistical analysis. RND used mean subtraction; SND-V, SND-STD and SND-VIC did not use input state preprocessing. \begin{figure*}[t!] \centering \includegraphics[width=10cm]{fig/results/aux_experiments/cnd_vicreg_architecture.png} \caption{Agent's performance based on various learned model architectures, evaluated in terms of the overall score, external reward obtained and the number of rooms explored.} \label{img:result_cnd_learned_arch} \end{figure*} \begin{table}[thb] \scriptsize \centering \caption{Average cumulative reward (with standard deviation) per episode for all 3 preprocessing methods and the maximal reward achieved by the agents.} \begin{tabular}{l|ll} Method & Average reward & Max. reward \\ \hline\hline normalization & 3.60 $\pm$ 0.14 & 7 \\ mean subtraction & 4.13 $\pm$ 0.12 & 7 \\ none & 2.31 $\pm$ 0.20 & 7 \\ \hline \end{tabular} \label{tab:res1} \end{table} \subsection{Results} We performed several quick experiments on Montezuma's Revenge to find an optimal setup. First, we tested the optimal architecture of the $\Phi^{\rm T}$ and $\Phi^{\rm L}$ models.
We experimented with 4 architectures: \begin{enumerate} \item identical models, one fully connected output layer, ELU activations, \item identical models, two fully connected layers, ELU activations, \item identical models, one fully connected layer, ReLU activations, \item asymmetric models, three fully connected layers for $\Phi^{\rm L}$ and none for $\Phi^{\rm T}$, ELU activations (following the fully connected convention). \end{enumerate} The results for the different model architectures are shown in Figure~\ref{img:result_cnd_learned_arch}. The best result was achieved by the asymmetric architecture. Next, we tested the effect of different augmentations. We considered three scenarios: \begin{enumerate} \item uniform noise, $\langle -0.2, 0.2 \rangle$, \item uniform noise, random tile masking (tiles with sizes 1, 2, 4, 8, 12, 16), \item uniform noise, random tile masking, random convolution filter. \end{enumerate} \begin{figure*}[t!] \centering \includegraphics[width=10cm]{fig/results/aux_experiments/cnd_vicreg_augmentations.png} \caption{Agent performance for different state augmentations, evaluated in terms of the overall score, external reward obtained and the number of rooms explored.} \label{img:result_cnd_aug} \end{figure*} Noise augmentation is commonly used in supervised image training. Tile masking forces the model to reconstruct incomplete information. A random convolution filter helps the model learn to focus on informative features rather than on texture colors. The results for the different state augmentations are shown in Figure~\ref{img:result_cnd_aug}, revealing that the third scenario worked best. We hypothesise that this is caused by the shallow target model, which is not able to learn sufficient transformation invariants. The models using noise or tile masking performed similarly. \begin{figure*}[t!]
\centering \includegraphics[width=10cm]{fig/results/aux_experiments/cnd_scaling.png} \caption{Agent's performance for different intrinsic reward scaling methods, evaluated in terms of the overall score, external reward obtained and the number of rooms explored.} \label{img:result_cnd_scaling} \end{figure*} \begin{figure}[t!] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=4.1cm]{fig/results/montezuma.png} \caption{Montezuma's Revenge} \label{fig:res2a} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=4.1cm]{fig/results/gravitar.png} \caption{Gravitar} \label{fig:res2b} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=4.1cm]{fig/results/venture.png} \caption{Venture} \label{fig:res2c} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=4.1cm]{fig/results/private_eye.png} \caption{Private Eye} \label{fig:res2d} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=4.1cm]{fig/results/solaris.png} \caption{Solaris} \label{fig:res2f} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=4.1cm]{fig/results/caveflyer.png} \caption{Caveflyer} \label{fig:res2g} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=4.1cm]{fig/results/coinrun.png} \caption{Coinrun} \label{fig:res2h} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=4.1cm]{fig/results/jumper.png} \caption{Jumper} \label{fig:res2i} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=4.1cm]{fig/results/climber.png} \caption{Climber} \label{fig:res2j} \end{subfigure} \caption{The cumulative external reward per episode (with the standard deviation) received by the agent from the tested environment. 
We omitted the graph for the Pitfall environment, where no algorithm was successful and all achieved zero reward. The horizontal axis shows the number of steps in millions, the vertical axis shows the external reward.} \label{fig:result} \end{figure} \begin{table}[thb] \scriptsize \centering \caption{Average cumulative external reward per episode for the tested models. The best model for each environment is shown in bold face.} \begin{tabular}{l|ccccc} \hline & Baseline & RND & SND-V & SND-STD & SND-VIC \\ \hline\hline Montezuma & 0.00 $\pm$ 0.00 & 5.33 $\pm$ 0.23 & \textbf{10.59 $\pm$ 1.99} & 7.76 $\pm$ 1.73 & 8.45 $\pm$ 1.12 \\ Gravitar & 1.19 $\pm$ 0.00 & 6.63 $\pm$ 1.55 & 4.38 $\pm$ 0.46 & 5.89 $\pm$ 0.43 & \textbf{10.05 $\pm$ 0.66} \\ Venture & 0.00 $\pm$ 0.00 & 11.18 $\pm$ 0.42 & 10.95 $\pm$ 0.14 & 9.54 $\pm$ 0.90 & \textbf{11.36 $\pm$ 0.37} \\ Private Eye & 0.81 $\pm$ 0.01 & 2.41 $\pm$ 0.95 & \textbf{6.59 $\pm$ 0.14} & 3.79 $\pm$ 1.24 & 5.93 $\pm$ 0.47 \\ Pitfall & 0.00 $\pm$ 0.00 & 0.00 $\pm$ 0.00 & 0.00 $\pm$ 0.00 & 0.00 $\pm$ 0.00 & 0.00 $\pm$ 0.00 \\ Solaris & 8.08 $\pm$ 0.15 & 3.84 $\pm$ 0.25 & 3.96 $\pm$ 0.41 & \textbf{11.61 $\pm$ 1.12} & 10.85 $\pm$ 1.20 \\ Caveflyer & 0.00 $\pm$ 0.00 & 0.00 $\pm$ 0.00 & 7.28 $\pm$ 1.62 & 10.86 $\pm$ 4.37 & \textbf{11.14 $\pm$ 2.35} \\ Coinrun & 0.00 $\pm$ 0.00 & 0.25 $\pm$ 0.50 & \textbf{9.40 $\pm$ 0.05} & 9.40 $\pm$ 0.07 & 2.55 $\pm$ 3.63 \\ Jumper & 0.00 $\pm$ 0.00 & 0.03 $\pm$ 0.02 & 9.22 $\pm$ 0.21 & 9.76 $\pm$ 0.04 & \textbf{9.76 $\pm$ 0.03} \\ Climber & 0.00 $\pm$ 0.00 & 0.00 $\pm$ 0.00 & 3.32 $\pm$ 3.04 & 1.48 $\pm$ 2.40 & \textbf{4.93 $\pm$ 2.88} \\ \hline \end{tabular} \label{tab:res2} \end{table} \begin{table}[thb] \scriptsize \centering \caption{Average maximal score reached by the tested models on the Atari environments.
The best model for each environment is shown in bold face.} \begin{tabular}{l|ccccc} \hline & Baseline & RND & SND-V & SND-STD & SND-VIC \\ \hline\hline Montezuma & 400 & 6689 & \textbf{21565} & 7212 & 7838 \\ Gravitar & 2611 & 5600 & 2741 & 4643 & \textbf{6712} \\ Venture & 22 & 2167 & 1787 & 2138 & \textbf{2188} \\ Private Eye & 14870 & 14996 & 4213 & 15089 & \textbf{17313} \\ Pitfall & 0 & 0 & 0 & 0 & 0 \\ Solaris & 12344 & 10667 & 11582 & \textbf{12460} & 11865 \\ \hline \end{tabular} \label{tab:res3} \end{table} Finally, we tested the intrinsic reward scaling. A low value can lead to the agent getting stuck in a non-exploring policy. A high value can prevent the agent from collecting extrinsic rewards, or make it too sensitive to small unimportant changes, both causing instability. We tested three values: 0.25, 0.5 and 1.0. Figure~\ref{img:result_cnd_scaling} shows that the best score is achieved with a reward scaling value of 0.5. However, some environments provide better results after fine-tuning to 0.25. Figure~\ref{fig:result} captures the cumulative external reward per episode and the standard deviation of the tested models in 9 different environments. In Table~\ref{tab:res2}, these indicators are averaged over the number of episodes. Table~\ref{tab:res3} shows the maximum achieved score for the Atari environments, which is often used for model comparison (although we would like to emphasize that the agent never receives this score as a reward, and therefore maximizing it is not its goal). Of the tested environments, the Pitfall game exceeded the capabilities of all tested algorithms, since none of them achieved a single reward point. In the remaining 9 environments, the best results were achieved by the models based on SND motivation, in 8 cases with a significant lead over the existing algorithms (in the Venture environment, the results were almost the same as those of the RND model).
When evaluating the score, the SND models achieved the highest score in 5 Atari environments (with the exception of the already mentioned Pitfall), and in 3 cases (Montezuma's Revenge, Gravitar, Private Eye) it was significantly higher than that of the compared models. \subsection{Analysis of results} \begin{figure}[thb] \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=7cm]{fig/images/cnd_random.png} \caption{Random target model} \label{fig:target_features_random} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=7cm]{fig/images/cnd_trained.png} \caption{Trained target model} \label{fig:trained_features_random} \end{subfigure} \caption{The t-SNE projected feature representations of the target model in the Montezuma's Revenge task. The colors correspond to different rooms.} \label{fig:cnd_feature_space} \end{figure} \begin{figure}[t!] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=4cm]{fig/analysis/montezuma.png} \caption{Montezuma's Revenge} \label{fig:analysis2a} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=4cm]{fig/analysis/gravitar.png} \caption{Gravitar} \label{fig:analysis2b} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=4cm]{fig/analysis/venture.png} \caption{Venture} \label{fig:analysis2c} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=4cm]{fig/analysis/private_eye.png} \caption{Private Eye} \label{fig:analysis2d} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=4cm]{fig/analysis/solaris.png} \caption{Solaris} \label{fig:analysis2e} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=4cm]{fig/analysis/caveflyer.png} \caption{Caveflyer} \label{fig:analysis2f} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=4cm]{fig/analysis/coinrun.png} \caption{Coinrun}
\label{fig:analysis2g} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=4cm]{fig/analysis/jumper.png} \caption{Jumper} \label{fig:analysis2h} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=4cm]{fig/analysis/climber.png} \caption{Climber} \label{fig:analysis2i} \end{subfigure} \caption{Eigenvalues of the linear envelope obtained using the PCA method, ordered descendingly, which show the stretching of the feature space in individual dimensions. The horizontal axis shows the indices of the eigenvalues; the vertical axis denotes the magnitude of the eigenvalues on a logarithmic scale. Based on these data, we tried to find out whether there is a connection between the shape of the feature space and the performance of the given model. The graph for Pitfall was omitted, since it looked very similar to Private Eye. } \label{fig:cnd_analysis} \end{figure} \begin{table}[t!] \scriptsize \centering \caption{Description of the target model feature space created by four selected methods. For the evaluation, we used the following parameters: mean value and standard deviation of the $L_{2}$-norm of features, and the 25th, 50th, 75th and 95th percentiles of the eigenvalues of a linear envelope, to obtain a rough representation of the stretching of the feature space in individual dimensions.
To this, we add the maximum achieved external reward ($\max{r_{\rm ext}}$), so that it is possible to search for a connection between the parameters of the feature space and the performance of the method.} \begin{tabular}{l|l|cccccc} \hline Environment & Method & $\max({r_{\rm ext}})$ & $L_2$-norm & $Q_{25}$ & $Q_{50}$ & $Q_{75}$ & $Q_{95}$ \\ \hline\hline \multirow{4}{*}{Montezuma} & \multicolumn{1}{l|}{RND} & \multicolumn{1}{c}{9} & \multicolumn{1}{c}{1.93 $\pm$ 1.15} & \multicolumn{1}{c}{9} & \multicolumn{1}{c}{24} & \multicolumn{1}{c}{89} & \multicolumn{1}{c}{778} \\ & \multicolumn{1}{l|}{SND-STD} & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{4.85 $\pm$ 2.45} & \multicolumn{1}{c}{9} & \multicolumn{1}{c}{26} & \multicolumn{1}{c}{109} & \multicolumn{1}{c}{6835} \\ & \multicolumn{1}{l|}{SND-VIC} & \multicolumn{1}{c}{17} & \multicolumn{1}{c}{6.22 $\pm$ 2.99} & \multicolumn{1}{c}{3066} & \multicolumn{1}{c}{8429} & \multicolumn{1}{c}{14858} & \multicolumn{1}{c}{25634} \\ & \multicolumn{1}{l|}{SND-V} & \multicolumn{1}{c}{34} & \multicolumn{1}{c}{6.41 $\pm$ 3.96} & \multicolumn{1}{c}{98} & \multicolumn{1}{c}{378} & \multicolumn{1}{c}{1367} & \multicolumn{1}{c}{8944} \\ \hline \multirow{4}{*}{Gravitar} & \multicolumn{1}{l|}{SND-V} & \multicolumn{1}{c}{9} & \multicolumn{1}{c}{11.94 $\pm$ 3.81} & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{341} & \multicolumn{1}{c}{2919} & \multicolumn{1}{c}{24291} \\ & \multicolumn{1}{l|}{SND-STD} & \multicolumn{1}{c}{15} & \multicolumn{1}{c}{11.66 $\pm$ 2.82} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{30} & \multicolumn{1}{c}{250} & \multicolumn{1}{c}{20402} \\ & \multicolumn{1}{l|}{RND} & \multicolumn{1}{c}{20} & \multicolumn{1}{c}{1.12 $\pm$ 0.81} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{24} & \multicolumn{1}{c}{139} \\ & \multicolumn{1}{l|}{SND-VIC} & \multicolumn{1}{c}{21} & \multicolumn{1}{c}{7.26 $\pm$ 3.01} & \multicolumn{1}{c}{2263} & \multicolumn{1}{c}{6837} & \multicolumn{1}{c}{15173} &
\multicolumn{1}{c}{27908} \\ \hline \multirow{4}{*}{Private Eye} & \multicolumn{1}{l|}{SND-V} & \multicolumn{1}{c}{7} & \multicolumn{1}{c}{15.47 $\pm$ 6.50} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{27} & \multicolumn{1}{c}{6801} & \multicolumn{1}{c}{23823} \\ & \multicolumn{1}{l|}{RND} & \multicolumn{1}{c}{9} & \multicolumn{1}{c}{1.40 $\pm$ 0.65} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{9} & \multicolumn{1}{c}{37} & \multicolumn{1}{c}{410} \\ & \multicolumn{1}{l|}{SND-STD} & \multicolumn{1}{c}{9} & \multicolumn{1}{c}{8.03 $\pm$ 4.46} & \multicolumn{1}{c}{15} & \multicolumn{1}{c}{54} & \multicolumn{1}{c}{371} & \multicolumn{1}{c}{24025} \\ & \multicolumn{1}{l|}{SND-VIC} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{4.29 $\pm$ 3.56} & \multicolumn{1}{c}{232} & \multicolumn{1}{c}{1401} & \multicolumn{1}{c}{6197} & \multicolumn{1}{c}{47342} \\ \hline \multirow{4}{*}{Pitfall} & \multicolumn{1}{l|}{RND} & \multicolumn{1}{c}{0} & \multicolumn{1}{c}{1.31 $\pm$ 0.39} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{47} & \multicolumn{1}{c}{410} \\ & \multicolumn{1}{l|}{SND-V} & \multicolumn{1}{c}{0} & \multicolumn{1}{c}{2.81 $\pm$ 1.43} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{124} & \multicolumn{1}{c}{7358} & \multicolumn{1}{c}{27402} \\ & \multicolumn{1}{l|}{SND-STD} & \multicolumn{1}{c}{0} & \multicolumn{1}{c}{10.64 $\pm$ 2.81} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{32} & \multicolumn{1}{c}{319} & \multicolumn{1}{c}{39904} \\ & \multicolumn{1}{l|}{SND-VIC} & \multicolumn{1}{c}{0} & \multicolumn{1}{c}{5.62 $\pm$ 1.79} & \multicolumn{1}{c}{153} & \multicolumn{1}{c}{1542} & \multicolumn{1}{c}{9827} & \multicolumn{1}{c}{44126} \\ \hline \multirow{4}{*}{Venture} & \multicolumn{1}{l|}{SND-V} & \multicolumn{1}{c}{14} & \multicolumn{1}{c}{3.67 $\pm$ 3.07} & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{130} & \multicolumn{1}{c}{3921} & \multicolumn{1}{c}{24443} \\ & \multicolumn{1}{l|}{RND} & \multicolumn{1}{c}{18} & \multicolumn{1}{c}{0.68 $\pm$
0.80} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{9} & \multicolumn{1}{c}{30} & \multicolumn{1}{c}{188} \\ & \multicolumn{1}{l|}{SND-STD} & \multicolumn{1}{c}{18} & \multicolumn{1}{c}{4.50 $\pm$ 3.79} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{15} & \multicolumn{1}{c}{115} & \multicolumn{1}{c}{32272} \\ & \multicolumn{1}{l|}{SND-VIC} & \multicolumn{1}{c}{18} & \multicolumn{1}{c}{4.91 $\pm$ 3.89} & \multicolumn{1}{c}{1006} & \multicolumn{1}{c}{3644} & \multicolumn{1}{c}{11256} & \multicolumn{1}{c}{41899} \\ \hline \multirow{4}{*}{Solaris} & \multicolumn{1}{l|}{RND} & \multicolumn{1}{c}{55} & \multicolumn{1}{c}{3.68 $\pm$ 3.42} & \multicolumn{1}{c}{29} & \multicolumn{1}{c}{62} & \multicolumn{1}{c}{172} & \multicolumn{1}{c}{776} \\ & \multicolumn{1}{l|}{SND-V} & \multicolumn{1}{c}{65} & \multicolumn{1}{c}{10.19 $\pm$ 6.39} & \multicolumn{1}{c}{13} & \multicolumn{1}{c}{165} & \multicolumn{1}{c}{3337} & \multicolumn{1}{c}{18085} \\ & \multicolumn{1}{l|}{SND-STD} & \multicolumn{1}{c}{81} & \multicolumn{1}{c}{5.77 $\pm$ 4.28} & \multicolumn{1}{c}{12} & \multicolumn{1}{c}{37} & \multicolumn{1}{c}{211} & \multicolumn{1}{c}{11171} \\ & \multicolumn{1}{l|}{SND-VIC} & \multicolumn{1}{c}{87} & \multicolumn{1}{c}{6.00 $\pm$ 4.14} & \multicolumn{1}{c}{1649} & \multicolumn{1}{c}{5079} & \multicolumn{1}{c}{13163} & \multicolumn{1}{c}{32377} \\ \hline\hline \multirow{4}{*}{Caveflyer} & \multicolumn{1}{l|}{RND} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{5.06 $\pm$ 2.98} & \multicolumn{1}{c}{48} & \multicolumn{1}{c}{128} & \multicolumn{1}{c}{474} & \multicolumn{1}{c}{4792} \\ & \multicolumn{1}{l|}{SND-V} & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{10.35 $\pm$ 8.37} & \multicolumn{1}{c}{28} & \multicolumn{1}{c}{293} & \multicolumn{1}{c}{2007} & \multicolumn{1}{c}{20416} \\ & \multicolumn{1}{l|}{SND-STD} & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{9.69 $\pm$ 5.90} & \multicolumn{1}{c}{11} & \multicolumn{1}{c}{34} & \multicolumn{1}{c}{206} & \multicolumn{1}{c}{13452} \\ &
\multicolumn{1}{l|}{SND-VIC} & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{9.89 $\pm$ 5.70} & \multicolumn{1}{c}{23} & \multicolumn{1}{c}{271} & \multicolumn{1}{c}{2638} & \multicolumn{1}{c}{49517} \\ \hline \multirow{4}{*}{Climber} & \multicolumn{1}{l|}{RND} & \multicolumn{1}{c}{0} & \multicolumn{1}{c}{7.47 $\pm$ 3.53} & \multicolumn{1}{c}{12} & \multicolumn{1}{c}{32} & \multicolumn{1}{c}{106} & \multicolumn{1}{c}{1500} \\ & \multicolumn{1}{l|}{SND-V} & \multicolumn{1}{c}{11} & \multicolumn{1}{c}{17.31 $\pm$ 6.45} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{53} & \multicolumn{1}{c}{5091} & \multicolumn{1}{c}{25463} \\ & \multicolumn{1}{l|}{SND-STD} & \multicolumn{1}{c}{11} & \multicolumn{1}{c}{48.56 $\pm$ 22.16} & \multicolumn{1}{c}{12} & \multicolumn{1}{c}{63} & \multicolumn{1}{c}{788} & \multicolumn{1}{c}{49506} \\ & \multicolumn{1}{l|}{SND-VIC} & \multicolumn{1}{c}{11} & \multicolumn{1}{c}{13.41 $\pm$ 4.01} & \multicolumn{1}{c}{144} & \multicolumn{1}{c}{2313} & \multicolumn{1}{c}{14132} & \multicolumn{1}{c}{78074} \\ \hline \multirow{4}{*}{Coinrun} & \multicolumn{1}{l|}{RND} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{5.07 $\pm$ 3.00} & \multicolumn{1}{c}{40} & \multicolumn{1}{c}{109} & \multicolumn{1}{c}{389} & \multicolumn{1}{c}{4029} \\ & \multicolumn{1}{l|}{SND-V} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{12.02 $\pm$ 7.20} & \multicolumn{1}{c}{23} & \multicolumn{1}{c}{149} & \multicolumn{1}{c}{2473} & \multicolumn{1}{c}{21553} \\ & \multicolumn{1}{l|}{SND-STD} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{29.23 $\pm$ 17.06} & \multicolumn{1}{c}{54} & \multicolumn{1}{c}{200} & \multicolumn{1}{c}{1651} & \multicolumn{1}{c}{76730} \\ & \multicolumn{1}{l|}{SND-VIC} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{11.23 $\pm$ 5.77} & \multicolumn{1}{c}{332} & \multicolumn{1}{c}{1140} & \multicolumn{1}{c}{5414} & \multicolumn{1}{c}{44014} \\ \hline \multirow{4}{*}{Jumper} & \multicolumn{1}{l|}{RND} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{8.63 $\pm$ 2.19} &
\multicolumn{1}{c}{139} & \multicolumn{1}{c}{337} & \multicolumn{1}{c}{1008} & \multicolumn{1}{c}{6733} \\ & \multicolumn{1}{l|}{SND-V} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{14.97 $\pm$ 5.49} & \multicolumn{1}{c}{63} & \multicolumn{1}{c}{314} & \multicolumn{1}{c}{3310} & \multicolumn{1}{c}{20896} \\ & \multicolumn{1}{l|}{SND-STD} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{15.51 $\pm$ 3.73} & \multicolumn{1}{c}{61} & \multicolumn{1}{c}{189} & \multicolumn{1}{c}{1044} & \multicolumn{1}{c}{22008} \\ & \multicolumn{1}{l|}{SND-VIC} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{20.79 $\pm$ 6.18} & \multicolumn{1}{c}{47} & \multicolumn{1}{c}{257} & \multicolumn{1}{c}{6271} & \multicolumn{1}{c}{74860} \\ \hline\hline \end{tabular} \label{tab:analysis1} \end{table} \begin{figure*}[t!] \begin{subfigure}{1.0\textwidth} \centering\includegraphics[width=12cm]{fig/results/novelty_detection/rnd_result_summary.png} \caption{RND} \label{fig:nov_rnd_result_summary} \end{subfigure} \\ \begin{subfigure}{1.0\textwidth} \centering\includegraphics[width=12cm]{fig/results/novelty_detection/cnd_nce_summary.png} \caption{SND-STD} \label{fig:nov_nce_result_summary} \end{subfigure} \\ \begin{subfigure}{1.0\textwidth} \centering\includegraphics[width=12cm]{fig/results/novelty_detection/cnd_msev_result_summary.png} \caption{SND-V} \label{fig:nov_mse_result_summary} \end{subfigure} \\ \begin{subfigure}{1.0\textwidth} \centering\includegraphics[width=12cm]{fig/results/novelty_detection/cnd_vicreg_result_summary.png} \caption{SND-VIC} \label{fig:nov_vicreg_result_summary} \end{subfigure} \caption{Novelty detection for the different regularisation losses, as they react to the different future windows. The states were collected in Montezuma's Revenge with our best agent; red dots correspond to the state examples above.} \end{figure*} We can visualise the learned feature vectors $Z$ in 2D using the t-SNE method \citep{tSNE2008}.
Figure~\ref{fig:cnd_feature_space} shows the resulting features of the trained $\Phi^{\rm T}$ on Atari Montezuma's Revenge. The randomly initialised target network (the same as in \cite{burda2018exploration}, shown in Figure~\ref{fig:target_features_random}) can distinguish well between different rooms; however, within a room the variance is low, pointing to a lack of exploration abilities. On the other hand, the self-supervised regularized target model in Figure~\ref{fig:trained_features_random} provides a much larger variance of features, which yields a more sensitive novelty detection signal. The main goal of the analysis was to find the differences (not only visually) between the individual feature spaces, to describe them with quantitative measures, and to look for a possible connection between these quantities and the performance of the algorithm. From the set of examined models for each environment, we always selected the model with the highest obtained reward and generated 10,000 samples of input states by running it in the environment. Subsequently, each model generated feature vectors for this sample of previously collected input states, giving us a feature space sample $Z$ for each model. Then, using principal component analysis (PCA), we found the linear envelope of the high-dimensional manifold that forms the feature space. We examined the mean value and especially the variance of the feature vectors, as well as the eigenvalues obtained using PCA, which indicate at least the basic shape of the feature space (i.e. the sizes of its individual dimensions). The results of this analysis are shown in Fig.~\ref{fig:cnd_analysis} and Tab.~\ref{tab:analysis1}. For the evaluation, we used the following parameters: the mean value and standard deviation of the $L_{2}$-norm of the features, and the 25th, 50th, 75th and 95th percentiles of the eigenvalues, to obtain a rough representation of the stretching of the feature space in individual dimensions.
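The feature-space statistics described above can be sketched as follows. This is a minimal sketch under the assumption that the eigenvalues of the sample covariance matrix play the role of the PCA eigenvalues; the function name is ours, not from the actual analysis code.

```python
import numpy as np

def feature_space_stats(Z, percentiles=(25, 50, 75, 95)):
    """Summarise a feature-space sample Z (n_samples x n_dims) with the
    quantities used in the analysis: mean/std of the L2-norm of the
    feature vectors, and percentiles of the PCA eigenvalues (the
    eigenvalues of the sample covariance matrix, i.e. the variances
    along the principal axes)."""
    norms = np.linalg.norm(Z, axis=1)
    Zc = Z - Z.mean(axis=0)                       # center the sample
    cov = Zc.T @ Zc / (len(Z) - 1)                # sample covariance
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending order
    return {
        "l2_mean": norms.mean(),
        "l2_std": norms.std(),
        "eig_percentiles": {p: np.percentile(eigvals, p) for p in percentiles},
        "eigvals": eigvals,   # descending, as plotted in the figure
    }

rng = np.random.default_rng(0)
Z = rng.normal(size=(10_000, 64))   # stands in for target-model features
stats = feature_space_stats(Z)
print(stats["l2_mean"], stats["eig_percentiles"][95])
```

A strongly concave eigenvalue curve (a few large eigenvalues, the rest near zero) then indicates that only a few dimensions of the feature space are actually used.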
It can be seen that in almost all cases the RND target model has smaller eigenvalues than the SND models. The $L_2$-norm values likewise suggest that the entire RND feature space has a smaller volume than the SND feature spaces. RND and SND-STD are similar in shape: their curves are convex, with SND-STD having more stretched dimensions. SND-V and SND-VIC also have similarly concave shapes, but SND-V stretches only about half of the available dimensions and then usually falls more steeply. The shape of the SND-VIC curve is ensured by the variance (eq.~\ref{eq:sndvic1}) and covariance (eq.~\ref{eq:sndvic2}) components of its loss function, whereas the missing decorrelation term in the SND-V loss function (eq.~\ref{eq:sndv2}) results in an uneven stretching of the dimensions. The SND-STD curve is convex and its dimensions are likewise used unevenly. After these analyses, we tried to improve the variance within the dimensions by adding a regularization term (eq.~\ref{eq:sndstd5}) that maximizes the variance within the feature vector. However, such a term has an expansive effect on the feature space and it was not possible to give it much weight: the loss function (eq.~\ref{eq:sndstd3}) of the ST-DIM algorithm is itself expansive, and adding another expansive term led to uncontrolled expansion of the feature space. Despite the small influence of the variance component of the loss function, the performance of SND-STD improved and it helped prevent agents from getting stuck in certain cases. If we compare the RND and SND-STD feature spaces in terms of eigenvalues, they look similar, yet the latter model achieved better results in 7 out of 9 environments. Our findings show that when training the target model, it is important to enforce the decorrelation of features and the equal use of all dimensions of the feature space.
Such a model seems to be relatively robust and sufficiently sensitive to novelty. Interestingly, for the Pitfall task (not shown in Figure~\ref{fig:cnd_analysis}), despite their failure, our methods still tried to take advantage of the feature space dimensions. From the analysis of the trained agents, we saw that they were able to explore several rooms, but each room contained enough moving objects to make the state space rich, which made it difficult to train the learned model and led to a very slow decrease of the internal reward (we observed similar behavior after short training sessions in other environments). We assume that with a larger number of training steps, the agent would eventually be able to reach the reward. Another way to compare the different regularisation losses is to study their evolution over time and their ability to provide a large IM signal for previously unseen states. For the purpose of exploration, the most important property is the ability to detect near-future states, which are very close to already seen ones. We collected a set of 2700 states from our best agent playing Montezuma's Revenge. During the experiment, we trained the IM modules only on past data and tested them on future data. The testing batch was selected from the following 4 time horizons, with respect to the agent being at step $n$ and testing batch indices $m$: \begin{enumerate} \item past: already seen states, $m<n$ \item near future: $n < m < n+128$ steps in the future \item far future: $m > n$ \item random: any batch from the set \end{enumerate} We hypothesised that Random Network Distillation provides a sufficient signal only at the beginning of learning; its convergence to zero leads to limited exploration abilities. This degradation corresponds to our results in Figure~\ref{fig:nov_rnd_result_summary}. On the other hand, continuously updated target models can provide useful signals for the entire run.
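The batch selection over the four time horizons can be sketched as follows; this is a minimal sketch, and the helper function and its names are illustrative rather than code from our implementation:

```python
import numpy as np

def horizon_batch(pool, n, horizon, batch_size=128, rng=None):
    # pool: indices of the collected states; n: the agent's current step.
    # The IM module is trained only on past indices (m < n) and tested here.
    if rng is None:
        rng = np.random.default_rng()
    pool = np.asarray(pool)
    if horizon == "past":            # already seen states, m < n
        cand = pool[pool < n]
    elif horizon == "near_future":   # n < m < n + 128 steps in the future
        cand = pool[(pool > n) & (pool < n + 128)]
    elif horizon == "far_future":    # m > n
        cand = pool[pool > n]
    elif horizon == "random":        # any batch from the whole set
        cand = pool
    else:
        raise ValueError(f"unknown horizon: {horizon}")
    return rng.choice(cand, size=min(batch_size, len(cand)), replace=False)

pool = np.arange(2700)   # the 2700 states collected in Montezuma's Revenge
batch = horizon_batch(pool, n=1000, horizon="near_future",
                      rng=np.random.default_rng(0))
assert batch.min() > 1000 and batch.max() < 1128
```

Evaluating the intrinsic reward on these four batch types separates what the module has already fitted (past) from what it should still flag as novel (near and far future).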
The corresponding results are displayed in Figures \ref{fig:nov_nce_result_summary}, \ref{fig:nov_mse_result_summary}, and \ref{fig:nov_vicreg_result_summary}. For all three losses, the intrinsic motivation is much higher for unseen states and does not converge to zero. The strong peaks for the near-future horizon correspond to the discovery of new rooms. The self-supervised regularisation prevents the motivation signal from collapsing to zero. This insight gives us requirements for a good exploration signal. For future research, it also gives us a simple methodology for testing exploration abilities without training a whole RL agent, which can be time-consuming. \section{Discussion} \label{sec:discussion} We introduced a class of intrinsic motivation algorithms based on the distillation error as a novelty indicator (SND), where the target model is trained using self-supervised learning. We adapted three existing self-supervised methods for this purpose and experimentally tested them on a set of environments that are considered difficult to explore. The proposed variants have been shown to eliminate the identified shortcomings of the RND model -- the need for good initialization, the low variance of the intrinsic reward across different states, and the loss of the motivational signal caused by the adaptation of the learning network. In the experiments, we tested the overall performance of the agents in 10 environments. With the exception of one environment, the SND algorithms achieved better results than the other methods with which we compared them. For the Atari environments, we also evaluated the achieved game scores so that they can be compared with other published models that we did not include in this work. In terms of score, too, the SND models dominated the compared models. In the analytical part, we focused on a deeper understanding of the SND methods. We used a geometric approach, trying to capture, at least in rough outline, the properties of the feature spaces.
A comparison between a randomly initialized feature space and a feature space formed using one of the SND algorithms confirms our assumption that self-supervised algorithms can distinguish even subtle differences within the state space. This turned out to be one of the weaknesses of the RND algorithm, which, while good at distinguishing between sufficiently different states (e.g. different rooms in Montezuma's Revenge), placed similar states close to each other in the feature space, making the work of the learned model easier. In the experiments, we thus observed a decrease in the standard deviation of the average intrinsic reward per episode, which meant that most of the visited states generated a similar reward. We experimented with different target model architectures, different augmentations, and intrinsic reward scaling. We found that a target model using ELU activations and only one fully connected layer, combined with a learned model with three hidden layers, performs best. Of the tested augmentations (noise, random tile masking, random convolutional filter), we found the best performance for a combination of uniform noise with random tile masking. A remaining open question is the augmentation that randomly downsamples the state and upsamples it back, which could help remove noise while preserving the representative information in the state vectors. We suggest investigating this idea in future research. The agent's performance is very sensitive to the scaling of the intrinsic reward. The best working value was $0.5$; however, we think this value should be optimized separately for each specific environment. We did not specifically investigate the robustness of SND methods with regard to the initialization of the target model (which was again a problem with RND).
We assume that self-supervised learning algorithms can cope with a poorly initialized model to a certain extent, but from our training experience we found that it is better to initialize the target models of SND-STD and SND-VIC to small values ($gain = 0.5$) and let them expand themselves, while SND-V was initialized like RND to higher values ($gain = \sqrt{2}$). Our experiments revealed that if the ST-DIM algorithm works on an incomplete dataset that keeps receiving new samples (the authors probably did not test it under such conditions), there is an instability and an exponential increase of activity in the feature space at certain moments. This is related to the use of the cross-entropy loss function at its core (which does not limit the values of its inputs, the logits), where derivatives can reach large values and subsequently inflate the entire feature space. During the development of the model, it turned out that it is best to minimize the $L_2$-norm of the logits that enter the cross-entropy. In addition, we tried to maximize the entropy of the distributions generating the respective logits and to minimize the $L_2$-norm of the global features; however, both approaches failed to sufficiently stabilize the algorithm. We also compared the effect of state preprocessing on the performance of the SND-STD model. It turned out that state preprocessing is not necessary, since it has no significant effect on the agent's performance. Finally, we performed an analysis of the novelty detection abilities of the selected methods. Compared with the RND baseline, we can conclude that this baseline suffers from a vanishing intrinsic reward problem. After adding the regularisation to the target model, much better features were obtained, with a significant change compared to the baseline; the vanishing of the intrinsic reward disappears for all the tested losses.
This is also cross-validated by the $t$-SNE feature visualisation, where the regularised features yield a much higher variance, which means a greater sensitivity to novelty. Based on the presented results, we can conclude that self-supervised learning methods are definitely promising for the creation of novelty detectors, which can be successfully used for intrinsic motivation and improve the agent's exploration. A direct extension of the SND methods will be the merging of the target model with the model to which the actor and critic are connected. We have already done some pilot research in this direction and it seems to be a feasible task. This would greatly optimize the entire model and speed up its training in terms of computing time. At the same time, we think that this approach can be an inspiration for a new class of algorithms that specialize in creating feature mappings capturing the relationship of the environment to the agent itself, since current self-supervised methods are agnostic to these relationships. \vspace{3.16314mm} \bibliographystyle{apalike}
\subsection{ \refstepcounter{equation} \noindent {\bf \arabic{section}.\arabic{equation}.} } \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{A}}{\mathbb{A}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathbb{R}}{\mathbb{R}} \renewcommand{\P}{\mathbb{P}} \newcommand{{\rm ch}}{{\rm ch}} \DeclareMathOperator{\et}{\textnormal{\'et}} \newcommand{\mf}[1]{\mathfrak{#1}} \newcommand{\ms}[1]{\mathscr{#1}} \newcommand{\mb}[1]{\mathbb{#1}} \newcommand{\mc}[1]{\mathcal{#1}} \renewcommand{\t}[1]{\tilde{#1}} \usepackage{marginnote} \usetikzlibrary{calc} \begin{document} \title[Ulrich bundles on general double plane covers]{Rank 2 Ulrich bundles on general double plane covers} \author[R. Sebastian]{Ronnie Sebastian} \address{Department of Mathematics, Indian Institute of Technology Bombay, Powai, Mumbai 400076, Maharashtra, India.} \email{ronnie@math.iitb.ac.in} \author[A. Tripathi]{Amit Tripathi} \address{Department of Mathematics, Indian Institute of Technology Hyderabad, Kandi, Sangareddy, 502285, Telangana, India.} \email{amittr@gmail.com} \subjclass[2010]{14E20, 14J60, 14H50} \keywords{Ulrich bundles, double planes, Cayley-Bacharach} \begin{abstract} We prove that a double cover of $\mathbb{P}^2$ ramified along a general smooth curve $B$ of degree $2s$, for $s\geq 3$, supports a rank $2$ special Ulrich bundle. \end{abstract} \maketitle \section{Introduction} Let $X$ be a $d$-dimensional smooth projective variety. Unless mentioned otherwise, $\mc O_X(1)$ will always denote an ample and globally generated line bundle on $X$. \begin{definition}\label{def-Ulrich} A locally free sheaf (vector bundle) $E$ on $X$ is said to be Ulrich with respect to $\mc O_X(1)$ (or simply Ulrich when the bundle $\mc O_X(1)$ is understood) if the following two conditions are satisfied \begin{enumerate} \item $H^i(X,E(-i))=0$ for all $i>0$\,, \item $H^j(X,E(-j-1))=0$ for all $j<d$\,.
\end{enumerate} \end{definition} We refer the reader to \cite[\S 2]{AK} for basic definitions. In the literature several authors define Ulrich with respect to a very ample line bundle; see the next section for some remarks related to this. A conjecture of Eisenbud and Schreyer \cite{ES} states that every smooth projective variety supports an Ulrich bundle. Several people have constructed Ulrich bundles on particular varieties and we list a few. They have been shown to exist on complete intersections by \cite{HUB}, on curves and del Pezzo surfaces by \cite{ES}, on general $K3$ surfaces by \cite{AFO}, existence of special Ulrich bundles (the definition of special Ulrich is recalled later in this introduction) on arbitrary $K3$ surfaces by \cite{Faenzi}, on abelian surfaces by \cite{Be-16}, on ruled surfaces by \cite{Ap-Co-Mi}, on non-special surfaces with $p_g=0$, $q\in \{0,1\}$ by \cite{Cas-ns-1}, \cite{Cas-ns-2}. In \cite{Cas-rs} the following result is proved. Let $X$ be a surface with Kodaira dimension $\kappa(X)\geq 0$ and $q(X) = 0$ (recall $q(X)=h^1(X,\mc O_X)$), endowed with a very ample non-special line bundle $\mc O_X(h_X)$ (recall that a line bundle $\mc L$ is called non-special if $H^1(X,\mc L)=0$). Let $K_X$ denote the canonical line bundle. Assume $h^0(X,\mc O_X(2K_X-h_X))= 0$ and $h^2_X> h_XK_X$. Then $X$ supports Ulrich bundles of rank 2. The above list is far from being complete and we refer the reader to the above papers, especially \cite{Cas-ns-2}, and the references therein for more results. Recently Narayanan and Parameswaran \cite{PN} studied the existence of Ulrich line bundles on a double plane $\pi:X \rightarrow \mathbb{P}^2$ branched along a smooth curve $B \subset \mathbb{P}^2$ of degree $2s$. In \cite[Theorem 1.5]{PN}, they prove that for each $s\geq 3$, there are special classes of double planes which admit Ulrich line bundles.
In \cite[Theorem 1.4]{PN} they show that a double plane branched along a generic smooth curve of degree $2s$, where $s \geq 3$, does not support an Ulrich line bundle. Let $X$ be a surface. An Ulrich bundle $E$ of rank 2, with respect to $\mc O_X(1)$, is called special Ulrich if it also satisfies ${\rm det}(E)\cong K_X\otimes \mc O_X(3)$, see \cite[Definition 5]{AK}. Let $\pi:X\to \P^2$ be a degree 2 cover which is branched along a smooth curve $B\subset \P^2$ of degree $2s$. For such a map denote $\mc O_X(1):=\pi^*\mc O_{\mb P^2}(1)$ and by an (special) Ulrich bundle on $X$ we will always mean Ulrich with respect to $\mc O_X(1)$. Then $K_X\cong \mc O_X(s-3)$, see \cite[\S1.41]{Debarre}. In this note we show the following. \begin{theorem}\label{main-th-intro} Let $\pi: X \rightarrow \mathbb{P}^2$ be a double cover branched along a generic smooth curve $B \subset \mathbb{P}^2$ of degree $2s$, where $s\geq 3$. Then $X$ admits a special rank 2 Ulrich bundle. \end{theorem} To prove the above result we use two inputs. The first is the well known correspondence between zero dimensional subschemes satisfying the Cayley-Bacharach property and global sections of a rank 2 vector bundle, see \cite[\S5]{Tan-V}. Let $F$ be the degree $2s$ homogeneous polynomial which defines $B$. Using \cite{Tan-V} we first prove \begin{theorem}[Theorem \ref{main-theorem}] Let $\pi:X\to \P^2$ be a degree 2 cover which is branched along a smooth curve $B\subset \P^2$ of degree $2s$, where $s\geq 3$. Let $F$ denote the polynomial of degree $2s$ which defines $B$. Assume that there are two polynomials $F_1$ and $F_2$ of degree $s$ such that $F\in (F_1,F_2)$. Then $X$ supports a special Ulrich bundle of rank 2. \end{theorem} The second input is the first point in \cite[Theorem 5.1]{Chiantini} which enables us to conclude that for the general degree $2s$ hypersurface $F$ we can find degree $s$ hypersurfaces $F_1$ and $F_2$ such that $F\in (F_1,F_2)$. 
We do not know if this holds for all smooth degree $2s$ hypersurfaces. Finally we prove that when ${\rm Pic}(X)\cong \mathbb{Z}$ every rank $2$ Ulrich bundle on $X$ is special. \begin{proposition}[Proposition \ref{chern-class-ulrich}] Let $\pi:X\to \P^2$ be a degree 2 cover which is branched along a smooth curve $B\subset \P^2$ of degree $2s$. Assume that the Picard group of $X$ is generated by $\mc O_{X}(1)$. Let $E$ be a rank $2$ Ulrich bundle on $X$. Then ${\rm det}(E)=\mc O_X(s)$. In particular, $E$ is special Ulrich. \end{proposition} It has been brought to our attention that Mohan Kumar, Poornapushkala Narayanan and A.J. Parameswaran have proved the above result for all double planes using a different method. \\ \noindent {\bf Acknowledgements}. We thank Enrico Carlini and Luca Chiantini for several helpful discussions related to their article \cite{Chiantini}. We thank Gianfranco Casnati for several useful comments. \section{Existence of Ulrich bundles} Throughout we work over the field of complex numbers. To show that a bundle $E$ on $X$ is Ulrich we will use the following criterion. \begin{lemma} Let $X$ be a $d$-dimensional smooth projective variety and let $\pi:X\to \mb P^d$ be a surjective and finite map of degree $e$. A bundle $E$ on $X$ is Ulrich with respect to $\pi^*\mc O_{\mb P^d}(1)$ if and only if $\pi_*E\cong \mc O_{\mb P^d}^{e\,{\rm rank}(E)}$. \end{lemma} \begin{proof} Let us first assume that $\pi_*E\cong \mc O_{\mb P^d}^{e\,{\rm rank}(E)}$. Since the map is finite, and using the projection formula, we have $H^i(X,E(k))=H^i(\mb P^d,\pi_*E(k))$ for all $i,k\in \mathbb{Z}$. Since $\pi_*E\cong \mc O_{\mb P^d}^{e\,{\rm rank}(E)}$ it is clear that the conditions in Definition \ref{def-Ulrich} are satisfied. Conversely, assume that $E$ satisfies the conditions in Definition \ref{def-Ulrich}. Then it follows that $\pi_*E$ and $(\pi_*E)^\vee$ are 0-regular. Thus, both of them are $m$-regular for all $m\geq 0$.
From this it easily follows that $H^i(\mb P^d,\pi_*E(k))=0$ for all $k\in \mathbb{Z}$ and for all $1\leq i\leq d-1$, that is, $\pi_*E$ is an ACM bundle. Now applying \cite{Hor} we get that $\pi_*E$ is a direct sum of line bundles. If $\mc O_{\mb P^d}(a)$ is a summand of $\pi_*E$ then we get that $H^i(\mb P^d,\mc O_{\mb P^d}(a-i))=0$ for all $i>0$ and $H^j(\mb P^d,\mc O_{\mb P^d}(a-j-1))=0$ for all $j<d$. It easily follows that $a=0$. This shows that $\pi_*E\cong \mc O_{\mb P^d}^{e\,{\rm rank}(E)}$. \end{proof} Several authors only define Ulrich bundles with respect to very ample line bundles. However, the existence of an Ulrich bundle with respect to an ample and globally generated line bundle $L$ ensures that there are Ulrich bundles with respect to $L^{\otimes n}$ for all $n>0$, see \cite[Proposition 3]{AK} and the remarks following it. Next we define the varieties of interest to us in this article. Let $B\subset \P^2$ be a smooth curve of degree $2s$ defined by a homogeneous polynomial $F$. Let $\pi:X\to \P^2$ be the double cover of $\P^2$ branched along $B$, the construction of which is explained in \cite[\S2.2]{PN}. We reproduce it here for the benefit of the reader. Let $\mb A$ denote the total space of the line bundle $\mb A=\mc O_{\mb P^2}(s)$, $\pi:\mb A\to \mb P^2$ the projection and $T\in H^0(\mb A, \pi^*\mb A)$ be the tautological section. Define $X$ to be the subvariety of $\mb A$ defined by the section $T^2 - \pi^*F\in H^0(\mb A, \pi^*\mb A^{\otimes 2}) =H^0(\mb A, \pi^*\mc O_{\mb P^2}(2s))$. We will abuse notation and denote the composite $X\subset \mb A\to \mb P^2$ also by $\pi$. Then $\pi$ is a finite map of degree 2 between smooth and projective varieties, which is ramified along the smooth curve $B\subset \mb P^2$. In this section we shall prove that general double plane covers support an Ulrich bundle. We will use the results in \cite[Theorem 10]{Tan-V} to construct a rank 2 bundle on $X$.
For the benefit of the reader we state the main result from \cite{Tan-V} that we need. The theorem contains three equivalent conditions, but we state only two of these. The reader may recall the notation from \cite[\S1, page 2]{Tan-V}. \begin{theorem}[Theorem 10, \cite{Tan-V}]\label{th-TV} Let $X$ be a complex projective variety of dimension $n\geq 2$. Let $Z\subset X$ be a subscheme of pure codimension $2$. Then the following are equivalent: \begin{enumerate}[(1)] \item $Z$ is the zero subscheme of a section of a rank 2 vector bundle $\mc E$. \item There are hypersurfaces $F_1,F_2,F_3$ such that $F_1$ and $F_2$ have no common components, $Z=F_1F_2-F_1F_2F_3$ and such that $F_1F_2F_3$ is of pure codimension 2 and is Cohen-Macaulay. \end{enumerate} Further, if (1) and (2) hold then ${\rm det}(\mc E)\equiv F_1+F_2-F_3$. \end{theorem} With notation as above let $\mc F$ denote the syzygy sheaf \begin{equation}\label{explicit-bundle} 0\to \mc F \to \bigoplus_i\mc O_{X}(-F_i)\to I_{F_1,F_2,F_3}\to 0\,. \end{equation} Then the bundle $\mc E$ in the theorem is given by $\mc F(F_1+F_2)$; this is explained just before \cite[Theorem 10]{Tan-V}. \begin{theorem}\label{main-theorem} Let $\pi:X\to \P^2$ be a degree 2 cover which is branched along a smooth curve $B\subset \P^2$ of degree $2s$, where $s\geq 3$. Let $F$ denote the polynomial of degree $2s$ which defines $B$. Assume that there are two polynomials $F_1$ and $F_2$ of degree $s$ such that $F\in (F_1,F_2)$. Then $X$ supports a special Ulrich bundle of rank 2. \end{theorem} \begin{proof} First let us note that there is no non-constant polynomial $H$ which divides both $F_1$ and $F_2$, or else $H$ will divide $F$; this is a contradiction, since the smooth plane curve $B$ is irreducible and reduced, so $F$ has no proper non-constant factor. Thus, the subscheme of $\mb P^2$ defined by the ideal $(F_1,F_2)$ is 0-dimensional and is contained in $B$. We denote this by $Z'$. Consider the scheme theoretic inverse image $Z_1:=\pi^{-1}(Z')$. For $i=1,2$ take $H_i=\pi^*F_i\in H^0(X,\mc O_X(s))$. 
Then $Z_1=H_1H_2$ in the notation of Theorem \ref{th-TV}. Take $H_3=T\in H^0(X,\mc O_X(s))$ and $Z_2$ to be the subscheme of $Z_1$ defined by $H_3$, thus, $Z_2=H_1H_2H_3$. Let us compute the ideal $I_Z:=[I_{Z_1}:I_{Z_2}]$. Let $\mathbb{C}[x,y]$ denote the coordinate ring of a standard open set in $\P^2$. Denote by $f$ the dehomogenization of $F$ in $\mathbb{C}[x,y]$, and similarly for the other polynomials. The inverse image of this open set in $X$ has coordinate ring $$\mathbb{C}[x,y,t]/(t^2-f)\,.$$ The ideal $I_{Z_1}=(f_1,f_2)$ and the ideal $I_{Z_2}=(f_1,f_2,t)$. Since $t^2=f$ one easily checks that $I_Z=(f_1,f_2,t)$. In particular, $Z=Z_2=H_1H_2H_3$. Now recall the notation from \cite[\S1, page 2]{Tan-V}. Thus, we may write $$Z=H_1H_2-H_1H_2H_3\,.$$ Moreover, $H_1H_2H_3$ is clearly of pure codimension 2 and is Cohen-Macaulay (since both depth and dimension are 0). The line bundles $\mc O_X(H_i)$ are all isomorphic to $\mc O_X(s)$. Thus, the bundle $\mc E=\mc F(2s)$ (see equation \eqref{explicit-bundle} for the definition of $\mc F$) has a global section \begin{equation}\label{global-section} \mc O_X \to \mc E \end{equation} whose vanishing locus is $Z$. Moreover, ${\rm det}(\mc E)=\mc O_X(s)$ and $\mc E$ sits in a short exact sequence (which is a twist of equation \eqref{explicit-bundle} by $\mc O_X(2s)$) \begin{equation}\label{explicit-bundle-1} 0\to \mc E \to \mc O_X(s)^{\oplus 3} \to I_Z(2s)\to 0\,. \end{equation} Consider the commutative diagram \begin{equation} \xymatrix{ 0 \ar[r]& I_{Z'} \ar[r] \ar[d]^i& \mathcal{O}_{\mathbb{P}^2} \ar[r] \ar[d]^{\pi^{\#}}& \mathcal{O}_{Z'} \ar[d]^{\pi^{\#}} \ar[r] & 0\\ 0 \ar[r]& \pi_*I_Z \ar[r]& \pi_* \mathcal{O}_X \ar[r] & \pi_*\mathcal{O}_Z \ar[r] & 0 \,. } \end{equation} One easily checks that the right vertical arrow is an isomorphism. Moreover, one has the trace map ${\rm Tr}:\pi_*\mc O_X\to \mc O_{\P^2}$ and it is clear that it maps $\pi_*I_Z$ to $I_{Z'}$. 
Since $\pi_*\mc O_X=\mc O_{\P^2}\oplus \mc O_{\P^2}(-s)$, see \cite[Remark 4.1.7]{Laz}, this shows that there is a split short exact sequence \begin{equation}\label{main-ses} 0\to I_{Z'}\to \pi_*I_Z\to \mc O_{\mb P^2}(-s)\to 0\,. \end{equation} Since this sequence is split, we have \begin{align*} h^0(X,I_Z(2s))&=h^0(\P^2,\pi_*I_Z(2s))\\ &=h^0(\P^2,I_{Z'}(2s))+h^0(\P^2,\mc O_{\P^2}(s))\,. \end{align*} Using the short exact sequence \begin{equation}\label{exact-seq-I_Z'} 0\to \mc O_{\P^2}(-2s)\to \mc O_{\P^2}(-s)^{\oplus 2}\to I_{Z'}\to 0\,, \end{equation} we get that $h^0(\P^2,I_{Z'}(2s))=2h^0(\P^2,\mc O_{\P^2}(s))-1$. Thus, we get that $$h^0(X,I_Z(2s))=h^0(\P^2,\pi_*I_Z(2s))=3h^0(\P^2,\mc O_{\P^2}(s))-1\,.$$ Next we will compute $H^0(X,\mc E)$. Taking dual of \eqref{global-section} we get an exact sequence $$0\to {\rm det}(\mc E)^\vee\to \mc E^\vee\to I_Z\to 0\,.$$ Since ${\rm det}(\mc E)=\mc O_X(s)$ and $\mc E$ is of rank 2, we get $\mc E^\vee = \mc E \otimes {\rm det}(\mc E)^\vee$, which gives $$0 \to \mc O_X \to \mc E \to I_Z(s) \to 0\,.$$ Applying $\pi_*$ to this we get \begin{equation}\label{Koszul-equation} 0\to \pi_*\mc O_X \to \pi_* \mc E \to \pi_*I_Z(s)\to 0\,. \end{equation} Since $h^0(\P^2,I_{Z'}(s))=2$ (using \eqref{exact-seq-I_Z'}) and since \eqref{main-ses} is split, we get that $h^0(\P^2, \pi_*I_Z(s))=3$. From this it follows that $h^0(\P^2,\pi_*\mc E)=4$. Applying $\pi_*$ to \eqref{explicit-bundle-1} and taking cohomology we get \begin{align*} h^1(\P^2,\pi_*\mc E)&=h^0(\P^2,\pi_*I_Z(2s))+h^0(\P^2,\pi_*\mc E) - 3-3h^0(\P^2,\mc O_{\P^2}(s))\\ &=3h^0(\P^2,\mc O_{\P^2}(s))-1 + 4- 3-3h^0(\P^2,\mc O_{\P^2}(s))\\ &=0\,. 
\end{align*} Further, since $\pi_*I_Z(s)=I_{Z'}(s)\oplus \mc O_{\P^2}$ and in equation \eqref{Koszul-equation} the map $$H^0(\P^2,\pi_*\mc E)\to H^0(\P^2,\pi_*I_Z(s))$$ is surjective, it follows that $\pi_*\mc E=\mc G \oplus \mc O_{\P^2}$, where $\mc G$ is a locally free sheaf and sits in a short exact sequence $$0\to \pi_*\mc O_X \to \mc G \to I_{Z'}(s)\to 0\,.$$ We will now show that $\mc G$ is trivial. Consider the following pullback diagram, in which $\widetilde{\mc F}$ denotes the pulled back extension (not to be confused with the syzygy sheaf $\mc F$ of \eqref{explicit-bundle}). \begin{equation} \label{dgm_pullback} \xymatrix{ 0 \ar[r]& \pi_* \mathcal{O}_X \ar[r] \ar@{=}[d]& \widetilde{\mc F} \ar[r] \ar[d]^a& \mc O_{\P^2}^{\oplus 2} \ar[r] \ar[d]^b & 0 \\ 0 \ar[r] & \pi_* \mathcal{O}_X \ar[r] & \mc G \ar[r] & I_{Z'}(s) \ar[r]& 0 } \end{equation} From this it follows that $\widetilde{\mc F}=\mc O_{\P^2}^{\oplus 3}\oplus \mc O_{\P^2}(-s)$ since ${\rm Ext}^1(\mc O_{\P^2},\pi_*\mc O_X)=0$. We may split the top row and compose the splitting with $a$ to get a diagram \begin{equation} \xymatrix{ 0 \ar[r]& \mc O_{\P^2}(-s) \ar[r] \ar[d]^d& \mc O_{\P^2}^{\oplus 2} \ar[r] \ar[d]^c & I_{Z'}(s) \ar[r]\ar@{=}[d] & 0 \\ 0 \ar[r] & \pi_* \mathcal{O}_X \ar[r] & \mc G \ar[r] & I_{Z'}(s) \ar[r]& 0 } \end{equation} Suppose ${\rm Ker}\,c\neq 0$. Then the image of $c$ is a sheaf of rank 1, which surjects onto $I_{Z'}(s)$. This forces the image to be isomorphic to $I_{Z'}(s)$, which defines a splitting of the bottom row. However, since $\mc G$ is locally free, this is not possible. Thus, ${\rm Ker}\,c=0$. Now let us consider the left vertical arrow $d:\mc O_{\P^2}(-s)\to \mc O_{\P^2}\oplus \mc O_{\P^2}(-s)$. If the cokernel is $\mc O_{\P^2}$ then we get that $\mc G$ is the trivial bundle. The only other possibility for the cokernel is $\mc O_C\oplus \mc O_{\P^2}(-s)$, where $C$ is a hypersurface of degree $s$ in $\P^2$. 
In this case, $\mc G$ sits in a sequence $$0\to \mc O_{\P^2}^{\oplus 2}\to \mc G \to \mc O_C\oplus \mc O_{\P^2}(-s)\to 0\,.$$ Since $\mc G$ is a summand of $\pi_*\mc E$ and $H^1(\P^2,\pi_*\mc E)=0$, it follows that $H^1(\P^2,\mc G)=0$. This forces that $$H^1(\P^2,\mc O_C)=0\,.$$ But now using $0\to \mc O_{\P^2}(-s)\to \mc O_{\P^2}\to \mc O_C\to 0$ we get $0=H^1(\P^2,\mc O_C)=H^2(\P^2,\mc O_{\P^2}(-s))=H^0(\P^2,\mc O_{\P^2}(s-3))^\vee$, which is not possible if $s\geq 3$. Thus, the cokernel of $d$ is $\mc O_{\P^2}$ and so $\mc G$ and $\pi_*\mc E$ are trivial. This proves that $\mc E$ is an Ulrich bundle on $X$. It is well known that the canonical line bundle of $X$ is $\mc O_X(s-3)$. Since ${\rm det}(\mc E)\cong \mc O_X(s)=\mc O_X(s-3)\otimes \mc O_X(3)$, it follows that $\mc E$ is a special Ulrich bundle. \end{proof} That the hypothesis of the above theorem is satisfied for a general $F$ is the first point in \cite[Theorem 5.1]{Chiantini}. This enables us to conclude that for a general degree $2s$ hypersurface $F$ we can find degree $s$ hypersurfaces $F_1$ and $F_2$ such that $F\in (F_1,F_2)$. This proves Theorem \ref{main-th-intro}. Finally we prove that when ${\rm Pic}(X)\cong \mathbb{Z}$ every rank $2$ Ulrich bundle on $X$ is special. \begin{proposition}\label{chern-class-ulrich} Let $\pi:X\to \P^2$ be a degree 2 cover which is branched along a smooth curve $B\subset \P^2$ of degree $2s$. Assume that the Picard group of $X$ is generated by $\mc O_{X}(1)$. Let $E$ be a rank $2$ Ulrich bundle on $X$. Then ${\rm det}(E)=\mc O_X(s)$. In particular, $E$ is special Ulrich. \end{proposition} \begin{proof} Let $E$ be an Ulrich bundle. Then $\pi_*(E)\cong \mc O_{\mb P^2}^{\oplus 4}$. Since $\pi$ is finite, the natural map $\pi^*\pi_*E\to E$ is surjective. Thus, we get that $E$ is globally generated. Let $t \in H^0(X,E)$ be a general section whose vanishing defines a closed subscheme $Z = \{t = 0\}$ of codimension 2. 
The dual defines a map $E^\vee \to \mc O_X$ whose image is the ideal sheaf of $Z\subset X$. Thus, we have a short exact sequence \begin{equation}\label{ses-ideal-sheaf-Z} 0\to \mc M\to E^\vee\to I_Z\to 0\,. \end{equation} Since $I_Z$ is torsion free it follows that $\mc M$ is a rank 1 reflexive sheaf on a surface, and so it is a line bundle. As the determinant of $I_Z$ is trivial, it follows that $\mc M={\rm det}(E^\vee)$. Let ${\rm det}(E)=\mc O_X(a)$. Since $E$ is a bundle of rank 2, we have that $E^\vee\cong E\otimes {\rm det}(E)^\vee$. Then equation \eqref{ses-ideal-sheaf-Z} becomes \begin{equation}\label{eq-1-ch-ulrich} 0\to \mc O_X(-a)\to E\otimes \mc O_X(-a)\to I_Z\to 0\,. \end{equation} Tensoring this with $\mc O_X(a)$ and applying $\pi_*$ we get $$0\to \pi_*\mc O_X\to \mc O_{\mb P^2}^{\oplus 4}\to \mc O_{\mb P^2}(a)\otimes \pi_*I_Z\to 0\,.$$ Since $\pi_*I_Z\subset \pi_*\mc O_X$ and the quotient is supported on a codimension 2 subset, it follows that the determinant of $\pi_*I_Z$ is equal to the determinant of $\pi_*\mc O_X$. Using $\pi_*\mc O_X=\mc O_{\P^2}\oplus \mc O_{\P^2}(-s)$, we get $${\rm det}(\mc O_{\mb P^2}(a)\otimes \pi_*I_Z)= {\rm det}(\mc O_{\mb P^2}(a)\otimes \pi_*\mc O_X)=\mc O_{\P^2}(2a-s)\,.$$ Now taking the determinant of the above short exact sequence we get $$\mc O_{\mb P^2}(2a-s)=\mc O_{\mb P^2}(s)\,.$$ From this it follows that $a=s$. \end{proof}
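\noindent {\bf Example}. For concreteness, consider the smallest case $s=3$ allowed by Theorem \ref{main-theorem}; this illustration is ours and is not used elsewhere in the text. Here $B$ is a smooth plane sextic and $X$ is a K3 surface (a classical fact); indeed, $\omega_X=\mc O_X(s-3)=\mc O_X$, so a rank 2 Ulrich bundle is special precisely when ${\rm det}(E)=\mc O_X(3)$. The dimension counts in the proof of Theorem \ref{main-theorem} become $$h^0(\P^2,\mc O_{\P^2}(3))=10\,,\qquad h^0(X,I_Z(6))=3\cdot 10-1=29\,,\qquad h^0(\P^2,\pi_*\mc E)=4\,,$$ consistent with the conclusion $\pi_*\mc E\cong \mc O_{\P^2}^{\oplus 4}$.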
\section{Introduction} Compact objects, such as black holes and neutron stars, are identified by the electromagnetic radiation emitted from the accreting matter. Understanding the spectral and timing properties of this radiation is essential for model builders and theorists alike. In a binary system, matter from the companion star accretes onto the black hole through Roche lobe overflow and/or through the capture of its winds. This matter produces a disk-like structure around the primary compact object. There are a large number of theoretical models in the literature which explain the physics of accretion around a black hole. Evidence of the standard disk proposed by \citet{SS73} and \citet{NT73} is present in most of the binary systems. However, the emitted spectrum of the radiation is multi-color in nature and contains both thermal and non-thermal components, and a standard disk cannot explain all of the X-ray spectral features. Moreover, the inner region of the standard disk may be unstable due to viscous and thermal effects \citep{Lightman74,Kobayashi03}. Simply put, one of the components of the spectrum is the multi-color blackbody radiation from the standard Keplerian disk and the other is a power-law component formed due to repeated Compton scatterings of the low energy (soft) photons of this blackbody by the hot electrons of the `Compton' cloud \citep{ST80,ST85}. There are many speculations regarding the nature of this Compton cloud, ranging from a magnetic corona \citep{Galeev79} to a hot gas corona over the disk \citep{Haardt93,Zdziarski03}. Since the formation process of a static corona around an accretion disk is totally unknown, and since a low angular momentum dynamic flow may naturally act as a corona, \citet[][hereafter CT95]{CT95} proposed that a flow having two distinct components, namely a Keplerian disk submerged inside a sub-Keplerian halo, is enough to explain all the spectral properties satisfactorily. 
Observational evidence also started to support this so-called two-component advective flow (TCAF) \citep[e.g.,][]{Soria01,Smith02,Wu02,Cambier13}. While creating a self-consistent TCAF solution, the properties of a viscous transonic flow were made use of: a flow having viscosity above a critical value naturally forms a Keplerian disk, while the region with a lower viscosity forms a shock wave, due to the centrifugal barrier, typically at a few tens of Schwarzschild radii. The post-shock region (from the shock to the inner sonic point) evaporates the Keplerian component and acts as a Compton cloud which produces a power-law component (hard photons) with an exponential cut-off in the spectrum through thermal Comptonization. From the inner sonic point to the horizon of the black hole, the matter (the bulk motion dominated advective flow or BDAF) is advected rapidly into the black hole. The bulk motion in this region also up-scatters the soft photons and produces a second power-law component even when the temperature of the region is zero. If the centrifugal barrier is not strong enough, the shock may not form, but the flow still slows down. The spectral properties in this case are discussed in \citet[][hereafter C97]{C97}. The CENtrifugal pressure supported BOundary Layer or CENBOL, i.e., the post-shock or centrifugal force dominated region, which is also the base of the outflows from which the pre-Jet is launched, plays the most important role in black hole physics. As usual, the CENBOL, the pre-Jet and the BDAF intercept soft photons from the Keplerian disk and reprocess them to high energies via inverse Compton scattering. In this {\it letter}, we will implement the TCAF solution to study the spectral properties of black hole candidates (BHCs) using the widely used spectral analysis software package XSPEC, developed by GSFC/NASA. For the sake of concreteness, we focus only on the cases where only the CENBOL is present. 
We ignore the effects of the BDAF and the pre-Jet. In the next version of our analysis, these components and the spin of the black hole will be included. The Galactic transient black hole candidates are very interesting objects to study in X-rays because these sources generally show rapid evolution of their temporal and spectral properties during their outburst phases, which are strongly correlated with each other \citep[see, for a review,][]{RM06}. In general, four basic states, namely the {\it hard}, {\it hard-intermediate}, {\it soft-intermediate}, and {\it soft} states, are observed during an outburst of a BHC \citep[see,][and references therein]{Nandi12}. These spectral states are observed to evolve, forming a hysteresis loop during the outburst: hard states are found at the beginning and at the end of the outburst, whereas soft and intermediate spectral states are observed in between. The evolution of the spectral states is strongly dependent on the variation of the accretion rates. According to the TCAF solution, accretion flow rates may be controlled by a physical parameter, such as the magnetic viscosity, perhaps owing to the enhanced magnetic activity of the companion \citep{Wu02,Nandi12,DD13}. During the rising phase of the outburst, an enhanced viscosity may cause an increase in the accretion rate of the Keplerian matter. As the viscosity is reduced, the Keplerian rate is reduced and the declining phase starts. The Keplerian disk itself recedes, leaving behind only the low-angular-momentum sub-Keplerian flow, causing a hard state. Thus, a rigorous fit with the TCAF model is expected to throw light on how the accretion rates and the flow geometry evolve with time. In general, low and intermediate frequency quasi-periodic oscillations (LFQPOs) are observed in the hard and intermediate (hard-intermediate and soft-intermediate) spectral states of transient black hole candidates. These QPOs are reported extensively in the literature, although there are still debates on their origin. 
However, according to the shock oscillation model (SOM) by Chakrabarti and his collaborators, LFQPOs originate from the oscillation of the post-shock region (\citealt{MSC96}, hereafter MSC96; \citealt{CAM04}, hereafter CAM04; \citealt{GGC14}, hereafter GGC14) when a resonance occurs between the infall time scale and the cooling time scale in the CENBOL. During an oscillation, the shape of the Compton cloud and the degree of interception change periodically. Since we can directly extract the values of the physical parameters related to this shock wave from our spectral fit, we can also predict the frequency of the observed low frequency QPO (if present; see \S 4.1 for details). This {\it letter} is organized in the following way: in the next Section, we briefly describe the properties of the TCAF model. In \S 3, we discuss the method of implementation of the TCAF model in XSPEC for spectral fitting. In \S 4, we present the TCAF model fitted results obtained from the spectral fits of three different BHCs. Finally, in \S 5, we make concluding remarks and discuss our future work plans. \section{A brief description of the TCAF model} The TCAF model (CT95, C97) has been described in detail in the literature and has been proven to be a stable configuration by extensive numerical simulations \citep{GC13}. The model requires two accretion rates: one is the rate of the Keplerian component and the other is the rate of the low-angular-momentum sub-Keplerian halo, in which the Keplerian disk is immersed. Two other essential parameters are the shock location and the compression ratio of the flow at the shock. These two parameters provide the height of the shock, calculated using the pressure balance condition \citep{C89}. The density and temperature distributions of the flow, especially in the post-shock region, are calculated using the two-temperature equations and the continuity equation, as discussed in CT95. 
The CT95 code also computes the optical depth, the average electron temperature of the CENBOL, the spectral index, etc. self-consistently by including the relevant cooling and heating processes, such as those arising from bremsstrahlung, Comptonization, inverse bremsstrahlung and inverse Comptonization. The synchrotron cooling process was not included in this version. CT95 considered only the strong shock case. In order to take care of weaker shocks as well, we generalized the expressions for the shock height ($H_{shk}$) and the shock temperature ($T_{shk}$) in the following way: $$ H_{shk}=\left[\frac{\gamma (R-1) {X_{s}}^{2}}{R^{2}}\right]^{\frac{1}{2}} \eqno{(1)} $$ and the shock temperature is given by, $$ T_{shk}=\frac{m_p (R-1)c^{2}}{2R^{2}k_{B}(X_{s}-1)} \eqno{(2)} $$ where $m_p$, $R$, $k_{B}$, $X_{s}$ and $\gamma$ are the mass of the proton, the compression ratio, the Boltzmann constant, the shock location and the adiabatic index of the flow, respectively. We also incorporate the spectral hardening correction \citep[see,][hereafter DMC14]{DD14a} depending on the accretion flow rate as in \citet{ST95}. The \citet{Paczynski80} pseudo-Newtonian potential $\Phi_{PN}=-\frac{1}{2(r-1)}$ has been used to describe the geometry around the black hole. \section{Procedure of Implementation of TCAF into XSPEC} To fit a spectrum with the TCAF model using HEASARC's spectral analysis software package XSPEC, which already has a number of inbuilt theoretical models, we need to first generate a model {\it fits} file by varying five different input parameters: the Keplerian rate (disk rate $\dot{m_d}$), the sub-Keplerian rate (halo rate $\dot{m_h}$), the mass of the black hole $M_{BH}$, the location of the shock $X_s$, and the compression ratio $R$, and use it as a system model. In order to fit the spectra in XSPEC, we generated an additive table model {\it fits} file named {\it TCAF.fits}. 
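Equations (1) and (2) are straightforward to evaluate numerically. The following sketch is our own illustration, not part of the model code; the adiabatic index $\gamma=4/3$ and the rounded physical constants are our assumptions:

```python
import math

# Rounded physical constants (SI); assumed values for this illustration only.
M_P_C2 = 1.503e-10  # proton rest energy m_p c^2 in J
K_B = 1.381e-23     # Boltzmann constant in J/K

def shock_height(x_s, ratio, gamma=4.0 / 3.0):
    """Eq. (1): shock height in units of the Schwarzschild radius r_g."""
    return math.sqrt(gamma * (ratio - 1.0) * x_s ** 2 / ratio ** 2)

def shock_temperature(x_s, ratio):
    """Eq. (2): post-shock proton temperature in K."""
    return M_P_C2 * (ratio - 1.0) / (2.0 * ratio ** 2 * K_B * (x_s - 1.0))

# For the GX 339-4 fit in Table 1 (X_s = 147.9 r_g, R = 4):
h = shock_height(147.9, 4.0)       # ~74 r_g
t = shock_temperature(147.9, 4.0)  # ~7e9 K, a typical CENBOL proton temperature
```

Note that for a strong shock ($R=4$) with $\gamma=4/3$, Eq. (1) reduces to $H_{shk}=X_s/2$, consistent with the value above.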
We first incorporated the changes regarding the shock strength described above into the CT95 model code (for details, see DMC14) and generated $\sim 4\times10^5$ model spectra by running the theoretical radiative-hydrodynamic code of CT95. The five input parameters ($\dot{m_d}$, $\dot{m_h}$, $M_{BH}$, $X_s$, and $R$) are varied in the following ranges: i) $0.1 - 12.1$ $\dot{M}_{Edd}$, ii) $0.01 - 12.01$ $\dot{M}_{Edd}$, iii) $5 - 15$ solar masses ($M_\odot$), iv) $6 - 456$ $r_g$, and v) $1 - 4$, respectively. Here, $\dot{M}_{Edd}$ is the Eddington rate. These model spectra are used as input files to a program written in FORTRAN, which generates the model {\it fits} file. At present, we have fitted the spectra by keeping the model {\it fits} file as a local additive table model. At the time of spectral fitting using the TCAF model, one needs to supply initial guesses for six model parameters: i) the Keplerian rate ($\dot{m_d}$ in units of $\dot{M}_{Edd}$), ii) the sub-Keplerian rate ($\dot{m_h}$ in units of $\dot{M}_{Edd}$), iii) the black hole mass ($M_{BH}$) in units of $M_\odot$, iv) the location of the shock ($X_s$ in units of the Schwarzschild radius $r_g=2GM_{BH}/c^2$), v) the compression ratio of the shock ($R=\rho_+ / \rho_-$, where $\rho_+$ and $\rho_-$ are the densities of the post- and pre-shock matter), and vi) the model normalization value ($norm$), which is equivalent to $\frac{1}{4\pi D^2} \cos(i)$, where $D$ is the source distance in units of $10$~kpc and $i$ is the disk inclination angle. In the near future, the {\it fits} file will be made public for the use of the scientific community; at present, it is available upon request. \section {Results: Sample Spectra Fitted with TCAF Model} We now show the results of fitting three $2.5-25$~keV background subtracted RXTE/PCA spectra of three different black hole candidates, namely H~1743-322, GX~339-4, and GRO~J1655-40. These observations are taken from the initial phases of the outbursts, where QPOs are observed. 
We carry out the data analysis using the FTOOLS of the HEASoft software package (version HEADAS 6.12) and XSPEC version 12.7. For the generation of the source and background `.pha' files and for the spectral fitting using the TCAF model, we use the same method as mentioned in DMC14. It is to be noted that the TCAF model in its present form (i.e., without incorporating the pre-Jet and the BDAF) is able to fit hard and intermediate state spectra with acceptable values of the reduced $\chi^2$ ($\leq 2$). In the soft states, the shock does not form, and the inclusion of the BDAF is required (see DMC14) for an acceptable fit. As a result, the number of required parameters will be reduced to three. This will be carried out in the near future. \begin{figure} \vskip -0.3 cm \centering{ \includegraphics[scale=0.6,angle=270,width=4.0truecm]{fig1a.ps} \includegraphics[scale=0.6,angle=270,width=4.0truecm]{fig1b.ps}} \caption{(a) The TCAF model fitted $2.5-25$~keV PCA spectrum of H~1743-322 (Observation ID = 95360-14-02-01; MJD = 55419) with the variation of $\Delta \chi$ is shown in the left panel. The value of the model fitted reduced $\chi^2$ is indicated. (b) The unfolded model components of the spectral fit are shown in the right panel. } \label{kn : fig1} \end{figure} In Fig. 1, the $2.5-25$~keV background subtracted PCA spectrum of the Galactic transient BHC H~1743-322 for observation ID = 95360-14-02-01 (MJD = 55419.1070) from its 2010 outburst is shown. Fixed values of a 1\% systematic error, a hydrogen column density ($N_H$) of $1.6 \times 10^{22}$~cm$^{-2}$ \citep{DD13} for the absorption model {\it wabs}, and an $M_{BH}$ of $11.4\pm1.9~M_\odot$ \citep{DD14b} are used to fit the spectrum. To achieve the best fit, a single Gaussian iron line at $6.39\pm0.19$~keV is also used. With these, a reduced $\chi^2$ of $1.430$ is achieved. For details of the model fitted parameters, see Table 1. 
\begin{figure} \vskip -0.3 cm \centering{ \includegraphics[scale=0.6,angle=270,width=4.0truecm]{fig2a.ps} \includegraphics[scale=0.6,angle=270,width=4.0truecm]{fig2b.ps}} \caption{Same as Fig. 1(a-b), except that the spectrum of GX~339-4 (Observation ID = 95409-01-14-04; MJD = 55300) is used. } \label{kn : fig2} \end{figure} In Fig. 2, the RXTE/PCA spectrum of the Galactic outbursting BHC GX~339-4, fitted with a combination of TCAF and a single Gaussian line (at $6.31\pm0.17$~keV), is shown. This observation (ID = 95409-01-14-04; MJD = 55300.3421) is selected from the rising phase of the 2010-11 outburst of GX~339-4. For the spectral fitting, fixed values of $M_{BH} = 5.8\pm0.5~M_\odot$ \citep{Hynes03}, $N_H = 5 \times 10^{21}$~cm$^{-2}$ \citep{DD10} for the absorption model {\it wabs} and a 1\% systematic error are used. In this case, a reduced $\chi^2$ of $1.029$ is achieved. \begin{figure} \vskip -0.3 cm \centering{ \includegraphics[scale=0.6,angle=270,width=4.0truecm]{fig3a.ps} \includegraphics[scale=0.6,angle=270,width=4.0truecm]{fig3b.ps}} \caption{Same as Fig. 1(a-b), except that the spectrum of GRO~J1655-40 (Observation ID = 90704-04-01-00; MJD = 53439) is used. } \label{kn : fig3} \end{figure} In Fig. 3, the TCAF model fitted RXTE/PCA spectrum from the 2005 outburst of the Galactic outbursting BHC GRO~J1655-40 is shown. The spectrum is fitted with a combination of two additive model components, namely TCAF and a Gaussian line (at $6.62\pm0.15$~keV). This observation (ID = 90704-04-01-00; MJD = 53439.7603) is selected from the rising phase of the 2005 outburst of the source. For the spectral fitting, fixed values of $M_{BH} = 7.02\pm0.22~M_\odot$ \citep{Orosz97}, $N_H = 7.5 \times 10^{21}$~cm$^{-2}$ \citep{DD08} for the absorption model {\it wabs} and a 1\% systematic error are used. In this case, a reduced $\chi^2$ of $1.580$ is achieved. 
\begin{table} \addtolength{\tabcolsep}{-4.50pt} \vskip -0.4 cm \small \centering \caption{\label{table1} TCAF Model Fitted Spectral Result} \vskip -0.2cm \begin{tabular}{|l|cccccccc|} \hline Source & Obs. Id & $\dot{m_d}$ & $\dot{m_h}$ & $X_s$ & R &$ \chi^2$/DOF & $\nu_{QPO}^*$ & $\nu_{QPO}^*$ \\ & & ($\dot{M}_{Edd}$) & ($\dot{M}_{Edd}$) &($r_g$)& & & (Obs.) & (Predic.) \\ \hline H 1743-322 &X-02-01& 0.516 & 0.189 & 320.0 & 1.250 & 60.1/42 & 1.045 & 1.228 \\ & &$\pm0.013$&$\pm0.081$&$\pm20.04$&$\pm0.012$& &$\pm0.007$&$\pm0.293$ \\ GX 339-4 &Y-14-04& 6.883 & 6.087 & 147.9 & 4.000 & 43.2/42 & 2.374 & 2.356 \\ & &$\pm0.003$&$\pm0.349$&$\pm1.13$&$\pm0.075$& &$\pm0.006$&$\pm0.265$ \\ GRO J1655-40&Z-01-00& 6.987 & 1.733 & 153.8 & 3.449 & 64.78/41 & 2.313 & 2.172 \\ & &$\pm0.273$&$\pm0.232$&$\pm13.36$&$\pm0.433$& &$\pm0.010$&$\pm0.529$ \\ \hline \end{tabular} \leftline {Here, X=95360-14, Y=95409-01, and Z=90704-04. DOF means degrees of freedom.} \leftline {$^*$ Only the frequencies of the primary dominating QPOs (in Hz) are mentioned.} \end{table} \subsection{Prediction of QPO frequencies from the spectral fits using the TCAF model} Unlike any other model, the TCAF model predicts timing properties from spectral fits. This is because the same shock which defines the CENBOL boundary, i.e., the size of the Compton cloud, also causes low frequency QPOs as it oscillates. The presence of a shock wave does not always imply the existence of QPOs. The shock oscillation takes place provided the cooling and the infall time scales are of the same order (MSC96, CAM04, GGC14) or when the Rankine-Hugoniot relation is not satisfied even with two sonic points in the transonic sub-Keplerian flow \citep{RCM97}. The frequency of oscillation is inversely proportional to the infall time ($t_{infall}$) in the post-shock region (see Eqn. 3 below) when the cooling time scale is also similar. 
One can determine the QPO frequency ($\nu_{QPO}$) if the location of the shock ($X_s$ in $r_g$) and the compression ratio ($R$) are known (see Eqn. 4 below). In the presence of a shock, the infall time in the post-shock region, measured in units of the light-crossing time of $r_g$, can be expressed as, $$ t_{infall} \sim R~X_s(X_s-1)^{1/2}. \eqno{(3)} $$ The frequency of the observed QPOs becomes, $$ \nu_{QPO} \sim t_{infall}^{-1} = C / [R~X_s(X_s-1)^{1/2}] , \eqno{(4)} $$ where $C = (M_{BH} \times 10^{-5})^{-1}$~s$^{-1}$ is the constant which converts the dimensionless infall time of Eqn. (3) into a frequency in Hz, since $r_g/c \simeq M_{BH}\times 10^{-5}$~s for $M_{BH}$ in units of $M_\odot$. This shows that the derived $R$ and $X_s$ from the spectral fit lead to an estimate of the QPO frequency. In all three of the spectra fitted in this {\it letter}, QPOs are observed. From the spectral fits, we have therefore estimated the frequencies of the QPOs, which roughly match the observed values (see Table 1). This is unique in the context of model fits. \section {Concluding Remarks and Future Plan} In this {\it letter}, we show how to implement the TCAF model in XSPEC as a local additive table model. In Figs. 1-3, we show the model fitted spectra, one each for the black hole candidates H~1743-322, GX~339-4, and GRO~J1655-40, respectively. We show that the TCAF model is quite capable of fitting the black hole spectra. Moreover, fitting with the TCAF model appears to be better than fitting with the conventional blackbody and power-law models because it can directly provide the accretion rates from the spectral fit. The iterative procedure of CT95 also ensures that no X-ray component reflected from the disk is required to be added. Moreover, unlike other models, TCAF has the capability of predicting the timing properties from the spectrally fitted parameters. This is possible because the same shock which decides the size of the Compton cloud, and hence parameters such as the optical depth and the average electron temperature (and thus the spectral index), also decides the QPO frequency. 
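To illustrate this numerically, Eqn. (4) can be evaluated directly from the Table 1 fit parameters. The sketch below is our own (the function name and the unit conversion $r_g/c \simeq M_{BH}\times 10^{-5}$~s, for $M_{BH}$ in solar masses, are ours) and is not part of the TCAF fitting code:

```python
import math

def qpo_frequency(m_bh, x_s, ratio):
    """Predicted QPO frequency (Hz) from TCAF shock parameters.

    m_bh  : black hole mass in solar masses
    x_s   : shock location in units of r_g
    ratio : shock compression ratio R
    """
    t_infall = ratio * x_s * math.sqrt(x_s - 1.0)  # Eq. (3), in units of r_g/c
    return 1.0 / (t_infall * m_bh * 1.0e-5)        # r_g/c ~ m_bh * 1e-5 s

# Table 1 fit for H 1743-322: M_BH = 11.4, X_s = 320.0 r_g, R = 1.25
nu = qpo_frequency(11.4, 320.0, 1.25)  # ~1.23 Hz, vs. 1.045 Hz observed
```

The same call with the GX 339-4 parameters ($M_{BH}=5.8$, $X_s=147.9$, $R=4$) gives a value close to the predicted 2.356 Hz quoted in Table 1.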
A detailed spectral study using the TCAF model for the 2010-11 outburst of GX~339-4 (DMC14) and for the 2010 outburst of H~1743-322 \citep{Mondal14} will be published elsewhere. A detailed study on the evolution of the QPO frequencies during the rising and the declining phases of the outbursts will also be published elsewhere \citep{DD14c}. The present version of TCAF which is implemented here does not include the subsonic pre-Jet which originates from the CENBOL. Similarly, it does not include the innermost bulk motion dominated region of the advective flow (BDAF), whose effect would be to produce a power-law component even if the Compton cloud is cooled down. Inclusion of these components will enable us to fit not only the very soft states, but also those states with a broken power-law as well as the jet dominated flows. We have verified that our present model fits the hard and intermediate states very satisfactorily. The work to extend the validity of TCAF is in progress and would be reported elsewhere. \noindent {\bf Acknowledgments :} We are thankful to the NASA/GSFC scientists and the XTEhelp team members (especially Dr. Keith A. Arnaud) for their kind help in writing the FORTRAN programs to generate the model {\it fits} file using the theoretical (TCAF) model spectra. We also acknowledge Mr. Sudip Garain of S. N. Bose National Centre for Basic Sciences for help in producing the large fits file. SM acknowledges financial support from a CSIR (NET) scholarship.
\begin{abstract} Sleep staging is of great importance in the diagnosis and treatment of sleep disorders. Recently, numerous data-driven deep learning models have been proposed for automatic sleep staging. They mainly rely on the assumption that training and testing data are drawn from the same distribution, which may not hold in real-world scenarios. Unsupervised domain adaptation (UDA) has recently been developed to handle this domain shift problem. However, previous UDA methods applied for sleep staging have two main limitations. First, they rely on a totally shared model for the domain alignment, which may lose the domain-specific information during feature extraction. Second, they only align the source and target distributions globally without considering the class information in the target domain, which hinders the classification performance of the model. In this work, we propose a novel adversarial learning framework to tackle the domain shift problem in the unlabeled target domain. First, we develop unshared attention mechanisms to preserve the domain-specific features in the source and target domains. Second, we design a self-training strategy to align the fine-grained class distributions for the source and target domains via target domain pseudo labels. We also propose dual distinct classifiers to increase the robustness and quality of the pseudo labels. The experimental results on six cross-domain scenarios validate the efficacy of our proposed framework for sleep staging and its advantage over state-of-the-art UDA methods. \end{abstract} \begin{IEEEkeywords} sleep stage classification, domain adaptation, adversarial training, attention mechanism, self-training, dual classifiers \end{IEEEkeywords} \section{Introduction} Sleep stage classification is crucial for identifying sleep problems and disorders in humans. 
This task refers to the classification of one or more signals, including electroencephalography (EEG), electrocardiogram (ECG), electrooculogram (EOG) and electromyogram (EMG), into one of five sleep stages, namely, wake (W), rapid eye movement (REM), non-REM stage 1 (N1), non-REM stage 2 (N2), and non-REM stage 3 (N3). EEG recordings are usually split into 30-second segments, where each segment is classified manually into one of the above stages by specialists \cite{bands}. Despite being mastered by many specialists, the manual annotation process is tedious and time-consuming, especially with the large amount of collected EEG data. In recent years, data-driven deep learning approaches have been developed, relying on the availability of massive amounts of labeled data for training. Therefore, many deep learning methods have been proposed recently to perform sleep staging automatically \cite{deepsleepnet,tnnls_cnn_paper,seqsleepnet,attnSleep_paper}. These methods implemented different network structures to process EEG data and trained proper classification models relying on the availability of large datasets. Since these methods were able to achieve decent performance, they were expected to be a step forward in reducing the reliance on the manual scoring process. However, many sleep labs were found to keep relying on manually scoring EEG data \cite{attentive_sleep_Staging,phan2020towards}. The main reason is the high variation between the public training data and the data generated in the sleep labs due to several factors, \textit{e.g.}, different measuring locations on the skull and different sampling rates for measuring devices. This is well known as the \textit{domain shift} problem, i.e., the training (\emph{source}) and testing (\emph{target}) data have different distributions. Consequently, these models suffer a significant performance degradation when trained on public datasets and tested on sleep-lab data.
In addition, it is difficult for these labs to annotate large enough EEG datasets to re-train the models. A typical solution for the above issues is to employ transfer learning approaches \cite{phan2020towards,channel_mismatch}. For instance, Phan \textit{et al.} \cite{phan2020towards} applied transfer learning from a large dataset to a different and relatively smaller one. This includes pre-training their model on the large dataset and then fine-tuning it on the smaller dataset. Similarly, the authors in \cite{channel_mismatch} studied the channel mismatch problem while transferring the knowledge from one dataset to another. However, these transfer learning methods require the availability of labeled data from the target domain to fine-tune the model. In reality, the target domain may be completely unlabeled, and it is thus impractical to fine-tune the models. Unsupervised domain adaptation (UDA) is a special scenario of transfer learning that aims to minimize the mismatch between the source and target distributions without using any target domain labels. So far, a limited number of studies have investigated UDA in the context of sleep stage classification. For example, Chambon \textit{et al.} \cite{da_sleep} improved the feature transferability between source and target domains using optimal transport domain adaptation. Nasiri \textit{et al.} \cite{attentive_sleep_Staging} used adversarial training based domain adaptation to improve the transferability of features. However, these methods still suffer from the following limitations. First, they rely on shared models (i.e., same architectures with same weights) to extract features from both source and target domains. This may lose the domain-specific features for both source and target domains, which can be harmful to the classification task on the target domain.
Second, these approaches only align the global distribution between source and target domains without considering the mismatch of the fine-grained class distribution between the domains. As such, target samples belonging to one class can be misaligned to an incorrect class in the source domain. To tackle the aforementioned challenges, we propose an \textbf{A}dversarial \textbf{D}omain \textbf{A}daptation with \textbf{S}elf-\textbf{T}raining (\textbf{ADAST}) framework for EEG-based sleep stage classification. We first propose a domain-specific attention module to preserve both the source-specific and the target-specific features. Second, to align the fine-grained distribution of the unlabeled target domain, we propose a self-training strategy that provides a supervisory signal via target domain pseudo labels. Hence, we can adapt the classification decision boundaries according to the target domain classes. Moreover, we design distinct dual classifiers to improve the robustness of the target domain pseudo labels. The main contributions of this work are summarized as follows: \begin{itemize} \item We propose a novel adversarial domain adaptation framework called ADAST for sleep stage classification to handle the domain shift issue across different datasets. \item ADAST utilizes an unshared domain-specific attention module to preserve the key features in both source and target domains during adaptation, which can boost the classification performance. \item ADAST incorporates a dual-classifier based self-training strategy to align the fine-grained distribution of the unlabeled target domain, which enforces the classification decision boundaries to adapt to the target domain classes. \item Extensive experiments demonstrate that our ADAST achieves superior performance for cross-domain sleep stage classification against state-of-the-art UDA methods.
\end{itemize} \section{Related Works} \subsection{Sleep Stage Classification} Automatic sleep staging with single-channel EEG has been widely studied in the literature. In particular, deep learning based methods \cite{deepsleepnet,attnSleep_paper,seqsleepnet} have shown great advances through end-to-end feature learning. These methods design different network structures to extract the features from EEG data and capture the temporal dependencies. Several studies explored convolutional neural networks (CNNs) for feature extraction from EEG data. For example, Supratak \textit{et al.} \cite{deepsleepnet} proposed two CNN branches to extract different frequency features in EEG signals. Li \textit{et al.} \cite{cnn_se_Sleep} proposed to adopt a CNN supported by a squeeze-and-excitation block to extract the features from multi-epoch EEG data. Eldele \textit{et al.} \cite{attnSleep_paper} developed a multi-resolution CNN with adaptive feature recalibration to extract representative features. Additionally, Qu \textit{et al.} \cite{residual_attn} proposed multiple residual CNN blocks to learn feature mappings. These works further handled the temporal dependencies by using recurrent neural networks (RNNs) as in \cite{deepsleepnet}, or adopted the multi-head self-attention approach as a fast and efficient alternative \cite{attnSleep_paper,residual_attn}. Instead of using CNNs, some works adopted RNNs. The authors in \cite{seqsleepnet} designed an end-to-end hierarchical RNN architecture. It consists of an attention-based recurrent layer to handle the short-term features within EEG epochs, followed by another recurrent layer to capture the epoch-wise features. Some other researchers proposed different ways to handle EEG data. For example, Phan \textit{et al.} \cite{xsleepnet} used both the raw EEG signal and its time-frequency image to design joint multi-view learning from both representations.
Additionally, Jia \textit{et al.} \cite{graphsleepnet} proposed a graph-based approach for sleep stage classification, where graph and temporal convolutions were utilized to extract spatial features and capture the transition rules, respectively. Neng \textit{et al.} \cite{ccrrsleepnet} handled the EEG data at the frame, epoch and sequence levels to extract a mixture of features that would improve the classification performance. Despite the success of these methods in handling complex EEG data, their performance for cross-domain (e.g., cross-dataset) sleep stage classification is limited due to the domain shift issue. Therefore, much research has been directed toward transfer learning approaches to handle this issue. \subsection{Transfer Learning for Sleep Staging} Some works studied the problem of personalized sleep staging~\cite{personalized_1,personalized_2} to improve the classification accuracy for individual subjects within the same dataset using transfer learning. For a dataset with two-night recordings for each subject, they pretrained the model by excluding the two nights of the test subject. Next, the first night is used for fine-tuning the model and the second night is used for evaluation. However, few works have addressed the cross-dataset scenario, i.e., training a model on subjects from one dataset and testing on different subjects from another dataset. Phan \textit{et al.} \cite{phan2020towards} studied the data-variability issue given a large source dataset and a different, labeled but insufficient target dataset. They trained their model on the source dataset, and fine-tuned it on the smaller target dataset. With a similar problem setting, Phan \textit{et al.} \cite{channel_mismatch} proposed to use deep transfer learning to overcome the problem of channel mismatch between the two domains.
These methods require either a large source dataset to increase their generalization ability or a labeled target dataset to fine-tune their models. Unsupervised domain adaptation (UDA) approaches were proposed to address these issues by aligning the features from different domains. These approaches can be categorized as discrepancy-based approaches and adversarial-based approaches. The discrepancy-based approaches attempt to minimize the distance between the source and target distributions. For example, Maximum Mean Discrepancy (MMD) \cite{mmd} and CORrelation ALignment (CORAL) \cite{coral} align the first- and second-order statistics, respectively. On the other hand, adversarial-based approaches mimic the adversarial training proposed in the generative adversarial network (GAN) \cite{goodfellow_gan}. Nasiri \textit{et al.} \cite{attentive_sleep_Staging} considered the problem of data transferability between two datasets, where models suffer poor generalization across subjects/datasets. They proposed adversarial training along with local and global attention mechanisms to extract transferable individual information for unsupervised domain adaptation. \begin{figure*} \centering \includegraphics[width=\textwidth]{imgs/overall_architecture22.pdf} \caption{Overall architecture of the proposed ADAST framework. The shared feature extractor consists of three convolutional blocks, where each block contains 1D-convolution, batch normalization, non-linear ReLU activation and MaxPooling. The two classifiers share the same architecture, but we apply a similarity constraint on their weights to keep them from being identical to each other (best viewed in colors, as blocks with similar colors represent shared components).} \label{Fig:end-to-end} \end{figure*} \section{Method} \subsection{Preliminaries} In this work, we focus on the problem of unsupervised cross-domain adaptation for EEG-based sleep staging.
In this setting, we have access to a labeled source dataset ${X}_s= \{(\mathbf{x}_s^i,y_s^i)\}_{i=1}^{n_s}$ of $n_s$ labeled samples, and an unlabeled target dataset ${X}_t= \{(\mathbf{x}_t^j)\}_{j=1}^{n_t}$ of $n_t$ target samples. The source and target domains are sampled from the source distribution $P_s(X_s)$ and the target distribution $P_t(X_t)$, respectively. The source and target domains have different marginal distributions (i.e., $P_s \neq P_t$), yet they share the same label space $Y=\{1,2, \dots, K\}$, where $K$ is the number of classes (i.e., sleep stages). The domain adaptation scenario aims to transfer the knowledge from a labeled source domain to a domain-shifted unlabeled target domain. In the context of EEG data, both $\mathbf{x}_s^i$ and $\mathbf{x}_t^j$ $\in \mathbb{R}^{1 \times T}$, where the number of electrodes/channels is $1$ since we use single-channel EEG data, and $T$ represents the number of timesteps in the 30-second EEG epochs. \subsection{Overview} As shown in Fig.~\ref{Fig:end-to-end}, our proposed framework consists of three main components, namely the domain-specific attention, the adversarial training and the dual-classifier based self-training. First, the domain-specific attention plays an important role in refining the extracted features so that each domain preserves its key features. Second, the adversarial training step leverages a domain discriminator to align the source and target features. Particularly, the domain discriminator network is trained to distinguish between the source and target features, while the feature extractor is trained to confuse the domain discriminator by generating domain-invariant features. Finally, the self-training strategy utilizes the target domain pseudo labels to adapt the classification decision boundaries according to the target domain classes. The dual classifiers are incorporated to improve the quality and robustness of the pseudo labels.
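At a glance, the data flow through these components can be sketched with simple stand-in callables. This is a minimal, hypothetical wiring (the names `make_pipeline` and `predict` are ours); in the actual framework each component is a neural network rather than a plain function:

```python
# Hypothetical sketch of the ADAST data flow: a shared feature extractor F,
# an unshared domain-specific attention A, and dual classifiers C1/C2 whose
# probability vectors are averaged. Real components are neural networks.

def make_pipeline(F, A, C1, C2):
    """Compose extractor -> domain-specific attention -> dual classifiers,
    returning the averaged class-probability vector."""
    def predict(x):
        feats = A(F(x))                      # refined features A(F(x))
        p1, p2 = C1(feats), C2(feats)        # two classifier outputs
        return [(a + b) / 2 for a, b in zip(p1, p2)]  # averaged prediction
    return predict

# Toy instantiation: identity extractor/attention, fixed classifier outputs.
predict = make_pipeline(lambda x: x, lambda f: f,
                        lambda f: [0.8, 0.2], lambda f: [0.6, 0.4])
```

In the full model, the source and target branches share `F` but use distinct attention modules `A_s` and `A_t`, as detailed in the following subsections.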
Further details about each component are provided in the following subsections. \subsection{Domain-specific Attention} Our proposed framework extracts domain-invariant features by using a shared CNN-based feature extractor, i.e., $F_s(\cdot) = F_t(\cdot) = F(\cdot)$. However, relying solely on this shared architecture may not preserve the key features of each domain. Therefore, we propose an unshared attention module to effectively capture domain-specific information and hence refine the extracted features for both source and target domains. For each position in the feature space, the attention module calculates a weighted sum of the features at all positions at little computational cost. Thus, the features at each position incorporate fine details from distant portions of the feature map. Formally, an input source sample $\mathbf{x}_s \in \mathbb{R}^{1 \times T}$ is passed through the feature extractor to generate the source features, i.e., $F(\mathbf{x}_s) = (\mathbf{f}_{s1}, \dots, \mathbf{f}_{sl})\in \mathbb{R}^{d \times l}$, where $d$ is the number of CNN channels, and $l$ is the length of the features. Inspired by \cite{sagan}, we deploy a convolutional attention mechanism as shown in Fig.~\ref{fig:self-attn}. The attention operation starts by obtaining new representations for the features at each position using two 1D-convolutions, i.e., $H_1$ and $H_2$. Specifically, given $\mathbf{f}_{si}, \mathbf{f}_{sj} \in \mathbb{R}^{d}$, which are the feature values at positions $i$ and $j$, they are transformed into $\mathcal{Z}_{si} = H_1(\mathbf{f}_{si})$ and $\mathcal{Z}_{sj} = H_2(\mathbf{f}_{sj})$. The attention scores are calculated as follows. \begin{equation} \mathcal{V}_{ji} = \frac{\exp (\mathcal{Z}_{si}^\top \mathcal{Z}_{sj})}{\sum_{k=1}^{l} \exp(\mathcal{Z}_{sk}^\top \mathcal{Z}_{sj})}.
\label{eqn:attn_map} \end{equation} Here, the attention score $\mathcal{V}_{ji}$ indicates the extent to which the $j^{th}$ position attends to the $i^{th}$ position in the feature map. The output of the attention layer is $\mathcal{O}_{s} = (\mathbf{o}_{s1}, \dots, \mathbf{o}_{sj}, \dots, \mathbf{o}_{sl}) \in \mathbb{R}^{d \times l}$, where \begin{equation} \mathbf{o}_{sj} = \sum_{i=1}^{l} \mathcal{V}_{ji} \mathbf{f}_{si}. \label{eqn:attn_out} \end{equation} We denote the attention process in Equations~\ref{eqn:attn_map} and \ref{eqn:attn_out} as $A(\cdot)$, such that $\mathcal{O}_s = A_s(F(\mathbf{x}_s))$. The same process applies to the target domain data flow to train $A_t$. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{imgs/self-attention22.pdf} \end{center} \caption{Design of the domain-specific attention module.} \label{fig:self-attn} \end{figure} \subsection{Adversarial Training} Given the learned source and target representations, which preserve the domain-specific features, adversarial training is employed to align the source and target domains. Inspired by the generative adversarial network (GAN) \cite{goodfellow_gan}, we aim to solve a minimax objective between the feature extractor and the domain discriminator. Specifically, the domain discriminator is trained to classify between the source and target features, while the feature extractor tries to generate indistinguishable representations for both source and target domains. By doing so, the classifier trained on the source domain can generalize well on the target domain. However, with the minimax objective, the discriminator can saturate quickly, resulting in a vanishing-gradient problem \cite{adda}. To address this issue, we train our model using a standard GAN loss with inverted labels \cite{goodfellow_gan}. Formally, the domain discriminator, $D$, classifies the input features as being either from the source or the target domain.
Thus, $D$ can be optimized using a standard cross-entropy loss with the labels indicating the domain of the data point. The objective of this operation $\mathcal{L}_{D}$ can be defined as: \begin{align} \min_D \mathcal{L}_{\mathrm{D}}= &-\mathbb{E}_{\mathbf{x}_{s} \sim P_{s}}[\log D(A_s(F(\mathbf{x}_{s})))] \nonumber \\ &-\mathbb{E}_{\mathbf{x}_{t} \sim P_{t}}[\log (1-D(A_t(F(\mathbf{x}_{t}))))], \label{eqn:train_disc} \end{align} where $\mathcal{L}_{\mathrm{D}}$ is used to optimize the domain discriminator separately so that it discriminates the source and target features. On the other hand, the feature extractor and the domain-specific attention are trained to confuse the discriminator by mapping the target features to be similar to the source ones. The objective function can be described as: \begin{align} \min_{F,A_s,A_t} \mathcal{L}_{\mathrm{adv}} = &-\mathbb{E}_{\mathbf{x}_{s} \sim P_{s}}[\log (1-D(A_s(F(\mathbf{x}_{s}))))] \nonumber \\ &-\mathbb{E}_{\mathbf{x}_{t} \sim P_{t}}[\log D(A_t(F(\mathbf{x}_{t})))]. \label{eqn:adv_train} \end{align} Notably, only $\mathcal{L}_{\mathrm{adv}}$, which optimizes the feature extractor and the domain-specific attentions, is added to the overall objective function to ensure that the model is able to generate domain-invariant features. \subsection{Dual Classifier based Self-Training} After the adversarial training step, the global distributions between the source and target domains are aligned. However, the classes between different domains may be misaligned, which can deteriorate the performance. Hence, there is a need to align the fine-grained class distributions between the source and target domains. To address this issue, we propose a novel self-training strategy supported by dual classifiers. To apply self-training, we first use the model to produce pseudo labels for the target domain.
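Before detailing the self-training step, the two adversarial objectives in the equations above can be illustrated numerically. In this sketch, `p_src` and `p_trg` (our names, not the paper's) stand for the discriminator's output probabilities $D(\cdot)$ that a batch of source/target features comes from the source domain, with batch means approximating the expectations:

```python
import math

def disc_loss(p_src, p_trg):
    """L_D: cross-entropy pushing D toward 1 on source features
    and 0 on target features (batch means approximate expectations)."""
    return (-sum(math.log(p) for p in p_src) / len(p_src)
            - sum(math.log(1.0 - p) for p in p_trg) / len(p_trg))

def adv_loss(p_src, p_trg):
    """L_adv: the same loss with inverted labels, used to update the
    feature extractor and the domain-specific attentions instead of D."""
    return (-sum(math.log(1.0 - p) for p in p_src) / len(p_src)
            - sum(math.log(p) for p in p_trg) / len(p_trg))

# When D is maximally confused (p = 0.5 everywhere), both losses
# equal 2*ln(2), i.e. roughly 1.386.
```

The inverted-label form avoids the vanishing gradients of the raw minimax objective: rather than ascending the discriminator's loss, the extractor descends a loss that is large precisely when $D$ is confident.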
Next, we train the model with these pseudo labels, which act as a supervisory signal to adapt the decision boundaries according to the target domain classes. However, the generated pseudo labels might be noisy and inefficient. Therefore, at the end of training, we generate new pseudo labels and use them to retrain the model. This process is repeated for $r$ iterations. Due to the domain shift between the source and target domains, we aim to further improve the robustness of the generated pseudo labels. Therefore, we jointly train dual classifiers $C_1$ and $C_2$ that share the same architecture. Our dual-classifier approach has two main benefits. First, it helps the model to avoid the variance in the training data. Second, the average prediction vector of the two classifiers decreases the probability of low-confidence predictions. Since the two classifiers share the same architecture, we need to ensure their diversity and make sure they do not converge to the same predictions during training. Thus, we add a regularization term $ | \theta_{C_1}^\intercal \theta_{C_2} |$ on the weights of the two classifiers, inspired by \cite{tri_training}, where $\theta_{C_1}$, $\theta_{C_2}$ represent the weights of $C_1$ and $C_2$ respectively. This regularization term ensures the diversity of the two classifiers and helps them to produce different yet correct predictions. The final prediction vector is the average of the predictions of both classifiers. Formally, in each iteration, we first calculate the average probability $\mathbf{p}_{t}$ of the two classifiers, and the corresponding target pseudo labels $\hat{y}_{t}$ as follows. \begin{align} & \mathbf{p}_{t} = \frac{1}{2} \left[C_1(A_t(F(\mathbf{x}_{t}))) + C_2(A_t(F(\mathbf{x}_{t})))\right], \label{eqn:pt} \\ & \hat{y}_{t} = \arg\max(\mathbf{p}_{t}). \label{eqn:y_pseudo} \end{align} The target classification loss $\mathcal{L}_{\mathrm{cls}}^{t}$ based on the above pseudo labels is defined as follows.
\begin{align} \min_{F,A_t,C_1,C_2} \mathcal{L}_{\mathrm{cls}}^{t}= -\mathbb{E}_{\mathbf{x}_{t} \sim P_{t}} \sum_{k=1}^K \mathbbm{1}_{[\hat{y}_t = k]} \log \mathbf{p}_{t}^k, \label{eqn:trg_cls} \end{align} where $\mathbbm{1}$ is the indicator function, which is set to be 1 when the condition is met, and set to 0 otherwise. The target classification loss $\mathcal{L}_{\mathrm{cls}}^{t}$ optimizes the feature extractor $F$, the target domain-specific attention $A_t$ as well as the dual classifiers $C_1$ and $C_2$. Similarly, the source classification loss $\mathcal{L}_{\mathrm{cls}}^{s}$, which depends on the source labels $y_s$, is formalized as follows. \begin{align} &\mathbf{p}_{s} = \frac{1}{2}~\left[C_1(A_s(F(\mathbf{x}_{s}))) + C_2(A_s(F(\mathbf{x}_{s})))\right], \\ &\min_{F,A_s,C_1,C_2} \mathcal{L}_{\mathrm{cls}}^{s}= -\mathbb{E}_{(\mathbf{x}_{s},y_{s}) \sim P_{s}} \sum_{k=1}^K \mathbbm{1}_{[y_s = k]} \log \mathbf{p}_{s}^k , \label{eqn:src_cls} \end{align} where the source classification loss $\mathcal{L}_{\mathrm{cls}}^{s}$ optimizes the feature extractor $F$, the source domain-specific attention $A_s$ as well as the dual classifiers $C_1$ and $C_2$. To sum up, we integrate the adversarial loss with the source and target classification losses and the regularization of the dual classifiers in one objective loss function as follows. \begin{align} \mathcal{L}_{\mathrm{overall}} = \mathcal{L}_{\mathrm{adv}} + \mathcal{L}_{\mathrm{cls}}^s + \lambda_1 \mathcal{L}_{\mathrm{cls}}^t + \lambda_2 | \theta_{C_1} ^\intercal \theta_{C_2} |. \label{eqn:overall} \end{align} Since the adversarial training and the source classification are two essential modules, we set their weights to one and tune the values of the two hyperparameters $\lambda_1$ and $\lambda_2$ to control their contributions.
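To make the self-training machinery concrete, the pseudo-labeling step and the classifier-weight regularizer can be sketched as follows. The function names and the flattened weight vectors are our illustrative assumptions; in the real model the weights are network parameter tensors:

```python
def pseudo_label(p1, p2):
    """Average the two classifiers' probability vectors and take the
    argmax as the target pseudo label (the averaging/argmax steps above)."""
    p = [(a + b) / 2.0 for a, b in zip(p1, p2)]
    return p, max(range(len(p)), key=p.__getitem__)

def weight_penalty(theta1, theta2):
    """|theta_C1^T theta_C2|: absolute inner product of the (flattened)
    classifier weights, discouraging identical dual classifiers."""
    return abs(sum(a * b for a, b in zip(theta1, theta2)))

# Orthogonal classifier weights incur no penalty.
print(weight_penalty([1.0, 0.0], [0.0, 1.0]))  # → 0.0
```

Averaging the two probability vectors before the argmax means a sample is confidently pseudo-labeled only when the two diversified classifiers agree, which is what makes the pseudo labels more robust.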
Overall, the three losses are integrated to guide the feature extractor to generate domain-invariant features, while allowing the domain-specific attentions to preserve the key features for each domain. Additionally, the dual classifiers are diversified using the regularization term. \section{Experiments} \subsection{Datasets} We evaluate the proposed framework on three challenging datasets, namely Sleep-EDF\footnote{https://physionet.org/physiobank/database/sleep-edfx/} (\textbf{EDF} for short), SHHS-1 (\textbf{S1}) and SHHS-2 (\textbf{S2}). These three datasets represent distinct domains due to their differences in sampling rates and EEG channels. The EDF dataset contains PSG readings of 20 healthy subjects, 10 males and 10 females. Each PSG recording consists of two EEG channels, namely Fpz-Cz and Pz-Oz, with a sampling rate of 100 Hz. We adopted the EEG recordings from the Fpz-Cz channel following previous studies \cite{deepsleepnet,seqsleepnet,attnSleep_paper}. Both S1 and S2 are derived from the SHHS dataset\footnote{https://sleepdata.org/datasets/shhs} \cite{shhs_ref1,shhs_ref2}. SHHS is a multi-center cohort study conducted to assess the cardiovascular and other consequences of sleep-disordered breathing. The subjects in the SHHS dataset suffered from different diseases, such as cardiovascular diseases and lung diseases. The S1 dataset contains the data from the first visits of the patients during 1995 to 1998, while the S2 dataset contains the data from the second visits of the patients in 2011. Each PSG file in both datasets contains data from two EEG channels, namely C4-A1 and C3-A2, where we only adopt the C4-A1 channel recordings for both datasets. We selected subjects from the S1 and S2 datasets such that 1) they contain different patients, 2) subjects from the S2 dataset have a sampling rate of 250 Hz, and 3) the subjects have an Apnea Hypopnea Index (AHI) $<1$ to eliminate the bias toward sleep disorders~\cite{AHI_reference}.
Notably, we down-sampled the data from the S1 and S2 datasets such that the sequence length is the same as in the EDF dataset, i.e., 30 seconds $\times$ 100 Hz ($T=3000$). We preprocessed the three datasets by 1) merging stages N3 and N4 into one stage (N3) according to the AASM standard, and 2) including only 30 minutes of wake stage periods before and after the sleep~\cite{deepsleepnet}. Table \ref{tbl:datasets} shows a brief summary of the above three datasets before down-sampling. \input{tables/datasets} \input{tables/baselines_comparison} \subsection{Experimental Settings} To evaluate the performance of our model and the baseline models, we employed the classification accuracy (ACC) and the macro-averaged F1-score (MF1). These two metrics are defined as follows: \begin{align} &ACC = \frac{\sum_{i=1}^{K}TP_i}{M}, \label{equ:acc}\\ &MF1 = \frac{1}{K} \sum_{i=1}^{K} \frac{2 \times Precision_i \times Recall_i}{Precision_i + Recall_i} \label{equ:f1}, \end{align} where $Precision_i = \frac{TP_i}{TP_i + FP_i}$, and $ Recall_i = \frac{TP_i}{TP_i + FN_i} $. $TP_i, ~FP_i,~ TN_i$, and $FN_i$ denote the True Positives, False Positives, True Negatives, and False Negatives for the $i$-th class respectively, $M$ is the total number of samples and $K$ is the number of classes. All the experiments were repeated 5 times with different random seeds for model initialization, and then we reported the average performance (i.e., ACC and MF1) with standard deviation. We performed \textit{subject-wise} splits for the data from the three domains, i.e., we split them into 60\%, 20\%, 20\% for training, validation and testing, respectively. Note that all the data from one subject were assigned to only one of the three sets under the \textit{subject-wise} splits. In particular, we used the training part of the source and target domains while training our model. We used the validation and test parts of the target domain for validation and testing.
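The two metrics defined above can be computed directly from per-class counts. A self-contained sketch (`acc_mf1` is our illustrative implementation; a class that is never predicted nor present gets a per-class F1 of zero):

```python
def acc_mf1(y_true, y_pred, num_classes):
    """Accuracy and macro-averaged F1 from per-class true positives,
    false positives and false negatives, following the formulas above."""
    n = len(y_true)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / n
    f1_scores = []
    for k in range(num_classes):
        tp = sum(t == k and p == k for t, p in zip(y_true, y_pred))
        fp = sum(t != k and p == k for t, p in zip(y_true, y_pred))
        fn = sum(t == k and p != k for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        denom = precision + recall
        f1_scores.append(2 * precision * recall / denom if denom else 0.0)
    return acc, sum(f1_scores) / num_classes
```

Macro averaging weights every sleep stage equally, which is why MF1 is the more informative metric for the heavily imbalanced stage distribution (e.g., the rare N1 stage).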
Following \cite{tri_training,dirt}, we used the validation split of the target domain to select the best hyperparameters in our model. We tuned the parameters $\lambda_1, \lambda_2$ in the range $\{0.00001, 0.0001, 0.001, 0.01, 0.1, 1\}$ and set their values as $\lambda_1=0.01$ and $\lambda_2=0.001$. For self-training, we set the maximum number of iterations $r$ to 2, as the performance of the model was found to converge. We used the Adam optimizer with a learning rate of 1e-3 that is decayed by 0.1 after 10 epochs, a weight decay of 3e-4, $\beta_1 = 0.5$, $\beta_2 = 0.99$, and a batch size of 128. All the experiments were performed with PyTorch 1.7 on an NVIDIA GeForce RTX 2080 Ti GPU. The source code and supplementary material are available at \href{https://github.com/emadeldeen24/ADAST}{https://github.com/emadeldeen24/ADAST}. \subsection{Baselines} To assess our proposed ADAST model, we compared it against seven state-of-the-art domain adaptation baselines. In particular, Deep CORAL, MDDA and DSAN are discrepancy-based methods, while DANN, ADDA, CDAN and DIRT-T are adversarial-based methods. \begin{itemize} \item \textbf{Deep CORAL} \cite{deep_coral}: it extends CORAL \cite{coral} to learn a nonlinear transformation that aligns the correlations of layer activations in deep neural networks. \item \textbf{MDDA} \cite{mdda}: it applies MMD and CORAL on multiple classification layers to minimize the discrepancy between the source and target domains. \item \textbf{DSAN} \cite{dsan}: it incorporates a local MMD loss to align the same-class sub-domain distributions. \item \textbf{DANN} \cite{dann}: it jointly trains the feature extractor and the domain classifier by negating the gradient from the domain classifier with a gradient reversal layer (GRL). \item \textbf{ADDA} \cite{adda}: it performs a similar operation to DANN but by inverting the labels instead of using GRL.
\item \textbf{CDAN} \cite{cdan}: it minimizes the cross-covariance between feature representations and classifier predictions. \item \textbf{DIRT-T} \cite{dirt}: it combines virtual adversarial domain adaptation (VADA) with a teacher to refine the decision boundary for the target domain. \end{itemize} We used our backbone feature extractor for all the baselines to ensure a fair comparison. We tuned the hyperparameters of the baselines to achieve their best performance. We also included \textbf{Source-Only} in the experiments, which was trained on the source domain and directly tested on the target domain without adaptation. It represents the lower bound in our experiments. \subsection{Experimental Results} \label{sec:exp_results} Table~\ref{tbl:baselines_comparison} shows the comparison results among various methods. Overall, all the domain adaptation methods, including discrepancy-based and adversarial-based methods, achieve better performance than Source-Only. This indicates the importance of using domain adaptation to address the domain shift problem for cross-dataset sleep stage classification. We noticed that the three methods considering the class-conditional distribution, i.e., CDAN, DIRT-T and DSAN, outperform the ones globally aligning the source and target domains, i.e., DANN, Deep CORAL, ADDA and MDDA. This indicates that considering the class distribution, especially in the case of imbalanced sleep data, is important to achieve better classification performance on the target domain. Our proposed ADAST achieves a superior performance over all the baselines in terms of both mean accuracy and F1-score in four out of six cross-domain scenarios for two reasons. First, our ADAST, similar to CDAN, DIRT-T and DSAN, also considers the class-conditional distribution. In particular, ADAST explores the target domain classes using the proposed self-training strategy with dual classifiers.
Second, ADAST preserves domain-specific features using the unshared attention module, which improves the performance. As shown in Table \ref{tbl:baselines_comparison}, the performance of our model is lower than that of most baselines in the scenario S1$\rightarrow$EDF. Note that we used the same value of $\lambda_1$ (i.e., 0.01) for all six scenarios, which might not be optimal for every scenario. We found that the quality of the pseudo labels is not good in the scenario S1$\rightarrow$EDF, and thus we should use a smaller $\lambda_1$ to reduce the contribution of the target classification loss. By tuning $\lambda_1$ from 0.01 to $10^{-6}$, the mean accuracy and MF1 of our ADAST in the scenario S1$\rightarrow$EDF would increase from 75.94\% and 63.33\% to 78.50\% and 64.73\%, respectively. Please refer to Fig. S.1c in the supplementary material for more details. We also observed interesting results while investigating different cross-dataset scenarios. Various methods usually achieve better performance in the cross-domain scenario S1$\rightarrow$S2 than in EDF$\rightarrow$S2 (and similarly, S2$\rightarrow$S1 is better than EDF$\rightarrow$S1). To explain this, as shown in Table \ref{tbl:datasets}, S1 and S2 are closer to each other, as they have the same EEG channel. Meanwhile, EDF has a different EEG channel and sampling rate, and thus it is a distant domain from S1 and S2. These results indicate that distant domain adaptation is still very challenging. Finally, we observed that S1$\rightarrow$EDF is easier than S2$\rightarrow$EDF, probably because S1 and EDF have a similar sampling rate. \input{tables/ablation} \subsection{Ablation Study} We assessed the contribution of each component in our ADAST framework, namely the unshared domain-specific attention module (\textbf{ATT}), the dual classifiers (\textbf{DC}) and self-training (\textbf{ST}). Particularly, we conducted an ablation study to show the results of different variants of ADAST in Table~\ref{tbl:ablation}.
The results support three main conclusions. First, using the proposed domain-specific attention benefits the overall performance, as it helps to preserve the domain-specific features. Second, the self-training improves the classification performance by $\sim$1.1\%. This improvement shows the benefit of incorporating the target domain class information in refining the classification boundaries via the pseudo labels. Third, the addition of dual classifiers benefits the overall classification performance, as it reduces the variance induced by the training data. Moreover, combining it with self-training further improves the performance by 2.5\%, since it improves the quality of the pseudo labels. \begin{figure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{imgs/source_only_c_to_a_src_trg.png} \caption{} \label{fig:src_trg_alignment:a} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{imgs/my_method_c_to_a_src_trg.png} \caption{} \label{fig:src_trg_alignment:b} \end{subfigure} \caption{UMAP feature space visualization showing the source and target domain alignment using (a) Source-Only, and (b) our ADAST, applied for the scenario S2$\rightarrow$EDF.} \label{fig:src_trg_alignment} \end{figure} \begin{figure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{imgs/source_only_c_to_a_trg_only.png} \caption{} \label{fig:trg_classification:a} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{imgs/my_method_c_to_a_trg_only.png} \caption{} \label{fig:trg_classification:b} \end{subfigure} \caption{UMAP feature space visualization showing the target domain classification performance after (a) Source-Only, and (b) our ADAST alignment, applied for the scenario S2$\rightarrow$EDF.} \label{fig:trg_classification} \end{figure} \subsection{Representation Visualization} In Section
\ref{sec:exp_results}, the results illustrate the advantages of our proposed ADAST framework over the Source-Only baseline. To make the comparison more intuitive, we visualized the feature representations learned during training using UMAP \cite{umap}. First, we investigated the alignment quality: Fig.~\ref{fig:src_trg_alignment} visualizes the source and target alignment in the scenario S2$\rightarrow$EDF. In particular, Fig.~\ref{fig:src_trg_alignment:a} shows the Source-Only alignment, and Fig.~\ref{fig:src_trg_alignment:b} shows our ADAST framework alignment. In these figures, the red dots represent the source domain, and the blue dots denote the target domain. We can observe that the Source-Only alignment is poor, as there are many disjoint patches that are not well aligned with the target domain. In contrast, our ADAST framework aligns the two domains into an arc shape, which increases the overlapping region and makes the two domains less distinguishable from each other. Additionally, we investigated the target domain classification performance in the aforementioned scenario after the alignment in Fig.~\ref{fig:trg_classification}. In particular, Fig.~\ref{fig:trg_classification:a} is the Source-Only performance, and Fig.~\ref{fig:trg_classification:b} is the one after our alignment. We noticed that the Source-Only alignment generates many overlapping samples from different classes, which degrades the target domain classification performance. On the other hand, our ADAST framework improves the discrimination between the classes, making them more distinct from each other. This is achieved with the aid of the self-training strategy. \subsection{Sensitivity Analysis} \textbf{Effect of target classification loss.} Since the self-training process relies on target domain pseudo labels, it is not practical to assign a high weight to the target classification loss, as the pseudo labels are expected to carry some uncertainty.
Therefore, we studied the effect of different values of the weight $\lambda_1$ assigned to the target classification loss, as shown in Fig.~\ref{fig:sens_trg_cls}. Notably, when $\lambda_1$ is very small (i.e., $\lambda_1=10^{-6}$), the self-training becomes ineffective, and the performance is very close to the case without self-training. As we gradually increase the value of $\lambda_1$, the overall performance improves until reaching the optimal value $\lambda_1=0.01$. Further increasing $\lambda_1$ deteriorates the performance, as the model is heavily penalized based on the pseudo labels, which may contain incorrect labels. \begin{figure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{imgs/trg_cls_sens_analysis.png} \caption{} \label{fig:sens_trg_cls} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{imgs/dissim_sens_analysis.png} \caption{} \label{fig:sens_dissim} \end{subfigure} \caption{Sensitivity analysis for different values of $\lambda_1$ and $\lambda_2$ in Eq.~\ref{eqn:overall}.} \label{fig:sens_analysis} \end{figure} \textbf{Effect of classifier weight constraint.} Since the dual classifiers share the same architecture, it is important to keep their predictions relatively different, but without a large gap. The classifier weight constraint keeps this distance within an acceptable margin; hence, it is important to study the effect of this term and how its weight $\lambda_2$ should be selected. We analyzed the performance of our model with different $\lambda_2$ values, as illustrated in Fig.~\ref{fig:sens_dissim}. When $\lambda_2$ is very small, the two classifiers behave almost identically, yielding performance similar to that of a single classifier. The performance gradually improves as $\lambda_2$ increases, since the two classifiers tend to make different classification decisions.
The best performance is achieved with $\lambda_2=0.001$. However, as its value is increased beyond this threshold, the overall performance degrades. This happens because the weights of the two classifiers become very dissimilar, moving them away from the correct predictions. \section{Conclusions} In this paper, we proposed a novel adversarial domain adaptation architecture for sleep stage classification using single-channel raw EEG signals. We tackled the problem of domain shift that arises when training the model on one dataset (i.e., the source domain) and testing it on another, out-of-distribution dataset (i.e., the target domain). We developed unshared attention mechanisms to preserve domain-specific features. We also proposed a dual classifier based self-training strategy, which helps the model adapt the classification boundaries to the target domain with robust pseudo labels. The experiments performed on six cross-domain scenarios generated from three public datasets show that our model achieves superior performance over state-of-the-art domain adaptation methods. Additionally, the ablation study shows that the dual classifier based self-training is the main contributor to the improvement, as it considers the class-conditional distribution in the target domain. \bibliographystyle{unsrt}
\section{Introduction} In 1924 T. Rad\'o proved the following remarkable theorem \cite{Ra}: \begin{theorem} Let $\Omega\subseteq \mathbb{C}$ be an open set and $f$ be a continuous function in $\Omega$ which is holomorphic on $\Omega\setminus f^{-1}(0)$. Then $f$ is holomorphic in $\Omega$. \end{theorem} A distinguishing feature of Rad\'{o}'s result is that, unlike other removable singularity results where the {\it singularity set} is required to be small in some sense, here no size bounds are a priori assumed. It is just the image of this set that has to be small. This theorem has been generalized by many authors in various directions. In particular, an analogous result holds for multi-dimensional holomorphic functions. In a different direction, Stout \cite{St} proved that if $f$ is continuous in $\Omega\subseteq \mathbb{C}$ and not locally constant, holomorphic in $\Omega\setminus E$ for a relatively closed set $E$, and $f(E)$ is polar, then $f$ extends holomorphically past $E$. Analogous questions are meaningful not only for the class of holomorphic functions. The analogue of the Rad\'o theorem also holds for harmonic functions and, even more generally, for solutions to homogeneous uniformly elliptic second order PDEs, see \cite{Kr}. More generally still, the Rad\'o theorem holds for certain classes of viscosity solutions to nonlinear elliptic PDEs, see \cite{T} and references therein. In this note we focus our attention on the case of subharmonic and plurisubharmonic functions. As the examples $$u(x_1,\cdots,x_n)=-|x_1|,\ v(z_1,\cdots,z_n)=-|\mathrm{Re}(z_1)|$$ clearly show (this observation was made by Pokrovski\u{\i} in \cite{PP}), the direct analogue of Rad\'o's theorem fails for these classes: both examples are continuous, even Lipschitz, and subharmonic (resp. plurisubharmonic) except on a set that is sent by $u$ (resp. by $v$) to zero. It turns out that an additional smoothness assumption is the key to obtaining a positive result.
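To make the failure concrete, note that $u(x_1,\ldots,x_n)=-|x_1|$ is harmonic, hence subharmonic, off the hyperplane $E=\{x_1=0\}$, while the sub-mean value inequality fails at every point $x^0\in E$: for any $r>0$, $$\frac{1}{\lambda^{n}(B_{r}(x^0))}\int_{B_{r}(x^0)}u\,d\lambda^{n}=-\frac{1}{\lambda^{n}(B_{r})}\int_{B_{r}}|y_1|\,d\lambda^{n}(y)<0=u(x^0),$$ so $u$ is not subharmonic in any neighborhood of a point of $E$, and yet $u(E)=\{0\}$ is a single point.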
Our main theorem, which mirrors the result in \cite{Kr} for harmonic functions, can be stated as follows: \begin{theorem}\label{main} Let $\Omega$ be open in $\mathbb R^{n}$ (respectively in $\mathbb C^{n}$) and $E\subseteq \Omega$ be a Borel set. If $u\in C^{1,p}(\Omega),\, p\in(0,1]$, is subharmonic (respectively plurisubharmonic) in some open neighborhood of $\Omega\setminus E$ and the Hausdorff measure $\mathcal H^{p}(u(E))=0$, then $u$ is actually subharmonic (respectively plurisubharmonic) in $\Omega$. If $u\in C^{1}(\Omega)$ then the same conclusion holds if $u(E)$ is at most countable. The results are optimal with respect to the size of the image of $E$. \end{theorem} It is interesting to note that our theorem is {\it sharp} in both settings, whereas in many other situations sharp results for subharmonic functions are in general not sharp for plurisubharmonic ones. A typical example is Lelong's theorem: (pluri)subharmonic functions which are locally bounded above extend through polar sets. This is optimal for subharmonic functions, whereas there are non-polar sets through which locally bounded above plurisubharmonic functions do extend. In principle, this is because counterexamples tend to belong to the ``harmonic part'' of the theory of subharmonic functions. We also wish to point out that, as a rule, general removability results for subharmonic functions are sharp. On the other hand, there are quite a few removability results for plurisubharmonic functions for which the optimality of the assumptions remains open, see \cite{HP}, \cite{Po}. Note that Theorem \ref{main} settles the extension problem in the classes $C^{1,p}$, $0\leq p\leq 1$. If the functions in question are $C^2$ or better, the picture changes dramatically, as our next result shows: \begin{theorem}\label{c2} Let $\Omega$ be open in $\mathbb R^{n}$ (respectively in $\mathbb C^{n}$) and $E\subseteq \Omega$ be a Borel set.
If $u\in C^{2}(\Omega)$ is subharmonic (respectively plurisubharmonic) in some open neighborhood of $\Omega\setminus E$ then $u$ extends subharmonically (respectively plurisubharmonically) to the whole $\Omega$ if and only if $u(E)$ has empty interior. \end{theorem} We wish to point out that $\mathrm{int}\, u(E)=\emptyset$ is easily seen to be necessary, as elementary examples show, but the other implication is somewhat tricky, especially along the critical set $\lbrace z\in\Omega: \nabla u(z)=0\rbrace$. In fact we shall use a recent result of Gardiner and S\"odin \cite{GS} about extensions of subharmonic functions past critical sets. Another key tool in our approach comes from our previous work \cite{DD}, where we proved that plurisubharmonic functions with subharmonic singularities across a Lebesgue measure zero set extend plurisubharmonically. Throughout the paper we also make use of the following simple but fundamental fact from calculus: if a real valued function $u$ attains a local extremum at some point $x$ where the gradient of $u$ exists finitely (that is, the partial derivatives of $u$ exist finitely), then $\nabla u(x)=0$, regardless of whether $u$ is differentiable at $x$ or not. The note is organized as follows. In the next section we discuss various technical results of independent interest related to weaker notions of differentiability that we shall use later on. In Section 3 we recall and slightly generalize our findings from \cite{DD}. In Section 4 we thoroughly discuss the result of Gardiner and S\"odin from \cite{GS}, providing yet another slight generalization. Section 5 is devoted to the proofs of Theorems \ref{main} and \ref{c2}. Finally, in Section 6 some applications are given. \section{Critical sets and sets of finite existence of the gradient of discontinuous functions}\label{borelstuff} In this section we study the Borel complexity of the critical set and the set of finite existence of the gradient of discontinuous functions.
Our main objects of interest are (pluri)subharmonic functions, thus upper semicontinuous ones, yet we take the general perspective of real valued functions, as this seems to be a neglected area of function theory; see \cite{D}. Let $\Omega\subseteq \mathbb R^{n}$ be open and $u$ be a finite real valued function on $\Omega$. We consider the set $$E_{1}:=\{x\in\Omega: \nabla u(x) \text{ exists finitely and } \nabla u(x)=0\}.$$ This is the \textit{critical set} of $u$. We consider also $$E_{2}:=\{x\in\Omega: \nabla u(x) \text{ does not exist finitely}\},$$ that is, the set where either some partial derivative of $u$ at $x$ equals $\pm\infty$ or the limit defining this partial derivative does not exist. Subharmonic functions take values in $[-\infty,\infty)$, so we have to take special care of the set $u^{-1}(-\infty)$. Observe that $v:= e^{u}$ is a finite valued subharmonic function, so the discussion below applies to it. Let $E_1^{u}$, $E_2^{u}$ and $E_1^{v}$, $E_2^{v}$ be the corresponding sets for $u$ and $v$. It is immediate that \begin{enumerate} \item $u^{-1}(-\infty)\subseteq E_2^{u}$ \item ${\displaystyle u^{-1}(-\infty)=\bigcap_{j=1}^{\infty}\{x\in\Omega : u(x)<-j\}}$ is $G_{\delta}$ \item $x\in E_{1}^{u}\implies x\in E_1^{v}$ \item $x\in E_{1}^{v}$ and $u(x)>-\infty \implies x\in E_{1}^{u}$ \item $x\in E_{2}^{u}$ and $u(x)>-\infty \implies x\in E_{2}^{v}$ \item $x\in E_{2}^{v} \implies x\in E_{2}^{u}$ \item it may happen that $x\in E_{2}^{u}$ and $x\not \in E_{2}^{v}$, take, for example, $u(z)=\log \Vert z\Vert^2$, but then $x\in E_{1}^{v}$. This is because the ratio $\frac{e^{u(x+t\vec{e}_{i})}}{t}$ takes both positive and negative values, unless it vanishes identically. The latter is possible if $n\geq 3$ because then the coordinate axes form a polar set.
\end{enumerate} Thus, $u^{-1}(-\infty)\subseteq E_{1}^{v}\cup E_{2}^{v}$ and hence $E_{1}^{u}= E_{1}^{v}\setminus u^{-1}(-\infty)$ and $E_{2}^{u}= E_{2}^{v}\cup u^{-1}(-\infty)$, so we may restrict ourselves to finite real valued functions. The Borel complexity does not change. It is a classical result of Zahorski \cite{Za} and Brudno \cite{Bru} that for any finite real valued function of \textit{one variable} the set where the derivative is infinite is an $F_{\sigma\delta}$ set of measure zero, and the set where it exists neither finitely nor infinitely is $G_{\delta\sigma}$. In particular, $E_2$ is Borel. H\'ajek \cite{Ha} proved that for any finite real valued function $u$ of \textit{one variable}, the function $$x\to\limsup_{t\to 0}\frac{u(x+t)-u(x)}{t},$$ called the \textit{upper derivative}, is a Baire-2 function. Hence, the critical set $E_1$ of $u$, which is $$\left\{x: \limsup_{t\to0} \frac{u(x+t)-u(x)}{t}\leq 0\right\}\bigcap\left\{x: \liminf_{t\to0} \frac{u(x+t)-u(x)}{t}\geq 0\right\},$$ is $F_{\sigma\delta}$, and hence Borel. In higher dimensions these results do not generalize in the expected way. Actually, the best one can get for an arbitrary finite real valued function $u$ is that each one-dimensional cross-section, in the direction parallel to the $i$-th coordinate axis, of the sets where the $i$-th partial derivative vanishes, respectively does not exist finitely, is Borel. However, the example of Serrin \cite{S} shows that both $E_1$ and $E_2$ may fail even to be Lebesgue measurable for a measurable function $u$. The example of Neubauer and Hahn \cite{N} shows that $E_1$ and $E_2$ may fail to be Borel for a function $u$ which is Baire-3, yet $E_1$ and $E_2$ are Lebesgue measurable for any Baire function $u$. On the other hand, a recent result by Mykhaylyuk and Plichko \cite{MP} shows that for a continuous $u$ the set where the gradient exists finitely is $F_{\sigma\delta}$. In particular, $E_2$ is $G_{\delta\sigma}$.
We prove the following result: \begin{theorem} For any Baire-1 (in particular, semicontinuous) finite valued function $u:\Omega\to \mathbb R$ the sets $E_1$ and $E_2$, defined above, are Borel (both can be empty). \end{theorem} We begin with a lemma that, in a sense, rescues the argument from the continuous case. \begin{lemma}\label{fsigma} Let $I\subseteq \mathbb R$ be an interval and $B\subseteq \mathbb R^{n}$ be an $F_{\sigma}$ set. Then any set of the form $$\{(x,x_2,\ldots,x_n)\in\mathbb R\times \mathbb R^{n-1}: x_1 - x\in I \text{ and } (x_1,\ldots,x_n)\in B \text{ for some } x_1\in \mathbb R\}$$ is $F_{\sigma}$. \end{lemma} \begin{proof} As intervals are $F_{\sigma}$ sets, we observe that $B\times I\subseteq \mathbb R^{n+1}$ is $F_{\sigma}$. Now the mapping $f:\mathbb R^{n+1}\to\mathbb R^{n}$, where $$f(x_1,\ldots, x_{n+1})=(x_1-x_{n+1},x_2,\ldots,x_{n}),$$ is continuous, being linear, and maps $B\times I$ onto the set from the statement of the lemma. Continuous mappings $\mathbb R^{n+1}\to \mathbb R^{n}$ preserve $F_\sigma$ sets. This can be seen as follows. Every $F_\sigma$ set is an at most countable union of closed sets, and each closed set in $\mathbb R^{n+1}$ can be represented as an at most countable union of compact sets. As continuous mappings transform compact sets to compact ones, we conclude by observing that $f(\bigcup_{s} A_s)=\bigcup_{s} f(A_s)$ for any family of sets $A_s$. \end{proof} \begin{remark} We remark that the lemma is specific to $\mathbb R^{n}$, as in general topological spaces it is not true that continuous mappings preserve $F_{\sigma}$ sets. \end{remark} For the part concerning $E_2$ we follow closely \cite{MP}.
Let $A_{m,k}$ be the set of all $(x,y)\in \mathbb R\times\mathbb R^{n-1}$ such that for all $w,v\in \left(x-\frac{1}{k},x+\frac{1}{k}\right)$ one has $$|u(w,y)-u(v,y)|\leq \frac{1}{m}.$$ Let $B_{m,k}$ be the set of all $(x,y)\in \mathbb R\times\mathbb R^{n-1}$ such that for all $w,w'\in \left(x,x+\frac{1}{k}\right)$ and all $v,v'\in \left(x-\frac{1}{k},x\right)$ one has $$\left|\frac{u(w,y)-u(v,y)}{w-v}-\frac{u(w',y)-u(v',y)}{w'-v'}\right|\leq \frac{1}{m}.$$ Thus, $$A_{m,k}=\bigcap_{w,v \in \left(x-\frac{1}{k},x+\frac{1}{k}\right)}\left\{(x,y): |u(w,y)-u(v,y)|\leq \frac{1}{m}\right\},$$ $$B_{m,k}=$$ $$\bigcap_{v,v'\in \left(x-\frac{1}{k},x\right),\, w,w'\in \left(x,x+\frac{1}{k}\right)}\left\{(x,y): \left|\frac{u(w,y)-u(v,y)}{w-v}-\frac{u(w',y)-u(v',y)}{w'-v'}\right|\leq \frac{1}{m} \right\}.$$ As the restriction of $u$ to the last $(n-1)$ coordinates with the first coordinate fixed is a Baire-1 function, so is the function $\mathbb R^{n-1}\ni y\to |u(w,y)-u(v,y)|\in \mathbb R$, hence the set $\{|u(w,y)-u(v,y)|\leq \frac{1}{m}\}$, for fixed $w$ and $v$, is $G_\delta$. The same holds for the set from the definition of $B_{m,k}$ with $v,v',w,w'$ fixed. The difference with the continuous case is that these sets need not be closed. Since in the definitions of $A_{m,k}$ and $B_{m,k}$ we take uncountable intersections, it is not clear that these sets are Borel. \begin{lemma} The sets $A_{m,k}$ and $B_{m,k}$ are $G_\delta$. \end{lemma} \begin{proof} We note that the complement of $A_{m,k}$ satisfies $$A_{m,k}^{c}=\bigcup_{w,v \in \left(x-\frac{1}{k},x+\frac{1}{k}\right)}\left\{(x,y): u(w,y)-u(v,y)> \frac{1}{m}\right\}$$$$\bigcup \bigcup_{w,v \in \left(x-\frac{1}{k},x+\frac{1}{k}\right)}\left\{(x,y): u(w,y)-u(v,y)<- \frac{1}{m}\right\}.
$$ It is enough to prove that $\bigcup_{w,v \in \left(x-\frac{1}{k},x+\frac{1}{k}\right)}\left\{(x,y): u(w,y)-u(v,y)> \frac{1}{m}\right\}$ is $F_\sigma$, as the proof for $\bigcup_{w,v \in \left(x-\frac{1}{k},x+\frac{1}{k}\right)}\left\{(x,y): u(w,y)-u(v,y)<- \frac{1}{m}\right\}$ is the same. Observe that $$\bigcup_{w,v \in \left(x-\frac{1}{k},x+\frac{1}{k}\right)}\left\{(x,y): u(w,y)-u(v,y)> \frac{1}{m}\right\}$$ $$=\bigcup_{r\in \mathbb Q}$$ $$\left[\bigcup_{w \in \left(x-\frac{1}{k},x+\frac{1}{k}\right)}\left\{(x,y): u(w,y)> r\right\}\bigcap \bigcup_{v \in \left(x-\frac{1}{k},x+\frac{1}{k}\right)}\left\{(x,y): r-u(v,y)> \frac{1}{m}\right\}\right].$$ Now the problem is reduced to proving that $\bigcup_{w \in \left(x-\frac{1}{k},x+\frac{1}{k}\right)}\left\{(x,y): u(w,y)> r\right\}$ is $F_\sigma$. This follows from Lemma \ref{fsigma} with $B=\{(x,y): u(x,y)> r\}$, which is $F_{\sigma}$ as $u$ is Baire-1, and $I=\left(-\frac{1}{k},\frac{1}{k}\right)$. The proof for $B_{m,k}$ is similar but longer.
Again, $$B_{m,k}^{c}=D\bigcup F,$$ where $D$ is equal to $$\bigcup_{v,v'\in \left(x-\frac{1}{k},x\right),\, w,w'\in \left(x,x+\frac{1}{k}\right)}\left\{(x,y): \frac{u(w,y)-u(v,y)}{w-v}-\frac{u(w',y)-u(v',y)}{w'-v'}> \frac{1}{m} \right\}$$ and $F$ is equal to $$\bigcup_{v,v'\in \left(x-\frac{1}{k},x\right),\, w,w'\in \left(x,x+\frac{1}{k}\right)}\left\{(x,y): \frac{u(w,y)-u(v,y)}{w-v}-\frac{u(w',y)-u(v',y)}{w'-v'}<- \frac{1}{m} \right\}.$$ The set $D$ can be written as $$\bigcup_{v,v'\in \left(x-\frac{1}{k},x\right),\, w,w'\in \left(x,x+\frac{1}{k}\right)}\left\{(x,y): \frac{u(w,y)-u(v,y)}{w-v}-\frac{u(w',y)-u(v',y)}{w'-v'}> \frac{1}{m} \right\}$$$$=\bigcup_{r\in \mathbb Q} \left[ \bigcup_{v\in \left(x-\frac{1}{k},x\right),\, w\in \left(x,x+\frac{1}{k}\right)}\left\{(x,y): \frac{u(w,y)-u(v,y)}{w-v}> r \right\}\right.$$ $$\left.\bigcap \bigcup_{v'\in \left(x-\frac{1}{k},x\right),\, w'\in \left(x,x+\frac{1}{k}\right)}\left\{(x,y): r-\frac{u(w',y)-u(v',y)}{w'-v'}> \frac{1}{m} \right\}\right].$$ Further, $$\bigcup_{v\in \left(x-\frac{1}{k},x\right),\, w\in \left(x,x+\frac{1}{k}\right)}\left\{(x,y): \frac{u(w,y)-u(v,y)}{w-v}> r \right\}$$$$=\bigcup_{v\in \left(x-\frac{1}{k},x\right),\, w\in \left(x,x+\frac{1}{k}\right)}\left\{(x,y): u(w,y)-rw>u(v,y)-rv \right\}$$ $$=$$ $$\bigcup_{\rho\in\mathbb Q}\left[ \bigcup_{ w\in \left(x,x+\frac{1}{k}\right)}\left\{(x,y): u(w,y)-rw>\rho \right\}\bigcap \bigcup_{v\in \left(x-\frac{1}{k},x\right)}\left\{(x,y): \rho>u(v,y)-rv \right\}\right].$$ Finally, $\bigcup_{ w\in \left(x,x+\frac{1}{k}\right)}\left\{(x,y): u(w,y)-rw>\rho \right\}$ is $F_{\sigma}$ by Lemma \ref{fsigma}, where $B= \left\{(x,y): u(x,y)-rx>\rho \right\}$ is $F_{\sigma}$ as $u(x,y)-rx$ is Baire-1, and $I= \left(0,\frac{1}{k}\right)$. \end{proof} \begin{remark} As above, this lemma is specific to $\mathbb R^{n}$. \end{remark} Now $A_{m,k}$ is $G_\delta$, hence $A=\bigcap_{m=1}^{\infty}\bigcup_{k=1}^{\infty} A_{m,k}$ is $G_{\delta\sigma\delta}$.
Likewise $B_{m,k}$ is $G_\delta$, hence $B=\bigcap_{m=1}^{\infty}\bigcup_{k=1}^{\infty} B_{m,k}$ is $G_{\delta\sigma\delta}$. The set of points where $\frac{\partial u}{\partial x_1}$ exists and is finite is exactly $A\cap B$. The same is true for all the other partial derivatives, hence the set where $u$ admits a finite gradient is $G_{\delta\sigma\delta}$. Hence, $E_2$ is $F_{\sigma\delta\sigma}$. For the part concerning $E_1$ we follow closely \cite{N}. We denote by $$\underline{\frac{\partial u}{\partial x_1}}(x,y):=\liminf_{t\to 0}\frac{u(x+t,y)-u(x,y)}{t},\quad \overline{\frac{\partial u}{\partial x_1}}(x,y):=\limsup_{t\to 0}\frac{u(x+t,y)-u(x,y)}{t} $$ the \textit{lower} and respectively \textit{upper partial derivatives} of $u$ with respect to $x_{1}$ at $(x,y)\in \mathbb R\times\mathbb R^{n-1}$. These functions may take infinite values. What follows is in principle the same as in \cite{N}, except for the notation, the fact that in \cite{N} one-sided lower partial derivatives are considered, and the lemma below. We define $$A_p:=\left\{(x,y)\in \Omega: \underline{\frac{\partial u}{\partial x_1}}(x,y)<p\right\}.$$ Without loss of generality we may assume that $p>1$, as $A_p$ for the function $u$ is the same as $A_{q+p}$ for the function $u(x,y)+qx$, which is of the same class as $u$.
Further, we define $$U^{+}_{r,k}:=\left\{(x,y): u(x+h,y)<r \text{ for some } h\in \Big[ \frac{1}{k+1}, \frac{1}{k}\Big)\right\}$$$$=\bigcup_{h\in \big[ \frac{1}{k+1}, \frac{1}{k}\big) }\{(x,y): u(x+h,y)<r\}$$ $$=\left\{(z,y)\in\mathbb R\times \mathbb R^{n-1}: z - x\in \Big[ \frac{1}{k+1}, \frac{1}{k}\Big), (x,y)\in \{u(x,y)<r\}\right\},$$ $$U^{-}_{r,k}:=\left\{(x,y): u(x+h,y)>r \text{ for some } h\in \Big(-\frac{1}{k},-\frac{1}{k+1}\Big] \right\}$$$$=\bigcup_{h\in \big(-\frac{1}{k},-\frac{1}{k+1}\big] }\{(x,y): u(x+h,y)>r\}$$ $$=\left\{(z,y)\in\mathbb R\times \mathbb R^{n-1}: z - x\in \Big(-\frac{1}{k},-\frac{1}{k+1}\Big], (x,y)\in \{u(x,y)>r\}\right\},$$ $$V^{+}_{r,k,p}:=\left\{(x,y): u(x,y)>r-\frac{p}{k}\right\},$$ $$V^{-}_{r,k,p}:=\left\{(x,y): u(x,y)<r+\frac{p}{k}\right\}.$$ As $u$ is Baire-1, $V^{+}_{r,k,p}$, as well as $V^{-}_{r,k,p}$, are $F_{\sigma}$. We further define $$W_{k,p}:=\left(\bigcup_{r\in \mathbb Q} U^{+}_{r,k}\cap V^{+}_{r,k,p}\right)\bigcup \left(\bigcup_{r\in \mathbb Q} U^{-}_{r,k}\cap V^{-}_{r,k,p}\right)$$ $$=\left\{(x,y): u(x+h,y)<u(x,y)+\frac{p}{k} \text{ for some } h\in \Big[ \frac{1}{k+1}, \frac{1}{k}\Big)\right\}$$ $$\bigcup\left\{(x,y): u(x+h,y)>u(x,y)-\frac{p}{k} \text{ for some } h\in \Big(-\frac{1}{k},-\frac{1}{k+1}\Big] \right\}.$$ Let $$\varphi(h):=\begin{cases}h\left(\left \lceil\frac{1}{h}\right \rceil-1\right), & \text { if } h>0\\ h\left(\left \lfloor\frac{1}{h}\right \rfloor+1\right), & \text { if } h<0\end{cases}.$$ Clearly, $\lim_{h\to 0}\varphi(h)=1$. Further, we set $$Z_{k,p}:= \left\{(x,y): \frac{u(x+h,y)-u(x,y)}{h}<\frac{p}{\varphi(h)} \text{ for some } -\frac{1}{k}<h<\frac{1}{k}, h\neq 0\right\}$$ $$=\bigcup_{h\in\left(-\frac{1}{k},\frac{1}{k}\right)\setminus\{0\}}\left\{(x,y): \frac{u(x+h,y)-u(x,y)}{h}<\frac{p}{\varphi(h)} \right\}.$$ Now $Z_{k,p}=\bigcup_{j=k}^{\infty} W_{j,p}$. Let $B_p:=\bigcap_{k=1}^{\infty} Z_{k,p}$.
Finally, $A_p$ is equal to $$\bigcup_{m=1}^{\infty}B_{p-\frac{1}{m}}=\bigcup_{m=1}^{\infty}\bigcap_{k=1}^{\infty}\bigcup_{j=k}^{\infty}\left[\left(\bigcup_{r\in \mathbb Q} U^{+}_{r,j}\cap V^{+}_{r,j,p-\frac{1}{m}}\right)\bigcup \left(\bigcup_{r\in \mathbb Q} U^{-}_{r,j}\cap V^{-}_{r,j,p-\frac{1}{m}}\right)\right].$$ Now $U^{+}_{r,j}$ is exactly the type of set considered in Lemma \ref{fsigma}, as $B=\{(x,y): u(x,y)<r\}$ is $F_\sigma$, and we can take $I=\Big[\frac{1}{j+1},\frac{1}{j}\Big)$. Hence, $U^{+}_{r,j}$ is $F_\sigma$. So are the sets $U^{-}_{r,j}$, $V^{+}_{r,j, p-\frac{1}{m}}$, $V^{-}_{r,j, p-\frac{1}{m}}$, and $\bigcup_{j=k}^{\infty}\left[\left(\bigcup_{r\in \mathbb Q} U^{+}_{r,j}\cap V^{+}_{r,j,p-\frac{1}{m}}\right)\bigcup \left(\bigcup_{r\in \mathbb Q} U^{-}_{r,j}\cap V^{-}_{r,j,p-\frac{1}{m}}\right)\right]$. Thus, $A_p$ is $F_{\sigma\delta\sigma}$, hence its complement $\left\{(x,y): \underline{\frac{\partial u}{\partial x_1}}(x,y)\geq p\right\}$ is $G_{\delta\sigma\delta}$. Similar reasoning applies to the upper partial derivative with respect to $x_1$. Now the set where $\frac{\partial u}{\partial x_1}(x,y)= 0$ is $$\left\{(x,y): \underline{\frac{\partial u}{\partial x_1}}(x,y)\geq 0\right\}\cap \left\{(x,y): \overline{\frac{\partial u}{\partial x_1}}(x,y)\leq 0\right\},$$ which is $G_{\delta\sigma\delta}$. The same is true for all the other partial derivatives, hence the set $E_1$ is $G_{\delta\sigma\delta}$. Along the way, we have also shown that $\overline{\frac{\partial u}{\partial x_1}}$ and $\underline{\frac{\partial u}{\partial x_1}}$ are Baire-3 functions. \begin{remark} We suspect that in fact $E_1$ is $F_{\sigma\delta}$ and $E_2$ is $G_{\delta\sigma}$, at least for subharmonic functions $u$, as suggested by the special cases of continuous functions \cite{MP} and of functions of one variable \cite{Ha}.
If true, this would be optimal.\end{remark} \section{Plurisubharmonic extension of subharmonic functions} In this section we slightly generalize the main result of \cite{DD}, namely that subharmonic functions which are plurisubharmonic outside a small set are actually plurisubharmonic. Let $\Omega\subseteq \mathbb C^{n}$ be open and $u$ be a subharmonic function on $\Omega$. In addition to the sets $E_1$ and $E_2$ from Section \ref{borelstuff}, we also consider $E_{3}$, an arbitrary subset of $\Omega$ of zero Lebesgue measure which is disjoint from $E_{1}\cup E_{2}$. Note that $E_3$ need not be Borel. It is well known that any subharmonic function has a finite gradient almost everywhere, see e.g. \cite{Kr}. This is somewhat counterintuitive, as a subharmonic function is merely upper semicontinuous, and the latter functions are a priori only continuous on a residual set, which may happen to be of Lebesgue measure zero. More accurately, it is $e^{u}$ that is continuous on the residual set, as a subharmonic function can have a dense set of poles, and hence be nowhere continuous in the classical sense. We note that, despite admitting a finite gradient on a big set, a subharmonic function, even a finite valued one, may be nowhere differentiable. All this allows us to consider $E_2$ jointly with $E_3$, as $\lambda^{2n}(E_2\cup E_3)=0$. Let $E:=E_1\cup E_2\cup E_3$. \begin{theorem}\label{DDD} Let $u$ be a subharmonic function in $\Omega$. If $u$ is plurisubharmonic in some open neighborhood of $\Omega\setminus E$, then it is actually plurisubharmonic in the whole of $\Omega$. \end{theorem} \begin{remark} This theorem is only interesting for functions $u$ with a big critical set. If $E_1$, and hence $E$, is of Lebesgue measure zero, then Theorem \ref{DDD} is a corollary of the main result in \cite{DD}.
However, even for smooth $u$ the critical set of $u$ may be quite big, and the Morse-Sard theorem specifies the minimal regularity of $u$ that guarantees that at least the image $u(E_1)$ is small. Typically, we will think of $E_{1}$ as some nowhere dense Cantor-type set of positive Lebesgue measure. \end{remark} As in \cite{DD}, we recall (see \cite{H}, Proposition 3.2.10') that subharmonicity and plurisubharmonicity are equivalent to subharmonicity and plurisubharmonicity, respectively, in the viscosity sense. \begin{lemma}\label{visc} Let $\Omega\subseteq \mathbb C^{n}$ be open. An upper semicontinuous function $u$ on $\Omega$ is subharmonic (respectively plurisubharmonic) if for every $z_0\in\Omega$ and every local $C^2$ function $\varphi$ defined near $z_0$ and satisfying $\varphi(z)\geq u(z)$ with equality at $z_0$ we have $\Delta\varphi(z_0)\geq 0$ (respectively $\left(\frac{\partial ^2 \varphi}{\partial z_j\partial\bar{z}_k}(z_0)\right)\geq 0$, that is, the complex Hessian is nonnegative definite). \end{lemma} For some points $z_0\in\Omega$ such a function $\varphi$ may not exist. It is known that the set of points admitting a local $C^2$ majorant is dense for a general upper semicontinuous $u$, but it may well be countable. This is, however, enough to recover (pluri)subharmonicity. On the other hand, for (pluri)subharmonic functions the set of points admitting such a $\varphi$ is of full Lebesgue measure.
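A simple one-variable illustration of the failure: for the continuous function $u(x)=\sqrt{|x|}$ there is no $C^2$ (or even $C^1$) function $\varphi$ with $\varphi\geq u$ near $0$ and $\varphi(0)=u(0)=0$, since $$\frac{\varphi(x)-\varphi(0)}{x}\geq \frac{1}{\sqrt{x}}\to\infty\quad \text{ as } x\to 0^{+},$$ contradicting the differentiability of $\varphi$ at $0$.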
Note also that, in the language of viscosity theory, the above lemma says that $u$ is subharmonic iff it is a viscosity subsolution to the Laplace equation $\Delta v=0$ (that is, $\Delta u\geq 0$ in the viscosity sense) and $u$ is plurisubharmonic iff it is a viscosity subsolution to the {\it constrained complex Hessian} equation $\det^{+}\left(\frac{\partial ^2 v}{\partial z_j\partial\bar{z}_k}\right)= 0$ (that is, $\det^{+}\left(\frac{\partial ^2 u}{\partial z_j\partial\bar{z}_k}\right)\geq 0$ in the viscosity sense), where $${\det}^{+}(A)=\begin{cases} \det(A)\ & \text{ if }\ A\geq 0;\\ -\infty\ & \text{ otherwise }. \end{cases}$$ For more details regarding the viscosity theory of such constrained complex Hessian equations we refer to \cite{EGZ} and \cite{Ze}. As a direct corollary of Lemma \ref{visc}, plurisubharmonicity of $u$ follows if one can show that for any $z_0\in E$ and any local $C^2$ majorant $\varphi\geq u$ with $\varphi(z_0)=u(z_0)$ one has $$\left(\frac{\partial ^2 \varphi}{\partial z_j\partial\bar{z}_k} (z_0)\right)\geq 0$$ as a matrix. To this end we closely follow the argument from \cite{DD}. Suppose $u$ admits a local $C^2$ majorant at $z_0\in E$. Translating if necessary, one may assume that $z_0$ is the origin, that $\Omega$ contains a ball $B_{\delta_0}$ centered at the origin and that $\varphi$ is defined on $B_{\delta_0}$. For a fixed $0<\delta<\frac{\delta_0}2$ we consider the function $$v_{\delta}(z):=\varphi(z)+\delta\Vert z\Vert ^2-\delta^3-u(z).$$ By the very definition of $\varphi$ we have $v_{\delta}(z)\geq 0$ on the collar $B_{2\delta}\setminus B_{\delta}$.
Recall (see \cite{DD}) that $v_{\delta}$ is bounded below on $B_{\delta}$ and $v_{\delta}$ is a lower semicontinuous viscosity supersolution to the Poisson equation $\Delta v= \Delta\varphi+4n\delta=:f$, that is $$\Delta v_{\delta}\leq f.$$ Consider the following convex envelope in $B_{2\delta}$: $$\Gamma_{v_\delta}(z):=\sup\lbrace l(z)|\ l-\text{affine},\, l\leq v_{\delta}\, \text{ on }\ B_{\delta},\, l\leq 0\ \text{ on }\ B_{2\delta}\setminus B_{\delta} \rbrace.$$ As $v_\delta$ is bounded below, the family of functions $l$ above is non void. Just as in \cite{DD}, we exploit the Alexandrov-Bakelman-Pucci estimate for viscosity supersolutions (see Theorem 3.2 in \cite{CC} for continuous supersolutions and \cite{I} for merely lower semicontinuous supersolutions): \begin{lemma}\label{ABP} Let $v_\delta$ and $\Gamma_{v_{\delta}}$ be as above. Then there is a universal constant $C$, which depends only on $n$, such that $$\delta^3\leq \sup_{B_{\delta}}v_{\delta}^{-}\leq C\delta \left(\int_{B_{\delta}\cap \lbrace v_{\delta}=\Gamma_{v_{\delta}}\rbrace}\max\{f,0\}^{2n}d\lambda^{2n}\right)^{\frac1{2n}}.$$ \end{lemma} The upshot is that for every $\delta\in \left(0,\frac{\delta_0}2\right)$ the function $v_\delta$ matches its convex envelope $\Gamma_{v_{\delta}}$ on a set of positive measure within $B_{\delta}$. There are two possibilities. In the first case there exists a sequence $\delta_{j}\searrow 0$ such that there is a point $z_{\delta_{j}}\in B_{\delta_{j}}\cap \lbrace v_{\delta_{j}}=\Gamma_{v_{\delta_{j}}}\rbrace\setminus E $. Then, arguing as in \cite{DD}, we find that the smallest eigenvalue of the complex Hessian of $\varphi$ at $z_{\delta_{j}}$ is at least $-\delta_{j}$; letting $j\to\infty$ and using the continuity of the Hessian of $\varphi$, we conclude that the complex Hessian of $\varphi$ at the origin is non negative definite. In the second case there is a fixed $\delta_1<\frac{\delta_0}2$ such that for all $0<\delta<\delta_1$ one has $B_{\delta}\cap \lbrace v_{\delta}=\Gamma_{v_{\delta}}\rbrace\subseteq E $.
As $B_{\delta}\cap\lbrace v_{\delta}=\Gamma_{v_{\delta}}\rbrace$ has positive Lebesgue measure, and $\lambda^{2n}(E_2\cup E_3)=0$, it follows that $B_{\delta}\cap\lbrace v_{\delta}=\Gamma_{v_{\delta}}\rbrace\cap E_1$ has positive measure. We observe that on $E_1\cap \lbrace v_{\delta}=\Gamma_{v_{\delta}}\rbrace$ we have $\nabla \Gamma_{v_{\delta}}=\nabla v_{\delta}$, because $v_{\delta}-\Gamma_{v_{\delta}}$ attains a minimum, and $\nabla v_{\delta}=\nabla \psi_{\delta}$, where $\psi_{\delta}(z):= \varphi(z) +\delta\Vert z\Vert ^2$, because $\nabla u$ vanishes. The following condition (``monotonicity of the gradient'') is equivalent to convexity for $C^1$ functions: For any $z,w\in B_{\delta}$ (treated as real points in $\mathbb R^{2n}$) one has $$\langle \nabla \Gamma_{v_{\delta}}(z)-\nabla\Gamma_{v_{\delta}}(w),z-w\rangle\geq 0.$$ Thus, on $B_{\delta}\cap\lbrace v_{\delta}=\Gamma_{v_{\delta}}\rbrace\cap E_1$ we have $\langle \nabla \psi_{\delta}(z)-\nabla \psi_{\delta}(w),z-w\rangle\geq 0.$ As $B_{\delta}\cap\lbrace v_{\delta}=\Gamma_{v_{\delta}}\rbrace\cap E_1$ has positive Lebesgue measure, we pick a point of density $z_\delta$ in it, meaning that $$\lim_{\varepsilon\to 0^{+}}\frac{\lambda^{2n}( B_{\varepsilon}(z_{\delta})\cap B_{\delta}\cap \lbrace v_{\delta}=\Gamma_{v_{\delta}}\rbrace\cap E_1 )}{\lambda^{2n}(B_{\varepsilon})}=1.$$ We argue that the real Hessian of $\varphi +\delta\Vert z\Vert ^2$ is non negative definite at $z_\delta$. If this is not the case then there is a set $H$ of directions $\mathbb R^{2n}\ni v=(v_1,\ldots,v_{2n})\neq 0$ such that $$\sum_{j,k=1}^{2n}\frac{\partial ^2\psi_{\delta}}{\partial x_j\partial x_k} (z_{\delta})v_jv_k< c\Vert v\Vert^2$$ for a fixed negative $c$ strictly between the smallest eigenvalue of the Hessian and $0$. Moreover, the spherical projection of this set, which we call $A$, on $S^{2n-1}$ is of positive $(2n-1)$-dimensional spherical measure $\sigma^{2n-1} (A)>0$.
There are points of $B_{\delta}\cap\lbrace v_{\delta}=\Gamma_{v_{\delta}}\rbrace\cap E_1$ arbitrarily close to $z_\delta$ whose directions from $z_{\delta}$ belong to $H$, as otherwise the density at $z_{\delta}$ would be no greater than $\frac{\sigma^{2n-1}(S^{2n-1})-\sigma^{2n-1}(A)}{\sigma^{2n-1}(S^{2n-1})}<1$. We pick such points \newline $z_{m}\in B_{\delta}\cap\lbrace v_{\delta}=\Gamma_{v_{\delta}}\rbrace\cap E_1$ with $z_{m}-z_{\delta}\in H,\, m=1,2,\ldots$, such that $z_{m}\to z_\delta$, and put ${v_{m}}=z_{m}-z_{\delta}$. Using that for $C^2$ functions the Hessian is the Jacobian of the gradient, we get $$\nabla \psi_{\delta}(z_m)= \nabla \psi_{\delta}(z_\delta)+ Jac(\nabla \psi_{\delta})(z_m-z_\delta)+o(\Vert z_m-z_\delta\Vert),$$ so for $m$ large enough we have $$0>\frac{c}{2}>\frac{\sum_{j,k=1}^{2n}\frac{\partial ^2\psi_{\delta}}{\partial x_j\partial x_k} (z_{\delta})(v_m)_j(v_m)_k+o(\Vert {v_{m}}\Vert^2)}{\Vert{v_{m}}\Vert^2}$$ $$=\frac{\langle \nabla \psi_{\delta}(z_m)-\nabla \psi_{\delta}(z_{\delta}),z_{m}-z_{\delta}\rangle}{\Vert z_{m}-z_{\delta}\Vert^2}\geq 0,$$ a contradiction. If the real Hessian of a $C^2$ function is non negative definite, then so is its complex Hessian. Thus, the smallest eigenvalue of the complex Hessian of $\varphi$ at $z_\delta$ is at least $-\delta$. Letting $\delta\searrow 0$ we have $z_\delta\to 0=z_0$, and again the complex Hessian of $\varphi$ at $z_0$ is non negative definite. \section{Subharmonic and plurisubharmonic extension through the critical set} The key observation for this section is by Gardiner and S\"odin \cite{GS} (for harmonic functions this is due to Kr\'{a}l \cite{Kr}). It says that if $u:\Omega\rightarrow \mathbb{R}$ is $C^{1}$ on an open set $\Omega\subseteq \mathbb{R}^{n}$, $n\geq 2$, and subharmonic on $\{x\in\Omega:\nabla u(x)\neq 0\}$, then $u$ is subharmonic on all of $\Omega$. We observe that the condition can be relaxed as follows. \begin{theorem}\label{GardinerSodin} Let $\Omega\subseteq \mathbb R^{n}$ be open, and let $u$ be an upper semicontinuous function on $\Omega$.
Let $E_{1}\subseteq \Omega$ be the set where $\nabla u$ exists and $\nabla u=0$. If $u$ is subharmonic on some open neighborhood of $\Omega\setminus E_1$ then $u$ is subharmonic on $\Omega$. \end{theorem} \begin{proof} We follow closely \cite{GS} in what follows and modify the argument only slightly to cover the more general setting. Since $u$ has a gradient at every point of $E_{1}$, it is finite valued there. As $u$ is subharmonic outside $E_{1}$, we have $u(x)<\infty$ for each $x \in \Omega$. As $u$ is upper semicontinuous, it is a pointwise limit of a decreasing sequence of continuous functions $u_{j}$. Let $\varepsilon>0$ and let $B=\{x:\Vert x-x_{1}\Vert<r\}$ be an open ball such that $\overline{B}\subseteq\Omega$. Let $v_{j}$ be the Poisson integral of $u_{j}$ in $B$: $$v_{j}(y)=\frac{1}{\sigma^{n-1}(S^{n-1})}\int_{\partial B}\frac{r^2-\Vert y-x_1\Vert^2}{r\Vert x-y\Vert^{n}}u_{j}(x)\, d\sigma^{n-1}(x).$$ We define $$h_{\varepsilon, j}(x)=v_{j}(x)+\varepsilon\left(1+\frac{r^{2}-\Vert x-x_{1}\Vert^{2}}{2n}\right). $$ The function $h_{\varepsilon, j}$ satisfies $h_{\varepsilon, j}\in C(\overline{B})\cap C^{\infty}(B)$, and solves the Dirichlet problem $$\left\{\begin{array}{ll} \Delta h_{\varepsilon, j}=-\varepsilon & \text { in }\ B,\\ h_{\varepsilon, j}=u_{j}+\varepsilon\geq u+\varepsilon & \text{ on } \partial B. \end{array}\right.$$ It will be enough to show that $h_{\varepsilon, j}\geq u$ in $B$, since then $$u(x_1)\leq h_{\varepsilon, j}(x_1) =\frac{1}{\sigma^{n-1}(S^{n-1})}\int_{\partial B}\frac{u_{j}(x)}{r^{n-1}}\, d\sigma^{n-1}(x)+\varepsilon\left(1+\frac{r^2}{2n}\right).$$ The latter expression, using monotone convergence, converges to $$\frac{1}{\sigma^{n-1}(\partial B)}\int_{\partial B}u(x)\, d\sigma^{n-1}(x),$$ when $j\to\infty,\, \varepsilon\searrow 0$, and we obtain that $u$ satisfies the spherical mean value inequality. This yields subharmonicity.
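For completeness we verify the first line of the Dirichlet problem (a routine check we spell out): the Poisson integral $v_{j}$ is harmonic in $B$ and $\Delta\Vert x-x_{1}\Vert^{2}=2n$ in $\mathbb R^{n}$, so

```latex
\Delta h_{\varepsilon, j}
=\Delta v_{j}+\frac{\varepsilon}{2n}\,\Delta\bigl(r^{2}-\Vert x-x_{1}\Vert^{2}\bigr)
=0-\frac{\varepsilon}{2n}\cdot 2n=-\varepsilon
\qquad\text{in }B .
```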
The set $$ O=\left\{(x, y)\in\overline{B}\times\overline{B}:h_{\varepsilon, j}(x)-u(y)>\frac{\varepsilon}{2} \right\} $$ is relatively open in $\overline{B}\times\overline{B}$ since $u$ is upper semicontinuous, and contains $$\{(x, x):x\in\partial B\}.$$ Thus, the quantity $\Vert x-y\Vert^{4}$ is bounded away from zero on $\partial(B\times B)\backslash O$. Also, finite upper semicontinuous functions are bounded above on compact sets, and so we may choose $c>0$ large enough so that $w>0$ on $\partial(B\times B)$, where $$ w(x, y)=h_{\varepsilon, j}(x)-u(y)+c\Vert x-y\Vert^{4},\ x, y\in\overline{B}. $$ We suppose, for the sake of contradiction, that the minimum value of the lower-semicontinuous function $w$ on $\overline{B}\times\overline{B}$ is attained at some interior point \newline $(x_{0}, y_{0})\in B\times B$. We have $$h_{\varepsilon, j}(x)-u(y)+c\Vert x-y\Vert^{4}\geq h_{\varepsilon, j}(x_{0})-u(y_{0})+c\Vert x_{0}-y_{0}\Vert^{4},\ x, y\in\overline{B}.$$ Now we define $$ \varphi(x):=h_{\varepsilon, j}(x_{0})+c(\Vert x_{0}-y_{0}\Vert^{4}-\Vert x-y_{0}\Vert^{4})\ ,\ x\in\overline{B}. $$ Setting $y=y_{0}$, we obtain the inequality $$ h_{\varepsilon, j}-\varphi\geq 0.$$ Further, $ h_{\varepsilon, j}-\varphi$ is $C^2$ and attains its minimum value at $x_{0}$, so its Hessian is non negative definite at $x_0$. Hence, $$\Delta \varphi(x_{0})\leq\Delta h_{\varepsilon, j}(x_{0})=-\varepsilon. $$ In particular, $x_{0}\neq y_{0}$ since $\Delta \varphi(y_{0})=0.$ Similarly, if we define $$ \psi(y):=u(y_{0})+c(\Vert x_{0}-y\Vert^{4}-\Vert x_{0}-y_{0}\Vert^{4})\ ,\ y\in\overline{B}, $$ the inequality $$h_{\varepsilon, j}(x)-u(y)+c\Vert x-y\Vert^{4}\geq h_{\varepsilon, j}(x_{0})-u(y_{0})+c\Vert x_{0}-y_{0}\Vert^{4},\ x, y\in\overline{B}$$ transforms to $$u-\psi\leq 0,$$ by setting $x=x_{0}$. Suppose that $y_0\in E_{1}$.
Since $u-\psi$ attains its maximum value $0$ at $y_{0}$ and $\nabla u(y_{0})=0$, we would get $$0=\nabla u(y_{0})=\nabla\psi(y_{0})\neq0,$$ a contradiction, because $x_{0}\neq y_{0}$ and the gradient of $\psi$ vanishes only at $x_{0}$. But if $y_0\not\in E_1$, by hypothesis, the formula $$ v(s)=w(x_{0}+s,\ y_{0}+s)=h_{\varepsilon, j}(x_{0}+s)-u(y_{0}+s)+c\Vert x_{0}-y_{0}\Vert^{4} $$ defines a function which is superharmonic on some neighborhood of $0$ in $\mathbb{R}^{n}$. Since $v$ attains a local minimum at $0$, it must be constant near $0$. However, this leads to the contradictory conclusion that $u\in C^{\infty}$ and $\Delta u=-\varepsilon<0$ near $y_{0}.$ The theorem now follows, because $$ \min_{x\in\overline{B}}(h_{\varepsilon, j}(x)-u(x))=\min_{x\in\overline{B}}w(x, x)\geq\min_{(x,y)\in\overline{B}\times\overline{B}}w(x,y)=\min_{(x,y)\in\partial(B\times B)}w(x,y)\geq 0. $$ \end{proof} The plurisubharmonic case is now easy. Having an upper semicontinuous function which is plurisubharmonic on some open neighborhood of $\Omega\setminus E_1$, we first extend it as a subharmonic function on the whole of $\Omega$ by Theorem \ref{GardinerSodin}. Now this subharmonic function is actually plurisubharmonic by Theorem \ref{DDD} with $E_2=E_3=\emptyset$. \section{Rad\'{o}-type theorem in the $C^{1,p}$ and $C^{1}$ case} This section deals with the main theorem: \begin{theorem}\label{rado} Let $\Omega$ be open in $\mathbb R^{n}$ (respectively in $\mathbb C^{n}$) and $E \subseteq \Omega$ be a Borel set. If $u\in C^{1,p}(\Omega),\, p\in(0,1]$ is subharmonic (respectively plurisubharmonic) in some open neighborhood of $\Omega\setminus E $ and the Hausdorff measure $\mathcal H^{p}(u(E))=0$ then $u$ is actually subharmonic (respectively plurisubharmonic) in $\Omega$. If $u\in C^{1}(\Omega)$ then the same conclusion holds if $u(E)$ is at most countable. The results are optimal with respect to the size of the image of $E $. \end{theorem} \begin{remark} The assumption that $E$ is Borel may seem artificial.
As $u$ is (pluri)subharmonic on some open neighborhood $\Omega'$ of $\Omega\setminus E$, we could have taken the relatively closed set $\Omega\setminus\Omega'\subseteq E$ instead of $E$. Because $u$ is continuous, $u(E)$ is then an $F_\sigma$ set and $u^{-1}(u(E))$ is relatively $F_\sigma$ in $\Omega$. This means that we could have confined ourselves to just relatively closed or relatively $F_\sigma$ sets $E$. However, we hope that similar ideas may be useful to study extension theorems with minimal assumptions (less than Lipschitz but stronger in other directions), and without the continuity the above argument fails. It turns out that assuming $E$ is a Borel set is by no means a greater restriction than assuming it to be relatively closed in the discontinuous setting, whereas the failure of $E$ to be Borel or Souslin leads to considerable measure theoretic difficulties explained in Remark \ref{Borel}. So we made this choice out of convenience. \end{remark} \begin{remark}\label{Borel} In \cite{Kr}, where the analogous theorem is proved in the harmonic setting, the conditions are expressed somewhat more technically using inner Hausdorff measures to tackle supersets of $u(E)$ which are not necessarily Hausdorff measurable. In our case this can be avoided by using the following seemingly not widely known facts. As $E$ is Borel and $u$ is at least upper semicontinuous, hence a Borel mapping, $u(E)$ is Lebesgue measurable but not necessarily Borel, see \cite{B} Theorem 6.7.3. This is not enough to conclude that $u(E)$ is also Hausdorff measurable, as even the continuous image of a Borel set is not necessarily Borel, and the $\sigma$-algebras of Lebesgue measurable and $\mathcal H^{p}$-measurable sets in $\mathbb R^{n}$ are in general different if $0<p<n$, as the example of a non-Lebesgue measurable set situated in some lower-dimensional subspace of $\mathbb R^{n}$ demonstrates.
However, $u(E)$ is an analytic set (also known as a Souslin set or ${\bf \Sigma_{1}^{1}}$-set in the projective hierarchy), as the Borel image of a Borel set, see Theorem 6.7.3 in \cite{B}. Analytic sets are measurable with respect to any Borel measure, see Corollary 2.12.7 or Theorem 7.4.1 in \cite{B} or Theorem 26 in \cite{R}. As $\mathcal H^{p}$ is a Borel measure (Proposition 3.10.9 in \cite{B} or Theorem 27 in \cite{R}) it follows that $u(E)$ is $\mathcal H^{p}$-measurable. Now the property ``every compact subset has vanishing $\mathcal H^{p}$ measure'' is equivalent to being of zero $\mathcal H^{p}$ measure for analytic sets, see Corollary 2.10.48 in \cite{F} (this assertion is specific to $\mathbb R^n$). Also the property ``every compact subset is at most countable'' from \cite{Kr} is equivalent to being at most countable for analytic sets, see Corollary 6.7.13 in \cite{B}. The latter is not true for arbitrary sets, as the so-called Bernstein sets demonstrate.\end{remark} \begin{remark} The subharmonic part of Theorem \ref{rado} is essentially settled in \cite{GS}, as it can be easily deduced from there using only results which are already well-established. We provide the argument but we take no credit for it. What is substantially new is the plurisubharmonic part of the theorem. \end{remark} \begin{proof} We set $\Omega_0\subseteq \Omega$ to be the set where $\nabla u \neq 0$. It is clearly an open set, which may further disconnect the connected components of $\Omega$. Also some part of $E $ can be contained in $\Omega_0$. We start with the $C^{1,p}$ case, as the continuously differentiable part requires a slightly different approach. As in Theorem $1$ in \cite{Kr}, it follows that $\Omega_0\cap E $ has zero Hausdorff measure of dimension $n-1+p$ (respectively $2n-1+p$). This is essentially a consequence of the implicit function theorem.
As $u$ is also subharmonic in $\Omega_0\setminus E $, we use the extension result of \cite{SY} (see also \cite{P}), to conclude that $u$ is subharmonic on $\Omega_0$. In the $C^{1}$ case we have to use the subharmonic extension result \cite{Y}. In the plurisubharmonic case we can now use Theorem \ref{DDD} to conclude that $u$ is plurisubharmonic in $\Omega_0$, as $E_3=E \cap \Omega_0$ is of Lebesgue measure zero. It remains to extend the (pluri)subharmonicity through $E \cap(\Omega\setminus \Omega_0)$. But on this set one has $\nabla u=0$ and, as $u\in C^{1}$, this follows from \cite{GS} in the subharmonic case. In the plurisubharmonic case we first extend $u$ as a subharmonic function on $\Omega$ and use Theorem \ref{DDD} afterwards. The same examples as in \cite{Kr} demonstrate that for any $\varepsilon>0$ there is a relatively closed set $E \subseteq \Omega$ and a $C^{1,p}$ function $u$ such that $0<\mathcal H^{p}(u(E))<\varepsilon$, $u$ is not subharmonic on $\Omega$ but is subharmonic (actually locally affine) on $\Omega\setminus E $. The construction is sketched as follows. Let $G=u(E)$ and put $$ \alpha(x):=\mathcal{H}^{p}(\{t\in G:t\leq x\})=\int_{-\infty}^{x}\chi_{G}(t)d \mathcal{H}^{p}(t)\ ,\ x\in \mathbb{R}. $$ We fix an open interval $J$ with $\displaystyle \inf_{x\in J}\alpha(x)>0$ on which $\alpha$ is non constant and define $$ \beta(t):=q^{2} \int_{-\infty}^{t}\alpha(x)dx,\quad t\in J, $$ where the constant $q\geq 1$ is chosen in such a way that $q\alpha (x)\geq 1$ for $x\in J$. Then $\beta$ maps $J$ increasingly onto an open interval $I$ and $\beta'\geq 1$ on $J$. We denote by $\gamma:I\rightarrow J$ the corresponding inverse mapping $\gamma=\beta^{-1}$. Now $u(x_1,\ldots,x_n):=\gamma(x_1)$ is the required counterexample in the $C^{1,p}$ setting. The only thing we need to add to \cite{Kr} is that $\gamma$ is concave. If $u(E)$ is uncountable then it contains a compact perfect set $H$.
By Corollary 4 to Theorem 35 in \cite{R} there is a gauge function $h$ (also known as dimension function) such that the generalized Hausdorff measure (see \cite{R}) $\mathcal H^{h}(H)>0$. Then, as in Lemma 1 of \cite{Kr}, there exists a $C^1$ function $\gamma$, even such that the modulus of continuity of $\gamma'$ is $h$, defined on some open interval $I$ such that $\gamma'$ is locally affine on $I\setminus \gamma^{-1}(H)$ but not globally affine. Now $u(x_1,\ldots,x_n)=\gamma(x_1)$ is the required counterexample in the $C^{1}$ setting. \end{proof} In the next theorem we deal with more regular functions: \begin{theorem} Let $\Omega$ be open in $\mathbb R^{n}$ (respectively in $\mathbb C^{n}$) and $E\subseteq \Omega$ be a Borel set. If $u\in C^{2}(\Omega)$ is subharmonic (respectively plurisubharmonic) in some open neighborhood of $\Omega\setminus E$ then $u$ extends subharmonically (respectively plurisubharmonically) to the whole $\Omega$ if and only if $u(E)$ has empty interior. \end{theorem} \begin{proof} It is clear that every $C^2$ function which is (pluri)subharmonic outside $E $ is actually (pluri)subharmonic through $E $ if and only if $E $ has empty interior, as the Laplacian (respectively the complex Hessian) is continuous and non negative on the dense complement of $E $. As above, we define $\Omega_0$ to be the noncritical set of $u$. As $u:\Omega_0\to\mathbb R$ is a submersion, it is a continuous open mapping. Now, assuming that $\mathbb R\setminus u(E)$ is dense, $u^{-1}(\mathbb R\setminus u(E))\cap\Omega_0$ is contained in $\Omega_0\setminus E $ and dense in $\Omega_0$, hence $u$ is (pluri)subharmonic on $\Omega_0$. Now $u$ extends also through $E \cap(\Omega\setminus\Omega_0)$, as $\nabla u=0$ there. The conclusion is that for $C^2$ functions the extension result holds if and only if $u(E)$ has dense complement, for if $u(E)$ has nonempty interior it contains an interval $(a,b)$ and a counterexample can be produced as follows. 
Let $u(z)=|z|^4-2|z|^2+1$, $E =\left\{z\in\mathbb C: |z|\leq \frac{1}{\sqrt{2}}\right\}$, $u(E)=\left[\frac{1}{4},1\right]$. Let $\alpha:\mathbb R\to\mathbb R$ be an affine increasing function, such that $\alpha(\left[\frac{1}{4},1\right]) \subseteq (a,b)$. Now $\alpha\circ u$ is an evident counterexample: indeed, $\frac{\partial ^2 u}{\partial z\partial\bar{z}}=2(2|z|^2-1)$ is negative precisely in the interior of $E$, and this sign is preserved by composing with the increasing affine $\alpha$. The same is of course true when $u$ is more regular than $C^2$. \end{proof} \section{Applications} In this last section we briefly collect some direct corollaries to harmonic and pluriharmonic extension problems. By applying Theorem \ref{rado} to the subharmonic $u$ and superharmonic $-u$ we recover the following special case of the theorem of Kr\'{a}l \cite{Kr}: \begin{theorem} Let $\Omega$ be open in $\mathbb R^{n}$ and $E\subseteq \Omega$ be a relatively closed set.\newline If $u\in C^{1,p}(\Omega),\, p\in(0,1]$ is harmonic in $\Omega\setminus E$ and the Hausdorff measure\newline $\mathcal H^{p}(u(E))=0$ then $u$ is actually harmonic in $\Omega$. If $u\in C^{1}(\Omega)$ then the same conclusion holds if $u(E)$ is at most countable. \end{theorem} Likewise, by applying Theorem \ref{rado} to the plurisubharmonic $u$ and plurisuperharmonic $-u$ we get the following extension result for pluriharmonic functions: \begin{theorem} Let $\Omega$ be open in $\mathbb C^{n}$ and $E\subseteq \Omega$ be a relatively closed set. \newline If $u\in C^{1,p}(\Omega),\, p\in(0,1]$ is pluriharmonic in $\Omega\setminus E$ and the Hausdorff measure $\mathcal H^{p}(u(E))=0$ then $u$ is actually pluriharmonic in $\Omega$. If $u\in C^{1}(\Omega)$ then the same conclusion holds if $u(E)$ is at most countable. \end{theorem} \bibliographystyle{amsplain}
\section{Jet vetoing: gaps between jets} We consider dijet production with transverse momentum $Q$ and a veto on the emission of additional radiation in the inter-jet rapidity region, $Y$, harder than $Q_0$. We shall refer generically to the ``gaps between jets'' process, although the veto scale is chosen to be large, $Q_0= 20$~GeV, so that we can rely on perturbation theory. Thus a ``gap'' is simply a region of limited hadronic activity. Gaps between jets is a pure QCD process, hence the cross-section is large and studies can be performed with early LHC data. It is interesting because it allows one to investigate a remarkably diverse range of QCD phenomena. For instance, the limit of large rapidity separation corresponds to the limit of high partonic centre of mass energy and BFKL effects are expected to become important~\cite{muellernavelet}. On the other hand one can study the limit of emptier gaps, becoming more sensitive to wide-angle soft gluon radiation. Furthermore, if one wants to investigate both of these limits simultaneously, then the non-forward BFKL equation enters the game~\cite{muellertang}. In the following we discuss only wide-angle soft emissions. Accurate studies of these effects are important also in relation to other processes, in particular the production of a Higgs boson in association with two jets. It is well known that this process can occur via gluon-gluon fusion and weak-boson fusion (WBF). QCD radiation in the inter-jet region is clearly different in the two cases and, in order to enhance the WBF channel, one can put a cut on emission between the jets \cite{Barger:1994zq,Kauer:2000hi}. This situation is very closely related to gaps between jets since the Higgs carries no colour charge, and QCD soft logarithms can be resummed using the same technique~\cite{softgluonshiggs}. Given a hard scattering process, we can study how it is modified by the addition of soft radiation. 
If the observable is inclusive enough, then we have no effects because soft contributions cancel when real and virtual corrections are added together, as a result of the Bloch-Nordsieck theorem. However, if we restrict the real radiation to a corner of the phase space, as happens for the gap cross-section, we encounter a miscancellation and are left with a logarithm of the ratio of the hard scale and veto scale, $Q/Q_0$. The resummation of wide-angle soft radiation in the gaps between jets process was originally performed assuming that the real--virtual cancellation is perfect outside the gap, so that one needs only to consider virtual gluon corrections integrated over momenta for which real emissions are forbidden, i.e. over the ``in gap'' region of rapidity and with $k_T$ above the veto scale $Q_0$~\cite{KOS,OS, Oderda}. We shall refer to these contributions as global logarithms. The resummed squared matrix element can be written as: \begin{eqnarray} \label{resummedpartonicxsec1} |{\cal M}|^2 &=& \frac{1}{V_c} \langle m_0 | e^{- \xi \mathbf{\Gamma}^{\dagger}}e^{- \xi \mathbf{\Gamma}} |m_0 \rangle\,, \nonumber \\ \xi &=&\frac{2}{\pi} \int_{Q_0}^{Q} \frac{d k_T}{k_T} \alpha_s(k_T)\,, \end{eqnarray} where $V_c$ is an averaging factor for initial state colour. The vector $|m_0 \rangle$ represents the Born amplitude and the operator $\mathbf{\Gamma}$ is the soft anomalous dimension: \begin{equation} \label{gammaoperator} \mathbf{\Gamma} = \frac{1}{2}Y \mathbf{t}_t^2+ i \pi \mathbf{t}_a\cdot \mathbf{t}_b +\frac{1}{4}\rho_{\rm jet}(Y, |\Delta y|)(\mathbf{t}_c^2+\mathbf{t}_d^2)\,, \end{equation} where $\mathbf{t}_i$ is the colour charge of parton $i$ and the function $\rho_{\rm jet}(Y,\Delta y)$ is related to the jet definition. The operator $\mathbf{t}_t^2$ represents the colour exchanged in the $t$-channel: \begin{equation} \mathbf{t}_t^2=(\mathbf{t}_a+\mathbf{t}_c )^2 = \mathbf{t}_a^2+\mathbf{t}_c^2 + 2 \,\mathbf{t}_a\cdot \mathbf{t}_c\,. 
\end{equation} The imaginary part of Eq.~(\ref{gammaoperator}) is due to Coulomb gluon exchange. These contributions play an important role in the proof of QCD factorization and they are also responsible for super-leading logarithms~\cite{SLL1,SLLind}. We notice that for processes with less than four coloured particles, such as deep-inelastic scattering or Drell-Yan processes, the imaginary part of the anomalous dimension does not contribute to the cross-section. For instance, if we consider three coloured particles, then colour conservation implies that $ \mathbf{t}_a+\mathbf{t}_b + \mathbf{t}_c=0$, and consequently \begin{equation} i\pi \, \mathbf{t}_a \cdot \mathbf{t}_b =\frac{i\pi}{2} \left( \mathbf{t}_c^2 -\mathbf{t}_a^2-\mathbf{t}_b^2 \right)\,, \end{equation} which contributes as a pure phase. Coulomb gluons do play a role in dijet production, but they are not implemented in angular-ordered parton showers. We shall evaluate the impact of these contributions on the cross-section in the next section. It was later realised~\cite{DS} that the above procedure is not enough to capture the full leading logarithmic behaviour. Real gluons emitted outside of the gap are forbidden to re-emit back into the gap and this gives rise to a new tower of logarithms, formally as important as the primary emission corrections, known now as non-global logarithms. The leading logarithmic accuracy is therefore achieved by considering all $2 \to n $ processes, i.e. \mbox{$n-2$ out-of-gap gluons}, dressed with ``in-gap'' virtual corrections, and not only the virtual corrections to the $2\to 2$ scattering amplitudes. The colour structure quickly becomes intractable and, to date, calculations have been performed only in the large $N_c$ limit~\cite{DS,appleby2,nonlinear}. A different approach was taken in~\cite{SLL1,SLLind}, where the specific case of only one gluon emitted outside the gap, dressed to all orders with virtual gluons but keeping the full $N_c$ structure, was considered. 
That calculation had a very surprising outcome, namely the discovery of a new class of ``super-leading'' logarithms (SLL), formally more important than the ``leading'' single logarithms. Their origin can be traced to a failure of the DGLAP ``plus-prescription'', when the out-of-gap gluon becomes collinear to one of the incoming partons. Real and virtual contributions do not cancel as one would expect and one is left with an extra logarithm. This miscancellation first appears at the fourth order relative to the Born cross-section and it is caused by the imaginary part of loop integrals, induced by Coulomb gluons. These SLL contributions have been recently resummed to all orders in~\cite{jetvetoing}. The result takes the form: \begin{equation} \label{master2} |{\cal M}_1^{\rm SLL}|^2 = - \frac{2 }{\pi} \int_{Q_0}^{Q} \frac{d k_T}{k_T}\alpha_s(k_T) \left( \ln \frac{Q}{k_T} \right) \left( \Omega^{\rm coll}_R + \Omega^{\rm coll}_V\right), \end{equation} where $ \Omega^{\rm coll}_{R(V)}$ is the resummed real (virtual) contribution in the limit where the out-of-gap gluon becomes collinear to one of the incoming partons. The presence of SLL has also been confirmed by a fixed order calculation in~\cite{SLLfixed}; in this approach SLL have been computed at ${\cal O} (\alpha_s^5)$ relative to Born, i.e. going beyond the one out-of-gap gluon approximation. \section{LHC phenomenology} In this section we perform two different studies. Firstly we consider the resummation of global logarithms and we study the importance of Coulomb gluon contributions, comparing the resummed results to the ones obtained with a parton shower approach. We then turn our attention to SLL and we evaluate their phenomenological relevance. In both studies we consider $\sqrt{S}=14$~TeV, $Q_0=20$~GeV, jet radius $R=0.4$ and we use the MSTW 2008 LO parton distributions~\cite{mstw08}.
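As a rough numerical illustration (our own sketch, not part of the analysis above), one can estimate the size of the resummation variable $\xi$ of Eq.~(\ref{resummedpartonicxsec1}) for these parameter choices using a one-loop running coupling; the values $n_f=5$ and $\Lambda_{\rm QCD}=0.2$~GeV below are illustrative assumptions, not the scheme used in the actual calculation.

```python
import math

# Illustrative one-loop running coupling (assumed n_f = 5, Lambda_QCD = 0.2 GeV).
N_F = 5
LAMBDA_QCD = 0.2  # GeV
B0 = (33 - 2 * N_F) / (12 * math.pi)  # one-loop beta-function coefficient

def alpha_s(kt):
    """One-loop alpha_s at scale kt (GeV)."""
    return 1.0 / (B0 * math.log(kt ** 2 / LAMBDA_QCD ** 2))

def xi(q0, q, n_steps=10_000):
    """xi = (2/pi) * int_{q0}^{q} dk_T/k_T alpha_s(k_T), cf. Eq. (1).

    Midpoint rule in log(k_T); plenty accurate for an estimate.
    """
    lo, hi = math.log(q0), math.log(q)
    h = (hi - lo) / n_steps
    total = sum(alpha_s(math.exp(lo + (i + 0.5) * h)) for i in range(n_steps))
    return (2.0 / math.pi) * h * total
```

With $Q_0=20$~GeV one finds, under these assumptions, $\xi\approx 0.16$ at $Q=100$~GeV and $\xi\approx 0.28$ at $Q=500$~GeV, so the exponent in Eq.~(\ref{resummedpartonicxsec1}) is an order-one effect at large $Q$.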
\begin{figure*} \begin{center} \includegraphics[width=0.47\textwidth]{marzani_simoneGAP.fig1.eps} \hspace{.5cm} \includegraphics[width=0.47\textwidth]{marzani_simoneGAP.fig2.eps} \caption{On the left we plot the gap fraction for $Y=3$ (upper red curves) and $Y=5$ (lower green curves) as a function of $Q$ and on the right as a function of $Y$, for $Q=100$~GeV (upper blue curves) and $Q=500$~GeV (lower violet curves). The solid lines are the full resummation of global logarithms, while the dashed ones are obtained by omitting the $i \pi$ terms in the anomalous dimension.}\label{fig:kfact1} \end{center} \end{figure*} Soft logarithmic contributions are implemented in \textsc{Herwig++} via angular ordering of successive emissions. Such an approach cannot capture the contributions coming from the imaginary part of the loop integrals, due to Coulomb gluon exchange. We evaluate the importance of these contributions in Fig.~\ref{fig:kfact1}. On the left we plot the gap cross-section, normalised to the Born cross-section (i.e. the gap fraction), as a function of $Q$ at two different values of $Y$ and, on the right, as a function of $Y$ at two different values of $Q$. The solid lines represent the results of the resummation of global logarithms; the dashed lines are obtained by omitting the $i \pi$ terms in the soft anomalous dimension matrices. As a consequence, the gap fraction is reduced by $7\%$ at $Q=100$~GeV and $Y=3$ and by as much as $50\%$ at $Q=500$~GeV and $Y=5$. Large corrections from this source herald the breakdown of the parton shower approach. In Fig.~\ref{fig:gapx} we compare the gap cross-section obtained after resummation to that obtained using \textsc{Herwig++}~\cite{ThePEG,Bahr:2008pv,KLEISSCERN9808v3pp129,Gieseke:2003rz} after parton showering ($Q$ is taken to be the mean $p_T$ of the two leading jets). 
The broad agreement is encouraging and indicates that effects such as energy conservation, which is included in the Monte Carlo, are not too disruptive to the resummed calculation. Nevertheless, the histogram ought to be compared to the dashed curve rather than the solid one, because \textsc{Herwig++} does not include the Coulomb gluon contributions. The resummation approach and the parton shower differ in several aspects: some non-global logarithms are included in the Monte Carlo and the shower is performed in the large $N_c$ limit. Of course the resummation would benefit from matching to the NLO calculation and this should be done before comparing to data. \begin{figure*} \begin{center} \includegraphics[width=0.65\textwidth]{marzani_simoneGAP.fig3.eps} \caption{The gap cross-section obtained using \textsc{Herwig++} (black histogram) is compared to the one from resummation (red curves). As before the solid line is the full result, while the dashed line is obtained by omitting the Coulomb gluon contributions. At the bottom we plot the ratio between the results obtained from the resummation and the one from \textsc{Herwig++}.}\label{fig:gapx} \end{center} \end{figure*} Finally we want to study the relevance of the SLL contributions. In order to do that we define \begin{equation} \label{kSLL} K^{(1)}= \frac{\sigma^{(0)}+\sigma^{(1)}}{\sigma^{(0)}} \,, \end{equation} where $\sigma^{(0)}$ contains the resummed global logarithms and $\sigma^{(1)}$ the resummed SLL contribution coming from the case where one gluon is emitted outside of the gap. The results are shown in Fig.~\ref{fig:kSLL}. Generally the effects of the SLL are modest, reaching as much as 15\% only for jets with \mbox{$Q > 500$~GeV} and rapidity separations $Y > 5$. The contribution coming from $n \ge 2$ out-of-gap gluons is thought to be less important~\cite{jetvetoing}.
Remember that we have fixed the value of the veto scale $Q_0=20$~GeV and that the impact will be more pronounced if the veto scale is lowered. \begin{figure*} \begin{center} \includegraphics[width=0.47\textwidth]{marzani_simoneGAP.fig4.eps} \hspace{.5cm} \includegraphics[width=0.47\textwidth]{marzani_simoneGAP.fig5.eps} \caption{On the left we plot the $K$-factor as defined in Eq.~(\ref{kSLL}) as a function of $Q$ for $Y=3$ (upper red curve) and $Y=5$ (lower green curve); on the right we plot it as a function of $Y$, for $Q=100$~GeV (upper blue curve) and $Q=500$~GeV (lower violet curve).}\label{fig:kSLL} \end{center} \end{figure*} \section{Conclusions and Outlook} There is plenty of interesting QCD physics in ``gaps-between-jets'' and measurements can be performed with early LHC data. There are significant contributions arising from the exchange of Coulomb gluons, especially at large $Q/Q_0$ and/or large $Y$, which are not implemented in the parton shower Monte Carlos. However, before comparing to data, there is a need to improve the resummed results by matching to the fixed order calculation. These observations will have an impact on jet vetoing in Higgs-plus-two-jet studies at the LHC. We have studied the super-leading logarithms that occur because gluon emissions that are collinear to one of the incoming hard partons are forbidden from radiating back into the veto region. Even if their phenomenological relevance is generally modest, they deserve further study because they are deeply connected to the fundamental ideas behind QCD factorization. \newpage \begin{footnotesize}
\section{Introduction} \label{sec:intro} Accurate high-resolution information on precipitation is essential for effective prediction and management of water resources \citep{clark2015}. Dramatic improvements in modeling the physical processes driving precipitation have resulted in more realistic simulations from global climate models and hence more reliable predictions. The high complexity of modern climate models, however, implies a computational and storage cost which limits the spatial resolution at which global climate simulations can be performed. As such, there are significant uncertainties and mismatches with observations, since coarse resolutions do not sufficiently represent precipitation patterns and cannot capture the scale of the physical processes of interest \citep{wood2021}. The consequences can be over- or under-attribution of precipitation to a particular location, or incorrect timing of events, which can, for example, make the difference between a local flood occurring or not \citep{sapountzis2021}. It is therefore of high scientific interest to refine global predictions and produce maps of both probability of rain occurrence and precipitation intensity at a high spatial scale, in order to inform impact assessment models for flood resilience and agricultural models for drought prediction. It is in principle possible to produce high resolution precipitation using a coarse global dataset as boundary condition for a regional weather model such as the Weather Research and Forecasting model (WRF, \cite{ska19}). This \textit{dynamical downscaling} approach \citep{sai11} has the appealing advantage of producing physically consistent spatial fields at high resolution, but comes with a substantial associated cost in terms of computational and storage resources, as well as expertise for model setup, that only a few research centers, universities or businesses can afford.
A more affordable solution lies in the formulation of an empirical relationship between global data and ground observations, to be fit at locations where ground data are available. Under the assumption that this relationship is at least approximately valid at unobserved locations, high resolution maps can be produced by correcting the global dataset. This \textit{statistical downscaling} approach \citep{ber10} is fast, computationally affordable, and has a long-established track record of success in the geoscience literature. In order to work, such an approach requires that the global and the ground data are co-located, which is not a priori the case since global data are defined as averages over large areas. It therefore becomes necessary to use spatial statistical models to interpolate the global simulation values at the same locations as the ground observations, and to have an assessment of the uncertainty around these estimates. Global spatial data require the formulation of specialized models whose theoretical properties are substantially different from those of spatial processes on Euclidean spaces. In fact, \cite{gne13} highlighted how a valid process on the sphere with great circle distance could be achieved only with severe restrictions on the parameter space of the most widespread covariance model, the Mat\'ern function. In the past two decades, new modeling approaches tailored for global data have emerged. Among them, \cite{jun07,jun08} proposed to embed the sphere in a three dimensional space, consider a Mat\'ern model and apply partial derivatives to achieve more flexibility. The proposed class of models was able to capture not just an isotropic behavior, but also \textit{axial symmetry}, i.e., a nonstationary behavior across latitude \citep{jon63}. \cite{jun11} generalized this approach to multivariate global processes. A fast and flexible spectral class of axially symmetric models was proposed in the case of gridded data by \cite{cas13}.
The approach was then generalized to non-parametric spectral estimation \citep{cas14}, three-dimensional variables \citep{cas16}, different land/ocean behavior \citep{cas17} and also multivariate processes \citep{edw19}. On the more theoretical side, substantial progress has been made in the determination of properties of high dimensional spheres for isotropic processes via basis decompositions; see, e.g., \cite{ara20,por20}. We refer to \cite{jae17,por18} for two recent reviews on the topic. A novel, different perspective was raised in the seminal work of \cite{lin11}, where a subclass of Mat\'ern models was associated with the solution of a diffusion-reaction Stochastic Partial Differential Equation (SPDE) with the Markov property, and inference was performed with finite volumes. The key insight of this approach, as far as global models are concerned, is that the original SPDE on the plane can be directly adapted to the sphere, with the additional benefit of not requiring boundary conditions. While in its original formulation the SPDE resulted in stationary models, non-stationary extensions have been obtained by allowing spatially varying coefficients, with several alternatives ranging from nested SPDEs \citep{bol11} to models with physical barriers \citep{bak19}. Recently, \cite{fug15,fug20} extended this approach by allowing models with local deformation of the SPDE via a spatially varying scalar and vector field. The proposed approach showed promising results, but has so far been limited to the Gaussian case, and generalization to non-Gaussian data is by no means straightforward, given the challenges in modeling non-Gaussian data and the computational overhead implied by these models. In this work, we propose a non-Gaussian, non-stationary SPDE-based global spatio-temporal model with local deformation and a buffer between land and sea to account for abrupt changes in spatial dependence.
Non-Gaussianity is modeled via a latent Gaussian model, i.e., by assuming that the non-Gaussian marginal behavior is conditionally independent across locations, while the spatial dependence is captured via a latent process with a Gaussian structure. Inference is still achievable for very large datasets by means of 1) a sparse precision matrix of the latent Gaussian model emerging from the finite volume solution of the SPDE and 2) a fast approximation of the high-dimensional integrals required for posterior computation via Integrated Nested Laplace Approximation (INLA, \cite{rue09}). The model is ideally suited to highly non-Gaussian data such as daily global precipitation, and it is then used to 1) fit global reanalysis data, 2) provide interpolated data at the same locations as the ground observations, 3) downscale precipitation using both ground and interpolated data, so that 4) high resolution maps of precipitation are provided. The work proceeds as follows. Section \ref{sec:data} introduces the data which will be used in this work. Section \ref{sec:method} details the methodology for the latent Gaussian model, specifically the temporal and the spatial component. Section \ref{sec:inference} shows how inference is performed and how sparsity and numerical approximations alleviate the computational burden. Section \ref{sec:sim} numerically assesses the posterior consistency, as well as the improved predictive performance of the proposed model against simpler alternatives. Section \ref{sec:app} applies the proposed model to the precipitation data and shows that it can provide high resolution maps of daily precipitation across the continental United States. Section \ref{sec:conc} concludes with a discussion. For reproducibility, at the end of this work we provide information about the repository where the code and data are available.
\section{Data Description} \label{sec:data} \quad We focus on daily global precipitation data from the Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2, \cite{gel:17}) produced by the NASA Global Modeling and Assimilation Office (GMAO). MERRA-2 is a reanalysis data product that incorporates observations from satellite instruments and is considered one of the best representations of the state of the Earth's system. The data are available on a regular grid with a resolution of $0.625^{\circ}\times 0.5^{\circ}$ in longitude and latitude, respectively, for a total of $n=207,936$ locations. We focus on the year 2021, the latest year with a continuous record available, and we use the daily Maximum Rainfall Rate (MRR, in $\text{kg/}\text{m}^2\cdot \text{s})$. To convert the MRR into precipitation, we divide it by the water density, 1,000 $\text{(kg/m}^3)$, convert the result to millimeters by multiplying by 1,000, and multiply by 86,400~s to obtain the daily precipitation. We assume that for each location, the MRR lasts for the whole day, which leads to some overestimation, as can be clearly seen from the two different legend scales in Figure \ref{fig:USCRNLocs}. The downscaling approach in Section \ref{sec:app} will be able to account for this by performing a linear transformation between (interpolated) MERRA-2 and USCRN. \begin{figure}[!tb] \centering \includegraphics[width = 14cm]{F1-USCRNLocations.png} \caption{Average daily precipitation (in mm) for each USCRN site and MERRA-2 grid point from January 1$^{\text{st}}$, 2021 through December 31$^{\text{st}}$, 2021.} \label{fig:USCRNLocs} \end{figure} For ground observations, we consider the U.S. Climate Reference Network (USCRN, \cite{noaa}), a data product containing continuous records from climate monitoring stations across the continental United States.
The USCRN monitoring stations record measurements of total precipitation, in millimeters (mm), in real time at 5-minute intervals. The data are collected with a Geonor T-200B precipitation gauge, whose maximum capacity is 600~mm. This gauge uses a precipitation collection bucket which is surrounded by a wind/snow shield and heated in order to prevent ice buildup in cold regions. Three wires attached to this collection device vibrate with frequencies relative to the weight of the bucket, and these vibration frequencies are then converted to gauge depth (in mm). For this work, we consider data from 131 different monitoring stations, post-processed to daily resolution, forming a continuous record from January 1$^{\text{st}}$, 2021 to December 31$^{\text{st}}$, 2021. Figure \ref{fig:USCRNLocs} shows the locations of the USCRN sensors along with the average total daily precipitation throughout 2021. For comparison, the same figure also shows the average daily precipitation for the MERRA-2 grid points during the same time frame. It is readily apparent from this figure that the regions of highest average daily precipitation are the northwest and southeast regions of the country, whereas the drier region of the country spans from the eastern border of California to the Mississippi River.
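The MERRA-2 unit conversion described above reduces to a single multiplicative factor, since dividing by the water density (1,000 kg/m$^3$) and multiplying by 1,000 mm/m cancel, leaving only the 86,400 s in a day. A minimal sketch (the function name is ours):

```python
def mrr_to_daily_mm(mrr_kg_m2_s):
    """Convert a Maximum Rainfall Rate (kg/m^2/s) to a daily total (mm),
    assuming the rate persists for the whole day."""
    water_density = 1000.0    # kg/m^3: division yields a rate in m/s
    mm_per_m = 1000.0         # conversion from meters to millimeters
    seconds_per_day = 86400.0
    return mrr_kg_m2_s / water_density * mm_per_m * seconds_per_day

daily = mrr_to_daily_mm(0.001)  # 0.001 kg/m^2/s -> 86.4 mm/day
```

As noted in the text, assuming the maximum rate lasts all day overestimates the daily total, which the downscaling step later corrects for.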
\section{Methodology}\label{sec:method} \subsection{Latent Gaussian Model} \quad We propose a spatio-temporal latent Gaussian model \citep{rue09}, defined for a generic spatial point on the sphere $\mathbf{s} \in \mathbb{S}^2$ and time $t=1, 2, \ldots$ as: \begin{subequations} \label{eq:latent} \begin{flalign} & \qquad Y(\mathbf{s},t) \mid \mu(\mathbf{s},t), \boldsymbol{\theta}_{\text{MRG}} \sim h (\mu(\mathbf{s},t),\boldsymbol{\theta}_{\text{MRG}}),\label{eqn:latent1}\\ & \qquad g(\mu(\mathbf{s},t)) = \sum_{p=1}^P\beta_pf_p(\mathbf{s})+f^{\text{time}}(\mathbf{s},t)+\epsilon(\mathbf{s}),\label{eqn:latent2}\\ & \qquad f^{\text{time}}(\mathbf{s},t) = \sum_{k=1}^K \left\{\zeta_{k}(\mathbf{s}) \sin\left(\frac{2\pi k t}{\delta}\right)+\zeta'_{k}(\mathbf{s}) \cos\left(\frac{2\pi k t}{\delta}\right)\right\},\label{eqn:latent3} \end{flalign} \end{subequations} \noindent where $h(\cdot)$ represents the marginal distribution of $Y(\cdot)$ conditional on the latent field and the hyperparameters, and belongs to the exponential family with some mean $\mu(\mathbf{s},t)$, whose structure is determined by a latent Gaussian process through a link function $g(\cdot)$. The marginal parameters $\boldsymbol{\theta}_{\text{MRG}}$ characterize moments higher than the first, and could be empty. If the marginal distribution is Gaussian, we have $Y(\mathbf{s},t)\sim \mathcal{N}(\mu(\mathbf{s},t),\boldsymbol{\theta}_{\text{MRG}})$, and the link function $g(\cdot)$ is simply the identity function \citep{dun18}. If instead the marginal distribution is Bernoulli, we have $Y(\mathbf{s},t)\sim \mathcal{B}(\mu(\mathbf{s},t))$, and the logit function can be chosen as the link function \citep{dun18}. We assume that the transformed mean in the latent space $g(\mu(\mathbf{s},t))$ is modeled by a location specific time effect, $f^{\text{time}}(\mathbf{s},t)$, $p=1, \ldots, P$ location-specific covariates $f_p(\mathbf{s})$, and a spatial error $\epsilon(\mathbf{s})$.
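To make the seasonal component \eqref{eqn:latent3} concrete: it is linear in the coefficients $\zeta_k(\mathbf{s})$ and $\zeta'_k(\mathbf{s})$, so at a single location it can be fit by ordinary least squares, as is done in the inference procedure later on. A minimal sketch with NumPy on noiseless synthetic data (function name and data are ours):

```python
import numpy as np

def fit_harmonics(y, t, K=2, delta=365.0):
    """Least-squares estimates of an intercept plus the harmonic
    coefficients (zeta_k, zeta'_k) at a single location."""
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols.append(np.sin(2 * np.pi * k * t / delta))
        cols.append(np.cos(2 * np.pi * k * t / delta))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, zeta_1, zeta'_1, ..., zeta_K, zeta'_K]

t = np.arange(1.0, 366.0)                       # days of a non-leap year
y = 2.0 + 1.5 * np.sin(2 * np.pi * t / 365.0)   # synthetic seasonal signal
beta = fit_harmonics(y, t, K=1)                 # recovers [2.0, 1.5, 0.0]
```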
The time effect $f^{\text{time}}(\mathbf{s},t)$ is described by $K$ harmonics with parameters $\boldsymbol{\zeta}(\mathbf{s})=(\zeta_1(\mathbf{s}), \ldots, \zeta_K(\mathbf{s}))^\top$ and $\boldsymbol{\zeta}'(\mathbf{s})=(\zeta'_1(\mathbf{s}), \ldots, \zeta'_K(\mathbf{s}))^\top$. If we assume that we have a sample observed at $\mathbf{s}_1, \ldots, \mathbf{s}_n$, the temporal parameters in equation \eqref{eqn:latent3} are collected in $\boldsymbol{\theta}_{\text{time}}=\{\boldsymbol{\theta}_{\text{time}}(\mathbf{s}_1), \ldots, \boldsymbol{\theta}_{\text{time}}(\mathbf{s}_n)\}$, where $\boldsymbol{\theta}_{\text{time}}(\mathbf{s}_i)=\{\boldsymbol{\zeta}(\mathbf{s}_i), \boldsymbol{\zeta}'(\mathbf{s}_i)\}$, for a total of $2Kn$ parameters. The period $\delta\in \{365, 366\}$ depends on the leap/no-leap year considered. We assume that the spatial random effect $\epsilon(\mathbf{s})$ is a realization from a mean-zero Gaussian random field independent in time, whose covariance function depends on some parameters $\boldsymbol{\theta_{\text{space}}}$ which will be specified in the next section. \subsection{Spatial Correlation Structure} \quad The simplest models for the spatial dependence of $\epsilon(\mathbf{s})$ are stationary and isotropic, i.e., they assume that the dependence is a function of $\|\mathbf{s}_1-\mathbf{s}_2\|$. Among them, one of the most popular choices is arguably the Mat\'ern model, whose correlation between two locations $\mathbf{s}_1, \mathbf{s}_2$ is defined as \citep{ste99} \[ \text{Corr}(\epsilon(\mathbf{s}_1),\epsilon(\mathbf{s}_2))=C(\mathbf{s}_1,\mathbf{s}_2)=\frac{1}{2^{\nu-1}\Gamma(\nu)}\left(\frac{\|\mathbf{s}_1-\mathbf{s}_2\|}{\rho}\right)^{\nu}K_{\nu}\left(\frac{\|\mathbf{s}_1-\mathbf{s}_2\|}{\rho}\right), \] where $K_\nu$ is the modified Bessel function of the second kind with smoothness parameter $\nu>0$ (i.e., controlling the degree of mean squared differentiability) and range parameter $\rho>0$.
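As a sanity check on the correlation function above, for $\nu=1/2$ the Mat\'ern model reduces to the exponential correlation $\exp(-\|\mathbf{s}_1-\mathbf{s}_2\|/\rho)$. A direct evaluation sketch using SciPy (the function name is ours):

```python
import math
from scipy.special import gamma, kv  # kv: modified Bessel function, 2nd kind

def matern_corr(d, rho, nu):
    """Matern correlation at distance d with range rho and smoothness nu."""
    if d == 0.0:
        return 1.0  # correlation at zero distance
    x = d / rho
    return (2.0 ** (1.0 - nu) / gamma(nu)) * (x ** nu) * kv(nu, x)

# nu = 1/2 recovers the exponential model exp(-d / rho)
r = matern_corr(1.3, rho=0.7, nu=0.5)
```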
If inference is sought for a large dataset, a matrix comprising the covariances among all pairs of locations cannot be stored, and likelihood evaluation becomes computationally challenging, if not impossible. Instead of operating directly with the covariance matrix, a popular solution in the past decade has been to rely on the identification of a Gaussian process with Mat\'ern covariance as the (unique) stationary solution of the following fractional reaction-diffusion SPDE \citep{whi54}: \begin{equation}\label{eq:spde} \left(\frac{1}{\rho^2}-\Delta\right)^{\nu/2+1/2}\epsilon(\mathbf{s})=\mathcal{W}(\mathbf{s}),\; \mathbf{s}\in \mathbb{R} ^2, \end{equation} where $\Delta$ is the Laplacian operator and $\mathcal{W}(\mathbf{s})$ is a spatial Gaussian white noise. By exploiting an `explicit link' between the continuous Markovian solution of \eqref{eq:spde} for integer $\nu$ and a discrete Gaussian Markov Random Field (GMRF), \cite{lin11} proved that if all locations are arranged on a 2D lattice, then the covariance structure can be approximated by a GMRF with a sparse precision matrix. Moreover, any location that is not on the lattice can be interpolated by means of a triangulation over the domain. Ultimately, this implies that the Mat\'ern covariance can be approximated by a sparse precision matrix, which allows faster and feasible inference on the spatial structure of $\epsilon(\cdot)$. In this work, we rely on a similar SPDE defined on the sphere, \begin{equation}\label{eq:spdeshpere} \left(\frac{1}{\rho^2}-\Delta_{\mathbb{S}^2}\right)^{\nu/2+1/2}\epsilon(\mathbf{s})=\mathcal{W}(\mathbf{s}),\;\mathbf{s}\in \mathbb{S} ^2, \end{equation} where $\Delta_{\mathbb{S}^2}$ is the Laplace--Beltrami operator on the sphere. The aforementioned SPDE approach has clear computational advantages, but in this formulation it is limited to stationary and isotropic processes \citep{rue09}.
The SPDE operator can, however, be generalized to allow for nonstationary constructions, while still yielding sparse precision matrices. In this work we rely on a spatially varying SPDE originally formulated in \cite{fug19}, but other approaches with spatially varying parameters \citep{rue09} or nested SPDEs \citep{bol11} have been proposed. We assume a location on the sphere has polar coordinates $\mathbf{s}=(L,l)$, where $L$ is the latitude and $l$ is the longitude. We introduce two fields: a vector field $\mathbf{v}(\cdot)=(v_1(\cdot),v_2(\cdot))^\top$ and a positive-valued scalar field $\rho(\cdot)$. We then define the inverse deformation tensor as: \[ \mathbf{G}(\mathbf{s})^{-1}=\rho(\mathbf{s})^2\frac{\mathbf{I}_2+\mathbf{v}(\mathbf{s})\mathbf{v}(\mathbf{s})^\top}{\sqrt{1+\|\mathbf{v}(\mathbf{s})\|^2}}. \] One can show that, with the spatially varying metric tensor defined above, the distance along the direction $\mathbf{v}(\mathbf{s})$ is scaled by $1/(\rho(\mathbf{s})(1+\|\mathbf{v}(\mathbf{s})\|^2)^{\frac{1}{4}})$, while in the direction orthogonal to $\mathbf{v}(\mathbf{s})$ the distance is scaled by $(1+\|\mathbf{v}(\mathbf{s})\|^2)^{\frac{1}{4}}/{\rho(\mathbf{s})}$. Therefore, the vector field $\mathbf{v}(\cdot)$ specifies the direction of the local anisotropic effect at each location, while $\rho(\cdot)$ represents its strength. After specifying the metric tensor $\mathbf{G}(\mathbf{s})$, it can be shown that an appropriate change of variables in the SPDE \eqref{eq:spdeshpere} yields \citep{fug20}: \begin{equation} \label{EQ:Gs} [|\mathbf{G}(\mathbf{s})|^{\frac{1}{2}}-\nabla\cdot|\mathbf{G}(\mathbf{s})|^{\frac{1}{2}}\mathbf{G}(\mathbf{s})^{-1}\nabla]\epsilon(\mathbf{s})=|\mathbf{G}(\mathbf{s})|^{\frac{1}{4}}\mathcal{W}(\mathbf{s}),\;\mathbf{s}\in \mathbb{S} ^2.
\end{equation} \subsection{Spherical Harmonics} \quad Both the vector field $\mathbf{v}(\cdot)$ and the scalar field $\rho(\cdot)$ can be specified through basis decompositions, namely spherical vector harmonics and spherical harmonics, respectively. However, a more flexible approach is necessary for global models, which must account not just for slowly changing nonstationarity, but also for abrupt changes dictated by large geographical descriptors such as land and ocean \citep{cas17}. In order to formulate a valid model via the SPDE while still accounting for abrupt changes, we consider the buffering approach proposed by \cite{bak19}. More specifically, we use a buffer area along coastlines with a separate parameter that describes the multiplicative drop $d\in [0,1]$ in the strength of dependence in the buffer area, so that for each of the land and ocean domains we propose a separate spherical harmonics decomposition: \[ \text{log}\{\rho^j(\mathbf{s})\}=\sum_{l=0}^\mathcal{L}\sum_{m=-l}^l\alpha_{ml}^jY_l^m(\mathbf{s}), \] \noindent where $\alpha_{ml}^j$ are real-valued coefficients, $Y_l^m(\mathbf{s})$ are Laplace's spherical harmonics of degree $l$ and order $m$, and $j\in\{\text{land, ocean}\}$ specifies the geographical descriptor where $\mathbf{s}$ is located. Similarly, the vector field $\mathbf{v}(\cdot)$ can be described as: \[ \mathbf{v}^j(\mathbf{s})= \sum_{l=1}^\mathcal{L}\sum_{m=-l}^l\{E_{lm}^{(1,j)}\nabla Y_l^m(\mathbf{s})+E_{lm}^{(2,j)}\hat{\mathbf{r}}(\mathbf{s})\nabla \times Y_l^m(\mathbf{s})\}, \] \begin{sloppypar} \noindent where $\hat{\mathbf{r}}$ is the unit vector in the positive radial direction, $E_{lm}^{(1,j)}$ and $E_{lm}^{(2,j)}$ are real coefficients, and $\mathcal{L}$ is the highest degree in the bases. Additionally, in order to account for micro-scale variability, we assume that the process for both land and ocean also has a nugget $\tau_j^2$.
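As an illustrative sketch of the scalar-field expansion above, the real spherical harmonics up to degree 1 can be written in closed form and combined with a set of coefficients $\alpha_{ml}^j$ for a single domain $j$ (the function names and toy coefficient values are ours):

```python
import numpy as np

def real_sph_harm(l, m, lon, colat):
    """Real spherical harmonics up to degree 1 (lon = azimuth,
    colat = polar angle, both in radians)."""
    c = np.sqrt(3.0 / (4.0 * np.pi))
    if (l, m) == (0, 0):
        return 1.0 / (2.0 * np.sqrt(np.pi))
    if (l, m) == (1, -1):
        return c * np.sin(colat) * np.sin(lon)
    if (l, m) == (1, 0):
        return c * np.cos(colat)
    if (l, m) == (1, 1):
        return c * np.sin(colat) * np.cos(lon)
    raise NotImplementedError("degrees l > 1 are not coded in this sketch")

def log_rho(lon, colat, alpha):
    """log rho(s) = sum_{l,m} alpha[(l,m)] * Y_l^m(s) for one domain j."""
    return sum(a * real_sph_harm(l, m, lon, colat)
               for (l, m), a in alpha.items())

# a constant field: only the degree-0 coefficient is nonzero,
# so rho(s) = exp(Y_0^0) = exp(1 / (2 sqrt(pi))) everywhere
rho0 = np.exp(log_rho(0.3, 1.1, {(0, 0): 1.0}))
```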
In summary, the spatial parameters of the model are $\boldsymbol{\theta}_{\text{space}}=\left\{d,\left\{\tau_j^2, j \in \{ \text{land, ocean} \}\right\},\left\{\alpha_{ml}^j, E_{lm}^{(1,j)}, E_{lm}^{(2,j)}, m=-l,\ldots, l; l=1, \ldots, \mathcal{L}, j \in \{\text{land, ocean}\}\right\}\right\}$, for a total of $6(\mathcal{L}^2+2\mathcal{L})+3$ parameters. \end{sloppypar} \quad We use independent standard normal priors for all parameters, with a log transformation for those constrained to be positive. The same setting is applied in the simulation study and in the application. \section{Inference}\label{sec:inference} \quad We propose a stepwise inference approach to reduce the overall dimension of the parameter space in each step. We first estimate $\boldsymbol{\theta}_{\text{time}}$ at each location independently, and then $\boldsymbol{\theta}_{\text{space}}$ conditionally on the temporal parameters. \cite{edw20} showed that the stepwise approach results in asymptotically consistent inference, and \cite{cas17} showed that uncertainty and bias propagation have a small impact for large yet finite datasets such as the one we work with here. \subsection{Step 1: Temporal Structure} \quad In the first step, inference is performed at each location independently, without the spatial random effect. We rewrite equation \eqref{eq:latent} as: \begin{equation} \label{eq:temp} \begin{array}{rcl} Y(\mathbf{s}, t) & \sim & h (\mu(\mathbf{s},t),\boldsymbol{\theta}_{\text{MRG}}),\\[7pt] g(\mu(\mathbf{s},t)) & = & \sum_{p=1}^P\beta_pf_p(\mathbf{s})+\sum_{k=1}^K \left\{\zeta_{k}(\mathbf{s}) \sin\left(\frac{2\pi k t}{\delta}\right)+\zeta'_{k}(\mathbf{s}) \cos\left(\frac{2\pi k t}{\delta}\right)\right\}.
\end{array} \end{equation} \noindent The vector of temporal parameters $\boldsymbol{\theta}_{\text{time}}$ and the linear parameters $\beta_1, \ldots, \beta_P$ are estimated using least squares, and these estimates are held fixed in the following inference step. Once $\hat{\boldsymbol{\theta}}_{\text{time}}, \hat{\beta}_1, \ldots, \hat{\beta}_P$ are obtained, the spatial parameters $\boldsymbol{\theta}_{\text{space}}$ of the spatial process $\epsilon(\mathbf{s})$ can be estimated conditionally on them. \subsection{Step 2: Spatial Covariance Structure} \quad We define a collection of triangles $T_1, \ldots, T_{n_T}$ on the sphere, and use a finite volume method to discretize the SPDE in \eqref{EQ:Gs}. We redefine the inverse metric tensor as $\mathbf{G}(\mathbf{s})^{-1}=\rho(\mathbf{s})^2\mathbf{H}(\mathbf{s})$, where $|\mathbf{H}(\mathbf{s})|=1$, integrate over the triangles $T_i$ generated on a global mesh, and seek a piecewise constant solution to the SPDE. For all triangles $T_i$, we have the following equality in distribution: \begin{equation} \label{EQ:joint} \int_{T_i}\left[\frac{1}{\rho(\mathbf{s})^2}-\nabla\cdot \mathbf{H}(\mathbf{s})\nabla\right]\epsilon(\mathbf{s})\,\mathrm{d}V \overset{d}{=} \int_{T_i}\frac{1}{\rho(\mathbf{s})}\mathcal{W}(\mathbf{s})\,\mathrm{d}V. \end{equation} \noindent Here $\nabla\cdot$ is the divergence operator, $\nabla$ is the gradient operator, $\mathbf{H}(\cdot)$ is a $2\times2$ piecewise continuously differentiable diffusion tensor, and $\mathrm{d}V$ is the surface measure on the triangles. This allows us to translate the SPDE into a set of linear equations for a Gaussian vector that is assumed to be constant across each triangle.
Similarly to \cite{bert07,fug20}, let $\boldsymbol{\epsilon}=(\epsilon_1,\epsilon_2,...,\epsilon_n)$ be the vector of values at the triangle centers; then the following $n\times n$ matrix $\mathbf{A_H}$ can be computed to describe a discrete approximation: \[ \left(\sum_{j=1}^3\int_{\sigma_{i,j}}(\mathbf{H}(\mathbf{s})\nabla \epsilon(\mathbf{s}))^\top \mathbf{n}_{i,j}\mathrm{d} \mathbf{s}\right)_{i=1}^n \approx \mathbf{A_H}\boldsymbol{ \epsilon}. \] \noindent Here, $\sigma_{i,j}$ represents the three edges of the triangle $T_i$ and $\mathbf{n}_{i,j}$ the corresponding outward unit normals. Then, we combine this with an $n\times n$ diagonal matrix $\mathbf{D}$, in which $d_{ii}=|T_i|/\rho(x_i)^2$, so that we have: \[ \left(\int_{T_i}\frac{\epsilon(\mathbf{s})}{\rho(\mathbf{s})^2}\mathrm{d}\mathbf{s}-\sum_{j=1}^3\int_{\sigma_{i,j}}(\mathbf{H}(\mathbf{s})\nabla \epsilon(\mathbf{s}))^\top\mathbf{n}_{i,j}\mathrm{d}\mathbf{s}\right)_{i=1}^n\approx(\mathbf{D}-\mathbf{A_H})\boldsymbol{\epsilon}. \] \noindent With this approximation, the equality in distribution expressed in equation \eqref{EQ:joint} can now be written as: \[ (\mathbf{D}-\mathbf{A_H})\boldsymbol{\epsilon}\sim \mathcal{N}(0,\mathbf{L}), \] \noindent where $\mathbf{L}$ is an $n\times n$ diagonal matrix with elements $l_{ii}=|T_i|/\rho(x_i)^2$. This implies that $\boldsymbol{\epsilon}\sim \mathcal{N}(0,\mathbf{Q}^{-1})$, where $\mathbf{Q}$ is a sparse precision matrix defined as: \[ \mathbf{Q}=(\mathbf{D}-\mathbf{A_H})^\top \mathbf{L}^{-1}(\mathbf{D}-\mathbf{A_H}). \] \noindent Therefore, the finite volume method yields a sparse precision matrix, which mitigates the computational burden for large global data and boosts the computing speed of the nonstationary model during inference.
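The final assembly $\mathbf{Q}=(\mathbf{D}-\mathbf{A_H})^\top \mathbf{L}^{-1}(\mathbf{D}-\mathbf{A_H})$ can be sketched with SciPy sparse matrices. Random placeholders stand in for the finite volume quantities, so this only illustrates the sparsity and symmetry of $\mathbf{Q}$, not the actual discretization:

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n = 200  # number of triangles (placeholder)

# Placeholder sparse flux matrix A_H (in practice assembled from the
# finite volume fluxes across the triangle edges)
A_H = sp.random(n, n, density=0.02, random_state=0, format="csc")

# Diagonal matrices D and L with entries |T_i| / rho(x_i)^2
areas = rng.uniform(0.5, 1.5, size=n)
rho2 = rng.uniform(0.5, 2.0, size=n) ** 2
D = sp.diags(areas / rho2)
L_inv = sp.diags(rho2 / areas)  # L^{-1}, trivial since L is diagonal

# Q = (D - A_H)^T L^{-1} (D - A_H) stays sparse and symmetric
M = (D - A_H).tocsc()
Q = (M.T @ L_inv @ M).tocsc()
```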
\subsection{Inference for the Latent Gaussian Model} \quad In order to perform inference on the latent Gaussian model, in this work we make use of the Integrated Nested Laplace Approximation (INLA, \cite{rue09}), a method for Bayesian inference alternative to traditional Markov Chain Monte Carlo (MCMC) which further eases the computational burden. INLA is a deterministic method for fast approximation of high dimensional integrals which takes advantage of the computational properties of models that can be expressed as a latent GMRF. Under the proposed latent Gaussian model structure, we have the observed data vector, denoted here as $\boldsymbol{Y}=(Y(\mathbf{s}_1), \ldots, Y(\mathbf{s}_n))^\top$ at locations $\mathbf{s}_i$, whose distribution depends on the hyperparameter vector $\boldsymbol{\theta}_{\text{space}}$. For simplicity, throughout this section we will use $\boldsymbol{\theta}$ to denote the hyperparameter vector $\boldsymbol{\theta}_{\text{space}}$. Conditional on the latent spatial field $\mathbf{X}$, the observations are independent, with likelihood: \[ \pi(\mathbf{Y}|\mathbf{X},\boldsymbol{\theta})=\prod_{i=1} ^n\pi(Y(\mathbf{s}_i)|X(\mathbf{s}_i),\boldsymbol{\theta}), \] \noindent where $\boldsymbol{X}=(X(\mathbf{s}_1),\ldots, X(\mathbf{s}_n))^\top$ is a mean-zero Gaussian field modeled by the SPDE approach, with precision matrix $\mathbf{Q}(\boldsymbol{\theta})$.
Therefore, the joint distribution of the latent effects and hyperparameters can be written as: \[ \begin{array}{rcl} \pi(\mathbf{X},\boldsymbol{\theta}|\mathbf{Y}) &\propto& \pi(\boldsymbol{\theta})\pi(\mathbf{X}|\boldsymbol{\theta})\prod_{i=1}^n \pi(Y(\mathbf{s}_i)|X(\mathbf{s}_i),\boldsymbol{\theta}) \\[7pt] &\propto& \pi(\boldsymbol{\theta})|\mathbf{Q}(\boldsymbol{\theta})|^{1/2}\text{exp}\{-\frac{1}{2}\mathbf{X}^\top \mathbf{Q}(\boldsymbol{\theta}) \mathbf{X}\}\prod_{i=1}^n \pi(Y(\mathbf{s}_i)|X(\mathbf{s}_i),\boldsymbol{\theta}), \end{array} \] \noindent where $|\mathbf{Q}(\boldsymbol{\theta})|$ is the determinant of the precision matrix. The main goal is to approximate the posterior marginals $\pi(X(\mathbf{s}_i)|\mathbf{Y})$, $\pi(\boldsymbol{\theta}|\mathbf{Y})$ and $\pi(\theta_j|\mathbf{Y})$. The marginal posterior distributions of interest can be written as: \[ \begin{array}{rcl} \pi(X(\mathbf{s}_i)|\boldsymbol{Y}) &=& \int \pi(X(\mathbf{s}_i)|\boldsymbol{\theta},\boldsymbol{Y})\pi(\boldsymbol{\theta}|\boldsymbol{Y})\mathrm{d}\boldsymbol{\theta}\\[7pt] \pi(\theta_j|\boldsymbol{Y})&=&\int \pi(\boldsymbol{\theta}|\boldsymbol{Y})\mathrm{d}\theta_{-j}. \end{array} \] \noindent The key idea of the INLA approach is to use the form above to construct nested approximations. The approximations of the marginals for the latent field $\pi(X(\mathbf{s}_i)|\mathbf{Y})$ are computed by approximating $\pi(\boldsymbol{\theta}|\mathbf{Y})$ and $\pi(X(\mathbf{s}_i)|\boldsymbol{\theta},\mathbf{Y})$, and using numerical integration to integrate out $\boldsymbol{\theta}$. In other words, the posterior marginals of the latent parameters are obtained as: \[ \tilde{\pi}(X(\mathbf{s}_i)|\boldsymbol{Y})=\sum_{k}\tilde{\pi}(X(\mathbf{s}_i)|\boldsymbol{\theta}_k,\boldsymbol{Y})\times \tilde{\pi}(\boldsymbol{\theta}_k|\boldsymbol{Y})\times \Delta_k, \] \noindent where $\Delta_k$ are the weights associated with a vector $\boldsymbol{\theta}_k$ of hyperparameters in a grid.
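The weighted sum over the hyperparameter grid can be illustrated on a toy one-dimensional problem. This is a schematic of the final numerical-integration step only, with placeholder densities of our own choosing, not of INLA itself:

```python
import numpy as np

# Grid of theta values with equal weights Delta_k
theta_grid = np.linspace(-2.0, 2.0, 81)
delta_k = np.full_like(theta_grid, theta_grid[1] - theta_grid[0])

# Placeholder conditional posterior: x | theta, Y ~ N(theta, 1)
def cond_density(x, theta):
    return np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2.0 * np.pi)

# Placeholder hyperparameter posterior pi(theta | Y), normalised on the grid
post_theta = np.exp(-0.5 * theta_grid ** 2)
post_theta /= np.sum(post_theta * delta_k)

def marginal_posterior(x):
    # sum_k pi(x | theta_k, Y) * pi(theta_k | Y) * Delta_k
    return np.sum(cond_density(x, theta_grid) * post_theta * delta_k)

# The resulting mixture is itself a density: it integrates to ~1 over x
xs = np.linspace(-8.0, 8.0, 401)
total = np.sum([marginal_posterior(x) for x in xs]) * (xs[1] - xs[0])
```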
\section{Simulation Studies}\label{sec:sim} \quad Throughout this section, we denote with NS-LS the proposed nonstationary latent Gaussian model \eqref{EQ:Gs} with land/sea effect, and with NS the nonstationary model with no land/sea effect. We further consider the stationary SPDE model \eqref{eq:spdeshpere}, and denote with S-LS the model with land/sea effect and with S the one without it. In Section \ref{sec:hyper}, we perform simulations from the Gaussian marginal distribution for NS-LS to numerically assess posterior consistency for both the hyperparameters and the resulting covariance matrix. In Sections \ref{sec:gaus} and \ref{sec:bern}, we perform simulations from Gaussian and Bernoulli marginal distributions with identity and logit link, respectively, to assess the interpolation (kriging) performance of NS-LS against NS, S-LS and S. Since the key contribution of this work lies in the spatial component of the model, throughout this section we assume a purely spatial process with no covariates. In other words, model \eqref{eq:latent} simplifies to \begin{subequations} \label{eq:latent:simp} \begin{flalign} & \qquad Y(\mathbf{s}) \sim h (\mu(\mathbf{s}),\boldsymbol{\theta}_{\text{MRG}}),\label{eqn:latent1:simp}\\ & \qquad g(\mu(\mathbf{s})) = \epsilon(\mathbf{s})\sim \mathcal{N}(0, \boldsymbol{\Sigma}(\boldsymbol{\theta_{\text{space}}})).\label{eqn:latent2:simp} \end{flalign} \end{subequations} In the Gaussian case we also have $\boldsymbol{\theta}_{\text{MRG}}=\sigma^2=0.05$, while in the Bernoulli case no marginal parameters are defined, so that $\boldsymbol{\theta}_{\text{MRG}}=\emptyset$. For each simulation, we sample $n=2,000$ data points on the unit sphere, draw the parameters $\boldsymbol{\theta_{\text{space}}}$ from a Normal distribution with mean 1 and standard deviation 0.5, and assume them fixed thereafter (similar results have been observed for other samples and distributions).
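The replicate-generation scheme just described can be sketched as follows; since building $\boldsymbol{\Sigma}(\boldsymbol{\theta_{\text{space}}})$ requires the full SPDE machinery, a toy stand-in covariance of our own choice is used here:

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_r = 50, 100  # locations (reduced from 2,000 for illustration), replicates

# Draw hyperparameters as in the text, N(1, 0.5^2), then hold them fixed
theta = rng.normal(1.0, 0.5, size=2)

# Toy stand-in covariance on [0, 1] driven by the drawn parameters,
# plus a small nugget to guarantee positive definiteness
s = rng.uniform(0.0, 1.0, size=n)
d = np.abs(s[:, None] - s[None, :])
Sigma = theta[0] ** 2 * np.exp(-((d / abs(theta[1])) ** 2)) + 0.05 * np.eye(n)

# n_r independent mean-zero Gaussian replicates with covariance Sigma
eps = rng.multivariate_normal(np.zeros(n), Sigma, size=n_r)
```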
Each simulation comprises $n_r=100$ replicates from the resulting covariance matrix $\boldsymbol{\Sigma}(\boldsymbol{\theta_{\text{space}}})$. We simulate data from a NS-LS model with $\mathcal{L}=1$, so that there is a total of $6(\mathcal{L}^2+\mathcal{L})+3=15$ hyperparameters. We perform $n_s=100$ independent simulations and report the results both in terms of aggregated performance and the associated uncertainty. \subsection{Posterior consistency in the Gaussian case}\label{sec:hyper} \quad In order to numerically assess posterior consistency, for each simulation we consider an increasing number of replicates $n_r=10, \ldots, 100$. Inference is performed assuming the same model \eqref{eq:latent:simp} and a mesh of $n_T=2,000$ triangles. For each level of $n_r$, the hyperparameters' posterior distributions are retrieved and compared with the true values. Posterior consistency can be empirically verified by the extent to which the hyperparameters' posterior distributions concentrate around the true parameters $\boldsymbol{\theta_{\text{space}}}$ as $n_r$ increases. \begin{figure}[ht!] \centering \includegraphics[scale=0.45]{F2-hyper.jpg} \caption{Functional boxplots \citep{sun11} across $n_s$ simulations of the posterior distribution of two hyperparameters, (a) $\alpha_{11}^2$ and (b) $E_{10}^{(2,2)}$, for different numbers of replicates $n_r$. The vertical dashed lines represent the true hyperparameter values.} \label{fig:hyper} \end{figure} \begin{table}[ht!] \caption{Median MSE (IQR) between the true hyperparameters and their posterior means across all $n_s$ simulations for the Gaussian case.
\label{tab:hypermse}} \centering \begin{tabular}{|l|l|l|l|l|l|}\hline $n_r$ & 20 & 40 & 60 & 80 & 100 \\ \hline Median MSE (IQR) & 0.32 (0.13) & 0.25 (0.07) & 0.14 (0.05) & 0.05 (0.05) & 0.01 (0.007)\\ \hline \end{tabular} \end{table} \noindent Figure \ref{fig:hyper} shows the functional boxplots \citep{sun11}, across all $n_s$ simulations, of the posterior distributions of two hyperparameters for increasing numbers of replicates $n_r$. It is readily apparent how the posterior mean aligns with the true parameter value and the posterior standard deviation decreases as the number of replicates increases. While results are shown for NS-LS, similar patterns have been observed across all other models (NS, S-LS and S). Table \ref{tab:hypermse} shows the median MSE and interquartile range (IQR) between the hyperparameters' posterior means estimated from the NS-LS model and the true values, across all hyperparameters and across all $n_s=100$ simulations. The median MSE decreases as the number of replicates increases. In order to perform a uniform comparison across all hyperparameters, whose number quickly becomes unwieldy (e.g., with $\mathcal{L}=4$ we would have $6(4^2+4)+3=123$ hyperparameters), we also compare the covariance matrix implied by the hyperparameters with the true one. We assess the discrepancy in the covariances via the Kullback-Leibler divergence (KLD), which in the case of two $n$-dimensional Gaussian distributions with means $\boldsymbol{\mu}_0$ and $\boldsymbol{\mu}_1$ and covariance matrices $\boldsymbol{\Sigma}_0$ and $\boldsymbol{\Sigma}_1$ simplifies to: \[ \frac{1}{2}\left(\text{tr}(\boldsymbol{\Sigma}_1^{-1}\boldsymbol{\Sigma}_0)-n+(\boldsymbol{\mu}_1-\boldsymbol{\mu}_0)^\top \boldsymbol{\Sigma}_1^{-1}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_0)+\ln\left(\frac{\text{det}\boldsymbol{\Sigma}_1}{\text{det}\boldsymbol{\Sigma}_0}\right)\right).
\] \noindent In our case $\boldsymbol{\mu}_0=\boldsymbol{\mu}_1=\mathbf{0}$, $\boldsymbol{\Sigma}_0=\boldsymbol{\Sigma}(\boldsymbol{\theta}_{\text{space}})$ and $\boldsymbol{\Sigma}_1=\boldsymbol{\Sigma}(\hat{\boldsymbol{\theta}}_{\text{space}})$, so that the KLD measures the distance between the true and estimated covariances. The results are shown in Figure \ref{fig:KLD} for NS-LS (panel (a)) and S-LS (panel (b)), as the functional boxplot \citep{sun11} of the KLD across all $n_s=100$ simulations for an increasing number of replicates $n_r$. The functional boxplot reports the envelope of the 50\% central region (pink area), the median curve (black line) and the maximum non-outlying envelope (outer blue line). As in the case of the estimated parameters, we observe how, even with a relatively small number of replicates in the training set, the estimated covariance converges to the true one. In particular, after 40 replicates the estimated covariance is practically indistinguishable from the true one. \begin{figure}[ht!] \centering \includegraphics[scale=0.5]{F3-functional_boxplot.jpg} \caption{Functional boxplot across $n_s=100$ simulations of the KLD between the true covariance matrix and the estimated one according to (a) NS-LS and (b) S-LS.} \label{fig:KLD} \end{figure} \subsection{Interpolation performance in the Gaussian case}\label{sec:gaus} \quad In order to assess the interpolation performance, we perform inference on the hyperparameters for all four models and use them to interpolate at specified locations. We consider two cases: (1) all $n$ data points are used in the training set and interpolation is performed at the same sites; (2) 92\% of the $n$ locations are used as the training set, and the remaining 8\% are withheld for cross-validation. The test locations lie within three selected areas indicated in Figure S1. Interpolation performance is measured with the MSE.
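The cross-validation scheme of case (2) can be sketched as follows. The synthetic field and the nearest-neighbour interpolator are placeholders only; the fitted spatial models (NS-LS, NS, S-LS, S) are not reimplemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the simulated data: n sites with a smooth signal plus noise.
n = 2000
coords = rng.uniform(size=(n, 2))
y = np.sin(4.0 * coords[:, 0]) + np.cos(3.0 * coords[:, 1]) + 0.1 * rng.standard_normal(n)

# Case (2): 92% of the sites form the training set, 8% are withheld for testing.
test_idx = rng.choice(n, size=int(0.08 * n), replace=False)
train = np.ones(n, dtype=bool)
train[test_idx] = False

# Hypothetical interpolator: nearest training neighbour stands in for a fitted model.
def nn_predict(targets, train_coords, train_y):
    d2 = ((targets[:, None, :] - train_coords[None, :, :]) ** 2).sum(axis=2)
    return train_y[d2.argmin(axis=1)]

y_hat = nn_predict(coords[test_idx], coords[train], y[train])
mse = float(np.mean((y[test_idx] - y_hat) ** 2))
print(f"cross-validation MSE: {mse:.4f}")
```

Any of the four spatial models would simply replace `nn_predict` in this scheme, with the MSE computed on the withheld 8\% of sites.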
Results for both cases are reported in Table \ref{tab:mse}, and it is readily apparent how the MSE of the NS-LS model is the smallest among all four models, both in the all-locations case (1) and in the cross-validation setting (2). More specifically, compared to the S-LS model, the NS-LS model improves the median MSE across all locations by 14.6\%. The NS-LS model also shows an appreciable improvement in MSE of 10.4\% and 16.7\% compared with the NS and S models, respectively. From these results it is clear that the land/sea effect and buffer area construction yield a significant improvement when used in conjunction with the NS model. \begin{table}[ht!] \caption{Comparison of interpolation performance across models. The first two rows show the median MSE (IQR) across all $n_s=100$ simulations in the Gaussian case for both (1) all locations and (2) cross-validation. The last two rows show the median AUC (IQR) for the Bernoulli case across the same two settings. \label{tab:mse}} \centering \begin{tabular}{|l|l|l|l|l|l|}\hline Model & Locations & NS-LS & S-LS & NS & S \\ \hline \multirow{2}{*}{Gaussian} & All locations & 90.12 (8.17) & 105.47 (9.94) & 100.55 (11.25) & 108.25 (11.23) \\ & Cross-validation & 9.11 (1.04) & 21.88 (1.18) & 19.35 (1.62) & 21.39 (1.59) \\ \hline \hline \multirow{2}{*}{Bernoulli} & All locations & 0.824 (0.048) & 0.769 (0.074) & 0.782 (0.051) & 0.753 (0.050) \\ & Cross-validation & 0.707 (0.072) & 0.676 (0.081) & 0.672 (0.079) & 0.641 (0.079)\\ \hline \end{tabular} \end{table} \subsection{Interpolation performance in the Bernoulli case}\label{sec:bern} \quad We now assess predictability in the case of a Bernoulli distribution with logit link and, as in Section \ref{sec:gaus}, we consider both the case where all locations are used as the training set and cross-validation with the same testing locations as before.
Figure \ref{fig:roc} shows the average differences across all $n_s=100$ simulations between the receiver operating characteristic (ROC) curves of NS-LS and S-LS, using S as the reference, for all locations and for the validation locations. The ROC curves for NS are visually indistinguishable from those of the S-LS model, so the results for that model are not shown. In both cases the ROC differences show that the NS-LS model is uniformly better than the stationary S model (as the ROC difference is always positive), and also uniformly better than the S-LS model, especially in the middle of the curve. As expected, the extent of the improvement of NS-LS is larger in the case of cross-validation (panel (b)), where the added value of the model at unobserved locations is more apparent. In order to have a comprehensive assessment across all possible choices of threshold, we consider the area under the curve (AUC) of the ROC for all models and report it in Table \ref{tab:mse}. In the best case of a perfect prediction, i.e., a 100\% true positive rate uniformly across the choice of threshold, the AUC equals 1, while in the worst case of a random guess it equals 0.5. The extent to which the AUC is close to 1 is thus a measure of predictive performance. As shown in Table \ref{tab:mse}, NS-LS outperforms every other model in both cases. More specifically, across all locations, NS-LS yields an improvement of 7.2\%, 5.3\% and 9.4\% over the S-LS, NS and S models, respectively. These results agree with those presented in Section \ref{sec:gaus}: the use of the land/sea effect and buffer area construction definitively yields improved performance when included in the NS model. \begin{figure}[ht!] \centering \includegraphics[scale=0.42]{F4-ROC_diff.jpg} \caption{Average differences across all $n_s=100$ simulations between ROC curves of NS-LS and S (black line), and S-LS and S (red line) for (a) all locations and (b) cross-validation.
The ROC curves for NS are visually indistinguishable from those of the S-LS model, so the results for that model are not shown.} \label{fig:roc} \end{figure} \section{Application}\label{sec:app} \quad In this section, we use the data detailed in Section \ref{sec:data} and the proposed latent Gaussian model with nonstationary SPDE introduced in Section \ref{sec:method} to estimate the global probability of a rain event and the precipitation intensity. In Section \ref{sec:down}, we discuss both the fit of the global MERRA-2 dataset and the downscaling approach to adjust interpolated MERRA-2 data with ground USCRN precipitation measurements. In Section \ref{sec:eval}, we provide evaluation metrics to assess the model performance. \begin{figure} \centering \includegraphics[scale=0.55]{F5-application.jpg} \caption{Average (a) daily precipitation and (b) precipitation probability. The global dataset is interpolated at the same sites as the ground observations according to the nonstationary global SPDE model \eqref{EQ:Gs}, the linear model \eqref{eq:down} is fit, and the resulting relationship is used to produce the downscaled maps.} \label{fig:app} \end{figure} \begin{figure}[ht!] \centering \includegraphics[scale=0.44]{F6-downscaling.jpg} \caption{The fitted lines using the downscaling models described in (a) equation \eqref{eq:down1} and (b) equation \eqref{eq:down2} on February $1^{\text{st}}$, 2021.} \label{fig:ols} \end{figure} \subsection{Modeling global precipitation and downscaling}\label{sec:down} \quad We initially focus on the MERRA-2 data and consider two global data sets: 1) a binary rain occurrence event and 2), in case of rain, the actual rain intensity. We then fit the latent Gaussian model \eqref{eq:latent} with nonstationary SPDE \eqref{EQ:Gs} with $\mathcal{L}=1$, using a Bernoulli marginal distribution with a logit link function $g(\cdot)$ for rain occurrence and a Gamma distribution with negative inverse link function for rain intensity.
Validation for the choice of the marginal distribution can be found in the supplementary material, along with Figure S3 showing the histogram of precipitation at 456 sample locations (resolution of $18.75^{\circ}\times 15^{\circ}$ in longitude and latitude) with the estimated Gamma density. The sample locations are sparse in space to mitigate any spatial influence. In both cases, no additional covariates are assumed, and we assume $K=2$ harmonics for the temporal component, as it was shown to be the optimal choice according to the model selection in Figure S2. Formally, model \eqref{eq:latent} now specializes into the following two models: \begin{subequations} \label{eq:app} \begin{flalign} \log\left(\frac{\mu(\mathbf{s},t)}{1-\mu(\mathbf{s},t)}\right) = f^{\text{time}}(\mathbf{s},t)+\epsilon(\mathbf{s}),\label{eqn:app1} \quad \text{precipitation probability}\\ -\mu(\mathbf{s}, t)^{-1} = f^{\text{time}}(\mathbf{s},t)+\epsilon(\mathbf{s}). \quad \text{precipitation intensity} \label{eqn:app2} \end{flalign} \end{subequations} The histogram in Figure S3 suggests that precipitation intensity is well described by a Gamma distribution with shape parameter 0.826 and scale parameter 0.184. Inference is performed with a global triangulation of $n_T=2,340$ triangles, of which $1,134$ are within the area of interest (contiguous United States), while the remaining $1,206$ cover the rest of the world. The hyperparameters' posterior distributions are obtained and used to predict both the precipitation probability and intensity at the 131 USCRN ground observation locations, see Figure \ref{fig:USCRNLocs}. These predictions are then adjusted (downscaled) to point resolution via linear regression.
Since we perform downscaling independently for every time point, for simplicity we now drop the time dependence, and we denote as $Y_{G}(\mathbf{s})$ and $Y_{S}(\mathbf{s})$ the precipitation intensity for USCRN and MERRA-2, respectively (G=ground, S=simulation), and with $p_{G}(\mathbf{s})$ and $p_{S}(\mathbf{s})$ the probability of precipitation occurrence. We further denote as $\hat{Y}_{S}(\mathbf{s})$ and $\hat{p}_{S}(\mathbf{s})$ the estimated intensity and probability of occurrence, respectively, according to the proposed SPDE model. Finally, we estimate the probability of precipitation occurrence for the USCRN data by fitting the latent Gaussian model \eqref{eq:latent} for each location independently as a time series model, i.e., assuming no spatial dependence, and denote the estimate as $\hat{p}_{G}(\mathbf{s})$. We further assume a linear relationship between the USCRN and MERRA-2 precipitation occurrence probabilities and intensities: \begin{subequations}\label{eq:down} \begin{flalign} \log\left(\frac{\hat{p}_G(\mathbf{s})}{1-\hat{p}_G(\mathbf{s})}\right) = \beta_0^{(O)} + \beta_1^{(O)}\log\left(\frac{\hat{p}_S(\mathbf{s})}{1-\hat{p}_S(\mathbf{s})}\right)+\xi_O(\mathbf{s}),\label{eq:down1} \quad \text{precipitation probability}\\ \log\left(Y_G(\mathbf{s})\right) =\beta_0^{(I)}+\beta_1^{(I)}\log\left(\hat{Y}_S(\mathbf{s})\right)+\xi_I(\mathbf{s}),\quad \text{precipitation intensity}\label{eq:down2} \end{flalign} \end{subequations} where $\xi_j(\mathbf{s})\sim \mathcal{N}(0,\sigma^2_j)$, $j \in \{O,I\}$, are independent and identically distributed in space. A functional boxplot of the variogram of the residuals in Figure S4 (with each curve representing a different time point) lends support to the assumption of spatial independence of the errors. The downscaling parameters $\beta_0^{(I)}$ and $\beta_1^{(I)}$ for precipitation intensity are then estimated using ordinary least squares.
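The estimation step in \eqref{eq:down2} amounts to an OLS fit on the log scale. A minimal sketch follows, with synthetic stand-ins for the 131 station values (not the actual MERRA-2/USCRN data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the interpolated MERRA-2 intensities \hat{Y}_S(s)
# and the ground USCRN intensities Y_G(s) at 131 sites (illustrative only).
y_sim = rng.gamma(shape=0.8, scale=5.0, size=131) + 0.1  # keep strictly positive
log_ys = np.log(y_sim)
log_yg = 0.3 + 0.9 * log_ys + 0.2 * rng.standard_normal(131)

# Ordinary least squares for log(Y_G) = beta0 + beta1 * log(\hat{Y}_S) + xi.
X = np.column_stack([np.ones_like(log_ys), log_ys])
beta_hat, *_ = np.linalg.lstsq(X, log_yg, rcond=None)
beta0, beta1 = beta_hat

def downscale(y_s):
    """Adjust an interpolated intensity to the (synthetic) ground scale."""
    return np.exp(beta0 + beta1 * np.log(y_s))

print(f"beta0 = {beta0:.3f}, beta1 = {beta1:.3f}")
```

The fitted pair $(\beta_0, \beta_1)$ then maps any interpolated intensity to the ground scale through `downscale`; the occurrence-probability model \eqref{eq:down1} follows the same pattern on the logit scale.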
\subsection{Results and Evaluation}\label{sec:eval} \quad Downscaled probabilities of precipitation occurrence and precipitation intensity according to the aforementioned model are displayed in Figure \ref{fig:app}(a) and (b), respectively, with the dark bubbles representing average values from the USCRN data. The prediction maps of the United States show high daily precipitation and high precipitation intensity around Seattle, while the lowest values can be found near Las Vegas; overall, the model prediction resembles the ground observation values across the United States. To evaluate the model performance, we calculate the root mean squared error (RMSE) for both the probability of precipitation occurrence and the precipitation intensity. The RMSE is 2.01 mm for intensity and 0.14 for the probability of precipitation occurrence. In order to assess the value added by the smoothing of our SPDE model, we also perform downscaling with the linear models in \eqref{eq:down}, but assuming that no spatial model is fit, i.e., that the MERRA-2 data are not interpolated at the locations of the USCRN sites. Instead, we consider MERRA-2 data at their original resolution, and attribute to each USCRN site the value in the same cell. In other words, we consider as covariates $p_S(\mathbf{s},t)$ and $Y_S(\mathbf{s},t)$. The resulting RMSE for this model is 82.74 mm for precipitation intensity and 0.28 for the probability of precipitation occurrence. Therefore, the proposed SPDE approach has narrowed the discrepancy between MERRA-2 and USCRN significantly, as it has reduced the RMSE for precipitation intensity and probability of precipitation occurrence by 97.6\% and 50\%, respectively. Figure \ref{fig:ols} shows the fitted lines using the downscaling models in \eqref{eq:down1} and \eqref{eq:down2} on February $1^{\text{st}}$, 2021. The $R^2$ of the two linear models is 0.78 and 0.67 for precipitation probability and intensity, respectively.
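The RMSE comparison used above can be sketched as follows, with purely illustrative numbers in place of the paper's data:

```python
import numpy as np

def rmse(obs, pred):
    """Root mean squared error between ground observations and predictions."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

# Illustrative values only (not the paper's data): ground intensities compared
# against SPDE-interpolated values and against raw grid-cell values.
ground = np.array([2.0, 0.0, 5.5, 1.2, 3.3])
spde_interp = np.array([1.8, 0.3, 5.0, 1.5, 3.0])
raw_cell = np.array([4.0, 2.5, 0.5, 6.0, 1.0])

reduction = 1.0 - rmse(ground, spde_interp) / rmse(ground, raw_cell)
print(f"relative RMSE reduction from interpolation: {reduction:.1%}")
```

The reported 97.6\% and 50\% reductions correspond to exactly this relative-reduction computation, applied to intensity and occurrence probability respectively.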
We also evaluate the model uncertainty by cross-validation. First, we remove the data from one ground observation location and fit the model using the remaining observations. Next, we construct the 95\% credibility interval for the posterior mean of the probability of precipitation occurrence or precipitation intensity at the removed location with the estimated posterior distributions of the hyperparameters of the model. Then, we repeat the same procedure for all 131 locations in USCRN. Finally, we determine how many of the 131 95\% credibility intervals cover the true value. For precipitation intensity, 93.1\% (122/131) of the 95\% credibility intervals cover the true value, while for the probability of a rain event, 91.6\% (120/131) of the 95\% credibility intervals cover the true value. \section{Conclusion and Discussion} \label{sec:conc} In this work, we have proposed a novel non-stationary spatio-temporal SPDE model able to smooth both the probability of precipitation occurrence and the precipitation intensity from a global dataset. The interpolated dataset is then used in conjunction with ground observations to produce high resolution (downscaled) precipitation maps, which allow us to predict what ground observations would look like at unsampled locations with a higher degree of accuracy compared to the original simulated data (i.e., the global data at their native resolution). One may in principle use MERRA-2 as a boundary condition to drive regional simulations with models such as WRF to obtain precipitation maps at equally high spatial resolution, with the added benefit of being able to produce predictions compliant with physical laws. Such a \textit{dynamical downscaling} approach is, however, considerably more involved, as it requires substantial computational and storage resources, as well as considerable expertise to set up WRF properly.
As such, our proposed \textit{statistical downscaling} approach is considerably faster and easier to implement without specialized computational resources. The proposed method of adjusting a simulation via ground observations can also be seen as a bias correction approach, i.e., a method to correct simulations (see, e.g., \cite{yua19, kim15} and \cite{ho12,haw13} for a general review). While a large body of literature in geoscience focuses on bias correction as a means to adjust the first \citep{hemer,lian} and possibly the second moment \citep{teu,li} of the marginal distribution, such an approach can also be used to adjust non-Gaussian features, similarly to other recent efforts \citep{pia12,vra14}. The proposed statistical model is scalable to future reanalysis data products with even higher spatial resolution, owing to the finite volume approximation of the SPDE generating the spatial model. Even more realistic downscaled patterns could be generated if additional physical variables such as temperature and humidity were considered as covariates. An incorporation of covariates could be performed either in the latent Gaussian model in \eqref{eqn:latent2}, as suggested in this work, or as an additional input of the scalar or vector fields which dictate the deformation of the SPDE model. This could be implemented assuming either a linear contribution, or a non-linear one by means of neural networks \citep{hu22}. In principle, multiple variables could be modeled jointly. However, this would considerably increase both the methodological challenge and the computational overhead, as fast, flexible, multivariate and non-Gaussian global models are currently an active area of investigation \citep{gen15}. \section*{Acknowledgements} This research is supported by grant NSF DMS 2014166. \bibliographystyle{plainnat}
\section{Introduction} The birth of massive stars is inextricably linked to that of cluster formation, as most massive stars form at the centre of dense molecular clouds. The structure of such regions is dense and filamentary, far removed from simple spherical models. In \citet{Smith12a}, hereafter Paper I, we investigated the line profiles that would be observed from collapsing cores embedded within filaments from simulations of clustered star formation. We found that the dense filaments frequently obscured the collapsing core, and a blue asymmetric collapse profile \citep{Zhou92,Walker94,Myers96} was observed in less than 50\% of sightlines. In the current paper we extend this analysis to study the line profiles produced during the initial collapse of massive star forming regions (MSFRs). There have been numerous observational studies of line profiles from MSFRs \citep[e.g.][]{Wu03,Fuller05,Wu07,Sun09,Chen10,Csengeri11}. Generally such studies have focussed on finding infall candidates by identifying blue asymmetries in their optically thick lines (see Paper I or the review by \citealt{Evans99}). A common way of classifying such surveys is using the blue excess seen across the survey, i.e., the number of blue-biased profiles minus the number of red-biased profiles, divided by the total number of observations \citep{Mardones97}. Typical values for such surveys are around 10--30\%. This finding could be interpreted in several ways. First, the majority of MSFRs could be quasi-equilibrium objects containing static massive pre-stellar cores that have not yet collapsed \citep{Tan06}. Alternatively, most massive star forming regions could be collapsing on their dynamical timescale \citep{Elmegreen00,Vazquez-Semadeni05}, but the observational signature produced differs from that predicted by simple spherical models.
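The blue excess statistic defined above is straightforward to compute; a sketch with a toy set of profile classifications (not actual survey data) is:

```python
def blue_excess(classifications):
    """Blue excess E = (N_blue - N_red) / N_total (Mardones et al. 1997)."""
    n_blue = sum(1 for c in classifications if c == "blue")
    n_red = sum(1 for c in classifications if c == "red")
    return (n_blue - n_red) / len(classifications)

# Toy survey: each source classified by the asymmetry of its optically thick
# line profile ("none" marks profiles with no significant asymmetry).
survey = ["blue", "blue", "blue", "red", "none",
          "blue", "red", "none", "none", "blue"]
print(blue_excess(survey))  # (5 - 2) / 10 = 0.3
```

Note that sources with no significant asymmetry still count in the denominator, which is why surveys with many symmetric profiles report modest excess values even if blue profiles dominate among the asymmetric ones.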
There are alternative models of massive star formation in which massive stars are formed from well-defined massive pre-stellar cores supported by supersonic turbulence \citep{McKee03}; however, predictions of the observed line profiles resulting from such theories have not yet been calculated. In this paper we use the simulations presented in \citet{Smith09b}, hereafter S09, in which a cluster of low-mass stars forms around massive ones, to investigate the observational signatures of a collapsing MSFR. We consider the case where the MSFR has already formed a protostar at its centre which is growing rapidly in mass through accretion. This simulation follows the competitive accretion formalism \citep{Bonnell03,Bonnell04,Bonnell06} in which massive stars are formed at the centre of a collapsing cluster, where the cluster potential funnels gas towards the massive protostar (see also \citealt{Klessen00,Girichidis11}). These simulations lack outflows, which are known to affect HCO$^+$\xspace line profiles \citep{Rawlings04}. For this reason, we focus our modelling on the earliest stages of the collapse, before outflows become significant. Without first considering the simple dynamical case without outflows, it would be impossible to disentangle the two effects. Despite this, we will compare our results to observational studies, which may have outflows present, in order to understand to what extent the dynamics of a complex clustered star forming region can explain observed line profiles. Our paper is structured as follows. In Section \ref{method} we outline our method, and in Section \ref{results} we present our results. Section \ref{results} is divided into subsections, each of which identifies a key feature of the simulated observations and then directly compares the profiles to observations. We discuss some qualifications and compare to low-mass star formation in Sections \ref{qual} and \ref{comparison}. Section \ref{conclusions} provides a summary.
\section{Method}\label{method} As in Paper I, we use the radiative transfer code RADMC-3D \citep{Dullemond12} to calculate the line profiles from star forming regions extracted from the simulations of S09. These simulations followed the evolution of a $10^4$ \,M$_{\odot}$\xspace cylindrical giant molecular cloud as it undergoes star formation, using the smoothed particle hydrodynamics (SPH) method \citep{Monaghan92}. Several massive stars were formed at the centres of clusters. Our line transfer utilises the large velocity gradient approximation proposed by \citet{Sobolev57}, with the inclusion of ``Doppler catching'' to interpolate under-resolved velocities, as implemented in RADMC-3D by \citet{Shetty11,Shetty11b}. In this section we highlight only the differences in our method from the previous paper; for a full description of our methods we refer to Paper I. The modelled regions are chosen from \citet{Smith09b} by finding the most massive sink particles and then selecting a 0.4 pc radius region around them. Sink particles represent sites of star formation and are formed from high density gravitationally bound gas in the simulation \citep{Bate95}. The sink particles have a radius of 20 AU and it is possible that within this a multiple system is formed; however, we shall use the terms sink and protostar synonymously in what follows. We identify two independent regions, one of which forms a very massive sink particle ($\sim 30$ \,M$_{\odot}$\xspace) and the other a less massive sink ($\sim 10$ \,M$_{\odot}$\xspace). One of our chosen species, HCO$^+$\xspace, has been shown to be present in outflows \citep{Rawlings04,Paron12}. As we lack this physics in our simulation, we concentrate on the early evolution of the MSFRs before outflows become dominant. We discuss potential implications of outflows in Section \ref{outflows}.
The regions are considered at two epochs during their evolution, when the central sources are around 0.5 \,M$_{\odot}$\xspace and 5.0 \,M$_{\odot}$\xspace, corresponding to early and more evolved phases. The most massive sink reaches a final mass of around 30 \,M$_{\odot}$\xspace when the simulation is terminated. Table \ref{ICs} outlines the properties of the two star forming regions. In addition to the central massive protostar, multiple additional sites of star formation are contained within the regions. \begin{table} \caption{The mass contained within each region when the central sink mass is of order 0.5 \,M$_{\odot}$\xspace and 5.0 \,M$_{\odot}$\xspace.} \centering \begin{tabular}{c c c c c } \hline \hline Region & Central Sink & Gas Mass & Total number & Final Sink \\ & Mass [\,M$_{\odot}$\xspace] & [\,M$_{\odot}$\xspace] & of sinks & Mass [\,M$_{\odot}$\xspace] \\ \hline A & 0.5 & 370 & 36 & 29.2 \\ B & 0.7 & 257 & 11 & 10.7 \\ \hline A & 5.4 & 354 & 73 & 29.2 \\ B & 5.0 & 260 & 18 & 10.7 \\ \hline \hline \end{tabular} \tablecomments{The gas mass excludes the mass in sinks. The final sink mass corresponds to the mass of the central sink when the simulation is terminated.} \label{ICs} \end{table} In contrast to Paper I, where we considered HCN as the optically thick tracer, for the massive star forming regions (MSFRs) we use HCO$^+$\xspace due to the greater prevalence of observational studies of massive star formation using this species \citep[e.g.][]{Fuller05,Csengeri11}. We adopt a constant abundance of $A_{\mathrm{HCO}^+}=5.0\times 10^{-9}$ relative to the H$_2$ number density \citep{Aikawa05}. For the optically thin tracer we adopt an N$_2$H$^+$\xspace abundance $A_{\mathrm{N2H}^+}=10^{-10}$ relative to H$_2$ \citep{Aikawa05}, as in Paper I. We focus our analysis on the isolated hyperfine component (F101-012) at 93176.2527 MHz, which is displaced by 2.297 MHz from the neighbouring hyperfine lines \citep{Keto10}.
Initially, we consider only the (1-0) transitions, but later in the paper we shall compare to higher-order optically thick lines that trace higher densities due to their larger critical densities. These two species are easily observable with the Atacama Large Millimeter Array (ALMA). There is some suggestion that in real molecular clouds HCO$^+$\xspace and N$_2$H$^+$\xspace abundances may be anti-correlated, as HCO$^+$\xspace is depleted onto dust grains at low temperatures \citep[e.g.][]{Jorgensen04}. However, in massive star forming regions the temperatures around the central protostar are of order 20 K or higher, so freeze-out should not be a major factor. In the simulated cloud, gas temperatures are calculated using a barotropic equation of state with an additional heating term from sinks based on the YSO models of \citet{Robitaille06}. The method is described in full in \citet{Smith09b}. Figure \ref{temp} illustrates the effect of this heating along the simulated beam in the extreme case when the central sink in Region A has a mass of 5.0 \,M$_{\odot}$\xspace. Along the line of sight the temperature is highest at the peak density, where the sink particle heats the gas, but also increases towards the outside of the region, where the gas is diffuse and would be heated by external radiation. As emission depends on both density and temperature, the line profiles are dominated by emission from the central source. All the line profiles are integrated over a Gaussian beam. For our fiducial case we consider a beam with a full-width-half-maximum (fwhm) of 0.06 pc to allow for comparison with the observational study of a high-mass starless gas clump by \citet{Beuther13}; however, we shall consider the effect of altering the beam size later in the paper.
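The beam integration step can be sketched as a Gaussian-weighted average of a map. The 0.06 pc fwhm below matches the fiducial beam; the emission field itself is a toy stand-in, not simulation output.

```python
import numpy as np

def beam_average(x, y, field, fwhm, center=(0.0, 0.0)):
    """Average a 2D map over a Gaussian beam with the given FWHM."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> Gaussian sigma
    r2 = (x - center[0]) ** 2 + (y - center[1]) ** 2
    w = np.exp(-0.5 * r2 / sigma ** 2)
    return float(np.sum(w * field) / np.sum(w))

# Toy emission map on a 0.8 pc box: a centrally peaked field around the sink.
grid = np.linspace(-0.4, 0.4, 201)  # pc
xx, yy = np.meshgrid(grid, grid)
emission = np.exp(-(xx ** 2 + yy ** 2) / 0.02)

avg = beam_average(xx, yy, emission, fwhm=0.06)
print(f"beam-averaged emission: {avg:.3f}")
```

Because the weights are normalised, a uniform map averages to itself, while a centrally peaked map averages to a value dominated by the emission inside roughly one beam width of the pointing centre.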
\begin{figure} \begin{center} \includegraphics[width=3in]{./fig1_Core24H_inc90_phi90_temp_beam.eps} \caption{The gas temperature along a line of sight directly through the central source when it has a mass of 5.0 \,M$_{\odot}$\xspace, integrated over a 0.06 pc fwhm beam. The temperature rises at the centre due to heating from the sink particle representing the massive protostar.} \label{temp} \end{center} \end{figure} \section{Results}\label{results} \subsection{Velocity Fields}\label{velocities} \subsubsection{Simulation} For a full analysis of the dynamics of the MSFRs see S09. Here, we provide an overview of the underlying velocity fields in the simulated MSFRs. Figure \ref{velfield} shows the density and velocity in two slices through Regions A and B. The fields are normalised so that the most massive protostar has a position and velocity of zero. The main feature of the velocity fields is a large scale infall motion towards the central object. As discussed in S09, it is this large scale collapse of gas towards the central object that allows a massive star to form. However, there are a few additional features to note. In Region A, the massive star forms at the centre of a network of converging filaments, as predicted by \citet{Myers11}, and the velocity vectors follow the contours of the filament towards the centre. In Region B there is evidence of the original velocity field of converging flows which formed the filaments in which the MSFR is embedded. The velocity field in the MSFR is therefore a combination of large scale turbulent motions and gravitational collapse. This leads to a coherent velocity field over the box, in strong contrast to the lower mass star formation studied in Paper I, where the velocity field is extremely inhomogeneous. Additionally, it should be noted that, unlike in conventional models of spherically collapsing cores, there is no envelope of static gas surrounding the region.
\begin{figure*} \begin{center} \begin{tabular}{c c} \includegraphics[width=3.5in]{./fig2a_Vel_98_xy_fill.eps} \includegraphics[width=3.5in]{./fig2b_Vel_98_xz_fill.eps}\\ \includegraphics[width=3.5in]{./fig2c_Vel_24C_xy_fill.eps} \includegraphics[width=3.5in]{./fig2d_Vel_24C_xz_fill.eps}\\ \end{tabular} \caption{The density \textit{(grayscale)} and velocity field \textit{(vectors)} of two slices through the central source in Regions A and B. In Region A there is a large-scale velocity gradient towards the central massive star. In Region B this is also true of the majority of the dense gas, but there are also indications of the background velocity flows that formed the dense filaments in which the massive star is embedded.} \label{velfield} \end{center} \end{figure*} However, while the velocity field may be relatively coherent, the density field is much less so. As previously noted, the MSFR forms at the convergence point of dense filaments and contains multiple sites of star formation, each of which is associated with a gas over-density. Figure \ref{beam_density} shows the density and velocity field along the x-axis of Region A. There are multiple peaks in the density field, each of which contributes separately to the overall emission. Local maxima in the velocity field are associated with sites of small-scale collapse; however, these are superimposed on a larger supersonic flow of a few \,kms$^{-1}$\xspace towards the core centre from each direction. This velocity flow abruptly changes from positive to negative over the centre of the region, where the massive star forms, with a gradient of more than 2\,kms$^{-1}$\xspace occurring in less than 0.1 pc. \begin{figure} \begin{center} \includegraphics[width=3in]{./fig3_Core98_inc90_phi90_props_beam.eps} \caption{The gas density and velocity along a line of sight directly along the x-axis of Region A. The quantities are averaged over a Gaussian beam of 0.06 pc fwhm.
There is a large velocity gradient over the point where the massive star forms, and the density field shows considerable substructure.} \label{beam_density} \end{center} \end{figure} \subsubsection{Observations} Evidence for large scale infall in massive star formation has been presented by a number of authors. In particular, \citet{Motte07} carried out a survey of Cygnus X searching for massive pre-stellar cores. The authors did not find any truly pre-stellar cores but instead observed large scale supersonic flows directed towards the centres of suspected young MSFRs. Further, \citet{Peretto06,Peretto07} found that the massive cluster forming clump NGC 2264-C was collapsing along its axis in accordance with its dynamical timescale, and therefore channelling mass towards the Class 0 object at its centre. The violent collapse of gas to form high mass protostellar objects was also proposed by \citet{Beuther02} to explain their observed line-widths and multiple velocity components. Another similarity between these simulations and observations is the prevalence of filamentary structure. A recent study by \citet{Peretto12} of the Pipe nebula found that the only indication of star cluster formation occurred at a point of convergence of multiple filaments. Similarly, \citet{Schneider12} found cluster formation at the junction of filaments in the Rosette molecular cloud. \begin{figure*} \begin{center} \begin{tabular}{c c} \includegraphics[width=3.6in]{./fig4a_combined_98pe_hco.eps} \includegraphics[width=3.6in]{./fig4b_combined_98pl_hco.eps}\\ \includegraphics[width=3.6in]{./fig4c_combined_24Cpe_hco.eps} \includegraphics[width=3.6in]{./fig4d_combined_24Cpl_hco.eps}\\ \end{tabular} \caption{The HCO$^+$\xspace (1-0) \textit{black} and N$_2$H$^+$\xspace (1-0) \textit{red} line profiles from Regions A \textit{top} and B \textit{bottom}. The N$_2$H$^+$\xspace (1-0) is multiplied by a factor of 4 in order to be visible.
The central colour image shows the column density in the plane in which the sight-lines pass through the core, and the position at which the outer panels touch the central image indicates the orientation of the sightline. The line profiles are calculated for a 0.06 pc beam centred directly on the embedded core. The central box has a physical size of 0.8 pc. In the left-hand panels the declination angle has a constant value of $\phi=0^\circ$, and in the right panels the inclination has a constant value of $inc = 90^\circ$. Note that the N$_2$H$^+$\xspace lines are symmetric through $180^\circ$ as they are optically thin.} \label{profiles} \end{center} \end{figure*} \subsection{Optically thin profiles}\label{thin} \subsubsection{Simulation} Figure \ref{profiles} shows the modelled line profiles along various viewing angles through the central source in Regions A and B when the sink has a mass of $\sim 0.5$ \,M$_{\odot}$\xspace. In the left-hand panels of the figures the model is fixed at $\phi=0^\circ$, and we view the model at $45^\circ$ intervals in inclination. In the right-hand panels the inclination is fixed at $inc=90^\circ$, and we view the model at $45^\circ$ intervals in rotation. We sample a total of 14 unique lines of sight through each core centre. The red line shows the isolated N$_2$H$^+$\xspace 1-0 hyperfine line multiplied by a factor of four so that it is visible alongside the HCO$^+$\xspace (1-0) line (black). The most striking feature of the optically thin lines is that all the line profiles exhibit non-Gaussian features. In Section \ref{velocities} we demonstrated that the velocity field of the region is dominated by large scale infall motions, but the density field contains multiple peaks. Figure \ref{gausscomp} shows the N$_2$H$^+$\xspace line profile resulting from integrating the emission along the line of sight shown in Figure \ref{beam_density}, with a two-component Gaussian fit shown in red.
There are major components at velocities +1.5 \,kms$^{-1}$\xspace and 0 \,kms$^{-1}$\xspace, with the latter containing additional substructure. An examination of Figure \ref{beam_density} shows that most of the material on one side of the central protostar is collapsing towards the centre at a roughly constant velocity of +1.5 \,kms$^{-1}$\xspace and that the gas traveling at this velocity contains a number of dense cores. The aggregate emission from these cores produces the line peak at this velocity. The velocity field of the region is normalised such that the central protostar has a velocity of zero, resulting in the second Gaussian component peaking at this velocity. Figure \ref{beam_density} shows that the density field around the core is not smooth, as there are dense knots of gas at either side of the core, each with a slightly different velocity. This results in the two maxima seen within this component of the line profile. Further examination of the components of the N$_2$H$^+$\xspace lines in Figure \ref{profiles} indicates that each peak in the optically thin lines comes from a knot of dense gas in the MSFR. In our simulations, MSFRs are also cluster forming regions \citep{Bonnell04,Bonnell06,Smith09b}. Such regions contain many cores of dense gas, all of which emit strongly in N$_2$H$^+$\xspace (1-0) at a velocity determined by the local speed at which gas is collapsing towards the centre. This is not true just of the simulations used here; \citet{Krumholz07}, \citet{Krumholz09} and \citet{Girichidis12a} also find multiple cores of dense gas in MSFRs. We note that multiple components in a line profile can also be attributed to optical depth effects, and while N$_2$H$^+$\xspace is generally optically thin, it may become optically thick in the densest regions of the core.
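The attribution of each optically thin peak to a separate knot of dense gas can be illustrated with a simple peak search on a line profile. The sketch below builds a synthetic profile with components at 0 and +1.5 \,kms$^{-1}$\xspace; the amplitudes, widths and the `find_components` helper are illustrative and not part of the actual analysis:

```python
import numpy as np

def find_components(v, T, frac=0.3):
    """Locate velocity components in a line profile as local maxima above
    `frac` times the global peak -- a crude stand-in for a formal
    multi-component Gaussian decomposition."""
    thresh = frac * T.max()
    return [v[i] for i in range(1, len(T) - 1)
            if T[i] > T[i - 1] and T[i] >= T[i + 1] and T[i] > thresh]

# Synthetic optically thin profile with knots of gas at 0 and +1.5 km/s
# (values loosely mimic the two-component fit discussed above).
v = np.linspace(-4.0, 4.0, 801)
T = (1.0 * np.exp(-0.5 * (v / 0.4) ** 2)
     + 0.6 * np.exp(-0.5 * ((v - 1.5) / 0.3) ** 2))
components = find_components(v, T)   # two components, near 0 and +1.5
```

Each dense core along the line of sight contributes a peak at the velocity of its local collapse flow, so the number of recovered components traces the number of knots rather than any optical depth effect.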
In Figure \ref{profiles} it is immediately apparent that optical depth effects are not causing the multiple components in the lines, as the N$_2$H$^+$\xspace profiles are symmetric through $180^\circ$, an impossible outcome if the lines were optically thick. Observationally, either an estimation of the optical depth or complementary observations of another optically thin species would be required to confirm that multiple components in a line profile arise from substructure. \begin{figure} \begin{center} \includegraphics[width=3in]{./fig5_thin_gausscomponent} \caption{The N$_2$H$^+$\xspace(1-0) F(2-1) line profile from Region A when viewed at $i=90^\circ$, $\phi=90^\circ$. The red line shows a simple two-component Gaussian fit to the emission. The line has multiple velocity components due to a number of dense cores along the line of sight.} \label{gausscomp} \end{center} \end{figure} Another interesting feature of the optically thin emission from our MSFRs is that the line peak does not always correspond to the velocity of the massive protostar. Figure \ref{velpeak} shows a histogram corresponding to the N$_2$H$^+$\xspace line peak for all the viewing angles in Figure \ref{profiles}. The peak is frequently displaced by more than 0.5 \,kms$^{-1}$\xspace from the velocity of the massive protostar. In all cases the observed line-widths are super-thermal, showing that bulk motions dominate the dynamics of MSFRs. Expected line-widths will be discussed more in Section \ref{time}. \begin{figure} \begin{center} \includegraphics[width=3in]{./fig6_Thickvsthin.eps} \caption{The velocity corresponding to the peak HCO$^+$\xspace(1-0) \textit{(solid)} and N$_2$H$^+$\xspace (1-0) F(2-1) \textit{(dashed)} emission from all simulated lines of sight.
The distribution of optically thick line peaks has a blue excess relative to the optically thin peaks.} \label{velpeak} \end{center} \end{figure} \subsubsection{Observations} While such non-Gaussianity is not observed in low mass star formation, recent studies of high mass star formation have detected multiple components from high resolution interferometer observations of cloud centres. For example, \citet{Beuther13} present an analysis of the starless prospective MSFR region IRDC18310-4 in which multiple components are clearly observed in N$_2$H$^+$\xspace (1-0) lines at the location of some of the 870 $\micron$ peaks. \citet{Csengeri11b} carried out a high resolution dynamical study of the massive clump DR21(OH) and found that while single dish N$_2$H$^+$\xspace (1-0) observations reveal a single component, the line splits into multiple components when observed with an interferometer. This effect was particularly clear when observing the massive dense cores in the region. Multiple N$_2$H$^+$\xspace (1-0) peaks can also be seen in some cases of \citet{Fuller05}, although in this case the regions are more poorly resolved compared to those studied here, an issue we will discuss in more detail later. In our simulations, the optically thin lines have multiple components due to the presence of several cores of dense gas along the line of sight. Observational studies also find many dense cores in MSFRs \citep{Bontemps10,Longmore10,Rodon12}, suggesting that non-Gaussian optically thin lines should be a feature of massive star formation. \subsection{Optically thick profiles}\label{thick} \subsubsection{Simulation} Figure \ref{profiles} shows the observed line profiles at various viewing angles through the central source in Regions A and B when it has a mass of around 0.5 \,M$_{\odot}$\xspace. The optically thick HCO$^+$\xspace (1-0) line is shown by the black line. The majority of the lines have a greater peak emission on their blue side than on the red.
As was shown by \citet{Zhou92} and \citet{Myers96}, a double-peaked profile with an excess of emission on the blue side is an indication of collapse (see Paper 1 for a detailed discussion). The profiles presented here do not always show this classical two-peaked signature, but there is usually brighter emission on the blue side. The optically thick line profiles characteristically resemble a sawtooth, with a sharp rise in emission on the blue side followed by a more gradual drop off and, in some cases, a red shoulder. This signature is qualitatively rather insensitive to viewing angle, although in quantitative terms there are noticeable variations. In Region A, collapse occurs over a very wide volume, and consequently there is a peak towards the blue side in most profiles. In Region B, which forms a less massive star, the infall profiles show a greater degree of variability, reflecting the fact that the inward motions are less pronounced. Still, the majority of lines show a blue excess. A more quantitative estimate of the asymmetry of the line is usually given by the normalised velocity difference $\delta V$ between the optically thick and thin components \citep{Mardones97}. However, as discussed in Section \ref{thin}, since the optically thin lines cannot be consistently fit by a single component, this analysis is technically no longer valid. Nonetheless, a blue excess is seen in the optically thick lines. Figure \ref{velpeak} shows the histogram of the peak velocities in the HCO$^+$\xspace (1-0) lines. They are clearly shifted to the blue side relative to the N$_2$H$^+$\xspace (1-0) peak velocities. In 19 out of 28 cases the peak emission in HCO$^+$\xspace (1-0) occurs towards the blueward side of the N$_2$H$^+$\xspace (1-0) peak. Table \ref{offsets} summarises the offsets in velocity between the optically thin and thick emission peaks. Table \ref{offsets} utilises two samples.
Firstly, the full sample used to produce Figure \ref{velpeak}, which includes cases where the N$_2$H$^+$\xspace (1-0) emission had multiple components. Secondly, the subset of the sightlines that had only a single component in N$_2$H$^+$\xspace (1-0). In order to determine whether a sightline should be included, we ignored small variations in the line profile that would be hidden by noise in a true observation, but excluded cases where there was clearly more than one major component. In this case the rest velocity of the N$_2$H$^+$\xspace emission was determined by fitting a Gaussian to the profile. When compared to the typical widths of the line (several \,kms$^{-1}$\xspace), the offsets are not very large. \citet{Mardones97} required that the offset between the optically thick and thin emission be more than 0.25 times the linewidth of the optically thin component to certify a line as having a blue excess. \citet{Fuller05} found N$_2$H$^+$\xspace linewidths of order 2 \,kms$^{-1}$\xspace in their sample, meaning that we should require an offset of more than $-0.5$ \,kms$^{-1}$\xspace to classify a core as having a blue excess. Only 11 out of 28 of the HCO$^+$\xspace (1-0) emission lines have an offset of more than $-0.5$ \,kms$^{-1}$\xspace, indicating that more than half of our line profiles would not be considered infall candidates if this stricter criterion were adopted. When we exclude line profiles with multiple N$_2$H$^+$\xspace components, this falls to 6 out of 18 remaining profiles. Our sample has a blue excess of only $E=(N_B-N_R)/N_T=(11-1)/28=0.36$, where $N_B$ and $N_R$ are the numbers of blue and red profiles, and $N_T$ is the total number of profiles. The excess falls to 0.18 if we require the N$_2$H$^+$\xspace to have a single component. This small excess occurs despite the fact that both regions are collapsing, so an excess of $E=1.0$ would be expected when considering all viewing angles.
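The blue excess statistic and the offset-based classification above can be sketched in a few lines. The offset values below are illustrative placeholders chosen only to match the counts in Table \ref{offsets} (11 profiles below $-0.5$ \,kms$^{-1}$\xspace, 1 above $+0.5$ \,kms$^{-1}$\xspace, 28 in total); they are not the actual measured offsets:

```python
def blue_excess(offsets, dv=0.5):
    """Blue excess E = (N_B - N_R) / N_T for thick-minus-thin velocity
    offsets in km/s. A profile counts as blue (red) when its offset is
    below -dv (above +dv); dv = 0.5 km/s corresponds to the 0.25x
    linewidth criterion for a ~2 km/s optically thin line."""
    n_blue = sum(1 for d in offsets if d < -dv)
    n_red = sum(1 for d in offsets if d > dv)
    return (n_blue - n_red) / len(offsets)

# Placeholder offsets reproducing the full-sample counts of Table `offsets`.
offsets = [-0.6] * 11 + [0.6] * 1 + [-0.2] * 8 + [0.2] * 8
E = blue_excess(offsets)   # (11 - 1) / 28 ~ 0.36
```

With a strict threshold the statistic is conservative: sightlines whose blue shift is real but smaller than 0.5 \,kms$^{-1}$\xspace do not contribute to $N_B$, which is why even a fully collapsing region can yield $E$ well below 1.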
\begin{table} \caption{The number of cases where the offset in \,kms$^{-1}$\xspace between the optically thick and thin (1-0) emission is of a given magnitude. We consider two samples. First, the full sample, where all sightlines are included. In this case the offset is between the location of the peak optically thick and thin emission. Secondly, we consider only those sightlines where the optically thin emission has no clear secondary component. In this case the offset is between the location of the peak optically thick emission and the central velocity of a Gaussian fit to the optically thin emission.} \centering \begin{tabular}{ l c c c c c c} \hline \hline Offset & $<-1.0$ & $<-0.5$ & $<0.0$ &$>0.0$ & $>0.5$ & $>1.0$ \\ \hline All & 3 & 11 & 19 & 9 & 1 & 1\\ \hline Single & 1 & 6 & 15 & 3 & 1 &1\\ \hline \hline \end{tabular} \tablecomments{Our sample contains 28 lines in total.} \label{offsets} \end{table} Moreover, the line profiles do not always exhibit a central dip due to self-absorption. An inspection of Figure \ref{beam_density} reveals why this is the case. This is viewed along the x-axis, which corresponds to a viewing angle inclination and declination of $i=90$, $\phi=90$ in our nomenclature. A blue asymmetric double-peaked line profile relies on a collapsing region having two points at a given velocity, one at the centre of the core and the other at the outside. In Region A, the outer extents of the MSFR are flowing inward with supersonic velocities of $\pm1.5$ \,kms$^{-1}$\xspace. Consequently, at velocities of $\pm 1$ \,kms$^{-1}$\xspace there is no self-absorption from gas in the outer regions, and so all the emission from the centre of the region where the massive star forms is visible. This probably also accounts for the modest displacement between the optically thick and thin peaks. When compared to monolithic collapse with a static envelope, infall signatures should be much more difficult to detect in a chaotic cluster formation scenario.
\subsubsection{Observations} Detecting infall towards MSFRs has been the focus of various observational studies \citep[e.g.][]{Wu03,Sun09,Chen10}. We focus here on just two such studies: \citet{Fuller05}, which is a large scale search for infall motions, and \citet{Csengeri11}, which is amongst the most highly resolved observations to date. \citet{Fuller05} carried out a molecular line survey towards 77 candidate high mass protostars and identified 21 infall candidates, showing blue asymmetry in their HCO$^+$\xspace (1-0) lines, and 11 red profiles. The blue excess was calculated using the method of \citet{Mardones97}. The simulated observations presented here also have a large blue excess, but a greater fraction of the lines are blue and there are fewer red asymmetries. The latter is easily explained by the lack of outflows in our simulations, since outflows would increase the number of red asymmetries. Another factor is that \citet{Fuller05} used a larger beam than we consider in these simulations. We will show in Section \ref{width} that this reduces the strength of the blue asymmetry. A third factor is that the blue asymmetry we observe is typically quite small in magnitude, which raises the possibility that some of the \citet{Fuller05} cores with only a slight blue excess may also be collapsing. \citet{Csengeri11} carried out a survey of young dense cores in Cygnus X that showed the cores were dynamically evolving objects. The authors fit their observed line profiles to a model of a collapsing spherical core to estimate key properties. This model predicts that each core should have a central absorption dip. However, the observed line profiles frequently did not show this feature, suggestive of large scale infall without a static envelope, as found in our simulated models.
\subsection{Time Evolution}\label{time} \subsubsection{Simulation} \begin{figure} \begin{center} \includegraphics[width=3.5in]{./fig7_2panel_TIME_98_i0p0.eps} \caption{The evolution of a line profile as the central object grows in mass in Region A. The left panel shows a central mass of $0.5$ \,M$_{\odot}$\xspace and the right a central mass of $5.0$ \,M$_{\odot}$\xspace. The black line shows the HCO$^+$\xspace (1-0) emission and the red line the N$_2$H$^+$\xspace (1-0) F(2-1) emission multiplied by a factor of four so that it is visible in the plot. Over time the line gets brighter and wider.} \label{timeA} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.5in]{./fig8_2panel_TIME_24_i0p0.eps} \caption{As in Figure \ref{timeA} but for Region B. The rise in N$_2$H$^+$\xspace emission at the right edges of the plots is the neighbouring hyperfine line.} \label{timeB} \end{center} \end{figure} A further consideration is the evolution of such massive cores, as obviously not all regions will be observed when the central protostar has a mass of $0.5$ \,M$_{\odot}$\xspace as modelled above. Figures \ref{timeA} and \ref{timeB} show the evolution of the line profile observed at $i=0$, $\phi=0$ in Regions A and B when the central protostar has a mass of 0.5 \,M$_{\odot}$\xspace and 5.0 \,M$_{\odot}$\xspace. In both cases the HCO$^+$\xspace (1-0) line increases in brightness and becomes more strongly blue skewed. The N$_2$H$^+$\xspace line still exhibits non-Gaussianity. Table \ref{timetable} shows the mean intensity peak of the lines from all viewing angles when the central protostar has mass 0.5 \,M$_{\odot}$\xspace and 5.0 \,M$_{\odot}$\xspace. The peak intensity of HCO$^+$\xspace (1-0) increases strongly in both cases, but the N$_2$H$^+$\xspace intensity is largely unchanged. 
Since the lines are not single Gaussians, we cannot use such a fit to determine their width; instead, we find for each line the velocity range within which 95\% of the emission is contained. In both the HCO$^+$\xspace and N$_2$H$^+$\xspace case the linewidth increases as the region evolves. Table \ref{offsets_time}, however, shows that the blue excess of the sample is largely unchanged. In all cases the lines have widths in excess of the thermal scale ($\sim 0.2$ \,kms$^{-1}$\xspace for 10 K gas and $\sim 0.4$ \,kms$^{-1}$\xspace for 40 K gas) as the rapid (several \,kms$^{-1}$\xspace) collapse dominates the dynamics. In all snapshots of our simulation the gas is strongly collapsing along converging filaments. Consequently, we expect the effect of dynamical collapse on the line profiles to be similar throughout the entirety of the early evolution of the protostar until an HII region develops \citep[e.g.][]{Keto07,Peters12}. \begin{table} \caption{Linewidths and peak line intensities} \centering \begin{tabular}{ l l l l c c } \hline \hline Region & Species & 0.5 \,M$_{\odot}$\xspace & & 5.0 \,M$_{\odot}$\xspace & \\ & & I$_{peak}$ [K] & v$_{95\%}$ [km/s] & I$_{peak}$ & v$_{95\%}$ \\ \hline A & N$_2$H$^+$ & 0.57 & 3.31 & 0.36 & 4.19 \\ B & N$_2$H$^+$ & 0.36 & 3.14 & 0.32 & 4.28 \\ \hline A & HCO$^+$ & 6.27 & 3.95 & 8.60 & 5.05 \\ B & HCO$^+$ & 4.63 & 4.39 & 7.59 & 5.43 \\ \hline \end{tabular} \tablecomments{The peak line intensity, and linewidth within which 95\% of the emission is contained for the simulated line profiles.
This is calculated when the central source has a mass of 0.5 \,M$_{\odot}$\xspace and 5.0 \,M$_{\odot}$\xspace, respectively.} \label{timetable} \end{table} \begin{table} \caption{As in Table \ref{offsets} but for when the central source in each region had a mass of 5 \,M$_{\odot}$\xspace.} \centering \begin{tabular}{ l c c c c c c} \hline \hline Offset & $<-1.0$ & $<-0.5$ & $<0.0$ &$>0.0$ & $>0.5$ & $>1.0$ \\ \hline All & 10 & 13 & 21 & 7 & 6 & 2\\ \hline Single & 2 & 8 & 10 & 4 & 1 & 1\\ \hline \hline \end{tabular} \label{offsets_time} \end{table} \subsubsection{Observations} There is a general expectation that, as MSFRs evolve, the observed line intensity should increase due to the increased densities and temperatures, a trend that we confirm here. \citet{Wu07} showed that ultra compact HII region precursors had a lower blue excess than that of ultra compact HII regions themselves. However, it is hard to compare these trends to the observational literature at later times than when the primary has a mass of 5.0 \,M$_{\odot}$\xspace due to contamination from outflows. For example, \citet{Chen10} carried out a survey of MSFRs identified from a survey of extended green objects \citep{Cyganowski08} and found that sources that showed stronger signs of outflows had more red line profiles. \subsection{Higher order transitions} \subsubsection{Simulation} Another point of interest is the line profiles of higher order transitions. In Figure \ref{4-3} we show the HCO$^+$\xspace (4-3), (3-2) and (1-0) line profiles. The lines appear more Gaussian in the higher order transitions, where the critical density is higher. While the 1-0 lines frequently exhibit a red shoulder, this is less pronounced as the transition number increases. Table \ref{transitions} shows the mean $\chi^2$ goodness of fit to a single Gaussian profile for each transition.
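The single-Gaussian goodness-of-fit comparison can be sketched as below. This is a stand-in for the actual fitting procedure: a mean-squared-residual statistic against a Gaussian whose parameters are set from the profile's moments, with both synthetic profiles purely illustrative:

```python
import numpy as np

def gauss(v, a, v0, s):
    """Gaussian profile with amplitude a, centre v0 and dispersion s."""
    return a * np.exp(-0.5 * ((v - v0) / s) ** 2)

def gauss_chi2(v, T):
    """Mean squared residual of a single-Gaussian model whose amplitude,
    centroid and dispersion are taken from the profile's moments -- a
    simple stand-in for a formal least-squares chi^2 fit."""
    a = T.max()
    v0 = np.sum(v * T) / np.sum(T)                      # intensity-weighted centroid
    s = np.sqrt(np.sum(T * (v - v0) ** 2) / np.sum(T))  # second moment
    return np.mean((T - gauss(v, a, v0, s)) ** 2)

v = np.linspace(-5.0, 5.0, 501)
single = gauss(v, 1.0, 0.0, 0.8)                             # Gaussian line
double = gauss(v, 1.0, -1.0, 0.4) + gauss(v, 1.0, 1.0, 0.4)  # two peaks
```

A near-Gaussian profile yields a statistic close to zero, while a double-peaked profile leaves large residuals, mirroring the systematic decrease of $\chi^2$ with transition number in Table \ref{transitions}.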
As the transition increases to higher levels, the $\chi^2$ statistic systematically decreases, implying that a Gaussian is a better representation of the line. The lines also become less blue asymmetric with respect to the N$_2$H$^+$\xspace (1-0) peak position, particularly in the case of the (4-3) line. The (2-1) transition, which was not included in Figure \ref{4-3}, has similar properties to the (1-0) transition. As expected, the peak brightness decreases due to the lower population levels at the higher transitions. The line widths also slightly decrease due to the emission originating mainly from the central regions. The decrease in the asymmetry of the lines at higher transitions occurs because only the central dense regions have sufficient densities and temperatures to excite the molecule. As we discussed in Section \ref{thick}, the central dense regions have a large velocity gradient with no overlapping velocities along the line of sight. In the (1-0) case this results in a large peak with no central dip due to self-absorption. This effect is even stronger in the higher-order transitions, as there is even less chance of self-absorption from the surrounding gas. Consequently, we suggest that the higher-order HCO$^+$\xspace transitions are not any more effective indicators of collapse than the lower-order transitions, and in fact in some cases might even be less sensitive to collapse motions. \begin{figure} \begin{center} \includegraphics[width=3.5in]{./fig9_combined_98_43_pe_hco.eps} \caption{As in Figure \ref{profiles} but showing the HCO$^+$\xspace (1-0) \textit{solid}, (3-2) \textit{dotted} and (4-3) \textit{dashed} transitions for Region A.} \label{4-3} \end{center} \end{figure} \begin{table*} \caption{The mean $\chi^2$ goodness of fit to a Gaussian profile for various transitions of the HCO$^+$\xspace line.
\label{transitions}} \centering \begin{tabular}{c c c c c c c} \hline \hline Region & Transition & Critical Density [cm$^{-3}$] & $\chi^2$ & Blue & I$_{p}$ [K] & v$_{95\%}$ [km/s] \\ \hline A & 1-0 & $1.85\times 10^{5}$ & $1.55 \times 10^{-1}$ & 8 & 6.27 & 3.95\\ A & 2-1 & $1.10\times 10^{6}$ & $4.11 \times 10^{-1}$ & 8 & 4.79 & 3.79\\ A & 3-2 & $3.51\times 10^{6}$ & $1.31 \times 10^{-2}$ & 6 & 3.83 & 3.61\\ A & 4-3 & $9.07\times 10^{6}$ & $4.84 \times 10^{-3}$ & 3 & 2.94 & 3.55\\ \hline B & 1-0 & $1.85\times 10^{5}$ & $1.48 \times 10^{-1}$ & 3 & 4.63 & 4.39\\ B & 2-1 & $1.10\times 10^{6}$ & $2.83 \times 10^{-2}$ & 4 & 3.25 & 3.90\\ B & 3-2 & $3.51\times 10^{6}$ & $7.62 \times 10^{-3}$ & 2 & 2.61 & 3.31\\ B & 4-3 & $9.07\times 10^{6}$ & $2.03 \times 10^{-3}$ & 2 & 1.92 & 3.20 \\ \hline \end{tabular} \tablecomments{Also shown is the number of blue profiles in each case with a blue excess of over 0.5 \,kms$^{-1}$\xspace relative to the peak velocity of the N$_2$H$^+$\xspace (1-0) line. The critical density for LTE is estimated using the relation $n_{H_2}=A_{ul}/K_{ul}$, where $A_{ul}$ is the Einstein A coefficient and $K_{ul}$ is the collisional rate coefficient at an assumed kinetic temperature of 20 K. The width of the profile v$_{95\%}$ is the velocity range within which $95\%$ of the total emission is contained.} \end{table*} \subsubsection{Observations} A number of studies have used the higher transitions of HCO$^+$\xspace to study massive star formation regions and successfully detected infall motions \citep[e.g.][]{Klaassen08,Roberts11,Rygl13}. The survey of \citet{Fuller05} observed the HCO$^+$\xspace (1-0), (3-2) and (4-3) transitions for all their sources. They found that the (1-0) transition had a stronger blue excess, in agreement with our findings here. Morphologically, they also noted that the (3-2) and (4-3) transitions were more likely to have a single peak.
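The critical densities in Table \ref{transitions} follow from the relation $n_{H_2}=A_{ul}/K_{ul}$ quoted there. A minimal numerical check; the coefficient values below are assumptions for HCO$^+$\xspace (1-0), not taken from the table itself:

```python
def critical_density(A_ul, K_ul):
    """Critical density n_crit = A_ul / K_ul in cm^-3, from an Einstein A
    coefficient (s^-1) and a collisional rate coefficient (cm^3 s^-1)."""
    return A_ul / K_ul

# Assumed illustrative coefficients for HCO+ (1-0): A ~ 4.25e-5 s^-1 and
# K ~ 2.3e-10 cm^3 s^-1 near 20 K recover the tabulated ~1.85e5 cm^-3.
n_crit = critical_density(4.25e-5, 2.3e-10)
```

Because $A_{ul}$ rises steeply with transition number while $K_{ul}$ varies slowly, the higher transitions trace only the densest central gas, which is why their profiles are more Gaussian.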
\citet{Fuller05} suggested that this discrepancy might be due to the fact that there is stronger infall in the lower density outer regions of their sources. In our simulated model, the velocity at which the outer regions are collapsing towards the centre is indeed higher in Region A. However, the major cause of the simpler profile is a lack of self-absorption in the dense gas. As outflows contribute the bulk of their emission to the line wings, adding outflows to our models would be unlikely to change this finding. \subsection{Variation with beam size}\label{width} \subsubsection{Simulation} Our analysis so far has assumed a $0.06$ pc fwhm beam throughout, which represents the best-case scenario for current observations. However, it is also useful to consider the case of larger beams, representing distant sources or single dish observations, and smaller beams, representing what might be possible with ALMA. To this end, we also consider the case of a 0.4 pc ($\sim8.25\times 10^4$ AU) beam, equivalent to the half-width of our box, and a 0.01 pc ($\sim2\times 10^3$ AU) beam, which is around the typical Jeans scale in molecular clouds. \begin{figure*} \begin{center} \includegraphics[width=5in]{./fig10_3panel_beamwith_98_i90p90.eps} \caption{The 1-0 transitions from Region A observed at $i=0$, $\phi=0$ with a beam fwhm of $0.4$, $0.06$ and $0.01$ pc. The black line shows the HCO$^+$\xspace line and the red the N$_2$H$^+$\xspace multiplied by a factor of four.} \label{beam} \end{center} \end{figure*} Figure \ref{beam} shows the effect of beam size on the line profile observed from Region A at $i=0$, $\phi=0$. The line brightness increases and the lines get narrower as the beam size decreases. There is also a greater discrepancy between the red and blue peaks. The N$_2$H$^+$\xspace (1-0) line becomes less Gaussian as the fwhm decreases, and very clearly contains multiple components at a fwhm of 0.01 pc.
Table \ref{beamtable} shows the mean peak line intensities and linewidths for all our models as a function of beam size. In all cores and in both line transitions, the intensity increases and the line width decreases, although not to the point where the linewidth becomes thermal. \begin{table} \caption{Peak line intensity and linewidths for three beam sizes.} \centering \begin{tabular}{ l l l c c} \hline \hline Region & Species & fwhm [pc] & I$_{p}$ [K] & v$_{95\%}$ [km/s] \\ \hline A & N$_2$H$^+$ & 0.01 & 1.42 & 2.94 \\ A & N$_2$H$^+$ & 0.06 & 0.57 & 3.31 \\ A & N$_2$H$^+$ & 0.4 & 0.28 & 3.53 \\ \hline B & N$_2$H$^+$ & 0.01 & 1.15 & 2.60 \\ B & N$_2$H$^+$ & 0.06 & 0.36 & 3.14 \\ B & N$_2$H$^+$ & 0.4 & 0.21 & 3.47\\ \hline A & HCO$^+$ & 0.01 & 18.5 & 3.61\\ A & HCO$^+$ & 0.06 & 6.27 & 3.95 \\ A & HCO$^+$ & 0.4 & 4.83 & 4.48 \\ \hline B & HCO$^+$ & 0.01 & 12.99 & 3.62 \\ B & HCO$^+$ & 0.06 & 4.63 & 4.39 \\ B & HCO$^+$ & 0.4 & 3.14 & 4.53 \\ \hline \end{tabular} \tablecomments{The mean peak line intensity, and linewidth within which 95\% of the emission is contained for the simulated (1-0) line profiles calculated using three beam sizes.} \label{beamtable} \end{table} Table \ref{offsets_beam} shows the number of blue and red offsets observed in the regions depending on the beam size. All the regions show blue offsets between the optically thick and thin peaks, with the greatest number occurring in the $0.01$ pc beam. Unfortunately, the increased prevalence of multiple components in N$_2$H$^+$\xspace complicates the interpretation of the narrow beam profiles. In Table \ref{offsets_beam} there is no entry for the single component subset because only 4 out of the 28 profiles satisfied this criterion. The N$_2$H$^+$\xspace emission is typically used to assign the rest velocity of the core, and so without a unique core velocity it would be observationally ambiguous whether these were true infall profiles.
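The v$_{95\%}$ linewidths quoted in Tables \ref{timetable} and \ref{beamtable}, and the thermal scale against which they are compared, can be sketched as follows. The bracketing of the central 95\% of the cumulative flux between its 2.5\% and 97.5\% points is one plausible reading of the definition, and the mean molecular weight of 2.33 is an assumed value for molecular gas:

```python
import numpy as np

def v95_width(v, T):
    """Velocity range containing the central 95% of the integrated line
    emission, taken between the 2.5% and 97.5% points of the cumulative
    flux (one reading of the v_95% definition)."""
    c = np.cumsum(T) / np.sum(T)
    return v[np.searchsorted(c, 0.975)] - v[np.searchsorted(c, 0.025)]

def sound_speed_kms(T_K, mu=2.33):
    """Isothermal sound speed sqrt(k_B T / (mu m_H)) in km/s -- the
    'thermal scale' of ~0.2 km/s at 10 K quoted in the text."""
    kB, mH = 1.380649e-23, 1.6726e-27  # SI units
    return (kB * T_K / (mu * mH)) ** 0.5 / 1e3

# A Gaussian line of dispersion 1 km/s has v_95% of about 3.9 km/s,
# well in excess of the thermal scale at 10-40 K.
v = np.linspace(-10.0, 10.0, 2001)
T = np.exp(-0.5 * v ** 2)
```

Measured against this scale, every tabulated v$_{95\%}$ remains an order of magnitude super-thermal even in the narrowest beam.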
\subsubsection{Observations} Our 0.06 pc fiducial beam size was chosen to match the resolution of \citet{Beuther13}, which uses Plateau de Bure observations. This case represents the typical resolution currently available in the literature. These studies are already revealing features hidden in older, more poorly resolved surveys, such as multiple components in the optically thin lines, as we have discussed above. Future observations with ALMA and the upgraded PdBI (NOEMA) should once again improve our understanding of massive star formation. Current mm-interferometers such as the Plateau de Bure Interferometer and in particular ALMA are able to observe MSFRs using a narrow beam. Therefore, our $0.01$ pc beam case represents a useful test of the underlying model of star formation presented in S09. We would expect to see strong, narrow peaks in HCO$^+$\xspace that are highest on the blue side of the peak N$_2$H$^+$\xspace emission component, and multiple components in the N$_2$H$^+$\xspace lines. Confirmation or refutation of these predictions should provide useful constraints on the dynamics of massive star formation. Again, though, we caution that outflows could change this picture.
\begin{table} \caption{As in Table \ref{offsets} but for different beam sizes.} \centering \begin{tabular}{ l c c c c c c c} \hline \hline Offset & Beam & $<-1.0$ & $<-0.5$ & $<0.0$ &$>0.0$ & $>0.5$ & $>1.0$ \\ \hline All & 0.01 & 6 & 13 & 23 & 5 & 2 & 1\\ All & 0.06 & 3 & 11 & 19 & 9 & 1 & 1\\ All & 0.40 & 3 & 10 & 22 & 6 & 4 & 1 \\ \hline Single & 0.06 & 1 & 6 & 15 & 3 & 1 &1\\ Single & 0.40 & 1 & 7 & 14 & 4 &1 &1\\ \hline \end{tabular} \tablecomments{There is no entry for the 0.01 pc beam in the single component sample, as only four sightlines fulfilled this criterion.} \label{offsets_beam} \end{table} \section{Sources of Uncertainty} \subsection{Assumption of constant abundances}\label{qual} An important uncertainty in our method is the assumption of constant chemical abundances throughout the region. There are several reasons why this may not be true in reality. At low temperatures, HCO$^+$\xspace is frozen onto dust grains, decreasing its abundance \citep{Jorgensen04}. However, this mechanism only operates below temperatures $\sim 20$ K, a temperature which is typically exceeded in the vicinity of the massive core. N$_2$H$^+$\xspace is destroyed by CO and is consequently thought to be more abundant in dense gas \citep{Bergin02}. This depends on the CO freezing out, but in hot dense gas this may not be the case. As these effects can be both positive and negative, in the absence of a full chemical model, our assumption of constant abundances is the simplest available. We use abundance estimates from the work of \citet{Aikawa05}, who applied a detailed chemical model to a collapsing Bonnor-Ebert sphere. It is unclear whether such abundances are applicable to the dense gas in massive star formation regions. Our adopted abundances are at the low end of those found in infrared dark clouds (IRDCs) by \citet{Vasyunina11}, but are still consistent.
To investigate what effect a higher abundance would have on our line profiles, we run our models again with the higher abundances suggested by \citet{Sanhueza12}. For the HCO$^+$\xspace lines there is little difference: the lines are brighter, but the general morphologies remain the same. However, the N$_2$H$^+$\xspace lines at an abundance of $A_{\mathrm{N2H}^+}=10^{-8.8}$ become optically thick. Since this is not what is observed in MSFRs, we conclude that our original abundance of $A_{\mathrm{N2H}^+}=10^{-10}$ gives a more realistic picture. Reasons for this deviation might be that N$_2$H$^+$\xspace is in reality destroyed in hot regions, or that the abundance found by \citet{Sanhueza12} is only valid for the extremely cold and dense environments of IRDCs and not for warmer MSFRs. If the N$_2$H$^+$\xspace is being destroyed by CO in hot regions, this might reduce the number of velocity components seen in the N$_2$H$^+$\xspace (1-0) line. \subsection{Absence of outflows}\label{outflows} In this paper we have concentrated on the signatures of dynamical collapse in MSFRs. However, such regions may also contain outflows \citep[e.g.][]{Beuther02}. These are not included in our original simulations, and hence their effects are not present in the modelled line profiles. Consequently our line profiles represent what would be expected from the dynamical collapse of a cluster alone. In the case of our fiducial protostellar mass of 0.5 \,M$_{\odot}$\xspace, \citet{Seifried11} showed that outflows would be confined to a column extending roughly 200 AU directly above and below the protostar. Since we consider the global motions in an MSFR region 0.8 pc in diameter, this should not significantly affect the resulting line profiles. In Section \ref{time} we consider a protostellar mass of 5.0 \,M$_{\odot}$\xspace, and in this case it is expected that there should be some outflow activity.
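A toy calculation (the reference optical depth is invented for illustration; the real models solve the full radiative transfer) shows why a factor $\sim$16 abundance increase, from $10^{-10}$ to $10^{-8.8}$, can push an optically thin line into the optically thick regime: the line-centre optical depth scales linearly with abundance, while the emergent line-centre intensity saturates as $1-e^{-\tau}$:

```python
import numpy as np

def emergent_fraction(tau):
    # Fraction of the source function seen at line centre for optical depth tau.
    return 1.0 - np.exp(-tau)

# hypothetical reference: a marginally thin line (tau = 0.1) at abundance 1e-10
tau_ref, abund_ref = 0.1, 1e-10

for abund in (1e-10, 10**-8.8):
    tau = tau_ref * abund / abund_ref  # optical depth is linear in abundance
    print(f"abundance={abund:.2e}  tau={tau:.2f}  emergent={emergent_fraction(tau):.2f}")
```

With these toy numbers the higher abundance gives $\tau\approx 1.6$, i.e. an optically thick line, mirroring the behaviour of the full models.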
Nonetheless, even when outflows are present, the results presented here should still be observationally relevant as we are focussing on dense gas tracers. N$_2$H$^+$\xspace line profiles are almost completely unaffected by outflows, as N$_2$H$^+$\xspace is such a strong tracer of cold dense gas. For example, \citet{Beuther05} describes a massive core in IRDC18223-3 that shows features in the line wings of CO and CS that are indicative of outflows; however, such features are entirely absent from the N$_2$H$^+$\xspace lines. Therefore, the modelled N$_2$H$^+$\xspace profiles are unlikely to be affected by outflow activity. In HCO$^+$\xspace, however, our observational comparisons may be less reliable, since most observed regions show line wings broadened by outflows. \citet{Cesaroni97} shows a protostar where the HCO$^+$\xspace line profiles have broad line wings attributed to outflows; however, the bulk of the HCO$^+$\xspace emission is clearly attributed to the dense core surrounding the protostar. This suggests that an additional contribution may be required in the line wings, but our models should provide a good description of the behaviour of the HCO$^+$\xspace line centre. \citet{Rawlings00}, however, showed that HCO$^+$\xspace may be enhanced in the walls of an outflow cavity, and that this may affect line profile morphologies \citep{Rawlings04}, even on small scales. Nonetheless, our models are the first to consider the effects of a complex gas morphology on the line profiles from massive star forming regions. If both collapse and outflows had been considered at once, it would have been difficult to differentiate the two processes. Further work will be required to gain a complete understanding of HCO$^+$\xspace line profiles. \section{Differences between low and high mass star formation}\label{comparison} When we compare our results to those found in Paper I, several differences become apparent between high and low mass star formation.
In Paper I, the optically thin N$_2$H$^+$\xspace component is Gaussian and has a narrow line width, but in the MSFR the N$_2$H$^+$\xspace line is wider and has multiple components. The mean values of $\sigma(v)$ obtained from a Gaussian fit to the N$_2$H$^+$\xspace (1-0) emission in the three low mass filaments in Paper I were $\sigma(v)=$0.28, 0.20 and 0.20 \,kms$^{-1}$\xspace. In this paper, using the fiducial beam width of 0.06 pc, a similar procedure yields a mean $\sigma(v)$ of 0.80 and 0.71 \,kms$^{-1}$\xspace for Regions A and B. In Paper I the beam used was narrower (0.01 pc), as low mass star forming regions are typically closer to the observer, but unfortunately we cannot make a direct comparison for this beam size as the MSFR profiles were non-Gaussian. However, we have shown in Section \ref{width} that 95\% of the N$_2$H$^+$\xspace emission was contained within a velocity range of 2.94 and 2.6 \,kms$^{-1}$\xspace for Regions A and B respectively. These values are only slightly lower than those seen for the 0.06 pc beam. In Paper I, the optically thick HCN (1-0) emission was highly variable with viewing angle and frequently showed no blue asymmetry (only 48\% of cores using the $\delta V$ method). In the MSFRs, the line of sight variability is slightly lower and the line profiles more consistently have blue excesses relative to the true core velocity. However, the resulting offset is not always large with respect to the width of the N$_2$H$^+$\xspace line, and the interpretation of the offsets as infall profiles is complicated by the multiple components in N$_2$H$^+$\xspace. When these factors are included, the percentage of infall profiles is broadly similar in both papers. In the low mass cores a region of size roughly the local Jeans length (0.01-0.1 pc) is collapsing, but the gas outside this region has disordered turbulent velocities.
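The two linewidth measures used above can be related on a synthetic profile (the profile below is invented for illustration): for a pure Gaussian, the central 95\% of the integrated emission falls within $\pm 1.96\sigma$, i.e. a velocity range of $\approx 3.92\sigma$, so a fitted $\sigma(v)=0.71$ \,kms$^{-1}$\xspace corresponds to a range of $\approx 2.8$ \,kms$^{-1}$\xspace:

```python
import numpy as np

def central_range(v, profile, frac=0.95):
    """Velocity range containing the central `frac` of the integrated emission."""
    cdf = np.cumsum(profile)
    cdf = cdf / cdf[-1]
    lo = np.interp((1.0 - frac) / 2.0, cdf, v)
    hi = np.interp((1.0 + frac) / 2.0, cdf, v)
    return hi - lo

v = np.linspace(-10.0, 10.0, 4001)       # velocity grid in km/s
sigma = 0.71                             # km/s, like the Region B fit
gaussian = np.exp(-0.5 * (v / sigma) ** 2)
print(central_range(v, gaussian))        # close to 3.92 * sigma
```

For the non-Gaussian, multi-component MSFR profiles the cumulative-emission range remains well defined even when a single Gaussian fit does not.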
If a line becomes optically thick in the filament, rather than in the low mass core, the core velocities will not contribute to the observed HCO$^+$\xspace line profile. In the low mass case only the dense core centre contributes to the N$_2$H$^+$\xspace emission, and so there is only a single component. In the MSFRs the collapse motions extend over a larger region, so the lines are more likely to become optically thick in the collapsing medium surrounding the central core. However, the large velocity gradient across the region leads to little self-absorption and only a small offset between optically thick and thin components. MSFRs contain multiple dense cores, increasing the probability of detecting multiple components in optically thin lines. On the other hand, low mass star forming regions are usually not clustered, and therefore the profiles only contain a single component. Figure \ref{highlow} shows a comparison of the HCO$^+$\xspace and N$_2$H$^+$\xspace (1-0) lines between a high mass and a low mass core for illustrative purposes. For the high mass case, we chose Region A viewed at an inclination and azimuthal angle of 90 degrees. For the low mass case we chose Core A from Paper I. To allow a fair comparison, we chose a beam size of 0.01 pc for both cases, as this is more typical of the resolution in low mass star forming regions \citep[e.g.][]{Andre07}. From many viewing angles in Paper I no clear blue asymmetry is seen; this corresponds to the case shown on the right (i=135, $\phi=0$ in Paper I), where the HCO$^+$\xspace (1-0) line is optically thick in the filament. In the middle panel we show another viewing angle in which the HCO$^+$\xspace became optically thick in the low mass core instead of the filament (i=0, $\phi=0$ in Paper I). In this case the HCO$^+$\xspace (1-0) line is brighter due to the higher core densities relative to the filament, and the line has a blue asymmetry. The N$_2$H$^+$\xspace line has a single component.
In the left panel we show the high-mass case, which has the brightest HCO$^+$\xspace lines and a profile peak to the blue-ward side of the core rest velocity. The N$_2$H$^+$\xspace line clearly shows multiple components. \begin{figure} \begin{center} \includegraphics[width=3.5in]{./fig11_3panel_highvlow90_hco.eps} \caption{An illustration of the HCO$^+$\xspace and N$_2$H$^+$\xspace (1-0) line profiles in high and low mass star forming regions. The N$_2$H$^+$\xspace line has been multiplied by a factor of two to make it more visible. To allow a fair comparison, we chose a beam size of 0.01 pc throughout. The high mass case corresponds to Region A viewed at i=90, $\phi=90$. For the low mass case we chose Core A from Paper I viewed at i=0, $\phi=0$ \textit{(middle)} and i=135, $\phi=0$ \textit{(right)}, which have a blue and red asymmetry respectively.} \label{highlow} \end{center} \end{figure} \section{Conclusions}\label{conclusions} In this paper we have carried out radiative transfer modelling of optically thin and thick line profiles arising purely from collapse motions in massive star forming regions. The underlying cloud model was obtained from the numerical simulation presented by \citet{Smith09b}, in which massive stars formed at the bottom of the potential well of a proto-cluster. We assume constant abundances and only treat the early evolutionary stages of massive star formation, since our simulation does not include mechanical feedback and ionising radiation. Our conclusions are the following: \begin{enumerate} \item \textit{Velocities:} Infall motions extend over a large volume, and there are strong velocity gradients across the modelled massive star forming regions, which are particularly steep (20 km s$^{-1}$ pc$^{-1}$ in the most massive case) across the central core. The massive star forming regions are not surrounded by a static envelope.
\item \textit{Optically thin lines:} The optically thin N$_2$H$^+$\xspace (1-0) isolated hyperfine lines frequently have multiple components. This is caused by emission from dense substructure in the proto-cluster that, due to the sharp velocity gradient across the region, has a different velocity from the central core. The lines are broad, and the peak in emission does not necessarily correspond to the velocity at which the most massive star is forming. \item \textit{Optically thick lines:} The optically thick HCO$^+$\xspace (1-0) line shows only marginal blue asymmetries relative to the optically thin line peaks. The optically thick line peak is displaced by more than $-0.5$ \,kms$^{-1}$\xspace in less than half of the calculated sightlines, which is small compared to the typical 2 \,kms$^{-1}$\xspace linewidth observed in such regions \citep{Fuller05}. Moreover, there is rarely a central absorption dip, due to the lack of a static self-absorbing envelope. \item \textit{Time evolution:} As the massive star forming region evolves, its optically thick HCO$^+$\xspace lines get brighter, and both optically thick and thin line profiles get broader. \item \textit{Variation with beam:} The line brightness increases and the linewidth decreases as the FWHM of the beam is reduced. In our narrowest beam the peak HCO$^+$\xspace was more frequently to the blue side of the peak N$_2$H$^+$\xspace emission. However, the N$_2$H$^+$\xspace profiles also more frequently contained multiple components in the narrowest beam. Such behaviour would be an ideal prediction to test with ALMA in order to study the dynamics of massive star formation. \item \textit{Higher order transitions:} Higher transitions of the optically thick HCO$^+$\xspace lines become increasingly Gaussian due to a growing fraction of the emission originating from a region with a large velocity gradient, where there is little self-absorption in the line.
This suggests that the lower HCO$^+$\xspace transitions, namely (1-0) and (2-1), are better indicators of collapse for these regions. \item \textit{Comparison to low mass star formation:} The optically thick lines of MSFRs are bright and have large linewidths. In low mass cores, the profiles are less intense and have smaller linewidths. The optically thin lines of MSFRs can have multiple peaks, due to the presence of numerous density enhancements along the line of sight. By contrast, optically thin lines from low mass protostars almost always have Gaussian profiles, since the ambient (filamentary) gas contributes little to the emission. \end{enumerate} \section*{Acknowledgements} We thank Amy Stutz for her work on Paper I, to which this paper is a sequel. R.J.S, R.S and R.S.K.\ gratefully acknowledge support from the DFG via the SPP 1573 {\em Physics of the ISM} (grants SM321/1-1, KL 1358/14-1 \& SCHL 1964/1-1). We are also thankful for support from the SFB 881 {\em The Milky Way System} subprojects B1, B2 and B4.
\section{\sf Introduction} In this paper, a \emph{smooth} finite dimensional algebra over a field $k$ is a finite dimensional algebra of finite global dimension. The word ``smooth'' originates in commutative algebra and is convenient for brevity. Observe that in \cite{CUNTZQUILLEN}, for finite dimensional algebras, ``smooth'' corresponds to algebras of global dimension at most one, that is, hereditary or semisimple algebras. In 2006, Y. Han conjectured in \cite{HAN} that a finite dimensional algebra whose Hochschild homology vanishes in large enough degrees is smooth. In the same paper Y. Han proved the conjecture for monomial algebras, while in \cite{BERGHMADSEN2009} P.A. Bergh and D. Madsen proved it in characteristic zero for graded finite dimensional local algebras, Koszul algebras and graded cellular algebras. Recently, the same authors showed in \cite{BERGHMADSEN2017} that trivial extensions of selfinjective algebras, local algebras and graded algebras have infinite Hochschild homology, a result which confirms Han's conjecture for these algebras. Observe that in the 1990s, the work of the Buenos Aires Cyclic Homology Group \cite{BACH}, and of L. Avramov and M. Vigu\'{e}-Poirrier \cite{AVRAMOVVIGUE}, provided the result for finitely generated commutative algebras. In relation with Han's conjecture, lower bounds are obtained in \cite{BERGHMADSEN2010} for the dimension of the Hochschild homology groups of fiber products of algebras, trivial extensions, path algebras of quivers containing loops and quantum complete intersections. Note that P.A. Bergh and K. Erdmann proved in \cite{BERGHERDMANN} that quantum complete intersections - not at a root of unity - satisfy Han's conjecture. In \cite{SOLOTARVIGUE} A. Solotar and M. Vigu\'{e}-Poirrier proved Han's conjecture for a generalization of quantum complete intersections and for a family of algebras which are in a sense opposite to these. Moreover in \cite{SOLOTARSUAREZVIVAS}, A. Solotar, M. Su\'{a}rez-Alvarez and Q.
Vivas considered quantum generalized Weyl algebras and proved Han's conjecture for these algebras (out of a few exceptional cases). In this paper we consider null-square algebras over a field $k$, that is, algebras $\Lambda$ of the form $$\left( \begin{array}{cc} A & N \\ M & B \\ \end{array} \right)$$ where $A$ and $B$ are $k$-algebras, $M$ and $N$ are bimodules, and the product is given by matrix multiplication subject to $MN=0=NM$. For these algebras, $I=M\oplus N$ is a two-sided ideal verifying $I^2=0$ and $C=A\times B$ is a subalgebra. Actually $\Lambda= C\oplus I$, that is, $\Lambda$ is a cleft singular extension (see \cite[p. 284]{MACLANE}). Hochschild homology is a functor $HH_*$ from $k$-algebras to graded vector spaces. Hence for a null-square algebra, $HH_*(C)$ is a direct summand of $HH_*(\Lambda)$. Moreover, note that $HH_*(C)=HH_*(A)\oplus HH_*(B)$. In relation to Han's conjecture, this paper treats two opposite cases: one corresponds to quivers without cycles, while in the other case the quiver contains cycles. Both of them aim to provide an inductive step towards proving the conjecture. In Section \ref{cornerandtriangular} we consider algebras which are $E$-triangular, that is, they do not have oriented cycles with respect to a complete system $E$ of not necessarily primitive orthogonal idempotents - for brevity we call such a set $E$ a ``system''. In Sections \ref{HHnullsquareprojective} and \ref{cuatro}, on the contrary, we study a case where there is an oriented cycle. In this last case our analysis requires the involved bimodules to be projective. A null-square algebra with $N=0$ will be called a corner algebra. For these algebras $HH_*(\Lambda)=HH_*(C)$ by a direct computation that we briefly recall in Section \ref{cornerandtriangular}, see also \cite{LODAY1998} or \cite{CIBILS2000}. Moreover we show that if a corner algebra is finite dimensional, with $A$ and $B$ smooth, then the corner algebra is also smooth.
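Spelled out (this merely restates the definition just given), the condition $MN=0=NM$ kills the products of the off-diagonal entries, so that for $a,a'\in A$, $b,b'\in B$, $m,m'\in M$ and $n,n'\in N$ the matrix product reads

```latex
$$\left( \begin{array}{cc} a & n \\ m & b \\ \end{array} \right)
\left( \begin{array}{cc} a' & n' \\ m' & b' \\ \end{array} \right)
=\left( \begin{array}{cc} aa'+nm' & an'+nb' \\ ma'+bm' & mn'+bb' \\ \end{array} \right)
=\left( \begin{array}{cc} aa' & an'+nb' \\ ma'+bm' & bb' \\ \end{array} \right),$$
```

since $nm'=0$ and $mn'=0$; in particular $I=M\oplus N$ indeed squares to zero.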
This leads to our first result, namely corner algebras built on the class of algebras ${\mathcal H}$ verifying Han's conjecture, also belong to ${\mathcal H}$. Note that no extra assumption on $M$ is required in the foregoing. Based on the previous results, we go further. To a system $E$ of a $k$-algebra $\Lambda$, we associate its Peirce $E$-quiver: the set of vertices is $E$, and for $x\neq y$ elements of $E$, there is an arrow from $x$ to $y$ if $y\Lambda x\neq 0$. If the Peirce $E$-quiver has no oriented cycles then $\Lambda$ is called $E$-triangular. For instance the Peirce $E$-quiver of a corner algebra with respect to the system $E$ given by the two diagonal idempotents is an arrow if $M\neq 0$. We show that if $\Lambda$ is $E$-triangular, then there is a decomposition $HH_*(\Lambda)=\bigoplus_{x\in E} HH_*(x\Lambda x)$. Moreover for a finite dimensional $E$-triangular algebra $\Lambda$ such that $x\Lambda x$ is smooth for all $x\in E$, the algebra is also smooth. We infer that finite dimensional $E$-triangular algebras built on the class ${\mathcal H}$ also belong to ${\mathcal H}$, without requiring additional assumptions on the bimodules $y\Lambda x$. In Section \ref{HHnullsquareprojective}, we consider null-square algebras $\Lambda$ with non zero bimodules $M$ and $N$, in other words the Peirce $E$-quiver with respect to the two diagonal idempotents is $\cdot\rightleftarrows\cdot$. If $M$ and $N$ are projective bimodules, $\Lambda$ is called a null-square projective algebra. We provide a long exact sequence computing $HH_*(\Lambda)$, which is associated to the short exact sequence obtained from the product map $\Lambda\otimes_C\Lambda \to \Lambda$. We obtain a projective resolution of the kernel $K^1_C(\Lambda)$ of this map, which enables us to compute $\mathsf{Tor}^{\Lambda\!-\! 
\Lambda}_{*}(K_C^1(\Lambda),\ \Lambda)$ through invariants or coinvariants of a natural action of cyclic groups $C_m$ on the zero degree Hochschild homology of tensor powers of $N\otimes_BM$, that is $H_0\left( A, \left(N\otimes_BM\right)^{\otimes_{_A}m}\right)^{C_{m}}$ and $H_0\left( A, \left(N\otimes_BM\right)^{\otimes_{_A}m}\right)_{C_{m}}$. We thus obtain the long exact sequence of Theorem \ref{longexactsequence}. In Section \ref{Han nullsquareprojective} we focus on basic finite dimensional algebras $A$ and $B$ over a perfect field. After choosing a complete system of primitive orthogonal idempotents for each algebra, the projective bimodules $M$ and $N$ are given explicitly as direct sums of indecomposable projective bimodules. We first prove that if the invariants are zero, that is $H_0\left( A, \left(N\otimes_BM\right)^{\otimes_{_A}m}\right)^{C_{m}}=0$, then the space itself is zero. We infer that if the null-square projective algebra has zero homology in large degrees, then the $0$-homology of any tensor power of $N\otimes_BM$ vanishes. Hence the long exact sequence obtained before provides $HH_*(\Lambda)=HH_*(A\times B)$. We prove that the tensor powers of $N\otimes_BM$ and of $M\otimes_AN$ vanish for large enough exponents. Observe that $H_0\left(A, \left(N\otimes_BM\right)^{\otimes_A*}\right)$ is related to 2-truncated cycles, namely cycles in the Gabriel quiver of a basic algebra in which the product of any two consecutive arrows is zero, as considered in \cite{BERGHHANMADSEN2012} in order to guarantee that Hochschild homology is infinite dimensional. Another important result that we obtain in this section is the following Theorem \ref{smooth}. For a perfect field $k$, let $\Lambda$ be a finite dimensional null-square projective $k$-algebra, where $A$ and $B$ are smooth. Assuming the bimodules verify $\left(N\otimes_BM\right)^{\otimes_A*}=0$ for large enough exponents, the algebra $\Lambda$ is also smooth.
The proof relies on the construction of an explicit projective resolution obtained through successive cones of the identity. One of the main results of this paper follows: a finite dimensional null-square projective algebra built on the class of basic algebras in ${\mathcal H}$ also belongs to ${\mathcal H}$. In the last section we give a presentation by quiver and relations of a null-square projective algebra, starting from the same type of presentations of $A$ and $B$. This is useful for producing examples where our results apply. \section{\sf Han's conjecture for corner and $E$-triangular algebras}\label{cornerandtriangular} In this section we first consider null-square algebras and their category of representations. Next, we will study corner algebras, which are particular cases of null-square algebras, in relation with Han's conjecture. The results that we obtain in this section for corner (and then for $E$-triangular) algebras do not require a projectivity hypothesis on the bimodules considered in the definition of a null-square algebra below. \begin{defi} Let $k$ be a field and let $A$ and $B$ be $k$-algebras. Let $M$ and $N$ be respectively a $B-A$-bimodule and an $A-B$-bimodule. The corresponding \emph{null-square algebra} is $$\left( \begin{array}{cc} A & N \\ M & B \\ \end{array} \right)$$ where the product is given by matrix multiplication using the products of $A$ and $B$, the bimodule structures of $M$ and $N$, and setting $mn=0$ and $nm=0$ for all $m\in M$ and $n\in N$. \end{defi} \begin{rema} A \emph{square algebra} is an algebra $$\Lambda = \left( \begin{array}{cc} A & N \\ M & B \\ \end{array} \right)$$ as before, with two bimodule maps $\alpha:N\otimes_BM\to A$ and $\beta:M\otimes_AN\to B$ verifying the obvious ``associativity'' conditions that ensure the associativity of the corresponding matrix product on $\Lambda$. A null-square algebra is a square algebra where $\alpha=0=\beta$. Observe that in \cite{BUCHW}, R.-O.
Buchweitz studies square algebras, which are called ``(generalised) Morita contexts'' or ``pre-equivalences'', and focuses on the case where $\alpha$ or $\beta$ is surjective. \end{rema} \begin{exam} Let $\Lambda$ be a $k$-algebra with a decomposition $\Lambda=P\oplus Q$ as a right $\Lambda$-module. Then $\Lambda$ is a square algebra of the form $$ \left( \begin{array}{cc} \mathop{\sf End}\nolimits_{\Lambda}P & \mathop{\sf Hom}\nolimits_{\Lambda}(Q,P) \\ \mathop{\sf Hom}\nolimits_\Lambda(P,Q) & \mathop{\sf End}\nolimits_{\Lambda}Q \\ \end{array} \right)$$ If for all $f\in\mathop{\sf Hom}\nolimits_{\Lambda}(P,Q)$ and for all $g\in\mathop{\sf Hom}\nolimits_{\Lambda}(Q,P)$ the compositions $gf$ and $fg$ are zero, the algebra is null-square. \end{exam} \begin{rema} Any square algebra $\Lambda$ is obtained as above by considering the right module decomposition: $$\left( \begin{array}{cc} A & N \\ M & B \\ \end{array} \right)=\left( \begin{array}{cc} A & N \\ 0 & 0\\ \end{array} \right) \oplus \left( \begin{array}{cc} 0 & 0 \\ M & B \\ \end{array} \right).$$ \end{rema} \vskip3mm Recall that a \emph{cleft singular extension algebra} (see \cite[p. 284]{MACLANE}) is an algebra $\Lambda$ with a decomposition $\Lambda = C \oplus I,$ where $C$ is a subalgebra and $I$ is a two-sided ideal of $\Lambda$ verifying $I^2=0.$ A null-square algebra $ \Lambda = \left( \begin{array}{cc} A & N \\ M & B \\ \end{array} \right)$ is an instance of a cleft singular extension with $C=A\times B$ and $I=M\oplus N$. Indeed, $I$ is a two-sided ideal precisely because $MN=NM=0$. We will next consider systems of idempotents of an arbitrary algebra in order to recall the representation theory of a null-square algebra. \begin{defi} Let $\Lambda$ be a $k$-algebra. A \emph{system} of $\Lambda$ is a finite set $E$ of non-zero orthogonal idempotents which is complete, \emph{i.e. } $\sum_{x\in E}x=1$.
The system is trivial if $E=\{1\}.$ \end{defi} Observe that in the above definition we do not require the idempotents to be primitive. To a system $E$ of a $k$-algebra $\Lambda$ we associate a $k$-category ${\mathcal C}_{\Lambda, E}$ as follows: its objects are the elements of $E$ while the vector space ${}_y\!\left({{\mathcal C}_{\Lambda, E}}\right)_x$ of morphisms from $x$ to $y$ is $y\Lambda x$. The composition is provided by the product of $\Lambda$. Of course $\Lambda$ is recovered as the direct sum of all the morphisms spaces of ${\mathcal C}_{\Lambda, E}$, endowed with the matrix product. It is well known and easy to prove that the $k$-categories of left $\Lambda$-modules, and of $k$-functors from ${\mathcal C}_{\Lambda, E}$ to $k$-vector spaces, are isomorphic. Let now ${\mathcal C}$ be a small $k$-category, with set of objects ${\mathcal C}_0$. Notice that a $k$-functor ${\mathcal M}$ from ${\mathcal C}$ to $k$-vector spaces is given by a family of vector spaces $\{{}_x{\mathcal M}\}_{x\in{\mathcal C}_0}$ and a collection of linear maps $${}_y{\mathcal C}_x\otimes {}_x{\mathcal M} \stackrel{{}_ym_x} {\xrightarrow{\hspace{15mm} }} {}_y{\mathcal M}$$ such that, for any objects $x$, $y$ and $z$, the following diagram commutes: \[ \xymatrix@!C{ {}_z{\mathcal C}_y\otimes {}_y{\mathcal C}_x \otimes {}_x{\mathcal M} \ar@{->}[d]_{c\otimes 1} \ar[r]^-{1\otimes {}_ym_x} & {}_z{\mathcal C}_y\otimes {}_y{\mathcal M} \ar@{->}[d]^{{}_zm_y} \\ {}_z{\mathcal C}_x\otimes {}_x{\mathcal M} \ar[r]_-{{}_zm_x} & {}_z{\mathcal M}} \] Next we define a $k$-category, which will be isomorphic to the category of left modules over a square algebra. \begin{defi}\label{categoryS} Let $\Lambda = \left( \begin{array}{cc} A & N \\ M & B \\ \end{array} \right)$ be a square algebra. 
The objects of the linear category ${\mathcal S}(\Lambda)$ are $X \underset{\nu } {\overset{\mu}{\rightleftharpoons}} Y$, where $X$ is an $A$-module, $Y$ is a $B$-module, $X\overset{\mu}{\rightharpoonup}Y$ stands for a map of $B$-modules $\mu:M\otimes_A X\rightarrow Y$ and analogously $X\underset{\nu}{\leftharpoondown}Y$ stands for a map of $A$-modules $\nu:N\otimes_B Y\rightarrow X$ which verify \begin{equation}\label{associativity} \nu(1_N\otimes \mu)=\alpha\otimes 1_X \mbox{ and } \mu(1_M\otimes \nu)=\beta\otimes 1_Y. \end{equation} Note that we identify the vector spaces $A\otimes_AX$ and $X$ through the canonical isomorphism, as well as $Y\otimes_BB$ and $Y$. A morphism in ${\mathcal S}(\Lambda)$ from $X \underset{\nu } {\overset{\mu}{\rightleftharpoons}} Y$ to $X'\underset{\nu'} {\overset{\mu'}{\rightleftharpoons}} Y'$ is a couple $(\varphi,\psi)$ where $\varphi :X\to X'$ is a morphism of $A$-modules, $\psi :Y\to Y'$ is a morphism of $B$-modules such that the following diagrams commute: $$ \xymatrix@!C{ M\otimes_A X \ar@{->}[d]_{1\otimes\varphi} \ar[r]^-{\mu} & Y \ar@{->}[d]^{\psi} \\ M\otimes_A X' \ar[r]_-{\mu'} & Y'} \hskip2cm \xymatrix@!C{ X \ar@{->}[d]_{\varphi} & N\otimes_B Y \ar@{->}[l]_-{\nu} \ar@{->}[d]^{1\otimes \psi}\\ X' & N\otimes_B Y' \ar@{->}[l]^-{\nu'} } $$ \end{defi} \begin{prop}\label{modulesandcategoryS} Let $\Lambda$ be a square algebra. The category of left $\Lambda$-modules is isomorphic to ${\mathcal S} (\Lambda)$. \end{prop} \begin{proof} Consider the complete set of orthogonal idempotents $E=\{e, 1-e\}$ of $\Lambda$, where $e=\left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \\ \end{array} \right).$ The result is an immediate consequence of the previous observations.\hfill $\diamond$ \bigskip \end{proof} In what follows the categories of the above proposition will be identified. Note that for a null-square algebra, the equalities (\ref{associativity}) become \begin{equation} \nu(1_N\otimes \mu)=0\mbox{ and } \mu(1_M\otimes \nu)=0. 
\end{equation} \begin{lemm}\label{projectiveone} Let $\Lambda$ be a square algebra and let $P$ be a projective $A$-module. The $\Lambda$-module $\left(P \underset{\alpha} {\overset{1}{\rightleftharpoons}} M\otimes_A P\right)$ is projective. \end{lemm} \begin{proof} Let $\Lambda_1$ be the $\Lambda\! -\! A$-bimodule given by the first column of $\Lambda$, that is, $\Lambda_1=\left( \begin{array}{cc} A & 0 \\ M & 0 \\ \end{array} \right)= \left(A \underset{\alpha} {\overset{1}{\rightleftharpoons}} M\right).$ Note that $\Lambda_1=\Lambda e$ is a projective $\Lambda$-module. Moreover, if $X$ is an $A$-module, $ \Lambda_1\otimes_A X = \left(X \underset{\alpha\otimes 1_X} {\overset{1}{\rightleftharpoons}} M\otimes_A X\right)$. Since $ \Lambda_1\otimes_A A$ is isomorphic to $\Lambda_1$, we infer that $ \Lambda_1\otimes_A P$ is a projective $\Lambda$-module. \hfill $\diamond$ \bigskip \end{proof} The analogous result holds for $B$-modules. From now on we focus on null-square algebras. \begin{prop}\label{simples} Let $\Lambda = \left( \begin{array}{cc} A & N \\ M & B \\ \end{array} \right)$ be a null-square algebra where $A$, $B$, $M$ and $N$ are finite dimensional. A simple $\Lambda$-module is isomorphic to $$ S \underset{} {\overset{}{\rightleftharpoons}} 0 \mbox{ or } 0 \underset{} {\overset{}{\rightleftharpoons}} T$$ where $S$ and $T$ are simple $A$ and $B$-modules respectively. \end{prop} \begin{proof} We assert that the Jacobson radical of $\Lambda$ is $\left( \begin{array}{cc} \mathop{\rm rad}\nolimits A & N \\ M & \mathop{\rm rad}\nolimits B \\ \end{array} \right)$ where $\mathop{\rm rad}\nolimits A$ and $\mathop{\rm rad}\nolimits B$ are the Jacobson radicals of $A$ and $B$. Indeed, this vector space is a nilpotent two-sided ideal, and the quotient of $\Lambda$ by it is semisimple. \hfill $\diamond$ \bigskip \end{proof} \normalsize \begin{defi} A \emph{corner algebra} $\Lambda$ is a square algebra with $N=0$. 
In this case, the objects of ${\mathcal S}(\Lambda)$ are denoted by $X {\overset{\mu}{\rightharpoonup}} Y$. \end{defi} In this section we consider Han's conjecture for corner algebras first, and secondly for $E$-triangular algebras, which will be defined below. We emphasize that for corner algebras we do not make any hypothesis on the projectivity of $M$. First we recall the following result. \begin{prop} \cite[Proposition 10, p.86]{EILROSZEL} Let $A$ and $B$ be finite dimensional smooth $k$-algebras. The $k$-algebra $A\otimes B$ is smooth. \end{prop} \begin{theo}\label{cornerfgld} Let $\Lambda= \left( \begin{array}{cc} A & 0 \\ M & B \\ \end{array} \right)$ be a finite dimensional corner algebra, where $M$ is a $B\!-\!A$-bimodule. If $A$ and $B$ are smooth, then $\Lambda$ is smooth. \end{theo} \begin{proof} It is well known that if a finite dimensional algebra $A$ is smooth, the same holds for $A^{\mathsf{op}}$. By the previous proposition, $B\otimes A^{\mathsf{op}}$ is smooth. Let $0\to Q_q\to\cdots\to Q_1\to Q_0\to M\to 0$ be a finite resolution of $M$ by projective $B\!-\!A$-bimodules. Firstly, let $S\rightharpoonup 0$ be a simple $\Lambda$-module, where $S$ is a simple $A$-module. Let $0\to P_p\to\cdots\to P_1\to P_0\to S\to 0$ be a resolution of $S$ by projective $A$-modules. Observe that the following sequence of $\Lambda$-modules, obtained by tensoring the previous resolution by $\Lambda_1$, $$ 0\to(P_p \underset{} {\overset{1} {\rightharpoonup}} M\otimes_A P_p)\to\cdots\to (P_0 \underset{} {\overset{1} {\rightharpoonup}} M\otimes_A P_0) \to (S \underset{} {\overset{} {\rightharpoonup}} 0)\to 0$$ is not exact in general unless $M$ is a projective $A$-module.
Instead we consider the double complex obtained by tensoring both resolutions over $A$: \[ \xymatrix{ & & \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] & \\ & \dots \ar[r] & Q_1\otimes_A P_2 \ar[r] \ar[d] & Q_1\otimes_A P_1 \ar[r] \ar[d] & Q_1\otimes_A P_0 \ar[r] \ar[d] & 0 \\ & \dots \ar[r] & Q_0\otimes_A P_2 \ar[r] \ar[d] & Q_0\otimes_A P_1 \ar[r] \ar[d] & Q_0\otimes_A P_0 \ar[r] \ar[d] & 0 \\ & \dots \ar[r] & M\otimes_A P_2 \ar[r] \ar[d] & M\otimes_A P_1 \ar[r] \ar[d] & M\otimes_A P_0 \ar[r] \ar[d] & 0 \\ & & 0 & 0 & 0 & } \] The total complex of this double complex is exact, since each column is obtained by tensoring an exact complex by a projective module. Hence we obtain a finite exact sequence of $\Lambda$-modules: \[ \xymatrix{ & \vdots & & & \vdots & & \\ & P_2 \ar[d] & \hspace{-8mm} \rightharpoonup & \hspace{-6mm} M\otimes_A P_2 \ar[d] \hspace{3mm} \oplus & \hspace{-6mm} Q_0\otimes_A P_1 \ar[d] \ar[dl] \hspace{3mm} \oplus & \hspace{-6mm} Q_1\otimes_A P_0 \ar[dl] \\ & P_1 \ar[d] & \hspace{-8mm} \rightharpoonup & \hspace{-6mm} M\otimes_A P_1 \ar[d] \hspace{3mm} \oplus & \hspace{-6mm} Q_0\otimes_A P_0 \ar[dl] & & \\ & P_0 \ar[d] & \hspace{-8mm} \rightharpoonup & \hspace{-6mm} M\otimes_A P_0 \ar[d] & & & \\ & S \ar[d] & \hspace{-8mm} \rightharpoonup & 0 & & & \\ & 0 & & & & & } \] We assert that this is a projective resolution of $S\rightharpoonup 0$. Indeed the $i$-th module is $$\left(P_i\rightharpoonup M\otimes_A P_i\right) \oplus \left(0\rightharpoonup Q_0\otimes_AP_{i-1}\right)\oplus\cdots\oplus \left(0\rightharpoonup Q_{i-1}\otimes_A P_0\right).$$ The first summand $\Lambda_1\otimes_A P_i$ is projective by Lemma \ref{projectiveone}. For the other summands, we first notice that if $Q$ is a projective $B\!-\!A$-bimodule and $X$ is any $A$-module, then $Q\otimes_A X$ is a projective $B$-module. Moreover, for a corner algebra $\Lambda$, if $W$ is a projective left $B$-module, then the left $\Lambda$-module $0\rightharpoonup W$ is projective.
Secondly let $T$ be a simple $B$-module and let $0\rightharpoonup T$ be the corresponding simple $\Lambda$-module. Let $R_\bullet \to T$ be a finite $B$-projective resolution of $T$; then $(0\rightharpoonup R_\bullet)\to (0\rightharpoonup T)$ is a finite resolution of $0\rightharpoonup T$ by projective $\Lambda$-modules. \hfill $\diamond$ \bigskip \end{proof} Now, we will define $E$-triangular algebras with respect to a chosen system $E$. We first define a quiver inferred from the Peirce decomposition $\Lambda=\bigoplus_{x,y\in E}y\Lambda x$. \begin{defi}\label{Equiver} Let $\Lambda$ be a $k$-algebra and let $E$ be a system of $\Lambda$. The \emph{Peirce $E$-quiver} $Q_E$ has set of vertices $E$; for $x$ and $y$ different elements of $E$ there is an arrow from $x$ to $y$ in case $y\Lambda x\neq 0$. Note that $Q_E$ contains no loops. \end{defi} \begin{defi} An algebra $\Lambda$ is $E$-\emph{triangular} with respect to a non-trivial system $E$ if $Q_E$ has no oriented cycles. \end{defi} \begin{rema} In case $|E|=2$, the Peirce $E$-quiver of an $E$-triangular algebra is an arrow, and the algebra is a corner algebra. Observe that a finite dimensional algebra which is $E$-triangular with respect to a system $E$ may have oriented cycles in its Gabriel quiver. \end{rema} \begin{lemm} Let $\Lambda$ be a $k$-algebra which is $E$-triangular. There exists a system $F$ of two idempotents such that $\Lambda$ is a corner algebra with respect to $F$. \end{lemm} \begin{proof} The Peirce $E$-quiver has no oriented cycles; it is finite and has at least two vertices. Then there exists a source vertex $e$, that is, a vertex with no arrows ending at it. The idempotent $f=\sum_{x\neq e}x$ is not zero. Since $e\Lambda f=0$, the algebra $\Lambda$ is a corner algebra with respect to the system $F=\{e,f \}$. \hfill $\diamond$ \bigskip \end{proof} \begin{coro}\label{triangularfgld} Let $\Lambda$ be a finite dimensional $k$-algebra which is $E$-triangular with respect to a system $E$.
If $x\Lambda x$ is smooth for every $x\in E$, then $\Lambda$ is smooth. \end{coro} \begin{proof} We proceed by induction on the number of vertices. Let $e$ be a source vertex of $Q_E$, let $f=\sum_{x\neq e}x=1-e$ and let $F$ be the system $\{e,f\}$. Let $E'=E\setminus\{e\}$, which is a system of the algebra $f\Lambda f$. The $E'$-quiver of $f\Lambda f$ has no oriented cycles since $y (f\Lambda f )x = y\Lambda x$ for every $x,y\in E'$. By hypothesis the algebras $x(f\Lambda f)x=x\Lambda x$ are smooth for every $x\in E'$. By induction $f\Lambda f$ is smooth. Theorem \ref{cornerfgld} provides the result since $e\Lambda e$ is smooth and $\Lambda$ is a corner algebra with respect to $F$, as in the proof of the previous lemma. \hfill $\diamond$ \bigskip \end{proof} By definition the Hochschild homology vector spaces of a $k$-algebra $\Lambda$ with coefficients in a $\Lambda$-bimodule $Z$ are $$H_*(\Lambda, Z)= \mathsf{Tor}^{\Lambda\otimes \Lambda^{\mathsf{op}}}_*(\Lambda, Z)$$ where the latter is also denoted by $\mathsf{Tor}^{\Lambda-\Lambda}_*(\Lambda, Z).$ Next we recall the computation of the Hochschild homology of a corner algebra, see for instance \cite{LODAY1998, CIBILS2000}. The following well-known result will be required; we provide a sketch of its proof for the convenience of the reader. \begin{lemm}\label{byseparable} Let $\Lambda$ be a $k$-algebra, let $D$ be a separable subalgebra of $\Lambda$, let $Z$ be a $\Lambda$-bimodule and let $Z_D= Z/ \langle dz-zd \mid z\in Z, d\in D\rangle$.
The homology of the complex $$\cdots\stackrel{b}{\to}Z\otimes_{D-D}\left(\Lambda\otimes_D\Lambda\otimes_D\Lambda\right) \stackrel{b}{\to}Z\otimes_{D-D}\left(\Lambda\otimes_D\Lambda\right)\stackrel{b}{\to} Z\otimes_{D-D}\Lambda\stackrel{b}{\to}Z_D\to 0$$ \normalsize is $H_*(\Lambda, Z)$, where $\otimes_{D-D}$ stands for $\otimes_{{D\otimes D^{\mathsf{op}}}}$ and where the maps $b$ are given by the usual formulas for computing Hochschild homology: \begin{align*} b(x_0\otimes x_1\otimes \dots\otimes x_n) &= x_0x_1\otimes x_2\otimes \dots\otimes x_n \\ &+\sum_{i=1}^{n-1}(-1)^i x_0\otimes \dots\otimes x_ix_{i+1}\otimes \dots\otimes x_n \\ &+(-1)^n x_nx_0\otimes x_1\otimes \dots \otimes x_{n-1}. \end{align*} \end{lemm} \begin{proof} Consider the complex with differential $d$ defined by the same formulas as for the canonical resolution of $\Lambda$ over the ground field $$\dots \stackrel{d}{\to}\Lambda\otimes_D\Lambda\otimes_D \Lambda\stackrel{d}{\to}\Lambda\otimes_D \Lambda\stackrel{d}{\to}\Lambda\to 0.$$ The map $s$ given by $s(x_1\otimes \dots \otimes x_n)=1\otimes x_1\otimes \dots \otimes x_n$ is well defined and verifies $ds+sd=1$; this proves that the complex is acyclic. Since $D$ is separable, $D\otimes D^{\mathsf{op}}$ is also separable and any $D$-bimodule is projective. Consequently the acyclic complex above is a resolution of $\Lambda$ by projective $\Lambda$-bimodules. \normalsize The statement of the lemma is obtained by applying the functor $Z\otimes_{D-D} -$ to this resolution and observing that $Z\otimes_{\Lambda - \Lambda}\left(\Lambda\otimes_DX\otimes_D\Lambda\right)$ is canonically isomorphic to $Z\otimes_{D-D} X$ for any $D$-bimodule $X$. \hfill $\diamond$ \bigskip \end{proof} \begin{theo}\cite{LODAY1998,CIBILS2000}\label{diago} Let $\Lambda = \left( \begin{array}{cc} A & 0 \\ M & B \\ \end{array} \right)$ be a corner algebra, where $A$ and $B$ are $k$-algebras and $M$ is a $B\!-\!A$-bimodule.
There is a decomposition $$HH_*(\Lambda)=HH_*(A)\oplus HH_*(B).$$ \end{theo} \begin{proof} Let $e$ be the idempotent $\left( \begin{array}{cc} 1_A & 0 \\ 0 & 0 \\ \end{array} \right)$ and let $f=1-e=\left( \begin{array}{cc} 0 & 0 \\ 0 & 1_B \\ \end{array} \right)$. Let $D=\left( \begin{array}{cc} k & 0 \\ 0 & k \\ \end{array} \right) = ke\times kf$; note that $D$ is a separable subalgebra of $\Lambda$. We assert that the complex of the previous lemma is actually the direct sum of the complexes that compute $HH_*(A)$ and $HH_*(B)$. Indeed, notice that the $D$-bimodule decomposition $\Lambda=A\oplus B\oplus M$ provides a direct sum decomposition $$\Lambda\otimes_{D-D}\left( \Lambda\otimes_D\cdots\otimes_D\Lambda\right)= (A\otimes\cdots\otimes A) \oplus (B\otimes\cdots\otimes B)$$ since $$0=M\otimes_{D-D}B=M\otimes_{D-D}A=M\otimes_{D-D}M=B\otimes_{D-D}M=A\otimes_{D-D}M$$ and $A\otimes_{D-D}A= A\otimes A$ while $B\otimes_{D-D}B= B\otimes B$. Observe that in degree $0$ we obtain $\Lambda\otimes_{D-D}D= A\oplus B$. \hfill $\diamond$ \bigskip \end{proof} \begin{coro}\label{triangularHH} For any $k$-algebra $\Lambda$ which is $E$-triangular with respect to a system $E$, there is a decomposition $$HH_*(\Lambda)=\bigoplus_{x\in E}HH_*(x\Lambda x).$$ \end{coro} \begin{proof} The idea of the proof is similar to the proof of Corollary \ref{triangularfgld}. It follows by induction once a source vertex of $Q_E$ is chosen. \hfill $\diamond$ \bigskip \end{proof} Next we turn to Han's conjecture, which we recall: if $A$ is a finite dimensional algebra over a field such that $HH_n(A)=0$ for $n$ large enough, then $A$ is smooth. \begin{theo} Finite dimensional corner $k$-algebras built on algebras of the class ${\mathcal H}$ of $k$-algebras verifying Han's conjecture also belong to ${\mathcal H}$. \end{theo} \begin{proof} Let $\Lambda = \left( \begin{array}{cc} A & 0 \\ M & B \\ \end{array} \right)$ be a finite dimensional corner algebra and suppose $HH_*(\Lambda)=0$ for large enough degrees.
Theorem \ref{diago} shows that the same holds for $A$ and $B$. Since $A$ and $B$ belong to ${\mathcal H}$, they are smooth. By Theorem \ref{cornerfgld}, $\Lambda$ is smooth. \hfill $\diamond$ \bigskip \end{proof} \begin{coro}\label{Hantriangular} Let $\Lambda$ be a finite dimensional $k$-algebra which is $E$-triangular with respect to a system $E$ of $\Lambda$. If for every $x\in E$ the algebras $x\Lambda x$ belong to ${\mathcal H}$, then $\Lambda$ belongs to ${\mathcal H}$. \end{coro} \begin{proof} The proof follows from Corollaries \ref{triangularfgld} and \ref{triangularHH}.\hfill $\diamond$ \bigskip \end{proof} \begin{rema}\label{agree} Let $\Lambda$ be a smooth finite dimensional algebra such that $\Lambda/\mathop{\rm rad}\nolimits \Lambda$ is a product of copies of the ground field $k$ and which admits a Wedderburn decomposition $\Lambda = D \oplus \mathop{\rm rad}\nolimits \Lambda$ where $D$ is a subalgebra of $\Lambda$. Note that if $k$ is perfect a Wedderburn decomposition always exists. It is proven by B. Keller in \cite[2.5]{KELLER} that for $\Lambda$ smooth there is a $K$-theoretical equivalence between $\Lambda$ and $D$. In particular the cyclic homologies of these algebras are isomorphic, as well as their Hochschild homologies, due to Connes' long exact sequence relating cyclic and Hochschild homology, applied to $\Lambda$ and $\Lambda/\mathop{\rm rad}\nolimits\Lambda$; see for instance \cite{WEIBEL}. Consequently the Hochschild homology of $\Lambda$ is concentrated in degree zero. In this situation, it follows from Han's conjecture that if the Hochschild homology vanishes in large enough degrees, then it actually vanishes in all positive degrees. We observe that in the situation of Corollary \ref{Hantriangular}, the result that we have proven agrees with the previous observation. Indeed, we have shown using Corollary \ref{triangularHH} that Hochschild homology is the direct sum of the Hochschild homologies at the idempotents of the system.
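In symbols, writing $r$ for the number of copies of $k$ in $\Lambda/\mathop{\rm rad}\nolimits\Lambda$ (a notation we introduce only for this summary), the $K$-theoretical equivalence gives:

```latex
% Consequence of the K-theoretical equivalence between \Lambda and D:
% D is a product of r copies of k, hence separable, so its Hochschild
% homology is concentrated in degree zero.
\[
HH_n(\Lambda)\;\cong\; HH_n(D)\;=\;
\begin{cases}
k^{\,r} & \text{if } n=0,\\
0 & \text{if } n\geq 1,
\end{cases}
\qquad D\cong k\times\cdots\times k \ (r \text{ factors}).
\]
```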
\end{rema} \section{\sf Hochschild homology of null-square projective algebras}\label{HHnullsquareprojective} In this section we consider a \emph{null-square projective algebra} $\Lambda$, that is, a null-square algebra $\Lambda =\left( \begin{array}{cc} A & N \\ M & B\\ \end{array} \right)$ where $M$ and $N$ are projective $B\!-\!A$ and $A\!-\!B$-bimodules respectively; we recall that $MN=NM=0$. We will provide a long exact sequence which computes $HH_*(\Lambda)$. First we consider a cleft extension algebra $\Lambda=C\oplus I$, where $C$ is a subalgebra and $I$ is a two-sided ideal, see \cite[p. 284]{MACLANE}. Let $$K^1_C(\Lambda)=\mathop{\rm Ker}\nolimits \left(\Lambda\otimes_C\Lambda\stackrel{d}{\longrightarrow}\Lambda\right)$$ where $d$ is given by the product of $\Lambda$. In case $I$ is projective as a $C$-bimodule we will provide a resolution of $K^1_C(\Lambda)$ by projective $\Lambda$-bimodules. This resolution, specialized to a null-square projective algebra, will allow us to compute $\mathsf{Tor}_*^{\Lambda-\Lambda}(K^1_C(\Lambda),\Lambda)$. The mentioned long exact sequence will be obtained as the $\mathsf{Tor}$ exact sequence associated to the short exact sequence of $\Lambda$-bimodules \begin{equation}\label{theshort} 0\longrightarrow K^1_C(\Lambda)\longrightarrow\Lambda\otimes_C\Lambda\longrightarrow\Lambda\longrightarrow 0. \end{equation} \begin{rema} This short exact sequence splits as a sequence of $C$-bimodules but it does not split as a sequence of $\Lambda$-bimodules. \end{rema} \begin{lemm} Let $\Lambda=C\oplus I$ be a cleft extension algebra.
The following complex is acyclic: $$\cdots\stackrel{d}\longrightarrow\Lambda\otimes_CI\otimes_CI\otimes_C\Lambda \stackrel{d}\longrightarrow\Lambda\otimes_CI\otimes_C\Lambda \stackrel{d}\longrightarrow\Lambda\otimes_C\Lambda \stackrel{d}{\longrightarrow}\Lambda \longrightarrow 0$$ with differentials for $n\geq 3$ \begin{align*} d(l_1\otimes x_2\otimes \dots\otimes x_{n-1}\otimes l_n) &= l_1x_2\otimes x_3\otimes \dots\otimes x_{n-1}\otimes l_n \\ &+\sum_{i=2}^{n-2}(-1)^{i+1} l_1\otimes \dots\otimes x_ix_{i+1}\otimes \dots\otimes l_n \\ &+(-1)^n l_1\otimes x_2\otimes \dots \otimes x_{n-1}l_n \end{align*} and, for $n=2$, the product of the algebra is denoted by $d$ as before. \end{lemm} \begin{proof} Let $l\in\Lambda$ and let $l=l_C+l_I$ be its decomposition in $C\oplus I$. Let $s$ be the map given as follows: $$s( l_1\otimes x_2\otimes \dots\otimes x_{n-1}\otimes l_n) = 1\otimes \left(l_1\right)_I\otimes x_2\otimes \dots\otimes x_{n-1}\otimes l_n.$$ It is straightforward to check that $s$ is well defined with respect to the tensor products over $C$. The verification that $s$ is a homotopy contraction is not completely trivial; we illustrate this by checking the property in degree two: \begin{align*} ds(l\otimes x\otimes l')= &l_I\otimes x\otimes l' -1\otimes l_Ix\otimes l' + 1\otimes l_I\otimes xl',\\ sd(l\otimes x \otimes l')= & 1\otimes \left(lx\right)_I\otimes l' - 1\otimes l_I\otimes xl'. \end{align*} Note that $(lx)_I = lx= l_Cx+l_Ix$. Hence \begin{align*} (ds+sd)(l\otimes x\otimes l')&=l_I\otimes x\otimes l' - 1 \otimes l_Ix\otimes l' + 1\otimes (lx)_I\otimes l'\\ &=l_I\otimes x\otimes l' - 1\otimes l_Ix\otimes l' + 1\otimes l_Cx\otimes l' +1\otimes l_Ix\otimes l'\\ &=l_I\otimes x\otimes l' + 1\otimes l_Cx\otimes l'\\ &=l_I\otimes x\otimes l' + l_C\otimes x \otimes l'\\ &= (l_I+l_C)\otimes x \otimes l'\\ &= l\otimes x \otimes l'.
\end{align*} \hfill $\diamond$ \bigskip \end{proof} \begin{prop}\label{projresK} Let $\Lambda=C\oplus I$ be a cleft extension algebra and suppose $I$ is a projective $C$-bimodule. The following is a resolution of $K_C^1(\Lambda)$ by projective $\Lambda$-bimodules: $$\cdots \stackrel{d}{\longrightarrow}\Lambda\otimes_CI\otimes_CI\otimes_C\Lambda \stackrel{d}{\longrightarrow}\Lambda\otimes_CI\otimes_C\Lambda\stackrel{d}{\longrightarrow}K_C^1(\Lambda)\longrightarrow 0.$$ \end{prop} \begin{proof} The complex is acyclic by the previous result. We claim that if $P$ and $Q$ are projective $C$-bimodules, then $P\otimes_C Q$ is also a projective $C$-bimodule. Indeed $(C\otimes C)\otimes_C(C\otimes C)\cong C\otimes C\otimes C$ is a projective bimodule and the claim follows. Consequently $I\otimes_C\cdots\otimes_C I$ is a projective $C$-bimodule. Moreover, if $P$ is a projective $C$-bimodule it is clear that $\Lambda\otimes_C P\otimes_C \Lambda$ is a projective $\Lambda$-bimodule. \hfill $\diamond$ \bigskip \end{proof} Let $\Lambda$ be a $k$-algebra and $Z$ be a $\Lambda$-bimodule. We recall the following $$H_0(\Lambda, Z)\ = \ \Lambda \otimes_{\Lambda\otimes \Lambda^{\mathsf{op}}} Z\ = \ \Lambda\otimes_{\Lambda-\Lambda} Z\ = \ Z/\langle\lambda z - z\lambda\rangle $$ where $\langle\lambda z - z\lambda\rangle$ is the vector subspace of $Z$ generated by the set $\{\lambda z - z\lambda\}$ for all $\lambda\in\Lambda$ and $z\in Z$. Let $\Lambda$ be an algebra and let $C$ be a subalgebra. Let $U$ be a $C$-bimodule and let $\Lambda\otimes_C U \otimes_C \Lambda$ be the induced $\Lambda$-bimodule. The next result gives a decomposition of the Hochschild homology in degree zero of a cleft algebra $\Lambda=C\oplus I$ with coefficients in an induced bimodule. We provide a proof for further use. \begin{prop}\label{h0induced} Let $\Lambda=C\oplus I$ be a cleft algebra and let $U$ be a $C$-bimodule.
$$H_0(\Lambda,\ \Lambda\otimes_CU\otimes_C\Lambda) = H_0(C,U) \oplus H_0(C,\ I\otimes_CU)$$ \end{prop} \begin{proof} The mutual inverse isomorphisms are given by $$\begin{array}{llll} a\otimes u\otimes b&\mapsto\ \left(ba\right)_Cu\ &+\ &\left(ba\right)_I\otimes u,\\ u+ x\otimes v &\mapsto\ 1\otimes u \otimes 1\ &+\ &x\otimes v\otimes 1. \end{array} $$ \hfill $\diamond$ \bigskip \end{proof} We will next use the previous result for $U= I^{\otimes_{_C} n}$. Let $$I(n) = H_0(C, \ I^{\otimes_{_C} n}).$$ \begin{coro} Let $\Lambda=C\oplus I$ be a cleft algebra. There is a decomposition $$H_0\left(\Lambda, \ \Lambda \otimes_C I^{\otimes_{_C} n} \otimes_C \Lambda\right) = I(n)\ \oplus \ I(n+1).$$ \end{coro} \begin{prop}\label{torKcomplex} Let $\Lambda = C\oplus I $ be a cleft algebra where $I$ is a projective $C$-bimodule. The vector spaces $\mathsf{Tor}_*^{\Lambda\!-\!\Lambda}(K_C^1(\Lambda),\Lambda)$ are the homology spaces of the complex $$\cdots \stackrel{b}{\longrightarrow} I(n)\oplus I(n+1) \stackrel{b}{\longrightarrow}I(n-1)\oplus I(n)\stackrel{b}{\longrightarrow}\cdots \stackrel{b}{\longrightarrow} I(2)\oplus I(3) \stackrel{b}{\longrightarrow} I(1)\oplus I(2) \longrightarrow 0 $$ where $$b: I(n)\oplus I(n+1) \to I(n-1)\oplus I(n)$$ is as follows: \begin{itemize} \item If $z_1\otimes\dots\otimes z_n \in I(n)$, then \begin{align*} b(z_1\otimes\dots\otimes z_n) &= z_1\otimes\dots\otimes z_n \\ &+\sum_{i=1}^{n-1} (-1)^i z_1\otimes\dots \otimes z_iz_{i+1}\otimes \dots\otimes z_n \\ &+ (-1)^nz_n\otimes z_1\otimes\dots\otimes z_{n-1} \end{align*} where the first and the last terms belong to $I(n)$ and the middle sum belongs to $I(n-1)$. \item If $z_0\otimes\dots\otimes z_n \in I(n+1)$, then \begin{align*} b(z_0\otimes\dots\otimes z_n) &= z_0z_1\otimes\dots\otimes z_n \\ &+\sum_{i=1}^{n-1} (-1)^i z_0\otimes\dots \otimes z_iz_{i+1}\otimes \dots\otimes z_n \\ &+ (-1)^nz_n z_0\otimes\dots\otimes z_{n-1} \end{align*} which belongs to $I(n)$.
\end{itemize} \end{prop} \begin{proof} The formulas are obtained by applying the functor $H_0(\Lambda, -)$ to the projective resolution of $K_C^1(\Lambda)$ of Proposition \ref{projresK}, and by translating the differentials to the present setting through the isomorphisms provided in Proposition \ref{h0induced}. \hfill $\diamond$ \bigskip \end{proof} \begin{lemm}\label{Iodd0} Let $A$ and $B$ be $k$-algebras, let $C=A\times B$ and let $I$ be a $C$-bimodule of the form $I=M\oplus N$ where $M$ is a $B\!-\!A$-bimodule and $N$ is an $A\!-\!B$-bimodule. For $n$ odd, $I(n)=0$. \end{lemm} \begin{proof} First we notice that $M\otimes_CM=0=N\otimes_C N$ since for instance $m \otimes m' = m(1_A,0) \otimes m' = m\otimes (1_A,0)m' = m\otimes 0 =0$. Moreover $N\otimes_C M = N\otimes_B M$ and $M\otimes_C N = M\otimes_A N$. Consequently $$ I^{\otimes_{_C} n} = \left(\cdots \otimes_AN\otimes_BM\otimes_AN\otimes_BM\right)\ \oplus\ (\cdots \otimes_BM\otimes_AN\otimes_BM\otimes_AN)$$ with $n$ tensorands in each summand. In particular for $n$ odd we have $$ I^{\otimes_{_C} n} = \left(M\otimes_A\cdots \otimes_BM\otimes_AN\otimes_BM\right)\ \oplus\ (N\otimes_B\cdots \otimes_AN\otimes_BM\otimes_AN) $$ and we assert that $H_0(C, \ I^{\otimes_{_C} n}) =0$. Indeed $(1_A,0)x=0$ for every $x\in M$, while $x(1_A,0)=x$, and $(0,1_B)y=0$, while $y(0,1_B)=y$ for every $y\in N$. \hfill $\diamond$ \bigskip \end{proof} \begin{lemm} In the same situation as in the previous lemma, for $n=2m$, $$ I^{\otimes_{_C} n} =\left(N\otimes_BM\right)\otimes_A\cdots \otimes_A\left(N\otimes_BM\right)\ \oplus\ (M\otimes_AN)\otimes_B\cdots \otimes_B(M\otimes_AN)$$ $$= \left(N\otimes_BM\right)^{\otimes_{_A}m}\ \oplus \ \left(M\otimes_AN\right)^{\otimes_{_B}m}.$$ \end{lemm} \begin{coro}\label{even} Let $A$ and $B$ be $k$-algebras, let $C=A\times B$ and let $I$ be a $C$-bimodule of the form $I=M\oplus N$ where $M$ is a $B\!-\!A$-bimodule and $N$ an $A\!-\!B$-bimodule.
The following decomposition holds: $$I(2m)= H_0\left( A, \left(N\otimes_BM\right)^{\otimes_{_A}m}\right) \ \oplus\ H_0\left( B, \left(M\otimes_AN\right)^{\otimes_{_B}m}\right).$$ \end{coro} \begin{defi}\label{cyclicaction} Let $C_m=\langle t\mid t^m=1\rangle$ be a cyclic group of order $m$. The $kC_m$-module structures of $H_0\left( B, \left(M\otimes_AN\right)^{\otimes_{_B}m}\right)$ and $H_0\left( A, \left(N\otimes_BM\right)^{\otimes_{_A}m}\right)$ are given by the following action of $t$ by cyclic permutation: $$t(x_m\otimes y_m\otimes \cdots\otimes x_2\otimes y_2\otimes x_1\otimes y_1) = x_1\otimes y_1\otimes x_m\otimes y_m\otimes \cdots\otimes x_2\otimes y_2,$$ $$t(y_m\otimes x_m\otimes \cdots\otimes y_2\otimes x_2\otimes y_1\otimes x_1) = y_1\otimes x_1\otimes y_m\otimes x_m\otimes \cdots\otimes y_2\otimes x_2.$$ \end{defi} Note that the above actions are well defined neither on $\left(M\otimes_AN\right)^{\otimes_{_B}m}$ nor on $\left(N\otimes_BM\right)^{\otimes_{_A}m}$; on the other hand they are well defined on the $0$-degree homology of these bimodules. We provide two isomorphisms between these $kC_m$-modules that will be used in the proof of the next result: $$ \begin{array}{rlll} H_0\left( A, \left(N\otimes_BM\right)^{\otimes_{_A}m}\right) &\stackrel{\sigma}{\to} & H_0\left( B, \left(M\otimes_AN\right)^{\otimes_{_B}m}\right) \\ y_m\otimes x_m\otimes \cdots \otimes y_1\otimes x_1 & \mapsto & x_1\otimes y_m\otimes x_m\otimes \cdots \otimes y_1, \end{array} $$ $$ \begin{array}{rlll} H_0\left( B, \left(M\otimes_AN\right)^{\otimes_{_B}m}\right) & \stackrel{\tau}{\to}& H_0\left( A, \left(N\otimes_BM\right)^{\otimes_{_A}m}\right)\\ x_m\otimes y_m\otimes \cdots \otimes x_1\otimes y_1 & \mapsto & y_1\otimes x_m\otimes y_m\otimes\cdots \otimes x_1. \end{array} $$ Notice that the compositions $\sigma\tau$ and $\tau\sigma$ are the actions of $t$ on the corresponding vector spaces.
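For instance, for $m=2$ a direct check on elementary tensors (a verification we spell out for concreteness) shows that $\tau\sigma$ is indeed the action of $t$ on $H_0\left( A, \left(N\otimes_BM\right)^{\otimes_{_A}2}\right)$:

```latex
% Verification that \tau\sigma = t for m = 2:
\begin{align*}
\sigma(y_2\otimes x_2\otimes y_1\otimes x_1) &= x_1\otimes y_2\otimes x_2\otimes y_1,\\
\tau\sigma(y_2\otimes x_2\otimes y_1\otimes x_1)
  &= \tau(x_1\otimes y_2\otimes x_2\otimes y_1)
   = y_1\otimes x_1\otimes y_2\otimes x_2
   = t(y_2\otimes x_2\otimes y_1\otimes x_1).
\end{align*}
```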
Finally we recall that for a group $G$ and a $kG$-module $H$, the invariants (or fixed points) of the action are $H^G=\{x\in H \mid sx=x \mbox{ for all } s\in G\}.$ The coinvariants are $H_G=H/\langle sx-x\rangle$ where $\langle sx-x\rangle$ is the vector subspace of $H$ generated by the elements of the form $sx-x$ for all $s\in G$ and $x\in H$. If $G$ is finite and the characteristic of the field does not divide its order, then $H_G$ and $H^G$ are canonically isomorphic through the action of $\frac{1}{|G|}\sum_{s\in G}s$. \begin{theo} Let $\Lambda = \left( \begin{array}{cc} A & N \\ M & B\\ \end{array} \right)$ be a null-square projective algebra, and let $I=M\oplus N$. For $m\geq 0$, $$ \arraycolsep=0.3mm\def\arraystretch{2} \begin{array}{lll} \mathsf{Tor}^{\Lambda\!-\! \Lambda}_{2m+1}(K_C^1(\Lambda),\ \Lambda)&=&H_0\left( B, \left(M\otimes_AN\right)^{\otimes_{_B}m+1}\right)^{C_{m+1}} \mbox{\ \ and } \\ \mathsf{Tor}^{\Lambda\!-\! \Lambda}_{2m}(K_C^1(\Lambda),\ \Lambda)&= &H_0\left( B, \left(M\otimes_AN\right)^{\otimes_{_B}m+1}\right)_{C_{m+1}}. \end{array}$$ \end{theo} \begin{proof} We recall that for a null-square projective algebra $MN=0=NM$, hence $I^2=0$. Moreover $I(n)=0$ for $n$ odd, by Lemma \ref{Iodd0}.
Consequently the complex of Proposition \ref{torKcomplex} reduces to $$\cdots \stackrel{b}{\to} I(6)\stackrel{0}{\to} I(4)\stackrel{b}{\to} I(4)\stackrel{0}{\to} I(2)\stackrel{b}{\to} I(2)\to 0$$ where for $n=2m$ $$b(z_1\otimes\cdots\otimes z_n) = z_1\otimes\cdots\otimes z_n\ +\ z_{n}\otimes z_1\otimes\cdots\otimes z_{n-1}.$$ Furthermore, the matrix of $$ \begin{array}{lll} b:&H_0\left( A, \left(N\otimes_BM\right)^{\otimes_{_A}m}\right) \ \oplus\ H_0\left( B, \left(M\otimes_AN\right)^{\otimes_{_B}m}\right)\\ &\longrightarrow H_0\left( A, \left(N\otimes_BM\right)^{\otimes_{_A}m}\right) \ \oplus\ H_0\left( B, \left(M\otimes_AN\right)^{\otimes_{_B}m}\right) \end{array}$$ with respect to the decomposition of Corollary \ref{even} is $\left( \begin{array}{cc} 1 & \tau \\ \sigma & 1\\ \end{array} \right).$ Moreover, $$\begin{array}{ll} \mathop{\rm Ker}\nolimits b &=\{(u,v)\mid u+\tau (v) =0 = \sigma (u) + v\}\\ &=\{(u,-\sigma u)\mid u = \tau\sigma u\} \\ &=\{u\mid tu=u\}\\ &=H_0\left( A, \left(N\otimes_BM\right)^{\otimes_{_A}m}\right)^{C_m}. \end{array}$$ In order to compute $\mathop{\rm Coker}\nolimits b$, note that $(u,v)=-(\tau v, \sigma u)$ holds in $\mathop{\rm Coker}\nolimits b$. Hence $(u,0)=(0,-\sigma u)=(\tau\sigma (u), 0)$. This shows that the map $$H_0\left( A, \left(N\otimes_BM\right)^{\otimes_{_A}m}\right)_{C_m}\to \mathop{\rm Coker}\nolimits b$$ given by $u\mapsto (u,0)$ is well defined. Its inverse is given by $(u,v)\mapsto u-\tau(v)$. Hence $\mathop{\rm Coker}\nolimits b= H_0\left( A, \left(N\otimes_BM\right)^{\otimes_{_A}m}\right)_{C_m}$. \hfill $\diamond$ \bigskip \end{proof} Towards describing the long exact sequence mentioned above, we now consider some tools of homological algebra to compute $\mathsf{Tor}_*^{\Lambda\!-\!\Lambda}(\Lambda\otimes_C\Lambda, \ \Lambda)$.
The next result will be used for a null-square projective algebra $\Lambda = \left( \begin{array}{cc} A & N \\ M & B\\ \end{array} \right)$ and for the inclusion of algebras $C\otimes C^{\mathsf op} \subset\Lambda\otimes \Lambda^{\mathsf op}$, where $C=A\times B$. \begin{lemm}\label{tor induced} Let $F\subset D$ be an inclusion of $k$-algebras and suppose $D$ is projective as a left $F$-module. Let $U$ be a right $F$-module and let $U\!\!\uparrow^D=U\otimes_F D$ be the induced right module. Let $Z$ be a left $D$-module and let ${}_F\!\!\downarrow\!\! Z$ be the left $F$-module obtained by restricting the action to $F$. The following holds: $$\mathsf{Tor}^D_*(U\!\!\uparrow^D,Z)=\mathsf{Tor}^F_*(U, {}_F\!\!\downarrow\!\! Z).$$ \end{lemm} \begin{proof} The left hand side functor in the variable $Z$ is characterised by its universal property: \begin{itemize} \item $\mathsf{Tor}^D_0(U\!\!\uparrow^D,Z)= U\!\!\uparrow^D\otimes_D Z = U\otimes_F Z$, \item $\mathsf{Tor}^D_n(U\!\!\uparrow^D,Z)=0$ for $n>0$ if $Z$ is projective, \item A short exact sequence of $D$-modules provides a long exact sequence. \end{itemize} It is clear that the right hand side functor in the variable $Z$ verifies the same properties. Note that the second property is fulfilled precisely because we assume ${}_F\!\!\downarrow\!\! D$ is projective. \hfill $\diamond$ \bigskip \end{proof} \begin{lemm} Let $\Lambda$ be a null-square projective algebra and let $C=A\times B$. The $C$-bimodule $\Lambda\otimes\Lambda$ is projective. \end{lemm} \begin{proof} Note first that by hypothesis $M$ is a projective $B\!-\!A$-bimodule. It becomes a $C$-bimodule by extending the actions by zero; thus $M$ is a projective $C$-bimodule. The same holds for $N$, so $I=M\oplus N$ is a projective $C$-bimodule.
Consider the $C$-bimodule decomposition $$\Lambda\otimes \Lambda = (C\otimes C) \oplus (C\otimes I) \oplus (I\otimes C)\oplus (I\otimes I).$$ We assert that a projective $C$-bimodule is also projective as a left (or right) $C$-module. Indeed, the free rank-one $C$-bimodule $C\otimes C$ is free as a left (or right) $C$-module. This observation makes the proof of the assertion immediate. We infer that $I$ is projective as a left and as a right $C$-module. We record that if $P$ is a projective left $C$-module and $Q$ is a projective right $C$-module, the $C$-bimodule $P\otimes Q$ is a projective $C$-bimodule. Consequently the four terms of the above direct sum decomposition of the $C$-bimodule $\Lambda\otimes\Lambda$ are projective $C$-bimodules. \hfill $\diamond$ \bigskip \end{proof} \begin{theo} Let $\Lambda=\left( \begin{array}{cc} A & N \\ M & B\\ \end{array} \right)$ be a null-square projective algebra, and let $C=A\times B$. There is a decomposition $$\mathsf{Tor}^{\Lambda\!-\!\Lambda}_*(\Lambda\otimes_C\Lambda, \ \Lambda)= HH_*(A) \oplus HH_*(B).$$ \end{theo} \begin{proof} We consider the inclusion $C\otimes C^{\mathsf op} \subset\Lambda\otimes \Lambda^{\mathsf op}$. Lemma \ref{tor induced} with $U=C$ provides the following: $$\begin{array}{ll} \mathsf{Tor}^{\Lambda\!-\!\Lambda}_*(\Lambda\otimes_C\Lambda, \ \Lambda)&=\mathsf{Tor}^{\Lambda\!-\!\Lambda}_*(\Lambda\otimes_C C\otimes_C\Lambda, \ \Lambda) \\ &= \mathsf{Tor}^{C-C}_*\left(C, \ {}_{C}\!\!\downarrow\!\!\Lambda\!\!\downarrow_C\right)\\ &= H_*(C, \ {}_{C}\!\!\downarrow\!\!\Lambda\!\!\downarrow_C) \\ &= HH_*(C)\oplus H_*(C,M)\oplus H_*(C,N). \end{array}$$ We assert that $H_*(C,M)=H_*(C,N)=0$. Indeed, let $P_{\bullet} \to A$ be a projective resolution of the $A$-bimodule $A$, and analogously for $Q_\bullet \to B$.
Note that $P_\bullet \oplus Q_\bullet \to A\oplus B$ is a projective resolution of the $C$-bimodule $C$, where the $C$-bimodule structure of $P_\bullet$ is obtained by extending the action to $B$ by zero, and analogously for $Q_\bullet$. The functor $M\otimes_{C\!-\!C}-$ applied to $P_\bullet \oplus Q_\bullet$ gives the zero complex, by simple arguments already used in the proof of Lemma \ref{Iodd0}; hence $H_*(C,M)=0$. Analogously $H_*(C,N)=0$. Note that the assertion also follows from \cite[p. 173]{CARTANEILENBERG}. In order to prove $HH_*(C)=HH_*(A)\oplus HH_*(B)$, observe that the summands $A\otimes_{C\!-\!C}Q_\bullet$ and $B\otimes_{C\!-\!C}P_\bullet$ of $C\otimes_{C\!-\!C}(P_\bullet\oplus Q_\bullet)$ are zero for analogous reasons. \hfill $\diamond$ \bigskip \end{proof} The previous results and the exact sequence (\ref{theshort}) provide the following: \begin{theo}\label{longexactsequence} Let $\Lambda=\left( \begin{array}{cc} A & N \\ M & B\\ \end{array} \right)$ be a null-square projective algebra.
There is a long exact sequence as follows: $$ \arraycolsep=0.3mm\def\arraystretch{1.4} \begin{array}{llrllllllll} \dots\\ H_0 \left(A, (N\otimes_BM)^{\otimes\!_{_A} m+1}\right)^{C_{m+1}} &\to &HH_{2m+1}(A)&\oplus &HH_{2m+1}(B)&\to &HH_{2m+1}(\Lambda)&\to\\ H_0 \left(A, (N\otimes_BM)^{\otimes\!_{_A} m+1}\right)_{C_{m+1}} &\to &HH_{2m}(A)&\oplus &HH_{2m}(B)&\to &HH_{2m}(\Lambda)&\to\\ \dots\\ H_0 \left(A, (N\otimes_BM)^{\otimes\!_{_A} 3}\right)^{C_3} &\to &HH_5(A)&\oplus &HH_5(B)&\to &HH_5(\Lambda)&\to\\ H_0 \left(A, (N\otimes_BM)^{\otimes\!_{_A} 3}\right)_{C_3} &\to &HH_4(A)&\oplus &HH_4(B)&\to &HH_4(\Lambda)&\to\\ H_0 \left(A, (N\otimes_BM)^{\otimes\!_{_A} 2}\right)^{C_2}&\to &HH_3(A)&\oplus &HH_3(B)&\to &HH_3(\Lambda)&\to\\ H_0 \left(A, (N\otimes_BM)^{\otimes\!_{_A} 2}\right)_{C_2} &\to &HH_2(A)&\oplus &HH_2(B)&\to &HH_2(\Lambda)&\to\\ H_0 \left(A, (N\otimes_BM)\right) &\to &HH_1(A)&\oplus &HH_1(B)&\to &HH_1(\Lambda)&\to\\ H_0 \left(A, (N\otimes_BM)\right) &\to &HH_0(A)&\oplus &HH_0(B)&\to &HH_0(\Lambda)&\to 0.\\ \end{array}$$ \end{theo} \begin{coro}\label{invariantszzero} Let $\Lambda=\left( \begin{array}{cc} A & N \\ M & B\\ \end{array} \right)$ be a null-square projective algebra. If $HH_n(\Lambda)=0$ for $n$ large enough, then $$H_0\left( A, \left(N\otimes_BM\right)^{\otimes_{_A}n}\right)_{C_n}=H_0\left( B, \left(M\otimes_AN\right)^{\otimes_{_B}n}\right)^{C_n}=0$$ for $n$ large enough. \end{coro} \begin{proof} Hochschild homology is a functor from the category of algebras to the category of vector spaces. Let $\Lambda=C\oplus I$ where $C$ is a subalgebra of $\Lambda$ and $I$ is a two-sided ideal. In other words, there is an algebra surjection $\Lambda \to C$ which splits in the category of algebras; hence $HH_*(C)$ is a direct summand of $HH_*(\Lambda)$. Consequently, if $HH_n(\Lambda)=0$ for $n$ large enough, then the same holds for $HH_n(C)$. The long exact sequence of the previous theorem provides the result.
\hfill $\diamond$ \bigskip \end{proof} \begin{rema} The morphisms induced in homology by the inclusion $K^1_C(\Lambda)\to \Lambda\otimes_C\Lambda$ of the short exact sequence (\ref{theshort}) are zero. Indeed, if $f:M\to M'$ and $g:N\to N'$ are $C$-bimodule morphisms, we associate functorially a morphism between the corresponding short exact sequences (\ref{theshort}) for the corresponding algebras $\Lambda$ and $\Lambda'$. This induces in turn a functorial morphism between the corresponding long exact sequences of Theorem \ref{longexactsequence}. In particular, for $M'=N'=0$ we infer that the morphisms induced by the inclusion of (\ref{theshort}) factor through zero, hence they are zero. Consequently there are short exact sequences as follows for $m> 0$: \small $$0\to HH_{2m}(A)\oplus HH_{2m}(B)\to HH_{2m}(\Lambda)\to H_0 \left(A, (N\otimes_BM)^{\otimes\!_{_A} m}\right)^{C_{m}}\to 0$$ $$0\to HH_{2m+1}(A)\oplus HH_{2m+1}(B)\to HH_{2m+1}(\Lambda)\to H_0 \left(A, (N\otimes_BM)^{\otimes\!_{_A} m+1}\right)_{C_{m+1}}\to 0.$$ \normalsize For $m=0$ we obtain that $HH_0(A)\oplus HH_0(B)$ and $HH_0(\Lambda)$ are isomorphic; this can of course be verified by a direct computation. \end{rema} \section{\sf Han's conjecture for null-square projective algebras}\label{Han nullsquareprojective}\label{cuatro} Our first aim is to prove that if the algebras $A$ and $B$ are finite dimensional and basic, and if the invariants under the action of the cyclic groups $C_m$ on the spaces considered in Theorem \ref{longexactsequence} are zero, then the spaces themselves are zero. Let $A$ and $B$ be finite dimensional and basic algebras. Let $E$ and $F$ be complete sets of primitive orthogonal idempotents of $A$ and $B$ respectively.
If $k$ is perfect, then $$\mathop{\rm rad}\nolimits \left(B\otimes A^{\mathsf{op}}\right)=B\otimes \mathop{\rm rad}\nolimits A^{\mathsf{op}}+\mathop{\rm rad}\nolimits B\otimes A^{\mathsf{op}}$$ and $\{g\otimes e\}_{(g,e)\in F\times E}$ is a complete set of primitive orthogonal idempotents of $B\otimes A^\mathsf{op}$. Consequently $$\{Bg\otimes eA\}_{(g,e)\in F\times E}$$ is a complete set of representatives, without repetitions, of the isomorphism classes of indecomposable projective $B\!-\!A$-bimodules. Let \begin{equation}\label{M} {}_BM_A= \bigoplus_{(g,e)\in F\times E} {}_g m_e \left(Bg\otimes eA\right) \end{equation} be a finitely generated projective $B\!-\!A$-bimodule, where by the Krull-Schmidt Theorem, the integers ${}_g m_e$ are uniquely determined by $M$. Similarly, let \begin{equation}\label{N} {}_AN_B= \bigoplus_{(f,h)\in E\times F} {}_f n_h \left(Af\otimes hB\right) \end{equation} be a finitely generated projective $A\!-\!B$-bimodule. The next definition is pictured in Figure \ref{NM quiver} on page \pageref{NM quiver}. \begin{defi} In the situation considered above, the \emph{$(N,M)$-quiver} is defined as follows: its vertices are $E\cup F$, where we agree to distribute $E$ in a first horizontal floor and $F$ in a ground floor. There are two sorts of arrows: \begin{itemize} \item Horizontal, distributed into:\\ - first floor ones, which provide the Peirce $E$-quiver of $A$ (see Definition \ref{Equiver}), and \\ - ground floor ones, namely the Peirce $F$-quiver of $B$. \item Vertical, distributed into:\\ - down ones: there are ${}_g m_e$ arrows from $e$ to $g$, in one-to-one correspondence with the direct summands $Bg\otimes eA$ of $M$, and\\ - up ones, defined according to $N$ in the analogous way as for $M$. \end{itemize} \end{defi} We agree to write the sequence of arrows of a path from right to left, as for composition of morphisms.
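A minimal example may help fix ideas (our illustration, with data not drawn from the general discussion above): take $A=B=k$, $E=\{e\}$, $F=\{g\}$ and ${}_g m_e={}_e n_g=1$, so that $M=Bg\otimes eA\cong k$ and $N=Ae\otimes gB\cong k$. The $(N,M)$-quiver then has no horizontal arrows, one down arrow and one up arrow:

```latex
% The (N,M)-quiver for A = B = k, M = N = k: the vertex e sits on the
% first floor, g on the ground floor; there are no horizontal arrows
% since the Peirce quivers of A and B are reduced to one vertex.
\[
\xymatrix{
e \ar@/^/[d] \\
g \ar@/^/[u]
}
\]
```

Every balanced path in this quiver simply alternates floors; for each $m\geq 1$ there is exactly one $E$-vertical balanced cycle of revolution number $m$, obtained by going down and up $m$ times starting with the down arrow.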
Recall that the \emph{length} of a path is the length of the corresponding sequence, and that a \emph{cycle} is a path which starts and ends at the same vertex. Next we define some particular kinds of paths in the $(N,M)$-quiver. \begin{defi}\label{balanced} Let $\gamma$ be a path of the $(N,M)$-quiver. \begin{itemize} \item $\gamma$ is \emph{balanced} if it does not contain two consecutive horizontal arrows. In case $\gamma$ starts and ends at the same floor, its \emph{revolution number} is half the number of vertical arrows of $\gamma$. \item $\gamma$ is $E$-\emph{balanced} if it is balanced and it starts and ends at the first floor, that is, at $E$-vertices. The set of $E$-balanced paths with revolution number $m$ is denoted by $\mathsf{P}^E_m$. \item $\gamma$ is an $E$-\emph{vertical balanced cycle} if it is an $E$-balanced cycle whose first arrow is down vertical. The set of $E$-vertical balanced cycles with revolution number $m$ is denoted by $\mathsf{CV}^E_m$. \end{itemize} \end{defi} \begin{theo}\label{invariantszero} Let $A$ and $B$ be basic finite dimensional algebras over a perfect field $k$, and let $M$ and $N$ be projective bimodules as above. Let $C_m$ be the cyclic group of order $m$ with generator $t$ acting by cyclic permutation on $H_0 \left(A, (N\otimes_BM)^{\otimes_A m}\right)$ as given in Definition \ref{cyclicaction}. If $H_0 \left(A, (N\otimes_BM)^{\otimes_A m}\right)^{C_m}=0$, then $H_0 \left(A, (N\otimes_BM)^{\otimes_A m}\right)=0.$ \end{theo} \begin{proof} We assert that $H_0 \left(A, (N\otimes_BM)^{\otimes_A m}\right)$ is a direct sum of vector spaces indexed by $\mathsf{CV}^E_m$. In order to provide an outline of the argument, let us consider the particular case $$N= (Af\otimes hB) \oplus (Af'\otimes h'B) \ \mbox{ and }\ M=(Bg\otimes eA) \oplus (Bg'\otimes e'A).$$ Notice that the $(N,M)$-quiver has two down arrows and two up arrows.
Then \begin{equation}\label{NM} \begin{array}{rrcl} N\otimes_B M= & (Af\otimes hBg \otimes eA)&\oplus & (Af\otimes hBg' \otimes e'A)\ \ \oplus\\ & (Af'\otimes h'Bg \otimes eA) &\oplus & (Af'\otimes h'Bg' \otimes e'A) \end{array} \end{equation} and \begin{equation}\label{T1NM} \begin{array}{lrlll} H_0 \left(A, (N\otimes_BM)\right)=&(eAf\otimes hBg)&\oplus &(e'Af\otimes hBg')\ \ \oplus\\ & (eAf'\otimes h'Bg) &\oplus & (e'Af'\otimes h'Bg'). \end{array} \end{equation} If the first summand is non zero, then $eAf\neq 0$ and $hBg\neq 0$, and by definition there are corresponding arrows in the Peirce $E$- and $F$-quivers, respectively from $f$ to $e$ and from $g$ to $h$. We associate to this non zero summand the following $E$-vertical balanced cycle with revolution number $1$: \begin{itemize} \item[--] the first vertical down arrow from $e$ to $g$ corresponds to the projective direct summand $Bg\otimes eA$ of $M$, \item[--] the subsequent horizontal arrow at the ground floor from $g$ to $h$, due to $hBg\neq 0$, \item[--] next the vertical up arrow from $h$ to $f$ which corresponds to the projective bimodule $Af\otimes hB$, \item[--] finally the horizontal arrow at the first floor from $f$ to $e$, due to $eAf\neq 0$. \end{itemize} The decomposition of $H_0\left(A, (N\otimes_B M)^{\otimes_A 2}\right)$ contains the following direct summand \begin{equation}\label{T2NM} \begin{array}{llll} (eAf'\otimes h'Bg' \otimes e'Af \otimes hBg).
\end{array} \end{equation} It corresponds to the $E$-vertical balanced cycle $\gamma$ with revolution number $2$, described by the following sequence of vertices (from right to left) and drawn below: $$e,f',h',g',e',f,h,g,e$$ \begin{figure}[h] \begin{center} \begin{tikzpicture}[ x={(-0.35cm,-0.35cm)}, y={(1cm,0cm)}, z={(0cm,1cm)}, font=\tiny, ] \fill[black!7] (-4,0,0) -- (1,0,0) -- (1,6,0) -- (-4,6,0) -- cycle; \fill[black!7] (-4,0,2) -- (1,0,2) -- (1,6,2) -- (-4,6,2) -- cycle; \draw[thick] (-4,0,0) -- (1,0,0) -- (1,6,0); \draw[thick] (-4,0,2) -- (1,0,2) -- (1,6,2); \coordinate (g') at (-2,2,0); \coordinate (h') at (-3,3,0); \coordinate (h) at (-1,3,0); \coordinate (g) at (-2,4,0); \coordinate (e') at (-2,2,2); \coordinate (f') at (-3,3,2); \coordinate (f) at (-1,3,2); \coordinate (e) at (-2,4,2); \foreach \n in {h, g, e, f} { \fill (\n) circle(1.5pt); \node[below right] at (\n) {$\n$}; } \foreach \n in {h', g', e', f'} { \fill (\n) circle(1.5pt); \node[above left] at (\n) {$\n$}; } \begin{scope}[ shorten <=3pt, shorten >=3pt, thick ] \foreach \a / \b in {g/h, f/e', g'/h', f'/e} \draw[->] (\a) -- (\b); \foreach \a / \b in {h/f, h'/f'} { \draw[shorten >=0pt] (\a) -- (\a |- 1,0,2); \draw[->, shorten <=0pt, densely dashed] (\a |- 1,0,2) -- (\b); } \foreach \a / \b in {e'/g', e/g} { \draw[shorten >=0pt, densely dashed] (\a) -- (\a |- 1,0,2); \draw[->, shorten <=0pt] (\a |- 1,0,2) -- (\b); } \node at (0.5,0.25,0) {$B$}; \node at (0.5,0.25,2) {$A$}; \end{scope} \end{tikzpicture} \end{center} \caption{$(N,M)$\sf -quiver}\label{NM quiver} \end{figure} The direct summands of $H_0\left(A, (N\otimes_B M)^{\otimes_A 2}\right)$ are originated by the indecomposable direct summands of $M$ and $N$. The vertical arrows of the $E$-vertical balanced cycle keep track of them. For instance the vertical arrow from $e'$ to $g'$ corresponds to the projective direct summand $Bg'\otimes e'A$ of $M$. 
Note that the $E$-vertical balanced cycle drawn above is not the square of a vertical balanced cycle of revolution number $1$. On the other hand, $E$-vertical balanced cycles which are powers of shorter ones do exist. Let $\gamma\in \mathsf{CV}^E_m$. We consider the non zero vector subspaces of $A$ and $B$ corresponding to the horizontal arrows of $\gamma$, which belong to the respective Peirce $E$- and $F$-quivers. Let $V_\gamma$ be their tensor product, obtained by following the order of the arrows of $\gamma$. Conversely, as sketched above, a non zero vector space direct summand of $H_0\left(A, (N\otimes_B M)^{\otimes_A m}\right)$ determines an $E$-vertical balanced cycle of revolution number $m$. Then $$H_0\left(A, (N\otimes_B M)^{\otimes_A m}\right)=\bigoplus_{\gamma\in \mathsf{CV}^E_m} V_\gamma.$$ We now describe the transported action of $C_m$ on $\mathsf{CV}^E_m$. Let $\gamma$ be an $E$-vertical balanced cycle at a vertex $e$, of revolution number $m$. Let $\gamma'$ be the $E$-balanced path obtained from $\gamma$ by removing at its beginning the balanced oriented path $\alpha$ defined as follows: $\alpha$ is the first arrow of $\gamma$ followed by the next ones until reaching the source of the second vertical down arrow of $\gamma$. Note that $\alpha$ begins at $e$, it has revolution number $1$, and in general $\alpha$ is not a cycle. The target of $\gamma'$ is still $e$, and we have $t\cdot\gamma=\alpha\gamma'$. Suppose now that $H_0\left(A, (N\otimes_B M)^{\otimes_A m}\right)\neq 0$, that is $\mathsf{CV}^E_m\neq\emptyset$. Let $\gamma\in \mathsf{CV}^E_m$ and let $\underline{\gamma}$ be the $E$-vertical balanced cycle of smallest length such that $\gamma=(\underline{\gamma})^l$; in particular $\underline{\gamma}$ is not a power of a shorter $E$-vertical balanced cycle. The stabilizer subgroup of $\gamma$ in $C_m$ is generated by $t^{\frac{m}{l}}$. That is, $t^{\frac{m}{l}}\cdot\gamma = \gamma$ and $\{t^i\cdot \gamma\}_{i=0,\dots, \frac{m}{l}-1}$ are distinct.
Let $k[\mathsf{CV}^E_m]$ be the vector space with basis $\mathsf{CV}^E_m$. The trace element $\hat{\gamma} = \gamma +t\cdot\gamma+t^2\cdot\gamma+\cdots +t^{\frac{m}{l}-1}\cdot\gamma\in k[\mathsf{CV}^E_m]$ is a sum of different basis elements, hence $\hat{\gamma}\neq 0$. Moreover $t\cdot\hat{\gamma} = \hat{\gamma}$. We will infer from $\underline{\gamma}$ a non zero element of $H_0\left(A, (N\otimes_B M)^{\otimes_A m}\right)^{C_m}$. Let $\underline{u}$ be a non zero element of $V_{\underline{\gamma}}$, and let $u=\underline{u}^{\otimes l}\in V_\gamma$. This way $u\neq 0$ and $t^{\frac{m}{l}}\cdot u=u$. Moreover $t^i\cdot u\in V_{t^i\cdot\gamma}$. Observe that the vector spaces $V_{t^i\cdot\gamma}$ are distinct direct summands for $i=0,\dots, \frac{m}{l}-1$ since the corresponding $E$-vertical balanced cycles are different. Consequently $\hat{u}=u+t\cdot u+\dots +t^{\frac{m}{l}-1}\cdot u\neq 0$. Moreover $t\cdot\hat{u}=\hat{u}$, hence $H_0\left(A, (N\otimes_B M)^{\otimes_A m}\right)^{C_m}\neq 0$. \hfill $\diamond$ \bigskip \end{proof} \begin{theo}\label{NMNMNM=0} Let $\Lambda=\left( \begin{array}{cc} A & N \\ M & B\\ \end{array} \right)$ be a null-square projective algebra where $A$ and $B$ are basic finite dimensional algebras over a perfect field $k$, and let $M$ and $N$ be finitely generated projective bimodules, given as in (\ref{M}) and (\ref{N}). If $HH_n(\Lambda)=0$ for $n$ large enough, then $H_0 \left(A, (N\otimes_BM)^{\otimes\!_{_A}n}\right)=0$ for all $n>0$ and $\left(N\otimes_B M\right)^{\otimes\!_{_A}n}=0$ for $n$ large enough. \end{theo} \begin{proof} The hypothesis that the Hochschild homology of $\Lambda$ vanishes in large enough degrees implies, by Corollary \ref{invariantszzero}, that $H_0\left(A, (N\otimes_B M)^{\otimes_A m}\right)^{C_m}=0$ for $m$ large enough. By Theorem \ref{invariantszero}, we infer $H_0\left(A, (N\otimes_B M)^{\otimes_A m}\right)=0$ for the same $m$'s, hence $\mathsf{CV}^E_m=\emptyset$ for $m$ large enough.
On the other hand, if $\mathsf{CV}^E_{n_0}\neq\emptyset$ for some $n_0$, then $\mathsf{CV}^E_{rn_0}\neq\emptyset$ for all $r>0$, since powers of an $E$-vertical balanced cycle are again such cycles; this would contradict the previous statement. Hence $\mathsf{CV}^E_n=\emptyset$ for all $n>0$. As a consequence $H_0\left(A, (N\otimes_B M)^{\otimes_A n}\right)=0$ for all $n>0$. We assert that, in the same way as in the proof of Theorem \ref{invariantszero}, $\left(N\otimes_B M\right)^{\otimes_A m}$ is a direct sum of non zero vector spaces which are in one-to-one correspondence with $\mathsf{P}^E_m$ (see Definition \ref{balanced}). For instance in the decomposition (\ref{NM}), the first summand $Af\otimes hBg \otimes eA$ corresponds to the $E$-balanced paths which contain the vertical down arrow from $e$ to $g$ and the vertical up arrow from $h$ to $f$. More precisely, there is a subsequent decomposition $$Af\otimes hBg \otimes eA = \bigoplus_{y,x\in E} yAf\otimes hBg \otimes eAx,$$ and for each non zero summand $yAf\otimes hBg \otimes eAx$, the $E$-balanced path is determined by the sequence of vertices $y,f,h,g,e,x.$ In particular $\left(N\otimes_B M\right)^{\otimes_A m}=0$ if and only if $\mathsf{P}_m^E=\emptyset$. We have shown before that the $(N,M)$-quiver has no $E$-vertical balanced cycles. Since the $(N,M)$-quiver is finite, the $E$-balanced paths have a maximal length. Then $\mathsf{P}_n^E=\emptyset$ for $n$ large enough, and $\left(N\otimes_B M\right)^{\otimes\!_{_A}n}=0$ for the same set of $n$'s. \hfill $\diamond$ \bigskip \end{proof} The long exact sequence of Theorem \ref{longexactsequence} then provides the following \begin{coro}\label{iguales} Let $\Lambda=\left( \begin{array}{cc} A & N \\ M & B\\ \end{array} \right)$ be a null-square projective algebra where $A$ and $B$ are basic finite dimensional algebras over a perfect field $k$, and where $M$ and $N$ are finitely generated projective bimodules.
If $HH_n(\Lambda)=0$ for $n$ large enough, then for all $n$ $$HH_n(\Lambda)= HH_n(A)\oplus HH_n(B).$$ \end{coro} Our next aim is to provide a tool for bounding the global dimension of a null-square projective algebra from above. For this purpose we first briefly recall the \emph{mapping cone} construction. Let $(C_\bullet,c) = \{C_n\stackrel{c_n}{\to} C_{n-1}\}_{n\in \mathbb{Z}}$ and $(D_\bullet,d)=\{D_n\stackrel{d_n}{\to} D_{n-1}\}_{n\in \mathbb{Z}}$ be complexes with differentials $c$ and $d$. Let $f:C_\bullet \to D_\bullet$ be a map of complexes. Let $C_\bullet[1]$ be the complex defined by $C_n[1]= C_{n-1}$. There exists a complex $(\mathsf{co}(f)_\bullet,e)$ called the mapping cone of $f$, and a short exact sequence of complexes $$0\to D_\bullet\to \mathsf{co}(f)_\bullet \to C_\bullet[1] \to 0$$ such that the connecting homomorphism in the long exact sequence in homology is the morphism induced by $f$. In particular, $f$ induces isomorphisms in homology (\emph{i.e.} $f$ is a \emph{quasi-isomorphism}) if and only if the mapping cone complex is acyclic. Actually $\mathsf{co}(f)_n=C_{n-1}\oplus D_n$ with differential $e=\left( \begin{array}{rr} -c & 0 \\ f& d \\ \end{array} \right)$; note that the change of sign for $c$ guarantees $e^2=0$, since $fc=df$. We simplify the tensor product notation as follows: let $U$ be a $C\!-\! B$-bimodule and let $V$ be a $B\!-\!A$-bimodule; we will write $UV$ instead of $U\otimes_B V$ and $VU$ instead of $V\otimes_A U$. \begin{theo}\label{null-squaremodulefgd} Let $\Lambda=\left( \begin{array}{cc} A & N \\ M & B\\ \end{array} \right)$ be a null-square projective algebra where $A$ and $B$ are $k$-algebras, and $M$ and $N$ are projective $B\!-\! A$- and $A\!-\!B$-bimodules respectively. Let $X$ be a left $A$-module and $P_\bullet \to X$ be a projective resolution.
Associated to $P_\bullet \to X$, there is a $\Lambda$-projective resolution $Q_\bullet \to (X\rightleftharpoons 0)$ such that if $P_\bullet \to X$ is finite and if $\left(N\otimes_B M\right)^{\otimes\!_{_A}n}=0$ for $n$ large enough, then $Q_\bullet \to (X\rightleftharpoons 0)$ is finite. \end{theo} \begin{proof} We define the modules of $Q_\bullet$ as follows: \vskip5mm {\footnotesize \hskip-1cm \renewcommand{\arraystretch}{2.6} $\begin{array}{lllll} Q_0= \left(P_0 \underset{0} {\overset{1}{\rightleftharpoons}} MP_0\right)\\ Q_1= \left(P_1 \underset{0} {\overset{1}{\rightleftharpoons}} MP_1\right) \oplus \left(NMP_0 \underset{1} {\overset{0}{\rightleftharpoons}} MP_0\right)\\ Q_2= \left(P_2 \underset{0} {\overset{1}{\rightleftharpoons}} MP_2\right) \oplus \left(NMP_1 \underset{1} {\overset{0}{\rightleftharpoons}} MP_1\right)\oplus \left(NMP_0 \underset{0} {\overset{1}{\rightleftharpoons}} M(NM)P_0\right)\\ Q_3 = \left(P_3 \underset{0} {\overset{1}{\rightleftharpoons}} MP_3\right) \oplus \left(NMP_2 \underset{1} {\overset{0}{\rightleftharpoons}} MP_2\right)\oplus \left(NMP_1 \underset{0} {\overset{1}{\rightleftharpoons}} M(NM)P_1\right)\oplus \left((NM)^2P_0 \underset{1} {\overset{0}{\rightleftharpoons}} M(NM)P_0\right)\\ Q_4 = \left(P_4 \underset{0} {\overset{1}{\rightleftharpoons}} MP_4\right) \oplus \left(NMP_3 \underset{1} {\overset{0}{\rightleftharpoons}} MP_3\right)\oplus \left(NMP_2 \underset{0} {\overset{1}{\rightleftharpoons}} M(NM)P_2\right) \oplus \\ \left((NM)^2P_1 \underset{1} {\overset{0}{\rightleftharpoons}} M(NM)P_1\right)\oplus \left((NM)^2P_0 \underset{0} {\overset{1}{\rightleftharpoons}} M(NM)^2P_0\right)\\ \vdots\\ Q_{2m}=\left(P_{2m} \underset{0} {\overset{1}{\rightleftharpoons}} MP_{2m}\right) \oplus \left(NMP_{2m-1} \underset{1} {\overset{0}{\rightleftharpoons}} MP_{2m-1}\right)\oplus\cdots \oplus \left((NM)^{m}P_0 \underset{0} {\overset{1}{\rightleftharpoons}} M(NM)^{m}P_0\right)\\ Q_{2m+1}=\left(P_{2m+1} \underset{0} {\overset{1}{\rightleftharpoons}}
MP_{2m+1}\right) \oplus \left(NMP_{2m} \underset{1} {\overset{0}{\rightleftharpoons}} MP_{2m}\right)\oplus\cdots \oplus \left((NM)^{m+1}P_0 \underset{1} {\overset{0}{\rightleftharpoons}} M(NM)^{m}P_0\right)\\ \vdots\\ \end{array}$} We observe that the $Q_i$ are projective $\Lambda$-modules. Indeed, first we note that the free rank one bimodule $B\otimes A$ is projective as a left (or right module), hence any projective bimodule (for instance $M$) is projective as a left (or right module). Consequently for any left $A$-module $X$, the left $B$-module $M\otimes_A X$ is projective. Finally, Lemma \ref{projectiveone} shows that each direct summand of $Q_i$ is a projective $\Lambda$-module. The differentials are defined in the figure below: \begin{landscape} \begin{tikzcd}[column sep=0.15] \vdots & & \vdots & & \vdots & & \vdots & & \vdots & & \vdots & & \vdots & & \vdots & & \vdots & & \vdots \\ (NM)^2P_0 \ar[ddrr, swap, "1"] \ar[rrrrrrrrrrrrrrrrrr, rightharpoonup, bend left=8] & \oplus & (NM)^2P_1 \ar[dd, "-1\mbox{\tiny $\otimes$}p_1"] & \oplus & NMP_2 \ar[dd, "1\mbox{\tiny $\otimes$}p_2"] \ar[rrrrrrrrrr, rightharpoonup, bend left=10] \ar[ddrr, swap, "1"] & \oplus & NMP_3 \ar[dd, "-1\mbox{\tiny $\otimes$}p_3"] &\oplus & P_4 \ar[dd, "{p_4}"] & \rightharpoonup & MP_4 \ar[dd, "1\mbox{\tiny $\otimes$}p_4"] & \oplus & MP_3 \ar[llllll, , rightharpoonup, bend left=20] \ar[dd, "-1\mbox{\tiny $\otimes$}p_3"] \ar[ddll, "1"] & \oplus & M(NM)P_2 \ar[dd, "1\mbox{\tiny $\otimes$}p_2"] & \oplus & M(NM)P_1 \ar[dd, "-1\mbox{\tiny $\otimes$}p_1"] \ar[ddll, "1"] \ar[llllllllllllll, rightharpoonup, bend left=9] & \oplus & M(NM)^2P_0 \\ & & & & & & & & & & & & & & & & & & \\ & & (NM)^2P_0 & \oplus & NMP_1 \ar[dd, "1\mbox{\tiny $\otimes$}p_1"] \ar[rrrrrrrrrr, rightharpoonup, bend left=15] \ar[ddrr, swap, "1"] & \oplus & NMP_2 \ar[dd, "-1\mbox{\tiny $\otimes$}p_2"] &\oplus & P_3 \ar[dd, "{p_3}"] & \rightharpoonup & MP_3 \ar[dd, "1\mbox{\tiny $\otimes$}p_3"] & \oplus & MP_2 \ar[llllll, 
rightharpoonup, bend left=20] \ar[dd, "-1\mbox{\tiny $\otimes$}p_2"] \ar[ddll, "1"] & \oplus & M(NM)P_1 \ar[dd, "1\mbox{\tiny $\otimes$}p_1"] & \oplus & M(NM)P_0 \ar[ddll, "1"] \ar[llllllllllllll, rightharpoonup, bend left=10] & & \\ & & & & & & & & & & & & & & & & & & \\ & & & & NMP_0 \ar[rrrrrrrrrr, rightharpoonup, bend left=10] \ar[ddrr, swap, "1"] & \oplus & NMP_1 \ar[dd, "-1\mbox{\tiny $\otimes$}p_1"] &\oplus & P_2 \ar[dd, "{p_2}"] & \rightharpoonup & MP_2 \ar[dd, "1\mbox{\tiny $\otimes$}p_2"] & \oplus & MP_1 \ar[llllll, rightharpoonup, bend left=20] \ar[dd, "-1\mbox{\tiny $\otimes$}p_1"] \ar[ddll, "1"] & \oplus & M(NM)P_0 & & & & \\ & & & & & & & & & & & & & & & & & & \\ & & & & & & NMP_0 & \oplus & P_1 \ar[dd, "{p_1}"] & \rightharpoonup & MP_1 \ar[dd, "1\mbox{\tiny $\otimes$}p_1"] & \oplus & MP_0 \ar[llllll, rightharpoonup, bend left=20] \ar[ddll, "1"] & & & & & & \\ & & & & & & & & & & & & & & & & & & \\ & & & & & & & & P_0 \ar[d, "{p_0}"] & \rightharpoonup & MP_0 \ar[d, "1\mbox{\tiny $\otimes$}p_0"] & & & & & & & & \\ & & & & & & & & X \ar[d] & \rightharpoonup & 0 \ar[d] & & & & & & & & \\ & & & & & & & & 0 & & 0 & & & & & & & & \end{tikzcd} \end{landscape} It is immediate to check that the differentials are morphisms of $\Lambda$-modules, that is, the corresponding squares commute (see Definition \ref{categoryS} and Proposition \ref{modulesandcategoryS}). The column with $X$ in the bottom is the projective resolution $P_\bullet \to X$. We observe that the two columns on its right give the mapping cone of the identity of the complex $(MP_\bullet, -1{\tiny\otimes}p)$. Since the identity is an isomorphism, the mapping cone is exact. Similarly, the next two columns on the right provide the mapping cone of the identity for the complex $(M(NM)P_\bullet, -1{\tiny\otimes}p)$, and so forth. The two columns on the left of $P_\bullet \to X$ correspond to the mapping cone of the identity of the complex $(NMP_\bullet, 1{\scriptsize\otimes}p)$. 
The next two columns on the left are the mapping cone of the identity of $\left((NM)^2P_\bullet, 1{\tiny\otimes}p\right)$, and so forth. Consequently $Q_\bullet$ is a resolution of $X\rightleftharpoons 0$ by projective $\Lambda$-modules. Let $r$ be an integer such that $(NM)^i=0$ for $i>r$. Moreover let $l$ be an integer such that $P_j=0$ for $j>l$. For a given $m$, the module $Q_{m}$ is the direct sum of vector spaces of the form $(NM)^iP_j$ for $2i+j=m$ or $2i+j=m+1$, and of vector spaces of the form $M(NM)^iP_j$ for $2i+j=m$ or $2i+j=m+1$. Let $m>2r+l$. In case $2i+j=m$ or $m+1$, either $i>r$ or $j>l$ since otherwise $2i+j\leq 2r+l<m$. Hence $Q_m=0$ for all $m>2r+l$. \hfill $\diamond$ \bigskip \end{proof} \begin{theo}\label{smooth} Let $k$ be a perfect field and let $\Lambda=\left( \begin{array}{cc} A & N \\ M & B\\ \end{array} \right)$ be a finite dimensional null-square projective algebra where $A$ and $B$ are smooth. If $(NM)^n=0$ for large enough $n$, then $\Lambda$ is smooth. \end{theo} \begin{proof} The complete list of simple $\Lambda$-modules is $ \{S \underset{} {\overset{}{\rightleftharpoons}} 0 \}\bigcup \{0 \underset{} {\overset{}{\rightleftharpoons}} T\}$ where $S$ and $T$ are simple modules over $A$ and $B$ respectively, see Proposition \ref{simples}. The previous theorem shows that $S \underset{} {\overset{}{\rightleftharpoons}} 0$ is of finite projective dimension. The analogous theorem holds for $\Lambda$-modules of the form $0 \underset{} {\overset{}{\rightleftharpoons}} Y$ where $Y$ is a $B$-module. Then the simple modules $0 \underset{} {\overset{}{\rightleftharpoons}}T$ are also of finite projective dimension. \hfill $\diamond$ \bigskip \end{proof} \begin{theo} Let $k$ be a perfect field. Any finite dimensional null-square projective $k$-algebra built on the class ${\mathcal H}$ of basic $k$-algebras verifying Han's conjecture also belongs to ${\mathcal H}$. 
\end{theo} \begin{proof} Let $\Lambda=\left( \begin{array}{cc} A & N \\ M & B\\ \end{array} \right)$, where $A$ and $B$ are finite dimensional basic $k$-algebras which belong to ${\mathcal H}$, and $M$ and $N$ are projective bimodules. Suppose $HH_n(\Lambda)=0$ for $n$ large enough. Then by Corollary \ref{iguales}, $HH_n(A)$ and $HH_n(B)$ vanish for the same set of $n$'s, hence $A$ and $B$ are smooth. Moreover, by Theorem \ref{NMNMNM=0} we have $(NM)^n=0$ for $n$ large enough. Theorem \ref{smooth} then shows that $\Lambda$ is smooth. \hfill $\diamond$ \bigskip \end{proof} \begin{rema} Just as in Remark \ref{agree}, we observe that according to Corollary \ref{iguales} this result agrees with the property proved by B. Keller in \cite[2.5]{KELLER}, namely that the Hochschild homology of a finite dimensional smooth algebra over a perfect field is concentrated in degree zero. \end{rema} \section{\sf Gabriel quiver and relations of a null-square projective algebra} Let $A$ be a finite dimensional algebra such that $A/\mathop{\rm rad}\nolimits A$ is a product of copies of $k$; in other words, $A$ is basic (equivalently, $A$ is Morita reduced) and sober (that is, the algebra of $A$-endomorphisms of each simple $A$-module is just $k$). Let $E$ be a complete system of primitive and orthogonal idempotents. The set of vertices of the \emph{Gabriel quiver} $Q_A$ is $E$; the number of arrows from $x$ to $y$ is the dimension of the vector space $y (\mathop{\rm rad}\nolimits A / \mathop{\rm rad}\nolimits^2 A)x$. It is well known that $Q_A$ is canonical, in the sense that $Q_A$ does not depend on the choice of $E$. Let $Q$ be a quiver with finite set of vertices $Q_0$ and set of arrows $Q_1$. The vector space $kQ_0$ is endowed with a semisimple algebra structure where $Q_0$ is a complete set of primitive orthogonal idempotents. Note that $kQ_0$ is basic and sober. The vector space $kQ_1$ is a $kQ_0$-bimodule in the natural way.
The \emph{path algebra} $kQ$ is by definition the tensor algebra $T_{kQ_0}(kQ_1)$; it has a canonical basis given by the oriented paths of $Q$. The universal property of $kQ$ is as follows: any algebra map $\varphi : kQ\to X$ is determined by an algebra map $\varphi_{0}: kQ_0\to X$ (that is, a set map from $Q_0$ to a complete system of orthogonal idempotents of $X$) and a $kQ_0$-bimodule map $\varphi_1 : kQ_1\to X$ (the structure of $X$ as a $kQ_0$-bimodule being inferred from $\varphi_0$). A finite dimensional algebra $A$ as above can be \emph{presented}, namely there exists a (non canonical) algebra surjection $kQ_A\to A$ whose kernel $I$ is an admissible two-sided ideal, that is, there exists a positive integer $m$ such that $F^m\subset I\subset F^2$, where $F$ is the two sided ideal generated by $(Q_A)_1$. Moreover, the ideal $I$ decomposes as $ \oplus_{x,y\in E}\ yIx$ since $(Q_A)_0$ is complete. The system of generators $R$ of $I$ considered in a presentation is \emph{adapted}, that is, $R$ is graded with respect to this decomposition; its elements are called relations. Note that any system of generators $R'$ gives rise to a graded one, namely $R= \bigsqcup_{x,y \in E} yR'x$, where for a set of paths $Z$, we denote by $yZx$ the paths of $Z$ starting at $x$ and ending at $y$. Let $\Lambda=\left( \begin{array}{cc} A & N \\ M & B \\ \end{array} \right)$ be a finite dimensional null-square projective algebra, where $A$ and $B$ are basic and sober, with respective presentations $(Q_A,R_A)$ and $(Q_B,R_B)$, and where the projective bimodules $M$ and $N$ are given as in (\ref{M}) and (\ref{N}). \begin{lemm} The Gabriel quiver of $\Lambda$ is the disjoint union of $Q_A$, $Q_B$ and new arrows as follows: \begin{itemize} \item ${}_g m_e $ arrows from $e\in E$ to $g\in F$, which we call \emph{down arrows}, \item ${}_f n_h $ arrows from $h\in F$ to $f\in E$, which we call \emph{up arrows}.
\end{itemize} \end{lemm} \begin{proof} The description of the Jacobson radical of $\Lambda$ given in the proof of Lemma \ref{simples} immediately provides the result. \hfill $\diamond$ \bigskip \end{proof} Let $T_{A\times B}(M\oplus N)$ be the tensor algebra of the $A\times B$-bimodule $M\oplus N$, where, as already mentioned, the given actions are extended by zero in order to consider $M$ and $N$ as $A\times B$-bimodules. For instance we infer $M\otimes_{A\times B} M= N\otimes_{A\times B} N=0$. The next two results are easy to prove, using both the fact that $M$ and $N$ are projective bimodules and the universal properties of the algebras involved. \begin{lemm}\label{iso} There is an algebra isomorphism $\varphi : T_{A\times B}(M\oplus N)\to kQ_{\Lambda}/\langle R_A, R_B\rangle$. \end{lemm} \begin{lemm} Let $\psi: T_{A\times B}(M\oplus N) \to \Lambda$ be the algebra map given by the inclusions of $A\times B$ and $M\oplus N$. $$\mathop{\rm Ker}\nolimits \psi = \langle (M\oplus N)^{\otimes_{(A\times B)}2} \rangle= \langle N\otimes_B M + M\otimes_A N\rangle.$$ \end{lemm} \vskip2mm The images of the oriented paths of $Q_A$ generate the vector space $kQ_A/\langle R_A\rangle $, hence we can choose a subset $P_A$ of them which is a basis of $kQ_A/\langle R_A\rangle $. Let also $P_B$ be a basis of $kQ_B/\langle R_B\rangle $, where $P_B$ is a subset of the oriented paths of $Q_B$. Let $u$ be a down arrow from $e$ to $g$, and let $v$ be an up arrow from $h$ to $f$ in $Q_\Lambda$. We define the sets $v \curlyvee u$ and $u\curlyvee v$ of oriented paths of $Q_\Lambda$ as follows: $$v\curlyvee u = v(hP_Bg)u \mbox{\ \ and \ \ } u\curlyvee v= u(eP_Af)v.$$ Let $R$ be the disjoint union of $R_A$, $R_B$, and $v\curlyvee u$ and $u\curlyvee v$ for all pairs $(u,v)$, where $u$ is a down arrow and $v$ is an up arrow.
\begin{theo} Let $\Lambda=\left( \begin{array}{cc} A & N \\ M & B \\ \end{array} \right)$ be a finite dimensional null-square projective algebra, where $A$ and $B$ are basic and sober algebras with presentations $(Q_A, R_A)$ and $(Q_B, R_B)$ respectively, and where the projective bimodules $M$ and $N$ are given as in (\ref{M}) and (\ref{N}). The algebra $\Lambda$ is presented by $(Q_\Lambda, R)$. \end{theo} \begin{proof} The key point of the proof is the following. Consider the image of $\mathop{\rm Ker}\nolimits \psi$ by $\varphi$ (see Lemma \ref{iso}) in $kQ_\Lambda/\langle R_A, R_B\rangle$. Let $Bg\otimes eA$ be a direct summand of $M$, and $Af \otimes hB$ be a direct summand of $N$. They provide the direct summand $Bg\otimes eAf \otimes hB$ of $M\otimes_A N\subset \left(T_{A\times B}(M\oplus N)\right)_2 \subset \mathop{\rm Ker}\nolimits \psi$. In order to consider its image by $\varphi$, let $u$ and $v$ be the arrows in $Q_\Lambda$ associated respectively to $Bg\otimes eA$ and $Af \otimes hB$. The image of $Bg\otimes eAf \otimes hB$ in $kQ_\Lambda/\langle R_A, R_B\rangle$ is generated by $u\curlyvee v$. \hfill $\diamond$ \bigskip \end{proof} \begin{exam} Let $Q_A$ be a \emph{crown} quiver with three arrows $a_0$, $a_1$ and $a_2$; that is, these arrows start respectively at $e_0$, $e_1$ and $e_2$, and end respectively at $e_1$, $e_2$ and $e_0$. Let $R_A=\{a_2a_0\}$. It is easy to establish that $A=kQ_A/\langle R_A \rangle$ is smooth. Let $(Q_B, R_B)$ be a presentation of a basic and sober algebra $B$, and let $g$ and $h$ be vertices of $B$. Let $M=Bh\otimes e_1A$ and $N=Ae_2\otimes gB$, and let $u$ (from $e_1$ to $h$) and $v$ (from $g$ to $e_2$) be the corresponding arrows. Note that the $(N,M)$-quiver has no oriented cycles. Let $\Lambda$ be the corresponding null-square projective algebra; next we describe its Gabriel quiver and a set of relations: \begin{itemize} \item $Q_\Lambda = Q_A \cup Q_B \cup \{u,v\}$.
\item $R=\{a_2a_0\}\cup R_B\cup\{v\gamma u\}_{\gamma \in gP_Bh}$, where $P_B$ is a basis of $kQ_B/\langle R_B\rangle$ consisting of oriented paths. \end{itemize} Moreover, it follows from the previous results that if $B$ is smooth, then $\Lambda$ is smooth. \end{exam} \vskip5mm \noindent\textbf{Acknowledgements:} The first and third authors thank Marcelo Lanzilotta and Universidad de la Rep\'ublica (Uruguay) for excellent conditions during part of the preparation of this work. We also thank Cristian Chaparro and Mariano Su\'arez \'Alvarez for help with the TikZ figures. \normalsize
\section{Introduction} In the 1960s, Douglas Richard Hofstadter introduced the $Q$-sequence~\cite{18}, which is defined by the nested recurrence relation $Q\p{n}=Q\p{n-Q\p{n-1}}+Q\p{n-Q\p{n-2}}$ with initial conditions $Q\p{1}=Q\p{2}=1$. The resulting sequence appears to grow approximately like $\frac{n}{2}$, but with a lot of noise. It remains open whether the $Q$-sequence actually grows this way, or, in fact, whether the sequence is truly infinite. It is theoretically possible that, at some point, $Q\p{n}$ could exceed $n$. If this happens, $Q\p{n+1}$ and all subsequent terms would be undefined. If a sequence generated by a nested recurrence is finite in this way, we say the sequence \emph{dies}. Due to a superficial resemblance to the definition of the Fibonacci sequence, sequences defined like the $Q$-sequence are known as \emph{meta-Fibonacci sequences}. Hofstadter and Greg Huber later studied~\cite{1} the two-parameter generalization of meta-Fibonacci recursions $Q_{r,s}(n)=Q_{r,s}(n-Q_{r,s}(n-r)) + Q_{r,s}(n-Q_{r,s}(n-s))$ with $r<s$. Based on this investigation, a well-behaved solution to the recurrence $V(n)=V(n-V(n-1))+V(n-V(n-4))$ was discovered empirically. The initial conditions $V(1) = V(2) = V(3) = V(4) = 1$ generate a monotone solution that includes every positive integer, a property now known as \emph{slow}. Later, the properties of this solution were confirmed with a proof~\cite{3,2}. In the course of that investigation, a variety of experiments on the $V$-recurrence were carried out in order to understand the behaviour of other possible solutions~\cite{2}. However, very little is known about the behaviour of the $V$-recurrence under different sets of initial conditions. This study aims to clarify the properties of other solutions and their curious connections with another nested recursion: $H(n)= H(n-H(n-2)) + H(n-H(n-3))$.
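As a concrete illustration (a sketch of ours, not taken from the cited works; the function name and interface are hypothetical), a Hofstadter--Huber recurrence can be iterated directly from its initial conditions, checking at each step whether the required indices stay inside the valid range:

```python
def hofstadter(offsets, initial, n_max):
    """Iterate a(n) = sum over r in offsets of a(n - a(n - r)),
    stopping early if the sequence dies, i.e. some spot n - a(n - r)
    falls outside the range 1..n-1.  Returns (terms, died)."""
    a = list(initial)                  # a[i] holds the term a(i + 1)
    for n in range(len(a) + 1, n_max + 1):
        total = 0
        for r in offsets:
            spot = n - a[n - r - 1]    # the index that a(n) refers back to
            if not 1 <= spot <= n - 1:
                return a, True         # a(n) is undefined: the sequence dies
            total += a[spot - 1]
        a.append(total)
    return a, False

# Hofstadter's Q-sequence: offsets (1, 2), Q(1) = Q(2) = 1.
Q, died = hofstadter((1, 2), [1, 1], 10000)
```

The first terms are $1, 1, 2, 3, 3, 4, 5, 5, 6, 6, \ldots$, and in this range the sequence stays defined while oscillating around $\frac{n}{2}$; running the same code with offsets $(1,4)$ and initial conditions $\ic{1,1,1,1}$ produces the slow solution $V$.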
\subsection{Notation} Going forward, un-decorated symbols such as $Q$, $V$, etc.\ will be used to denote specific sequences with those names. To refer to other sequences satisfying the same recurrences, we use the symbols with subscripts. Also, initial conditions are often denoted in this paper by sequences of numbers enclosed in angle brackets. For example, the initial conditions to the $V$-sequence would be written as $\ic{1,1,1,1}$. \section{On Mortality of Remarkably Long Life} Finding nested recurrence relations with increasing “mortality” and understanding their generational behaviour can be seen as a meaningful step towards discovering the nature of chaotic solutions~\cite{4}. In the literature, there are several examples of long-lived, finite, chaotic sequences produced by meta-Fibonacci recurrences. One remarkable example for the $V$-recurrence is that the initial conditions $\ic{3,1,4,4}$ generate a sequence that terminates after $474767$ terms~\cite{2}. Similarly, the recurrence $B_A(n)=B_A(n-B_A(n-1))+B_A(n-B_A(n-2))+B_A(n-B_A(n-3))$ with the initial conditions $\ic{1,1,1,4,3}$ dies~\cite{5} when $B_A(509871) = 519293$. More surprisingly, Isgur notes that $L_A(n)=L_A(n-19-L_A(n-3))+L_A(n-28-L_A(n-12))$ with initial conditions $L_A(n)=1$ for $1 \leq n \leq 29$ has a relatively long life: it becomes incalculable after more than $19$ million terms~\cite{6}. Inspired by these curious examples, we study an exceptional chaotic sequence $V_c$~\cite{7} that is generated by the $V$-recurrence with the initial conditions $\ic{3,4,5,4,5,6}$. Investigation of the behaviour of $V_{c}(n)$ may be highly illustrative since it has a remarkably long life~\cite{7} (more precisely, the last computable term is $V_{c}(3080193026) = V_{c}(3080193026 - V_{c}(3080193025)) + V_{c}(3080193026 - V_{c}(3080193022)) =V_{c}(2290654567) + V_{c}(1873687422) = 1686223049 + 1415176819 = 3101399868$; since this value exceeds its index, the next term is undefined), and it has a curious generational structure.
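The finite lifetime reported in~\cite{2} for the initial conditions $\ic{3,1,4,4}$ is easy to reproduce numerically. The following sketch is ours (the function name is hypothetical); it iterates the $V$-recurrence and stops as soon as a parent spot leaves the valid range:

```python
def run_V(initial, cap):
    """Iterate V(n) = V(n - V(n-1)) + V(n - V(n-4)) from the given
    initial conditions until the sequence dies or `cap` terms exist."""
    a = list(initial)
    while len(a) < cap:
        n = len(a) + 1
        mother = n - a[-1]           # mother spot: n - V(n-1)
        father = n - a[-4]           # father spot: n - V(n-4)
        if min(mother, father) < 1:  # V(n) is undefined: the sequence dies
            return a, True
        a.append(a[mother - 1] + a[father - 1])
    return a, False

# The long-lived finite solution with initial conditions <3,1,4,4>.
seq, died = run_V([3, 1, 4, 4], cap=1_000_000)
```

Here `died` comes back `True`, and `len(seq)` matches the lifetime of roughly $4.7\times 10^5$ terms reported in~\cite{2} (up to the convention for counting the last defined term); the sequence $V_c$ dies in the same manner, but only after more than $3\times 10^9$ terms, far beyond what this sketch caps.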
\subsection{Generational Structure} Before we proceed, it is important to discuss the concept of a generational structure. A sequence generated by a two-term Hofstadter-Huber recurrence (such as the $Q$ or $V$-recurrence) has the property that each term is the sum of two earlier terms in the sequence. The indices of these earlier terms are known as the \emph{parents} of the current term, with the index coming from the first term in the recurrence known as the \emph{mother} or \emph{mother spot} and the second as the \emph{father} or \emph{father spot}. For example, in a sequence $V_a$ satisfying the $V$-recurrence, the mother of $V_a\p{n}$ is $n-V_a\p{n-1}$ and the father is $n-V_a\p{n-4}$. Some sequences have the property that they can be partitioned into intervals such that both parents of each term in one interval lie in the previous interval. If this is possible, the sequence is said to have a \emph{generational structure}. Sometimes such a partition is not exactly possible, but nearly so, and it is useful to discuss generations in these cases as well~\cite{11,8,19,9,10}. \subsection{Generations of $V_c(n)$} To understand what lies behind this long-lived finite sequence, we construct some auxiliary sequences that capture the generational structure of $V_{c}(n)$. Following the methodology of previous studies~\cite{11,8, 9}, we define $W(n)$, $P_{s}(n)$ and $R(n)$ as below; see Table 1 and Table 2 for corresponding values. In our experimental range, these auxiliary sequences are used to detect unpredictable sub-generations of the sequence, which are responsible for the termination of $V_{c}(n)$. See Figure 1 for the generational boundaries of $V_{c}(n)$. \newpage \begin{definition} Let $W(n)$ be the least $m$ such that the minimum of the father spot $(m-V_{c}(m-4))$ and the mother spot $(m-V_{c}(m-1))$ is greater than or equal to $n$. \end{definition} \begin{definition} Let $P_{s}(n) = W(P_{s}(n-1))$ for $n > 2$, with $P_{s}(1) = 1$ and $P_{s}(2) = 4$. 
Furthermore, define $P(n) = P_{s}(n) + 3$ for $n > 2$, with $P(1) = P_{s}(1)$ and $P(2) = P_{s}(2)$. \end{definition} \begin{definition} Let $R(n)$ be the largest $m < P(n+1)-1$ such that $V_{c}(m+1) - V_{c}(m)$ is not $0$ or $1$, for $n > 2$, with $R(1) = 1$ and $R(2) = 4$. \end{definition} For the corresponding noise sequence, define $S_c(n) = V_c(n)-\frac{n}{2}$. Let $\big \langle S_{c}(n) \big \rangle_{k}$ denote the average value of $S_{c}(n)$ over the $k^{th}$ generation, whose boundaries are determined by $P(k)$ and $R(k)$, and define $\alpha(k, S_{c}(n))$ as below. \\ \begin{equation} \begin{cases} M_{k}(S_{c}(n))^2 = \big \langle S_{c}(n)^2 \big \rangle_{k} - \big \langle S_{c}(n) \big \rangle_{k}^2\\ \\ \alpha(k, S_{c}(n)) = \log_2\!\left(\frac{M_{k}(S_{c}(n))}{M_{k-1}(S_{c}(n))}\right)\label{p:alpha} \end{cases} \end{equation} \begin{figure}[!hb] \begin{center} \includegraphics[width=0.7\textwidth]{A309704_3.eps} \caption{A line plot of $S_c(n) = V_c(n) - \frac{n}{2}$ for $R(7) \leq n \leq P(11)$. Red regions correspond to slow subsequences of $V_c(n)$, while black regions have unpredictable noise characteristics.} \label{fig:1122} \end{center} \end{figure} \begin{table*}[!ht] 
\caption{The values of the $P(n)$ sequence for $n \leq 20$.} \begin{center} \begin{adjustbox}{max width=1\textwidth} \begin{tabular}{cccccc} \noalign{\smallskip} $ $ & $ $ & $ $ & $m$ & $ $ & $ $\\ \noalign{\smallskip}\hline\noalign{\smallskip} \noalign{\smallskip} $ $ & $1$ & $2$ & $3$ & $4$ & $5$\\ \noalign{\smallskip}\hline\noalign{\smallskip} $P(m+0)$ &1 &4 &17 &37 &78\\ $P(m+5)$ &162 &331 &671 &1352&2715\\ $P(m+10)$ &5443 &10900 &21816 &43649 &87316\\ $P(m+15)$ &174652&349325 &698673 &1397370 &2794765\\ \hline \noalign{\smallskip} \end{tabular} \end{adjustbox} \caption{The values of the $R(n)$ sequence for $n \leq 20$.} \label{tab:table1} \begin{adjustbox}{max width=1\textwidth} \begin{tabular}{cccccc} \noalign{\smallskip} $ $ & $ $ & $ $ & $m$ & $ $ & $ $\\ \noalign{\smallskip}\hline\noalign{\smallskip} \noalign{\smallskip} $ $ & $1$ & $2$ & $3$ & $4$ & $5$\\ \noalign{\smallskip}\hline\noalign{\smallskip} $R(m+0)$ &1 &4 &18 &45 &111\\ $R(m+5)$ &257 &542 &1115 &2242&4501\\ $R(m+10)$ &9029 &18088 &36213 &72462 &144994\\ $R(m+15)$ &290027&580112 &1161200 &2323822 &4650379\\ \hline \noalign{\smallskip} \end{tabular} \end{adjustbox} \label{tab:tablex} \end{center} \end{table*} \begin{table*}[!h] \caption{Values of $\alpha(k, S_{c}(n))$ for $5 \leq k \leq 20$.} \begin{center} \begin{adjustbox}{max width=0.85\textwidth} \begin{tabular}{cc} \noalign{\smallskip} $k$ & $\alpha(k, S_{c}(n))$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} 5 &0.8402 \\ 6 &0.7278 \\ 7 &0.7477 \\ 8 &1.4374 \\ 9 &1.0590 \\ 10 &1.1340 \\ 11 &1.1686 \\ 12 &1.1744 \\ 13 &1.1077 \\ 14 &1.1656 \\ 15 &1.1558 \\ 16 &1.1339 \\ 17 &1.1371 \\ 18 &1.1336 \\ 19 &1.1212 \\ 20 &1.1231 \\ \hline \noalign{\smallskip} \end{tabular} \end{adjustbox} \label{tab:stat3} \end{center} \end{table*} Computational results in Table 3 show that the $\alpha$ values oscillate in a different range than those of previously studied sequences, including Hofstadter's $Q$-sequence~\cite{11,8,9,10}. 
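The boundary values in Table 1 can be recomputed directly from the definitions. The Python sketch below (our own literal reading of Definitions 1 and 2; the helper names are ours) computes $V_c$ and the generation boundaries $P(n)$:

```python
# Sketch (not the paper's code): compute V_c from <3,4,5,4,5,6> and
# the generation boundary sequence P(n) from Definitions 1 and 2.

def v_c(limit):
    v = [None, 3, 4, 5, 4, 5, 6]  # v[n] == V_c(n)
    for n in range(7, limit + 1):
        v.append(v[n - v[n - 1]] + v[n - v[n - 4]])
    return v

def p_sequence(count, v):
    """Return [P(1), ..., P(count)] using W(n) from Definition 1."""
    def w(n):
        m = 5  # spots need V_c(m-4), so scan from m = 5
        while min(m - v[m - 4], m - v[m - 1]) < n:
            m += 1
        return m
    ps = [1, 4]
    while len(ps) < count:
        ps.append(w(ps[-1]))
    return ps[:2] + [x + 3 for x in ps[2:]]  # P(n) = P_s(n) + 3 for n > 2

v = v_c(3000)
print(v[1:11])           # 3, 4, 5, 4, 5, 6, 7, 7, 8, 8
print(p_sequence(8, v))  # begins 1, 4, 17, ... as in Table 1
```

Computing $R(n)$ and the generation averages behind Table 3 follows the same pattern, scanning each generation's interval of $S_c$ values.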
This investigation suggests that $V_{c}(n)$ approaches its termination step by step, generation by generation, driven by the growth in noise that the $\alpha$ values greater than $1$ indicate. Experimental evidence therefore suggests that an infinite chaotic solution to the $V$-recurrence is very difficult to construct, although an exceptionally long life is possible for some choices of initial conditions that give rise to such a generational structure. Furthermore, initial condition patterns formulated from the asymptotic property of the $V$-sequence also confirm the mortality of a solution sequence whenever a slow solution does not exist~\cite{11}. In this case, it is natural to think that if a non-slow infinite solution sequence exists for the $V$-recurrence, that solution most likely has a quasi-periodic nature~\cite{13,15}. \newpage \section{A new kind of solution} Given a meta-Fibonacci recurrence, there is a known algorithm to search for solutions to it that satisfy a linear recurrence relation~\cite{14}. This algorithm finds infinite families of solutions that eventually consist of interleavings of simple (typically constant or linear) subsequences. For the $V$-recurrence, the algorithm finds $20$ solution families eventually consisting of interleavings of five constant or linear sequences. Since solutions to the $V$-recurrence are invariant under shifting all of the terms~\cite{12}, this corresponds to four fundamentally different families. The initial conditions in Table~\ref{tab:icv} each generate a representative of one of these families. (Despite having the same constant-linear pattern, the terms in the last two families have different congruences mod $5$. They are therefore distinct families.) 
\begin{table} \begin{tabular}{|c|c|}\hline \textbf{Pattern} & \textbf{Initial Condition}\\\hline C,C,C,L,L & \[ \ic{5, 4, 0, 0, 0, 5, 0, 5, 5, 1, 5, 4} \]\\\hline C,C,L,C,L & \[ \ic{4, 0, 5, -2, 1, 3, -3, 5, 3, 0, 4, 10, 5, 8} \]\\\hline C,L,C,L,L & \[ \ic{0, 14, -4, -7, 8, 5, 14, -2, -2, 8, 0, 0, 6, 3, 18, 15, 14, 11, 8, 8, 20, 14, 16, 13, 8, 25} \]\\\hline C,L,C,L,L & \[ \ic{0, 2, -2, -6, 11, 6, 2, 3, 0, 11, 0, 2, 8, -2, 11, 15, 2, 13} \]\\\hline \end{tabular} \caption{Patterns and representative initial conditions for each of the four families of period-$5$ solutions to the $V$-recurrence. (C=constant, L=linear)} \label{tab:icv} \end{table} In this section, we describe another infinite family of solutions to the $V$-recurrence. Like the families in Table~\ref{tab:icv}, its members eventually consist of five relatively simple interleaved sequences. But, as we shall see, not all of them are constant or linear. Then, we describe a related family of solutions to a different recurrence. \subsection{A System of Nested Recurrences with Slow Solutions} As an aside, we first discuss the behavior of solutions to a certain type of system of nested recurrences. \begin{definition} For integers $c_f$, $d_f$, $c_g$, and $d_g$ with $d_f+d_g>0$, the \emph{Golomb-like system} with those parameters is the system \[ \begin{cases} f\p{n}=g\p{n-g\p{n-1}-c_f}+d_f\\ g\p{n}=f\p{n-f\p{n}-c_g}+d_g. \end{cases} \] \end{definition} \noindent The name \emph{Golomb-like} stems from the observation that the recurrences in these systems bear a superficial resemblance to Golomb's~\cite{16} recurrence $G\p{n}=G\p{n-G\p{n-1}}+1$. Also, solutions to Golomb-like systems appear to behave similarly to solutions to Golomb's recurrence. In particular, Golomb-like systems have some slow solutions with simple descriptions, which we will see shortly. They also appear to have many non-slow solutions with noticeable patterns. 
Golomb's recurrence exhibits similar behavior, and it is conjectured~\cite{17} that all solutions to Golomb's recurrence grow asymptotically like $\sqrt{2n}$. We have a similar conjecture about Golomb-like systems: \begin{con} Any solution to the Golomb-like system with parameters $c_f$, $d_f$, $c_g$, and $d_g$ grows asymptotically like $\sqrt{\pb{d_f+d_g}n}$. \end{con} \noindent The evidence for this conjecture comes from experimentation combined with the behavioral similarity to Golomb's recurrence. In particular, these solutions all appear to be sub-linear. \subsubsection{Specific Solutions to Golomb-like Systems} We now examine a few specific solutions to some Golomb-like systems. All of these solutions are slow and easy to describe. They all appear again in connection with the $V$-recurrence. \begin{pro}\label{prop:fg1} The Golomb-like system \[ \begin{cases} f\p{n}=g\p{n-g\p{n-1}}\\ g\p{n}=f\p{n-f\p{n}}+1 \end{cases} \] given initial condition $f\p{1}=0$ generates a slow solution where each nonnegative integer $i$ appears in the $f$-sequence $2i+1$ times and in the $g$-sequence $2i$ times. \end{pro} \begin{proof} If each nonnegative integer $i$ appears $2i+1$ times in the $f$-sequence, terms $f\p{i^2+1}$ through $f\p{i^2+2i+1}$ must equal $i$. Similarly, if each nonnegative integer $i$ appears $2i$ times in the $g$-sequence, terms $g\p{i^2-i+1}$ through $g\p{i^2+i}$ must equal $i$. We now proceed by induction on the index. First, we observe that $f\p{1}=0$, as required. We now examine each sequence, starting with the $g$-sequence. Suppose $n$ is a positive integer, and suppose that, for all $m\leq n$, $f\p{m}$ equals its desired value. We can write $n=i^2-i+1+r$ for some $i\geq1$ and $0\leq r<2i$. Wishing to show $g\p{n}=i$, we have \begin{align*} g\p{n}&=g\p{i^2-i+1+r}\\ &=f\p{i^2-i+1+r-f\p{i^2-i+1+r}}+1\\ &=f\p{i^2-i+1+r-f\p{i^2+1+\pb{r-i}}}+1. 
\end{align*} We now have two cases to consider: \begin{description} \item[$r<i$:] If $r<i$, then $r-i<0$, meaning that $f\p{i^2+1+\pb{r-i}}=i-1$ by induction. We then have that \begin{align*} g\p{n}&=f\p{i^2-i+1+r-\pb{i-1}}+1\\ &=f\p{i^2-2i+2+r}+1\\ &=f\p{\pb{i-1}^2+1+r}+1. \end{align*} Since $0\leq r<i\leq2\pb{i-1}+1$, we have that $f\p{\pb{i-1}^2+1+r}=i-1$, meaning $g\p{n}=i$, as required. \item[$r\geq i$:] If $r\geq i$, then $r-i\geq0$, meaning that $f\p{i^2+1+\pb{r-i}}=i$ by induction. We then have that \begin{align*} g\p{n}&=f\p{i^2-i+1+r-i}+1\\ &=f\p{i^2-2i+1+r}+1\\ &=f\p{\pb{i-1}^2+1+\pb{r-1}}+1. \end{align*} Since $i-1\leq r-1<2\pb{i-1}+1$, we have that $f\p{\pb{i-1}^2+1+\pb{r-1}}=i-1$, meaning $g\p{n}=i$, as required. \end{description} Now, we examine the $f$-sequence. Suppose $n\geq2$ is an integer, and suppose that, for all $m<n$, $g\p{m}$ equals its desired value. We can write $n=i^2+1+r$ for some $i\geq1$ and $0\leq r<2i+1$. Wishing to show $f\p{n}=i$, we have \begin{align*} f\p{n}&=f\p{i^2+1+r}\\ &=g\p{i^2+1+r-g\p{i^2+r}}\\ &=g\p{i^2+1+r-g\p{i^2+i+1+\pb{r-i-1}}}\\ &=g\p{i^2+1+r-g\p{\pb{i+1}^2-\pb{i+1}+1+\pb{r-i-1}}}. \end{align*} We now have two cases to consider: \begin{description} \item[$r\leq i$:] If $r\leq i$, then $r-i-1<0$, meaning that $g\p{\pb{i+1}^2-\pb{i+1}+1+\pb{r-i-1}}=i$ by induction. We then have that \begin{align*} f\p{n}&=g\p{i^2+1+r-i}\\ &=g\p{i^2-i+1+r}. \end{align*} Since $0\leq r\leq i<2i$, we have that $g\p{i^2-i+1+r}=i$, meaning $f\p{n}=i$, as required. \item[$r>i$:] If $r> i$, then $r-i-1\geq0$, meaning that $g\p{\pb{i+1}^2-\pb{i+1}+1+\pb{r-i-1}}=i+1$ by induction. We then have that \begin{align*} f\p{n}&=g\p{i^2+1+r-\pb{i+1}}\\ &=g\p{i^2-i+1+\pb{r-1}}. \end{align*} Since $i\leq r-1<2i$, we have that $g\p{i^2-i+1+\pb{r-1}}=i$, meaning $f\p{n}=i$, as required. 
\end{description} \end{proof} \begin{pro}\label{prop:fg2} The Golomb-like system \[ \begin{cases} f\p{n}=g\p{n-g\p{n-1}}\\ g\p{n}=f\p{n-f\p{n}}+2 \end{cases} \] given initial conditions $f\p{1}=0$, $f\p{2}=1$, $f\p{3}=1$, $g\p{1}=1$, $g\p{2}=1$, and $g\p{3}=2$ generates a slow solution where: \begin{itemize} \item Each odd integer $i\geq3$ appears in the $f$-sequence $2i+1$ times and the $g$-sequence $2i-1$ times. \item Each even positive integer appears in each sequence once. \item The $f$-sequence starts with $0$, this being the only appearance of $0$ in either sequence. \item The number $1$ appears exactly $4$ times in the $f$-sequence and exactly twice in the $g$-sequence. \end{itemize} \end{pro} Proposition~\ref{prop:fg2} has a similar proof to Proposition~\ref{prop:fg1}, with the added complication of keeping track of even versus odd. The odd terms are generated similarly to all the terms in the proof of Proposition~\ref{prop:fg1}, and each even term in one sequence comes from the preceding even term in the other sequence. For brevity, the proof of Proposition~\ref{prop:fg2} is omitted. \begin{pro}\label{prop:fg12} The Golomb-like system \[ \begin{cases} f\p{n}=g\p{n-g\p{n-1}}+1\\ g\p{n}=f\p{n-f\p{n}}+2 \end{cases} \] given initial conditions $f\p{1}=0$, $f\p{2}=1$, $f\p{3}=1$, $g\p{1}=1$, $g\p{2}=1$, and $g\p{3}=2$ generates a slow solution where: \begin{itemize} \item If $i\geq3$ is a multiple of $3$, $i$ appears once in the $f$-sequence and $2i-2$ times in the $g$-sequence. \item If $i\geq4$ is congruent to $1$ mod $3$, $i$ appears $2i-1$ times in the $f$-sequence and twice in the $g$-sequence. \item If $i$ is a positive integer congruent to $2$ mod $3$, $i$ appears twice in the $f$-sequence and once in the $g$-sequence. \item The $f$-sequence starts with $0$, this being the only appearance of $0$ in either sequence. \item The number $1$ appears exactly twice in each sequence. \end{itemize} \end{pro} Again, the proof is similar and is omitted for brevity. 
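The occurrence counts in Propositions~\ref{prop:fg1} through~\ref{prop:fg12} are easy to check numerically. The Python sketch below (ours, not the paper's) iterates the system of Proposition~\ref{prop:fg1}, computing $f\p{n}$ before $g\p{n}$ at each index; the closed form $f\p{n}=\lfloor\sqrt{n-1}\rfloor$ in the final check simply restates the occurrence counts:

```python
import math

# Sketch (not from the paper): iterate the Golomb-like system of
# Proposition fg1,
#   f(n) = g(n - g(n-1)),   g(n) = f(n - f(n)) + 1,   f(1) = 0,
# computing f(n) first and then g(n) at each index n.

def fg1(limit):
    f = [None, 0]              # f[n] == f(n), f(1) = 0
    g = [None, f[1 - f[1]] + 1]  # g(1) = f(1 - f(1)) + 1 = 1
    for n in range(2, limit + 1):
        f.append(g[n - g[n - 1]])
        g.append(f[n - f[n]] + 1)
    return f, g

f, g = fg1(400)
print(f[1:10])  # 0, 1, 1, 1, 2, 2, 2, 2, 2
# Each i appears 2i+1 times in f, so f(n) = floor(sqrt(n-1)).
print(all(f[n] == math.isqrt(n - 1) for n in range(1, 401)))
```

The growth $f\p{n}\approx\sqrt{n}$ is also consistent with the conjectured $\sqrt{\pb{d_f+d_g}n}$ asymptotics, since $d_f+d_g=1$ here.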
\subsection{An Infinite Family of Solutions to the $V$-recurrence} We are now able to describe an infinite family of solutions to the $V$-recurrence that consist of interleavings of five simpler sequences. These solutions are of a similar flavor to those in~\cite{14}. But, the methods of that paper would not find these solutions, as these include subsequences that are $\Theta\p{\sqrt{n}}$ in growth. \begin{thm}\label{thm:infv} Let $K$, $b_0$, $b_1$, $b_2$, $b_4$, $a_f$, $a_g$, and $m$ be integers satisfying the following properties: \begin{itemize} \item $b_0\equiv1\pmod{5}$ \item $b_1\equiv4\pmod{5}$ \item $b_2\equiv2\pmod{5}$ and $7\leq b_2<K+3$ \item $b_4\equiv3\pmod{5}$ and $8\leq b_4<K+5$ \item $a_f\equiv 2\pmod{5}$ \item $a_g\equiv 3\pmod{5}$ \item $a_f+a_g>0$ \item $m\geq1$. \end{itemize} \noindent Define the following Golomb-like system: \[ \begin{cases} f\p{n}=g\!\pb{n-g\p{n-1}-\frac{b_1+1}{5}}+\frac{b_1-b_0+a_f}{5}\\ g\p{n}=f\!\pb{n-f\p{n}-\frac{b_0-1}{5}}+\frac{b_0-b_1+a_g}{5}. \end{cases} \] Then, there is a solution $V_G$ to the $V$-recurrence that, starting at index $K$, has the form \[ \begin{cases} V_G\p{K+5k}=5f\p{k}+b_0\\ V_G\p{K+5k+1}=5g\p{k}+b_1\\ V_G\p{K+5k+2}=5k+b_2\\ V_G\p{K+5k+3}=5m\\ V_G\p{K+5k+4}=5k+b_4 \end{cases} \] with any initial condition satisfying the following properties: \begin{enumerate} \item\label{it:af} $V_G\p{K+5-b_4}=a_f$ \item\label{it:ag} $V_G\p{K+6-b_2}=a_g$ \item\label{it:5m} $V_G\p{K+3-b_2}+V_G\p{K+8-b_4}=5m$ \item\label{it:b2} For each integer $1\leq i\leq m$, $V_G\p{K+2-5i}=b_2-5i$ \item\label{it:b3} $V_G\p{K-2}=5m$ \item\label{it:b4} For each integer $1\leq i\leq m$, $V_G\p{K+4-5i}=b_4-5i$ \item\label{it:ic} Let $n_0=\fl{\frac{K-1}{5}}$. 
The initial conditions \[ \ic{\frac{V_G\p{K-5n_0}-b_0}{5}, \frac{V_G\p{K-5\pb{n_0-1}}-b_0}{5}, \frac{V_G\p{K-5\pb{n_0-2}}-b_0}{5}, \ldots, \frac{V_G\p{K-5}-b_0}{5}} \] for the $f$-sequence and \[ \ic{\frac{V_G\p{K-5n_0+1}-b_1}{5}, \frac{V_G\p{K-5\pb{n_0-1}+1}-b_1}{5}, \frac{V_G\p{K-5\pb{n_0-2}+1}-b_1}{5}, \ldots, \frac{V_G\p{K-4}-b_1}{5}} \] for the $g$-sequence generate sublinear sequences where, for all $n>n_0$, $f\p{n}\leq n+\frac{b_0-1}{5}$, $g\p{n-1}\leq n+\frac{b_1+1}{5}$, and $g\p{n}\leq n+\frac{b_1+1}{5}$. (Note that not all terms in these initial conditions need to be integers, but, in order for the sequences to live, no term after the initial condition can refer to a non-integer term.) \end{enumerate} \end{thm} \begin{proof} The proof is by induction on the index, with base case provided by the initial condition. Suppose the parameters and initial conditions satisfy all of the listed conditions, and furthermore suppose that the general form of the solution holds through index $n-1$ for some $n\geq K$. We now have five cases to consider: \begin{description} \item[$n-K\equiv 0\pmod{5}$:] In this case, $n=K+5k$ for some $k\geq0$. We have \[ V_G\p{K+5k}=V_G\p{K+5k-V_G\p{K+5k-1}}+V_G\p{K+5k-V_G\p{K+5k-4}}. \] By induction, $V_G\p{K+5k-1}=5\pb{k-1}+b_4$, as this term either falls in the inductive hypothesis or in restriction~\ref{it:b4} on the initial condition. Similarly, $V_G\p{K+5k-4}=5g\p{k-1}+b_1$, as this term either falls in the inductive hypothesis or in restriction~\ref{it:ic} on the initial condition. So, we have \begin{align*} V_G\p{K+5k}&=V_G\p{K+5k-5\pb{k-1}-b_4}+V_G\p{K+5k-5g\p{k-1}-b_1}\\ &=V_G\p{K+5-b_4}+V_G\p{K+5\pb{k-g\p{k-1}}-b_1}. \end{align*} We now observe that $V_G\p{K+5-b_4}=a_f$ by restriction~\ref{it:af} on the initial condition. Also, since $b_1\equiv4\pmod{5}$ and since restriction~\ref{it:ic} guarantees $g\p{k-1}\leq k+\frac{b_1+1}{5}$, we have $V_G\p{K+5\pb{k-g\p{k-1}}-b_1}=5g\!\pb{k-g\p{k-1}-\frac{b_1+1}{5}}+b_1$. 
Putting these together yields \begin{align*} V_G\p{K+5k}&=a_f+5g\!\pb{k-g\p{k-1}-\frac{b_1+1}{5}}+b_1\\ &=5g\!\pb{k-g\p{k-1}-\frac{b_1+1}{5}}+\pb{b_1-b_0+a_f}+b_0\\ &=5f\p{k}+b_0, \end{align*} as required. \item[$n-K\equiv 1\pmod{5}$:] In this case, $n=K+5k+1$ for some $k\geq0$. We have \[ V_G\p{K+5k+1}=V_G\p{K+5k+1-V_G\p{K+5k}}+V_G\p{K+5k+1-V_G\p{K+5k-3}}. \] By induction, $V_G\p{K+5k-3}=5\pb{k-1}+b_2$, as this term either falls in the inductive hypothesis or in restriction~\ref{it:b2} on the initial condition. Similarly, $V_G\p{K+5k}=5f\p{k}+b_0$, as this term falls in the inductive hypothesis. So, we have \begin{align*} V_G\p{K+5k+1}&=V_G\p{K+5k+1-5f\p{k}-b_0}+V_G\p{K+5k+1-5\pb{k-1}-b_2}\\ &=V_G\p{K+5\pb{k-f\p{k}}+1-b_0}+V_G\p{K+6-b_2}. \end{align*} We now observe that $V_G\p{K+6-b_2}=a_g$ by restriction~\ref{it:ag} on the initial condition. Also, since $b_0\equiv1\pmod{5}$ and since restriction~\ref{it:ic} guarantees $f\p{k}\leq k+\frac{b_0-1}{5}$, we have $V_G\p{K+5\pb{k-f\p{k}}+1-b_0}=5f\!\pb{k-f\p{k}-\frac{b_0-1}{5}}+b_0$. Putting these together yields \begin{align*} V_G\p{K+5k+1}&=5f\!\pb{k-f\p{k}-\frac{b_0-1}{5}}+b_0+a_g\\ &=5f\!\pb{k-f\p{k}-\frac{b_0-1}{5}}+\pb{b_0-b_1+a_g}+b_1\\ &=5g\p{k}+b_1, \end{align*} as required. \item[$n-K\equiv 2\pmod{5}$:] In this case, $n=K+5k+2$ for some $k\geq0$. We have \[ V_G\p{K+5k+2}=V_G\p{K+5k+2-V_G\p{K+5k+1}}+V_G\p{K+5k+2-V_G\p{K+5k-2}}. \] By induction, $V_G\p{K+5k+1}=5g\p{k}+b_1$, as this term falls in the inductive hypothesis. Similarly, $V_G\p{K+5k-2}=5m$, as this term either falls in the inductive hypothesis or in restriction~\ref{it:b3} on the initial condition. So, we have \begin{align*} V_G\p{K+5k+2}&=V_G\p{K+5k+2-5g\p{k}-b_1}+V_G\p{K+5k+2-5m}\\ &=V_G\p{K+5\pb{k-g\p{k}}+2-b_1}+V_G\p{K+5k+2-5m}. \end{align*} We now observe that $V_G\p{K+5k+2-5m}=5\pb{k-m}+b_2$, as this term falls in the inductive hypothesis or in restriction~\ref{it:b2} on the initial condition. 
Also, since $b_1\equiv4\pmod{5}$ and since restriction~\ref{it:ic} guarantees $g\p{k}\leq k+\frac{b_1+1}{5}$, we have $V_G\p{K+5\pb{k-g\p{k}}+2-b_1}=5m$. Putting these together yields \begin{align*} V_G\p{K+5k+2}&=5m+5\pb{k-m}+b_2=5k+b_2, \end{align*} as required. \item[$n-K\equiv 3\pmod{5}$:] In this case, $n=K+5k+3$ for some $k\geq0$. We have \[ V_G\p{K+5k+3}=V_G\p{K+5k+3-V_G\p{K+5k+2}}+V_G\p{K+5k+3-V_G\p{K+5k-1}}. \] By induction, $V_G\p{K+5k+2}=5k+b_2$, as this term falls in the inductive hypothesis. Similarly, $V_G\p{K+5k-1}=5\pb{k-1}+b_4$, as this term either falls in the inductive hypothesis or in restriction~\ref{it:b4} on the initial condition. So, we have \begin{align*} V_G\p{K+5k+3}&=V_G\p{K+5k+3-5k-b_2}+V_G\p{K+5k+3-5\pb{k-1}-b_4}\\ &=V_G\p{K+3-b_2}+V_G\p{K+8-b_4}. \end{align*} By restriction~\ref{it:5m}, this equals $5m$, as required. \item[$n-K\equiv 4\pmod{5}$:] In this case, $n=K+5k+4$ for some $k\geq0$. We have \[ V_G\p{K+5k+4}=V_G\p{K+5k+4-V_G\p{K+5k+3}}+V_G\p{K+5k+4-V_G\p{K+5k}}. \] By induction, $V_G\p{K+5k}=5f\p{k}+b_0$, as this term falls in the inductive hypothesis. Similarly, $V_G\p{K+5k+3}=5m$, as this term also falls in the inductive hypothesis. So, we have \begin{align*} V_G\p{K+5k+4}&=V_G\p{K+5k+4-5m}+V_G\p{K+5k+4-5f\p{k}-b_0}\\ &=V_G\p{K+5k+4-5m}+V_G\p{K+5\pb{k-f\p{k}}+4-b_0}. \end{align*} We now observe that $V_G\p{K+5k+4-5m}=5\pb{k-m}+b_4$, as this term falls in the inductive hypothesis or in restriction~\ref{it:b4} on the initial condition. Also, since $b_0\equiv1\pmod{5}$ and since restriction~\ref{it:ic} guarantees $f\p{k}\leq k+\frac{b_0-1}{5}$, we have $V_G\p{K+5\pb{k-f\p{k}}+4-b_0}=5m$. Putting these together yields \begin{align*} V_G\p{K+5k+4}&=5\pb{k-m}+b_4+5m=5k+b_4, \end{align*} as required. 
\end{description} \end{proof} \subsubsection{Concrete Examples of Solutions to the $V$-recurrence} Let us now see a couple of concrete solutions to the $V$-recurrence corresponding to specific settings of the parameters in Theorem~\ref{thm:infv}. \begin{pro}\label{prop:v1} The initial conditions $\ic{4,2,5,3,1}$ to the Hofstadter $V$-recurrence produce a solution of the following form for $k\geq1$: \[ \begin{cases} V_G\p{5k}=5f\p{k}+1\\ V_G\p{5k+1}=5g\p{k}-1\\ V_G\p{5k+2}=5k+2\\ V_G\p{5k+3}=5\\ V_G\p{5k+4}=5k+3, \end{cases} \] where $f$ and $g$ are the sequences in Proposition~\ref{prop:fg1}. \end{pro} \begin{proof} Let $K=10$, $b_0=1$, $b_1=-1$, $b_2=12$, $b_4=13$, $a_f=2$, $a_g=3$, and $m=1$. These values satisfy all of the requirements on these parameters. The first nine terms of the sequence resulting from the initial conditions $\ic{4,2,5,3,1}$ are $4,2,5,3,1,4,7,5,8$. We now check that these satisfy all seven requirements: \begin{itemize} \item We have $V_G\p{K+5-b_4}=V_G\p{10+5-13}=V_G\p{2}=2=a_f$, as required. \item We have $V_G\p{K+6-b_2}=V_G\p{10+6-12}=V_G\p{4}=3=a_g$, as required. \item We have $V_G\p{K+3-b_2}+V_G\p{K+8-b_4}=V_G\p{10+3-12}+V_G\p{10+8-13}=V_G\p{1}+V_G\p{5}=4+1=5$, as required. \item We have $V_G\p{K+2-5}=V_G\p{7}=7=12-5$, as required. \item We have $V_G\p{K-2}=V_G\p{8}=5$, as required. \item We have $V_G\p{K+4-5}=V_G\p{9}=8=13-5$, as required. \item In this case, $n_0=1$. We have $V_G\p{5}=1$ and $V_G\p{6}=4$. This means our initial conditions to the recurrence system are $\ic{0}$ for $f$ and $\ic{1}$ for $g$. Furthermore, we see that the Golomb-like system obtained is precisely the one in Proposition~\ref{prop:fg1}, so we obtain those sequences. We now observe that, in those sequences, for $n>1$, $f\p{n}\leq n$, $g\p{n-1}\leq n$, and $g\p{n}\leq n$, meaning this final restriction is satisfied. 
\end{itemize} The above means we have a solution of the form \[ \begin{cases} V_G\p{10+5k}=5f\p{k}+1\\ V_G\p{10+5k+1}=5g\p{k}-1\\ V_G\p{10+5k+2}=5k+12\\ V_G\p{10+5k+3}=5\\ V_G\p{10+5k+4}=5k+13, \end{cases} \] beginning at index~$10$. Re-indexing and noting that the pattern actually starts earlier results in the desired solution. \end{proof} \begin{pro}\label{prop:v2} The initial conditions $\ic{3,1,4,2,5,3}$ to the Hofstadter $V$-recurrence produce a solution of the following form for $k\geq2$: \[ \begin{cases} V_G\p{5k}=10\\ V_G\p{5k+1}=5k-2\\ V_G\p{5k+2}=5f\p{k+1}+1\\ V_G\p{5k+3}=5g\p{k+1}-1\\ V_G\p{5k+4}=5k-3, \end{cases} \] where $f$ and $g$ are the sequences in Proposition~\ref{prop:fg2}. \end{pro} \begin{proof} Let $K=22$, $b_0=1$, $b_1=-1$, $b_2=17$, $b_4=23$, $a_f=2$, $a_g=8$, and $m=2$. These values satisfy all of the requirements on these parameters. The first $22$ terms of the sequence resulting from the initial conditions $\ic{3,1,4,2,5,3}$ are \[ 3,1,4,2,5,3,6,4,7,10,8,6,9,7,10,13,6,14,12,10,18,6. \] We now check that these satisfy all seven requirements: \begin{itemize} \item We have $V_G\p{K+5-b_4}=V_G\p{22+5-23}=V_G\p{4}=2=a_f$, as required. \item We have $V_G\p{K+6-b_2}=V_G\p{22+6-17}=V_G\p{11}=8=a_g$, as required. \item We have $V_G\p{K+3-b_2}+V_G\p{K+8-b_4}=V_G\p{22+3-17}+V_G\p{22+8-23}=V_G\p{8}+V_G\p{7}=4+6=10$, as required. \item We have $V_G\p{K+2-10}=V_G\p{14}=7=17-10$, and $V_G\p{K+2-5}=V_G\p{19}=12=17-5$, as required. \item We have $V_G\p{K-2}=V_G\p{10}=10$, as required. \item We have $V_G\p{K+4-10}=V_G\p{16}=13=23-10$, and $V_G\p{K+4-5}=V_G\p{21}=18=23-5$, as required. \item In this case, $n_0=4$. We have $V_G\p{2}=1$, $V_G\p{7}=6$, $V_G\p{12}=6$, $V_G\p{17}=6$ and $V_G\p{3}=4$, $V_G\p{8}=4$, $V_G\p{13}=9$, $V_G\p{18}=14$. This means our initial conditions to the recurrence system are $\ic{0,1,1,1}$ for $f$ and $\ic{1,1,2,3}$ for $g$. 
Furthermore, we see that the Golomb-like system obtained is precisely the one in Proposition~\ref{prop:fg2}, so we obtain those sequences. (The initial conditions there are the first three terms of each of these initial conditions, but the fourth terms here equal the fourth terms in those sequences.) We now observe that, in those sequences, for $n>4$, $f\p{n}\leq n$, $g\p{n-1}\leq n$, and $g\p{n}\leq n$, meaning this final restriction is satisfied. \end{itemize} The above means we have a solution of the form \[ \begin{cases} V_G\p{22+5k}=5f\p{k}+1\\ V_G\p{22+5k+1}=5g\p{k}-1\\ V_G\p{22+5k+2}=5k+17\\ V_G\p{22+5k+3}=10\\ V_G\p{22+5k+4}=5k+23, \end{cases} \] beginning at index~$22$. Re-indexing and noting that the pattern actually starts earlier results in the desired solution. \end{proof} \subsection{A Companion to the $V$-Recurrence} The patterns we observe in the $V$-recurrence all repeat with a period of $5$. The $V$-recurrence, $V\p{n}=V\p{n-V\p{n-1}}+V\p{n-V\p{n-4}}$, prominently features a $1$ and a $4$, which sum to $5$. There is another recurrence, $H\p{n}=H\p{n-H\p{n-2}}+H\p{n-H\p{n-3}}$, with a similar property. In fact, this recurrence seems to be a sort of \emph{companion} to the $V$-recurrence, in that it has similar families of period-$5$ solutions. Like the $V$-recurrence, $H$ has four fundamentally different families of solutions that eventually consist of interleavings of five constant or linear sequences (see Table~\ref{tab:ich}). More importantly, there is a family of solutions analogous to the solutions to $V$ described in Theorem~\ref{thm:infv}. 
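Solutions such as the one in Proposition~\ref{prop:v1} are easy to confirm numerically. The sketch below (ours, not the paper's) iterates the $V$-recurrence from $\ic{4,2,5,3,1}$ and checks the claimed period-$5$ form, using the closed form $f\p{k}=\lfloor\sqrt{k-1}\rfloor$ for the Proposition~\ref{prop:fg1} subsequence:

```python
import math

# Sketch (not from the paper): verify the period-5 form of the solution in
# Proposition prop:v1, generated by the V-recurrence from <4,2,5,3,1>.

def run_v(ic, limit):
    v = [None] + list(ic)  # v[n] == V(n)
    for n in range(len(ic) + 1, limit + 1):
        v.append(v[n - v[n - 1]] + v[n - v[n - 4]])
    return v

v = run_v([4, 2, 5, 3, 1], 500)
for k in range(1, 99):
    assert v[5 * k] == 5 * math.isqrt(k - 1) + 1   # 5 f(k) + 1
    assert v[5 * k + 2] == 5 * k + 2               # linear track
    assert v[5 * k + 3] == 5                       # constant track
    assert v[5 * k + 4] == 5 * k + 3               # linear track
    assert v[5 * k + 1] % 5 == 4                   # 5 g(k) - 1
print("pattern verified up to k = 98")
```

The same runner with $\ic{3,1,4,2,5,3}$ confirms the form in Proposition~\ref{prop:v2} in the same way.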
\begin{table} \begin{tabular}{|c|c|}\hline \textbf{Pattern} & \textbf{Initial Condition}\\\hline C,C,C,L,L & \[ \ic{5, 3, 0, -1, -1, 5, 0, 1, 4, 2, 5, 3, 10} \]\\\hline C,C,L,C,L & \[ \ic{2, 0, 5, 0, 0, 0, 5, 5, 5, 3, 2} \]\\\hline C,C,L,L,L & \[ \ic{7, 0, -3, 0, 4, 7, 5, 0, 7, 4, 0, 8, 7, 5, 4, 7, 15, 12, 10} \]\\\hline C,C,L,L,L & \[ \ic{6, 1, 0, 3, 3, 0, 6, 4, -1, 3, 6, 0, 12, 4, 3, 6, 16, 14, 9} \]\\\hline \end{tabular} \caption{Patterns and representative initial conditions for each of the four families of period-$5$ solutions to the recurrence $H$. (C=constant, L=linear)} \label{tab:ich} \end{table} \begin{thm}\label{thm:infh} Let $K$, $b_0$, $b_1$, $b_2$, $b_4$, $a_f$, $a_g$, and $m$ be integers satisfying the following properties: \begin{itemize} \item $b_0\equiv1\pmod{5}$ and $6\leq b_0<K+2$ \item $b_1\equiv4\pmod{5}$ and $9\leq b_1<K+3$ \item $b_2\equiv2\pmod{5}$ \item $b_4\equiv3\pmod{5}$ \item $a_f\equiv 4\pmod{5}$ \item $a_g\equiv 1\pmod{5}$ \item $a_f+a_g>0$ \item $m\geq1$. \end{itemize} \noindent Define the following Golomb-like system: \[ \begin{cases} f\p{n}=g\!\pb{n-g\p{n-1}-\frac{b_4+2}{5}}+\frac{b_4-b_2+a_f}{5}\\ g\p{n}=f\!\pb{n-f\p{n}-\frac{b_2-2}{5}}+\frac{b_2-b_4+a_g}{5}. \end{cases} \] Then, there is a solution $H_G$ to the $H$-recurrence that, starting at index $K$, has the form \[ \begin{cases} H_G\p{K+5k}=5k+b_0\\ H_G\p{K+5k+1}=5k+b_1\\ H_G\p{K+5k+2}=5f\p{k}+b_2\\ H_G\p{K+5k+3}=5m\\ H_G\p{K+5k+4}=5g\p{k}+b_4 \end{cases} \] with any initial condition satisfying the following properties: \begin{enumerate} \item\label{it:afh} $H_G\p{K+2-b_0}=a_f$ \item\label{it:agh} $H_G\p{K+4-b_1}=a_g$ \item\label{it:5mh} $H_G\p{K+3-b_0}+H_G\p{K+3-b_1}=5m$ \item\label{it:b0h} For each integer $1\leq i\leq m$, $H_G\p{K-5i}=b_0-5i$ \item\label{it:b1h} For each integer $1\leq i\leq m$, $H_G\p{K+1-5i}=b_1-5i$ \item\label{it:b3h} $H_G\p{K-2}=5m$ \item\label{it:ich} Let $n_0=\fl{\frac{K+1}{5}}$. 
The initial conditions \[ \ic{\frac{H_G\p{K-5n_0+2}-b_2}{5}, \frac{H_G\p{K-5\pb{n_0-1}+2}-b_2}{5}, \frac{H_G\p{K-5\pb{n_0-2}+2}-b_2}{5}, \ldots, \frac{H_G\p{K-3}-b_2}{5}} \] for the $f$-sequence and \[ \ic{\frac{H_G\p{K-5n_0+4}-b_4}{5}, \frac{H_G\p{K-5\pb{n_0-1}+4}-b_4}{5}, \frac{H_G\p{K-5\pb{n_0-2}+4}-b_4}{5}, \ldots, \frac{H_G\p{K-1}-b_4}{5}} \] for the $g$-sequence generate sublinear sequences where, for all $n>n_0$, $f\p{n-1}\leq n+\frac{b_2+3}{5}$, $f\p{n}\leq n+\frac{b_2-2}{5}$, and $g\p{n-1}\leq n+\frac{b_4+2}{5}$. \end{enumerate} \end{thm} \begin{proof} The proof is by induction on the index, with base case provided by the initial condition. Suppose the parameters and initial conditions satisfy all of the listed conditions, and furthermore suppose that the general form of the solution holds through index $n-1$ for some $n\geq K$. We now have five cases to consider: \begin{description} \item[$n-K\equiv 0\pmod{5}$:] In this case, $n=K+5k$ for some $k\geq0$. We have \[ H_G\p{K+5k}=H_G\p{K+5k-H_G\p{K+5k-2}}+H_G\p{K+5k-H_G\p{K+5k-3}}. \] By induction, $H_G\p{K+5k-2}=5m$, as this term falls in the inductive hypothesis or in restriction~\ref{it:b3h} on the initial condition. Similarly, $H_G\p{K+5k-3}=5f\p{k-1}+b_2$, as this term falls in the inductive hypothesis or in restriction~\ref{it:ich}. So, we have \begin{align*} H_G\p{K+5k}&=H_G\p{K+5k-5m}+H_G\p{K+5k-5f\p{k-1}-b_2}\\ &=H_G\p{K+5k-5m}+H_G\p{K+5\pb{k-f\p{k-1}}-b_2}. \end{align*} We now observe that $H_G\p{K+5k-5m}=5\pb{k-m}+b_0$, as this term falls in the inductive hypothesis or in restriction~\ref{it:b0h} on the initial condition. Also, since $b_2\equiv2\pmod{5}$ and since restriction~\ref{it:ich} guarantees $f\p{k-1}\leq k+\frac{b_2+3}{5}$, we have $H_G\p{K+5\pb{k-f\p{k-1}}-b_2}=5m$. Putting these together yields \begin{align*} H_G\p{K+5k}&=5\pb{k-m}+b_0+5m=5k+b_0, \end{align*} as required. \item[$n-K\equiv 1\pmod{5}$:] In this case, $n=K+5k+1$ for some $k\geq0$. 
We have \[ H_G\p{K+5k+1}=H_G\p{K+5k+1-H_G\p{K+5k-1}}+H_G\p{K+5k+1-H_G\p{K+5k-2}}. \] By induction, $H_G\p{K+5k-1}=5g\p{k-1}+b_4$, as this term falls in the inductive hypothesis or in restriction~\ref{it:ich} on the initial condition. Similarly, $H_G\p{K+5k-2}=5m$, as this term either falls in the inductive hypothesis or in restriction~\ref{it:b3h} on the initial condition. So, we have \begin{align*} H_G\p{K+5k+1}&=H_G\p{K+5k+1-5g\p{k-1}-b_4}+H_G\p{K+5k+1-5m}\\ &=H_G\p{K+5\pb{k-g\p{k-1}}+1-b_4}+H_G\p{K+5k+1-5m}. \end{align*} We now observe that $H_G\p{K+5k+1-5m}=5\pb{k-m}+b_1$, as this term falls in the inductive hypothesis or in restriction~\ref{it:b1h} on the initial condition. Also, since $b_4\equiv3\pmod{5}$ and since restriction~\ref{it:ich} guarantees $g\p{k-1}\leq k+\frac{b_4+2}{5}$, we have $H_G\p{K+5\pb{k-g\p{k-1}}+1-b_4}=5m$. Putting these together yields \begin{align*} H_G\p{K+5k+1}&=5m+5\pb{k-m}+b_1=5k+b_1, \end{align*} as required. \item[$n-K\equiv 2\pmod{5}$:] In this case, $n=K+5k+2$ for some $k\geq0$. We have \[ H_G\p{K+5k+2}=H_G\p{K+5k+2-H_G\p{K+5k}}+H_G\p{K+5k+2-H_G\p{K+5k-1}}. \] By induction, $H_G\p{K+5k}=5k+b_0$, as this term falls in the inductive hypothesis. Similarly, $H_G\p{K+5k-1}=5g\p{k-1}+b_4$, as this term either falls in the inductive hypothesis or in restriction~\ref{it:ich} on the initial condition. So, we have \begin{align*} H_G\p{K+5k+2}&=H_G\p{K+5k+2-5k-b_0}+H_G\p{K+5k+2-5g\p{k-1}-b_4}\\ &=H_G\p{K+2-b_0}+H_G\p{K+5\pb{k-g\p{k-1}}+2-b_4}. \end{align*} We now observe that $H_G\p{K+2-b_0}=a_f$ by restriction~\ref{it:afh} on the initial condition. Also, since $b_4\equiv3\pmod{5}$ and since restriction~\ref{it:ich} guarantees $g\p{k-1}\leq k+\frac{b_4+2}{5}$, we have $H_G\p{K+5\pb{k-g\p{k-1}}+2-b_4}=5g\!\pb{k-g\p{k-1}-\frac{b_4+2}{5}}+b_4$. 
Putting these together yields \begin{align*} H_G\p{K+5k+2}&=a_f+5g\!\pb{k-g\p{k-1}-\frac{b_4+2}{5}}+b_4\\ &=5g\!\pb{k-g\p{k-1}-\frac{b_4+2}{5}}+\pb{b_4-b_2+a_f}+b_2\\ &=5f\p{k}+b_2, \end{align*} as required. \item[$n-K\equiv 3\pmod{5}$:] In this case, $n=K+5k+3$ for some $k\geq0$. We have \[ H_G\p{K+5k+3}=H_G\p{K+5k+3-H_G\p{K+5k+1}}+H_G\p{K+5k+3-H_G\p{K+5k}}. \] By induction, $H_G\p{K+5k+1}=5k+b_1$, as this term falls in the inductive hypothesis. Similarly, $H_G\p{K+5k}=5k+b_0$, as this term also falls in the inductive hypothesis. So, we have \begin{align*} H_G\p{K+5k+3}&=H_G\p{K+5k+3-5k-b_1}+H_G\p{K+5k+3-5k-b_0}\\ &=H_G\p{K+3-b_1}+H_G\p{K+3-b_0}. \end{align*} By restriction~\ref{it:5m}, this equals $5m$, as required. \item[$n-K\equiv 4\pmod{5}$:] In this case, $n=K+5k+4$ for some $k\geq0$. We have \[ H_G\p{K+5k+4}=H_G\p{K+5k+4-H_G\p{K+5k+2}}+H_G\p{K+5k+4-H_G\p{K+5k+1}}. \] By induction, $H_G\p{K+5k+1}=5k+b_1$, as this term falls in the inductive hypothesis. Similarly, $H_G\p{K+5k+2}=5f\p{k}+b_2$, as this term falls in the inductive hypothesis. So, we have \begin{align*} H_G\p{K+5k+4}&=H_G\p{K+5k+4-5f\p{k}-b_2}+H_G\p{K+5k+4-5k-b_1}\\ &=H_G\p{K+5\pb{k-f\p{k}}+4-b_2}+H_G\p{K+4-b_1}. \end{align*} We now observe that $H_G\p{K+4-b_1}=a_g$ by restriction~\ref{it:agh} on the initial condition. Also, since $b_2\equiv2\pmod{5}$ and since restriction~\ref{it:ich} guarantees $f\p{k}\leq k+\frac{b_2-2}{5}$, we have $H_G\p{K+5\pb{k-f\p{k}}+4-b_2}=5f\!\pb{k-f\p{k}-\frac{b_2-2}{5}}+b_2$. Putting these together yields \begin{align*} H_G\p{K+5k+4}&=5f\!\pb{k-f\p{k}-\frac{b_2-2}{5}}+b_2+a_g\\ &=5f\!\pb{k-f\p{k}-\frac{b_2-2}{5}}+\pb{b_2-b_4+a_g}+b_4\\ &=5g\p{k}+b_4, \end{align*} as required. \end{description} \end{proof} \subsection{Concrete Examples for the $H$-recurrence} Let us now see a couple of concrete solutions to the $H$-recurrence corresponding to specific settings of the parameters in Theorem~\ref{thm:infh}.
\begin{pro}\label{prop:h1} The initial conditions $\ic{3,1,4,2}$ to the $H$-recurrence produce a solution of the following form for $k\geq1$: \[ \begin{cases} H_G\p{5k}=5\\ H_G\p{5k+1}=5g\p{k}-2\\ H_G\p{5k+2}=5k+1\\ H_G\p{5k+3}=5k+4\\ H_G\p{5k+4}=5f\p{k+1}+2, \end{cases} \] where $f$ and $g$ are the sequences in Proposition~\ref{prop:fg1}. \end{pro} \begin{proof} Let $K=12$, $b_0=11$, $b_1=14$, $b_2=2$, $b_4=-2$, $a_f=4$, $a_g=1$, and $m=1$. These values satisfy all of the requirements on these parameters. The first $11$ terms of the sequence resulting from the initial conditions $\ic{3,1,4,2}$ are $3,1,4,2,5, 3, 6, 9, 7, 5, 3$. We now check that these satisfy all seven requirements: \begin{itemize} \item We have $H_G\p{K+2-b_0}=H_G\p{12+2-11}=H_G\p{3}=4=a_f$, as required. \item We have $H_G\p{K+4-b_1}=H_G\p{12+4-14}=H_G\p{2}=1=a_g$, as required. \item We have $H_G\p{K+3-b_0}+H_G\p{K+3-b_1}=H_G\p{12+3-11}+H_G\p{12+3-14}=H_G\p{4}+H_G\p{1}=2+3=5$, as required. \item We have $H_G\p{K-5}=H_G\p{7}=6=11-5$, as required. \item We have $H_G\p{K+1-5}=H_G\p{8}=9=14-5$, as required. \item We have $H_G\p{K-2}=H_G\p{10}=5$, as required. \item In this case, $n_0=2$. We have $H_G\p{4}=2$, $H_G\p{9}=7$ and $H_G\p{6}=3$, $H_G\p{11}=3$. This means our initial conditions to the recurrence system are $\ic{0,1}$ to $f$ and $\ic{1,1}$ to $g$. Furthermore, we see that the Golomb-like system obtained is precisely the one in Proposition~\ref{prop:fg1}, so we obtain those sequences. (The initial conditions there are the first term of our $f$-initial condition, but the other terms here equal the next terms in those sequences.) We now observe that, in those sequences, for $n>2$, $f\p{n-1}\leq n+1$, $f\p{n}\leq n$, and $g\p{n}\leq n$, meaning this final restriction is satisfied. 
\end{itemize} The above means we have a solution of the form \[ \begin{cases} H_G\p{12+5k}=5k+11\\ H_G\p{12+5k+1}=5k+14\\ H_G\p{12+5k+2}=5f\p{k}+2\\ H_G\p{12+5k+3}=5\\ H_G\p{12+5k+4}=5g\p{k}-2, \end{cases} \] beginning at index~$12$. Re-indexing and noting that the pattern actually starts earlier results in the desired solution. \end{proof} \begin{pro}\label{prop:h2} The initial conditions $\ic{4,2,5,3,1,4,7,5}$ to the $H$-recurrence produce a solution of the following form for $k\geq3$: \[ \begin{cases} H_G\p{5k}=5k-9\\ H_G\p{5k+1}=5k-6\\ H_G\p{5k+2}=5f\p{k}+2\\ H_G\p{5k+3}=10\\ H_G\p{5k+4}=5g\p{k}-2, \end{cases} \] where $f$ and $g$ are the sequences in Proposition~\ref{prop:fg12}. \end{pro} \begin{proof} Let $K=25$, $b_0=16$, $b_1=19$, $b_2=2$, $b_4=-2$, $a_f=9$, $a_g=6$, and $m=2$. These values satisfy all of the requirements on these parameters. The first $24$ terms of the sequence resulting from the initial conditions $\ic{4,2,5,3,1,4,7,5}$ are \[ 4, 2, 5, 3, 1, 4, 7, 5, 3, 6, 9, 7, 10, 8, 6, 9, 12, 10, 13, 11, 14, 12, 10, 13. \] We now check that these satisfy all seven requirements: \begin{itemize} \item We have $H_G\p{K+2-b_0}=H_G\p{25+2-16}=H_G\p{11}=9=a_f$, as required. \item We have $H_G\p{K+4-b_1}=H_G\p{25+4-19}=H_G\p{10}=6=a_g$, as required. \item We have $H_G\p{K+3-b_0}+H_G\p{K+3-b_1}=H_G\p{25+3-16}+H_G\p{25+3-19}=H_G\p{12}+H_G\p{9}=7+3=10$, as required. \item We have $H_G\p{K-10}=H_G\p{15}=6=16-10$, and $H_G\p{K-5}=H_G\p{20}=11=16-5$, as required. \item We have $H_G\p{K+1-10}=H_G\p{16}=9=19-10$, and $H_G\p{K+1-5}=H_G\p{21}=14=19-5$, as required. \item We have $H_G\p{K-2}=H_G\p{23}=10$, as required. \item In this case, $n_0=5$. We have $H_G\p{2}=2$, $H_G\p{7}=7$, $H_G\p{12}=7$, $H_G\p{17}=12$, $H_G\p{22}=12$ and $H_G\p{4}=3$, $H_G\p{9}=3$, $H_G\p{14}=8$, $H_G\p{19}=13$, $H_G\p{24}=13$. This means our initial conditions to the recurrence system are $\ic{0,1,1,2,2}$ to $f$ and $\ic{1,1,2,3,3}$ to $g$. 
Furthermore, we see that the Golomb-like system obtained is precisely the one in Proposition~\ref{prop:fg12}, so we obtain those sequences. (The initial conditions there are the first three terms of our initial conditions, but the other terms here equal the next terms in those sequences.) We now observe that, in those sequences, for $n>5$, $f\p{n-1}\leq n+1$, $f\p{n}\leq n$, and $g\p{n}\leq n$, meaning this final restriction is satisfied. \end{itemize} The above means we have a solution of the form \[ \begin{cases} H_G\p{25+5k}=5k+16\\ H_G\p{25+5k+1}=5k+19\\ H_G\p{25+5k+2}=5f\p{k}+2\\ H_G\p{25+5k+3}=10\\ H_G\p{25+5k+4}=5g\p{k}-2, \end{cases} \] beginning at index~$25$. Re-indexing and noting that the pattern actually starts earlier results in the desired solution. \end{proof} \section{Conclusion} This study sheds light on a new kind of solution while exploring the mysterious nature of the $V$-recurrence. In particular, $V_{c}(n)$ is a fascinating example of the coexistence of order and chaos in a meta-Fibonacci recurrence, although in that case the chaotic behaviour eventually terminates the corresponding sequence after billions of terms. This mixture of regularity and irregularity recalls the results of Pinn's study, which suggests that a physical picture, such as viewing the terms as random walks in an unusual environment, could help to better understand some of the interesting properties of certain chaotic meta-Fibonacci sequences~\cite{9}. For example, it would be remarkably interesting if the sequences obtained in this study could help to model the transport of atoms, by altering the site number of the potential, in terms of the localization and delocalization properties of quasi-periodic lattices~\cite{20}. Future work could focus on such physical applications of this curious family of nonlinear recurrences.
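Before moving on, we remark that the closed forms in Propositions~\ref{prop:h1} and~\ref{prop:h2} are easy to check by direct iteration of the $H$-recurrence $H_G(n)=H_G(n-H_G(n-2))+H_G(n-H_G(n-3))$ used throughout the proofs. The sketch below (Python; the helper name is ours) regenerates the initial segments quoted in the two proofs from the stated initial conditions.

```python
def h_sequence(init, n_terms):
    """Iterate the H-recurrence H(n) = H(n - H(n-2)) + H(n - H(n-3)).

    `init` holds the initial conditions H(1), H(2), ...; the returned
    list is 0-indexed, so H(n) sits at position n - 1.
    """
    H = list(init)
    while len(H) < n_terms:
        n = len(H) + 1  # 1-based index of the term being computed
        H.append(H[(n - H[n - 3]) - 1] + H[(n - H[n - 4]) - 1])
    return H

# Proposition prop:h1, initial conditions <3,1,4,2>:
h1 = h_sequence([3, 1, 4, 2], 18)
# the first 11 terms match the proof: 3,1,4,2,5,3,6,9,7,5,3

# Proposition prop:h2, initial conditions <4,2,5,3,1,4,7,5>:
h2 = h_sequence([4, 2, 5, 3, 1, 4, 7, 5], 24)
```

For Proposition~\ref{prop:h1} one can also confirm directly that the $f$- and $g$-free parts of the closed form, $H_G(5k)=5$, $H_G(5k+2)=5k+1$, and $H_G(5k+3)=5k+4$, hold for small $k\geq1$.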
At the same time, it is known that finding different meta-Fibonacci recurrences with similar behaviour is significant and has been an essential key to substantial progress in new directions in this research area~\cite{1}. In that direction, this study also provides new connections between two essential nested recurrence relations, represented by $V$ and $H$, and the corresponding results strongly suggest that the Hofstadter--Huber generalization is fruitful for the discovery of new families of curious solution sequences, especially for the quasi-periodic relations on which this study focuses. \begin{backmatter} \section*{Competing interests} The authors declare that they have no competing interests. \section*{Authors' contributions} Altug Alkan is the main contributor to this research. Altug Alkan, Orhan Ozgur Aybar and Zehra Akdeniz collaborated on the analysis of the chaotic behaviour of the sequence in Section 2. Nathan Fox gave significant support in the proof-related parts of this study, and Nathan Fox and Altug Alkan collaborated on the proofs of the solutions in Section 3. All authors read and approved the final manuscript. \section*{Acknowledgements} The authors would like to thank Robert Israel for his valuable help with the Maple-related requirements of this study. Altug Alkan also would like to thank Rémy Sigrist and Giovanni Resta for their valuable computational assistance, especially regarding OEIS contributions such as A309567, A309636, A309650, A309704, and for helpful feedback about some related sequences of this study. \section*{Funding} There is no funding to report for this article.
\section{Continuous transition between doubled semion and toric code models} \label{dstcSec} Here we note that it is possible to understand a continuous transition between two kinds of $SU(2)$ invariant $Z_2$ spin liquids, corresponding to the doubled semion and toric code models, respectively. Specifically, the doubled semion model can continuously transition to the toric code model in the presence of $SU(2)$ spin symmetry. The Lagrangian at long wavelengths is described by $QED_3$ with two flavors of Dirac fermions ($N_f = 2$). Remarkably, in this case the gaps of both triplet and singlet excitations approach zero at the quantum phase transition. In the absence of $SU(2)$ spin symmetry, there can be generically an intervening topologically trivial gapped insulator unless other symmetries are present to stabilize the direct transition. To see this, first consider the BIQH phase diagram studied in \cite{grover2013,lu2013}. This can be understood using the following field theory: \begin{align} \label{biqhtrans} \mathcal{L} = &\sum_{k=\uparrow, \downarrow} \bar{\psi}_k [\gamma^\mu (\partial_\mu - i \alpha_\mu) + M_k] \psi_k - \frac{1}{g} (\epsilon_{\mu\nu\lambda}\partial_\nu \alpha_\lambda)^2 \nonumber \\ &-\frac{1}{4\pi} A_e \partial A_e - \frac{1}{2\pi} A_e \partial \alpha, \end{align} where here $\alpha$ is an emergent $U(1)$ gauge field, $\psi_k$ for $k =\uparrow, \downarrow$ are each two-component Dirac fermions, and $k$ is a physical $SU(2)$ spin index. $\gamma^\mu$ are the Pauli matrices, and $\bar{\psi}_k \equiv \psi_k^\dagger \gamma^0$. $SU(2)$ symmetry implies that $M_\uparrow = M_\downarrow$. When $M_\uparrow = M_\downarrow > 0$, the theory is in the BIQH state; when $M_\uparrow = M_\downarrow < 0$, the theory is in a topologically trivial insulating state with $\bar{\sigma}_{xy} = 0$. When $M_\uparrow$ and $M_\downarrow$ have opposite signs, which breaks the $SU(2)$ spin symmetry, the theory can be shown to be in a superfluid state.
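As a bookkeeping aid, the mass-sign logic of this phase diagram can be encoded in a few lines (Python; the function name and phase labels are ours, introduced purely for illustration, not a computation of the field theory itself):

```python
def nf2_phase(m_up, m_down):
    """Phase of the N_f = 2 theory as a function of the signs of the
    two Dirac masses, following the discussion above (a toy lookup)."""
    if m_up > 0 and m_down > 0:
        return "BIQH"                 # Hall conductance sigma_xy = 2
    if m_up < 0 and m_down < 0:
        return "trivial insulator"    # sigma_xy = 0
    if m_up * m_down < 0:
        # opposite signs require broken SU(2) spin symmetry
        return "superfluid"
    return "critical"                 # at least one mass gap closes
```

With $SU(2)$ intact, $M_\uparrow = M_\downarrow$, so only the first two (and the critical) cases are reachable; breaking $SU(2)$ opens the opposite-sign superfluid wedge and splits the direct transition.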
\begin{figure} \centerline{ \includegraphics[width=3.2in]{dstc.eps} } \caption{\label{dstcFig} Possible schematic phase diagram for $Z_2$ spin liquids. A direct transition between the doubled semion and toric code models can occur in the presence of $SU(2)$ spin symmetry. The topologically trivial phase may itself spontaneously break $SU(2)$ and correspond to a conventional magnetically ordered state. In the absence of $SU(2)$ spin symmetry, the direct transition splits into two transitions, with the topologically trivial insulating phase intervening. } \end{figure} (\ref{biqhtrans}) can be understood as arising from the following parton construction \begin{align} b_k = f_0 f_k, \end{align} where $k = \uparrow,\downarrow$ and $b_k$ have charge $1$ under the external gauge field $A_e$. Next, we assume a mean-field ansatz where $f_0$ forms a quantized Hall insulator with Chern number $C_0 = -1$, while each $f_k$ is undergoing a Chern-number changing transition from Chern numbers $C_k=1$ to $C_k=0$. (\ref{biqhtrans}) is simply the field theory for this transition. Since the spinful fermions are undergoing a Chern-number changing transition, the gap to spinful excitations closes at this transition. Note that the direct transition involving two Dirac fermions is protected by the $SU(2)$ spin symmetry; when it is broken, and there is no additional symmetry protecting $M_\uparrow = M_\downarrow$, the direct transition will split into two transitions. The charge-$1$ superfluid phase of $b_k$ requires $(C_\uparrow, C_\downarrow) = (1,0)$ or $(0,1)$, and therefore requires the $SU(2)$ to be broken. Note that in this theory, it is important that there is no chemical potential term for the fermions, which requires that the density of $f_\alpha$, and correspondingly $b_\alpha$, each be constant through the transition. 
Now consider the above theory, but with the global $U(1)$ symmetry associated with $A_e$ explicitly broken to $Z_2$ by a charge-2 Higgs scalar: \begin{align} \mathcal{L} = &|(\partial - i 2 A_e) \Phi|^2 - m |\Phi|^2 + \lambda |\Phi|^4 - \frac{1}{4\pi} A_e \partial A_e \nonumber \\ &+\sum_{k=\uparrow, \downarrow} \bar{\psi}_k [\gamma^\mu (\partial_\mu - i \alpha_\mu) + M_k] \psi_k - \frac{1}{2\pi} A_e \partial \alpha , \end{align} with $m > 0$ so that $\Phi$ is condensed: $\langle \Phi \rangle \neq 0$. The insulating states are now replaced by gapped states with a $Z_2$ global symmetry. Remarkably, the BIQH state descends into the topologically non-trivial $Z_2$ SPT state, while the trivial insulator descends into the topologically trivial $Z_2$ symmetric gapped state. The superfluid descends into a $Z_2$ symmetry breaking state. This can be seen as follows. The BIQH state has two counterpropagating edge modes, described by two chiral boson fields $\phi_L$ and $\phi_R$. The backscattering term $\cos(a(\phi_L + \phi_R))$, for integer $a$, is prohibited by the $U(1)$ charge conservation, because only $\phi_L$ transforms under the action of the $U(1)$ symmetry. If $U(1)$ is broken to $Z_2$, terms of the form $\cos(2a(\phi_L + \phi_R))$ are now allowed in the edge Hamiltonian as they preserve the $Z_2$ symmetry. However, if they generate an energy gap, then $\langle e^{i (\phi_L + \phi_R) } \rangle \neq 0$, which breaks the $Z_2$ symmetry. Therefore the edge states still cannot be gapped without breaking the $Z_2$ symmetry. This is the signature of the topologically non-trivial $Z_2$ SPT state. In contrast, the trivial Mott insulator has no edge states in the presence of either the $U(1)$ or $Z_2$ global symmetries. The critical theory between the $Z_2$ trivial and non-trivial SPT states becomes a critical theory between the doubled semion and toric code models when the global $Z_2$ symmetry is gauged.
This can be done in the above theory by replacing $A_e$ with a dynamical $U(1)$ gauge field $a$: \begin{align} \mathcal{L} = &|(\partial - i 2 a) \Phi|^2 - m |\Phi|^2 + \lambda |\Phi|^4 - \frac{1}{4\pi} a \partial a \nonumber \\ &+\sum_{k=\uparrow, \downarrow} \bar{\psi}_k [\gamma^\mu (\partial_\mu - i \alpha_\mu) + M_k] \psi_k - \frac{1}{2\pi} a \partial \alpha , \end{align} This occurs when we interpret $b_k$ as Schwinger bosons (which are related to the $SU(2)$ spins by $\vec{S} = \frac12 b^\dagger \vec{\sigma} b$). When $\Phi$ is condensed, at long wavelengths the effective theory can be written in terms of the phase $\theta$ of $\Phi$: \begin{align} \mathcal{L} = &(\partial \theta - 2 a)^2 + \sum_{k=\uparrow, \downarrow} \bar{\psi}_k [\gamma^\mu (\partial_\mu - i \alpha_\mu) + M_k] \psi_k \nonumber \\ &- \frac{1}{4\pi} a \partial a - \frac{1}{2\pi} a \partial \alpha \end{align} Picking the gauge $\theta = 0$, $a_0 = 0$ and integrating out $a$ then just gives the action of $QED_3$, with two flavors of Dirac fermions ($N_f = 2$): \begin{align} \mathcal{L} = &\sum_{k=\uparrow, \downarrow} \bar{\psi}_k [\gamma^\mu (\partial_\mu - i \alpha_\mu) + M_k] \psi_k - \frac{1}{g^2} (\epsilon_{\mu\nu\lambda} \partial_\nu \alpha_\lambda)^2 \end{align} In the large $N_f$ limit, it is known that a critical fixed point exists, although this has not been fully established when $N_f = 2$. \section{Projective construction of BIQH state} \label{biqhProjConApp} Here we fill in some details about why the construction presented in the main text describes the BIQH state. Recall that we set \begin{align} b = f_1 f_2, \end{align} and we consider a mean-field ansatz where $f_1$, $f_2$ form quantized Hall insulators with Chern numbers $(C_1, C_2) = (-1,2)$. The emergent $U(1)$ gauge field $a$ is associated with the gauge redundancy $f_1 \rightarrow e^{i\theta} f_1$ and $f_2 \rightarrow e^{-i\theta} f_2$.
Without loss of generality, suppose $f_1$, $f_2$ carry charges $1$ and $0$, respectively, under the external gauge field $A_e$. Integrating out $f_1$ and $f_2$ then yields the following effective theory for the gauge fields: \begin{align} \mathcal{L} &= -\frac{1}{4\pi} (A_e + a)\partial (A_e + a) + \frac{2}{4\pi} a \partial a \nonumber \\ &= -\frac{1}{4\pi} A_e \partial A_e - \frac{1}{2\pi} A_e \partial a + \frac{1}{4\pi} a \partial a. \end{align} Since the CS term for $a$ has unit coefficient, this clearly describes a gapped state with a unique ground state on all closed manifolds. The elementary excitations are particles/holes in the $f_1$ and $f_2$ states, which, after being dressed by $a$ gauge flux due to the CS term, become bosonic excitations. Furthermore, integrating out $a$ yields the response theory \begin{align} \mathcal{L} = -\frac{2}{4\pi} A_e \partial A_e, \end{align} which shows that the state has Hall conductance $\bar{\sigma}_{xy} = 2$. Therefore this state describes a BIQH state, with no intrinsic topological order. An alternative way of seeing this result, and directly deriving the effective theory (\ref{BIQHLag1}), is as follows. We introduce three gauge fields, $a^1$, $a^2$, and $a^3$, and their associated conserved currents $j^I_{\mu} = \frac{1}{2\pi} \epsilon_{\mu\nu\lambda} \partial_\nu a^I_\lambda$, for $I = 1,\ldots,3$. $j^1$ is taken to describe the current of $f_1$. The Chern number $2$ state of $f_2$ is then assumed to consist of two filled bands, each with Chern number $1$, so that the current of $f_2$ can be described by two conserved currents, $j^2$ and $j^3$, each of which describes the dynamics of a Chern number $1$ band. Now, since $j^I$ each describe the dynamics of a band with unit Chern number, the effective theory is: \begin{align} \mathcal{L} =& \frac{1}{4\pi} (a^2 \partial a^2 + a^3 \partial a^3 - a^1 \partial a^1) + \nonumber \\ &\frac{1}{2\pi} a \partial (a^2 + a^3 - a^1) + \frac{1}{2\pi} A_e \partial a^1.
\end{align} Integrating out $a$ then gives (\ref{BIQHLag1}), for $n = 1$. Finally, note that the case where $(C_1, C_2) = (-k,k+1)$ also describes a BIQH state, with Hall conductance $\bar{\sigma}_{xy} = k(k+1)$. \section{Physical operators at the CSL - $Z_2$ critical point} \label{physicalOps} Recall the critical theory between the $1/2n$ Laughlin FQH state and the $Z_2$ fractionalized states is described by \begin{align} \label{CSHiggsv} \mathcal{L} = &|(\partial - i 2 A) \Phi|^2 + m |\Phi|^2 + \lambda |\Phi|^4 \nonumber \\ &- \frac{2n}{4\pi} A \partial A + \frac{1}{2\pi} A_e \partial A. \end{align} In the case of the FQH transition, where the physical degrees of freedom are bosons with a conserved $U(1)$ charge, the boson destruction operator at the critical point is $b = \hat{M} \Phi$, where $\hat{M}$ removes $2\pi$ units of flux of $A$. Due to the CS term, $2\pi$ flux of $A$ carries 2 units of $A$ charge, and therefore the physical gauge-invariant operator must also remove a quantum of $\Phi$. From the coupling of the external field, we see that a $2\pi$ flux of $A$ will carry unit charge under $A_e$ and therefore corresponds to the physical boson. $\Phi$ describes double vortices, as it carries charge $2$ under $A$. Single vortices remain gapped through the transition and do not appear in the low energy theory at the critical point. Now let us consider the $SU(2)$ invariant spin-$1/2$ system. Here, the critical theory is described by (\ref{CSHiggsv}), but without the external gauge field $A_e$ and with $A$ replaced by the gauge field $a$ to which the Schwinger bosons couple. The Higgs field $\Phi$ itself can be physically understood as a spin singlet pair of the Schwinger bosons. The single spin operators do not appear as scaling operators in the critical theory. The fact that the partons $f_\alpha$ remain in a gapped Chern insulator indicates that the triplet gap remains finite through the transition.
The only scaling operators in the theory are therefore spin singlet operators. These include the gauge flux: $j_\mu \equiv \frac{1}{2\pi}\epsilon_{\mu \nu \lambda} \partial_\nu a_\lambda \propto \epsilon_{\mu\nu\lambda} \vec{S} \cdot (\partial_\nu \vec{S} \times \partial_\lambda \vec{S})$. $\int d^2 r j_0$ is proportional to the skyrmion number of $\vec{S}$. The other basic gauge invariant operator is $\hat{M} \Phi$, which physically corresponds to the operator that changes the skyrmion number by 1. The scaling functions of these operators can be readily obtained through various large $N$ approximations. These were performed for $U(1)$ CS-Higgs theories in \cite{wen1993}, where it was shown that the transitions are indeed continuous in the large $N$ limit. \section{CSL - $Z_2$ transition for general $n$} \label{generalN} In the main text we discussed the transition between the $n =1$ CSL and the $Z_2$ spin liquid (doubled semion). To describe the case with general $n$, we start with the $n$ flavors of two-component Schwinger bosons: $b_\alpha(\b{r}) \approx \sum_{\beta=1}^n e^{i\b{Q}_\beta \cdot \b{r}}b_{\alpha \beta}(\b{r})$ for $\beta = 1,...,n$ and $\alpha =\uparrow, \downarrow$, and we consider a state where one of the $n$ flavors transitions from the $\bar{\sigma}_{xy} =2$ BIQH to the charge-2 SC while the other $n-1$ flavors stay in the $\bar{\sigma}_{xy} = 2$ BIQH state. The resulting state is described by a $U(1) \times U(1)$ CS theory with $K = \left(\begin{matrix} 0 & 2 \\ 2 & 2n \end{matrix}\right)$, which is topologically equivalent to the toric code/doubled semion models when $n$ is even/odd, respectively. 
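The parity statement can be checked mechanically: the $SL(2;Z)$ change of basis $K \to W^T K W$ with $W = \left(\begin{smallmatrix}1 & -m\\ 0 & 1\end{smallmatrix}\right)$ sends $2n \to 2n - 4m$, so for even $n$ the matrix reduces to $\left(\begin{smallmatrix}0 & 2\\ 2 & 0\end{smallmatrix}\right)$ (the toric code $K$-matrix), while for odd $n$ it reduces to $\left(\begin{smallmatrix}0 & 2\\ 2 & 2\end{smallmatrix}\right)$, which matches, up to relabeling the two gauge fields, the $n=k=2$ Chern-Simons-Higgs $K$-matrix identified with the doubled semion model in Appendix~\ref{CSHapp}. A minimal numerical sketch (Python, plain-list linear algebra):

```python
def matmul(A, B):
    """2x2 integer matrix product."""
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def reduce_K(n, m):
    """Apply K -> W^T K W with W = [[1, -m], [0, 1]] (det W = 1)
    to K = [[0, 2], [2, 2n]]; this sends 2n -> 2n - 4m."""
    K = [[0, 2], [2, 2 * n]]
    W = [[1, -m], [0, 1]]
    Wt = [[1, 0], [-m, 1]]
    return matmul(Wt, matmul(K, W))

# even n -> toric-code K-matrix, odd n -> doubled-semion-type K-matrix
toric = reduce_K(4, 2)   # [[0, 2], [2, 0]]
dsem = reduce_K(3, 1)    # [[0, 2], [2, 2]]
```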
The wave function that interpolates through this transition, for general $n$, is given by (\ref{wfnPg}), with $|\Phi_{BIQH} \rangle$ replaced by $ \sum_\beta e^{-i Q_\beta \cdot r} \langle 0| f^\dagger_{1 \beta} f^\dagger_{\alpha \beta} | \Phi_{fMF} \rangle$, where $|\Phi_{fMF}\rangle$ is the mean-field state of the fermionic partons $f_{1,\beta}$ and $f_{\alpha \beta}$, in which $f_{\alpha \beta}$ for $\beta = 1, ...,n$ form spin singlet Chern insulators with Chern number $2$, and $f_{1\beta}$ form a Chern insulator with Chern number $-1$, while one of the $f_{1 \beta}$ undergoes a Chern number changing transition from Chern number $-1$ to $-2$. $|0\rangle$ denotes the $f$-vacuum. \section{Review of fermionization of 3D XY transition} \label{xyAppendix} Here we will briefly review the fermionization of the 3D XY transition, which was proposed in \cite{chen1993}. It will be helpful to consider Table \ref{statesTable} of the main text. Notice that the Chern number assignment $(C_1, C_2) = (1,0)$ for the mean-field states of $f_1$ and $f_2$ leads to a description of a topologically trivial Bose Mott insulator. In contrast, the case $(C_1, C_2) = (1,-1)$ describes the Bose superfluid. Therefore, the effective theory describing the transition between a Bose Mott insulator and a superfluid can be understood as a Chern number changing transition for $f_2$. Such a critical theory is described by the following action: \begin{align} \label{fermL} \mathcal{L}_{ferm} = &\frac{1}{4\pi} \frac{1}{2} a \partial a + \bar{\psi} \gamma^\mu (\partial_\mu - i a_\mu) \psi + m \bar{\psi} \psi \nonumber \\ &+ \frac{1}{4\pi} A_e \partial A_e + \frac{1}{2\pi} A_e \partial a, \end{align} where $m > 0$ describes the Mott insulator and $m < 0$ describes the superfluid.
Since this theory describes the Mott insulator - superfluid transition in the presence of particle-hole symmetry, it is conjectured to be equivalent to the conventional 3D XY critical point: \begin{align} \label{xyL} \mathcal{L}_{xy} = |(\partial - i A_e) \Phi|^2 + m |\Phi|^2 + \lambda |\Phi|^4 \end{align} Rescaling $A_e \rightarrow A_e' = A_e/2$ and subtracting $\frac{2}{4\pi} A_e \partial A_e$ from both (\ref{fermL}) and (\ref{xyL}) gives the duality used in the main text. \section{Insufficiency of slave fermion construction} \label{slaveFermApp} In this section, we discuss in some more detail the necessity of the BIQH construction of this paper, as compared with the fermionic IQH construction of \cite{wen1989}, for understanding the transitions between the CSL and the $Z_2$ spin liquids. The construction of \cite{wen1989} starts by writing the spin-1/2 operator in terms of slave fermions: \begin{align} \vec{S} = \frac12 c^\dagger \vec{\sigma} c, \end{align} where $c^\dagger = (c_\uparrow^\dagger, c_\downarrow^\dagger)$ is a two-component fermion. This leads to an emergent $U(1)$ gauge field $a$ associated with the gauge transformation $c \rightarrow e^{i\theta} c$. The CSL corresponds to a state where the fermions $c$ form a spin singlet Chern insulator with Chern number $C_c = -2n$. For $n = 1$, this leads to a state that is equivalent to the KL-CSL, while for $n > 1$, the fractional statistics of the quasiparticles are slightly different from the KL-CSL, as summarized in the main text. In order to describe a transition to a $Z_2$ state, one possibility is to consider a transition where the pair field of the fermions, $\Phi = c_\uparrow c_\downarrow$, condenses, thus breaking the $U(1)$ gauge symmetry to $Z_2$. In contrast to the BIQH state, in the fermionic case there is no theory of a transition from a fermion IQH state that simultaneously condenses the pair field and completely destroys the edge modes.
Instead we can consider a scenario where the pair field condenses, but the edge modes are not destroyed. To understand the nature of the resulting state, let us focus on the case $n =1$. The IQH states of the fermions can be described by two $U(1)$ CS gauge fields $a^\uparrow$, $a^\downarrow$, such that $j_{\alpha;\mu} = \frac{1}{2\pi} \epsilon_{\mu\nu\lambda} \partial_\nu a_\lambda^\alpha$ describes the current of $c_\alpha$. The effective theory is therefore: \begin{align} \mathcal{L} = &\frac{1}{4\pi} (a^\uparrow \partial a^\uparrow + a^\downarrow \partial a^\downarrow) + \frac{1}{2\pi} a \partial (a^\uparrow + a^\downarrow) + \nonumber \\ &|(\partial - 2 i a) \Phi|^2 + m |\Phi|^2 + \lambda |\Phi|^4. \end{align} Here, $\Phi$ represents the pair field $c_\uparrow c_\downarrow$; its condensation breaks the $U(1)$ gauge symmetry to $Z_2$. In order to understand the topological properties of the $\Phi$-condensed phase, it is helpful to perform a particle-vortex duality on $\Phi$: \begin{align} \mathcal{L} = &\frac{1}{4\pi} (a^\uparrow \partial a^\uparrow + a^\downarrow \partial a^\downarrow) + \frac{1}{2\pi} a \partial (a^\uparrow + a^\downarrow + 2\alpha) + \nonumber \\ &|(\partial - i \alpha) \Phi_v|^2 + m' |\Phi_v|^2 + \lambda' |\Phi_v|^4. \end{align} Here, $j_{\Phi;\mu} = \frac{1}{2\pi} \epsilon_{\mu\nu\lambda} \partial_\nu \alpha_\lambda$ is the current of the original $\Phi$, while $\Phi_v$ describes the vortices of $\Phi$. The phase where $\Phi$ is condensed corresponds to the case where $\Phi_v$ is uncondensed, and vice versa. Therefore, when $\Phi$ is condensed, $\Phi_v$ is uncondensed and can be integrated out, leaving us with a $U(1)^4$ CS theory with $K$-matrix \begin{align} K = \left(\begin{matrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 2 \\ 0 & 0 & 2 & 0 \\ \end{matrix}\right). \end{align} This satisfies $|\text{Det } K| = 4$.
However it has three positive eigenvalues and one negative eigenvalue, implying that the system has a net chiral central charge $c = 2$, and therefore has topologically protected gapless edge states. Such a state is topologically distinct from both the toric code and doubled semion models. It can be thought of as a chiral spin liquid with a discrete gauge structure, e.g.\ a chiral topological superconductor with $4$ chiral edge Majorana modes coupled to a fluctuating $Z_2$ gauge field. Such a state can be constructed using the honeycomb Kitaev model \cite{kitaev2006,yao2007}. While it is interesting that this ``$Z_2$ CSL'' also neighbors the KL-CSL, it does not correspond to the two different $Z_2$ spin liquids considered in the main text. It appears that the construction in terms of fermionic spinons \cite{wen1989} is indeed insufficient to describe the transition between the CSL and the $Z_2$ spin liquids, although it does allow access to the transition to a $Z_2$ CSL. \section{Topological properties of Abelian Chern-Simons-Higgs theories} \label{CSHapp} Here we will briefly review the topological properties of Abelian CS-Higgs theories \cite{dijkgraaf1990} (see also \cite{cheng2013} for a recent discussion). Specifically, consider a $U(1)_{n}$ CS term, coupled to a charge-$k$ Higgs field: \begin{align} \label{CSH} \mathcal{L} = |(\partial - i k A)\Phi|^2 + m|\Phi|^2 + \lambda |\Phi|^4 + \frac{n}{4\pi} A \partial A, \end{align} where $\lambda > 0$. When $\Phi$ is uncondensed this describes $U(1)_n$ CS theory, the topological properties of which are well known \cite{wen04}: there are $n$ topologically distinct quasiparticles, with fractional statistics $\theta_a = \pi a^2/n$. In order to understand the topological properties of the Higgs phase, where $\Phi$ is condensed and the gauge group $U(1)$ is broken to $Z_k$, it is helpful to perform a duality transformation on $\Phi$ and to consider the theory in terms of the vortices of $\Phi$.
This is described by the theory \begin{align} \mathcal{L} = &|(\partial - i \alpha)\Phi_v|^2 + \bar{m}|\Phi_v|^2 + \bar{\lambda} |\Phi_v|^4 \nonumber \\ &+ \frac{k}{2\pi} A \partial \alpha + \frac{n}{4\pi} A \partial A, \end{align} where $\Phi_v$ is a complex scalar describing the vortices of $\Phi$. The condensed phase of $\Phi$ is therefore the uncondensed phase of $\Phi_v$. Considering the case where $\Phi_v$ is uncondensed (Higgs phase of (\ref{CSH})), we can integrate it out to obtain a $U(1) \times U(1)$ CS theory: \begin{align} \mathcal{L} = \frac{n}{4\pi} A \partial A + \frac{k}{2\pi} A \partial \alpha. \end{align} The topological properties of such a theory can be directly read off from the $K$-matrix: \begin{align} K = \left(\begin{matrix} n & k \\ k & 0 \\ \end{matrix} \right). \end{align} It has $k^2$ distinct quasiparticles. For $n$ even, this describes a topological phase where the local degrees of freedom are all bosons, and otherwise the local degrees of freedom contain fermions. In general, we can perform an $SL(2;Z)$ transformation $K \rightarrow K' = W^T K W$, with $W \in SL(2;Z)$, which keeps the topological properties of $K$ invariant, such that \begin{align} K' = \left(\begin{matrix} n -2k & k \\ k & 0 \end{matrix} \right). \end{align} Therefore, the case where $n$ is a multiple of $2k$ leads to $K' = \left(\begin{matrix} 0 & k \\ k & 0 \end{matrix} \right)$, which encodes the topological properties of ordinary $Z_k$ gauge theory. When $n$ is even, the above can be viewed as a generalized version of $Z_k$ gauge theory that corresponds to one of the $k$ different kinds of $Z_k$ gauge theory found in \cite{dijkgraaf1990}. The case $n = k = 2$ describes the doubled-semion model discussed in the main text.
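The $K$-matrix bookkeeping above is easy to verify numerically: $|\det K| = k^2$ for $K = \left(\begin{smallmatrix}n & k\\ k & 0\end{smallmatrix}\right)$, the choice $W = \left(\begin{smallmatrix}1 & 0\\ -1 & 1\end{smallmatrix}\right) \in SL(2;Z)$ realizes $K \to K'$, and the $4\times4$ matrix of Appendix~\ref{slaveFermApp} indeed has $|\det K| = 4$ with three positive and one negative eigenvalue, which follows from Sylvester's criterion applied to its leading principal minors. A sketch (Python; cofactor determinants keep it dependency-free):

```python
def det(M):
    """Determinant by cofactor expansion along the first row
    (fine for these small K-matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def congruent(W, K):
    """Return W^T K W for square integer matrices."""
    d = len(K)
    Wt = [[W[j][i] for j in range(d)] for i in range(d)]
    mm = lambda A, B: [[sum(A[i][t] * B[t][j] for t in range(d))
                        for j in range(d)] for i in range(d)]
    return mm(Wt, mm(K, W))

n, k = 6, 2
K = [[n, k], [k, 0]]          # U(1) x U(1) CS theory of the Higgs phase
W = [[1, 0], [-1, 1]]         # an SL(2;Z) element: det W = 1
Kp = congruent(W, K)          # expected [[n - 2k, k], [k, 0]]

K4 = [[1, 0, 1, 0],           # K-matrix of the "Z_2 CSL" above
      [0, 1, 1, 0],
      [1, 1, 0, 2],
      [0, 0, 2, 0]]
# one sign change in (1, M1, ..., M4) = (1, 1, 1, -2, -4) means one
# negative (and hence three positive) eigenvalues, i.e. c = 3 - 1 = 2
minors = [det([row[:i] for row in K4[:i]]) for i in range(1, 5)]
```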
\section{Acknowledgments} The authors gratefully acknowledge stimulating discussions with Mario Italo Trioni. This work was supported by the Deutsche Forschungsgemeinschaft within SPP 1666 (Grant No. BO1468/21-1). K.A.K. and O.E.T. acknowledge the financial support by the RFBR (Grant nos. 13-02-92105 and 14-08-31110). \section{Experimental section} Bi$_2$Te$_3$ crystals have been grown by the modified Bridgman technique with a temperature gradient of about 10\,K/cm at the front of crystallization \cite{KMG2014}. As-grown ingots had a single crystalline structure and were split into two parts along the cleavage plane (0001) oriented along the growth direction (Fig.\,\ref{fig:crystal}). One part of each crystal was cut perpendicular to the growth axis into 0.5--1\,mm samples. Indium solder Ohmic contacts were used for transport measurements. The Hall resistance $R_{yx}$ and the resistance $R_{xx}$ were measured in the Hall bar geometry using a standard six-probe method on rectangular samples. A potential Seebeck microprobe was used to investigate the room-temperature Seebeck coefficient in the region of the p--n transition with a spatial resolution of 200\,$\mu$m. The Hall mobility was calculated from the measured conductivity and the calculated carrier concentration, which was extracted from the measured Hall coefficient as $n = 1/(R_{\mathrm{H}} e)$. The carrier concentration dependence along the crystal is shown in \cite{KMG2014}. STM measurements were performed at a tip and sample temperature $T = 4.8$\,K with electro-chemically etched tungsten tips. After transfer into the ultra-high vacuum system, samples were cleaved at room temperature at a base pressure $p < 2 \times 10^{-11}$\,mbar and immediately inserted into the cryogenic STM. The PNJ was located by bringing the tip into tunneling distance from the surface and recording a local tunneling spectrum indicative of p- or n-type doping [cf. Fig.\,\ref{fig:STM}(c) and (f)].
After retracting the tip, the sample was moved with an $x$-$y$ stage (initially by several tens to hundreds of $\mu$m). This procedure was repeated until a spectral shift signaled that the boundary to the region governed by the other dopant had been crossed. Then a refined procedure with smaller $x$-$y$ movements was performed. \newpage \section{Theoretical calculation} {\em Ab-initio} theoretical calculations were performed by density functional theory, using a pseudo-potential representation of the electron--ion interaction and local orbital basis sets, as implemented in the SIESTA code \cite{SIESTA}. We used a generalized gradient approximation (GGA) for the exchange-correlation functional \cite{PBE} and a plane wave cutoff equal to 250\,Ry. The experimental lattice constant for the hexagonal cell of Bi$_2$Te$_3$ was used \cite{EXPLA}, and an $8 \times 8$ surface cell was adopted to reduce the lateral interaction between crystal defects. The slab thickness was chosen in order to include three of the quintuple layers forming the building block of Bi$_2$Te$_3$ in the direction orthogonal to the surface and a vacuum region of approximately 30\,\AA. Constant-distance STM images have been simulated in the Tersoff-Hamann model \cite{TH} by calculating the Kohn-Sham local density of states (LDOS) in the energy interval between the Fermi level $E_{\rm F}$ and $E_{\rm F} + eV_{\rm b}$ \cite{STM}, where the applied bias $eV_{\rm b}$ was fixed in agreement with the experimental setup. In particular, the moderately large value of $eV_{\rm b}$, of the order of 0.4\,eV, allows us to obtain numerically stable results and to reasonably neglect the small inaccuracy of the LDOS around the Fermi level due to the absence of spin-orbit interaction in the calculation. A tip--surface distance equal to 3\,{\AA} was considered, although we verified that the results of our STM simulations are robust against small variations of this value.
The extension of the tip was taken into account in the STM simulation by considering the mean value of the LDOS within a radius of 4\,{\AA} around the tip position.
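The post-processing just described — integrate the LDOS over $[E_{\rm F}, E_{\rm F}+eV_{\rm b}]$, then average laterally over the tip extent — can be sketched as follows. The Gaussian toy LDOS and the 0.5\,\AA\ grid spacing are placeholders standing in for the SIESTA output, not actual calculation data:

```python
import numpy as np

# toy LDOS(x, y, E): a Gaussian bump mimicking a localized defect state
dx = 0.5                           # lateral grid spacing in Angstrom (placeholder)
x = np.arange(-10, 10, dx)
E = np.linspace(0.0, 0.4, 41)      # energies E_F .. E_F + eV_b, with eV_b = 0.4 eV
X, Y = np.meshgrid(x, x, indexing="ij")
ldos = np.exp(-(X**2 + Y**2) / 4.0)[..., None] * np.exp(-((E - 0.2) / 0.1) ** 2)

# Tersoff-Hamann: constant-distance signal ~ LDOS integrated over the bias window
signal = ldos.sum(axis=-1) * (E[1] - E[0])

# finite tip extent: mean of the signal within a 4-Angstrom radius of each pixel
r_tip = 4.0
off = int(r_tip / dx)
ii, jj = np.meshgrid(np.arange(-off, off + 1), np.arange(-off, off + 1), indexing="ij")
disk = ((ii * dx) ** 2 + (jj * dx) ** 2 <= r_tip ** 2).astype(float)
disk /= disk.sum()                 # normalized disk kernel for the lateral mean

pad = np.pad(signal, off, mode="edge")
image = np.zeros_like(signal)
for a in range(disk.shape[0]):
    for b in range(disk.shape[1]):
        image += disk[a, b] * pad[a:a + signal.shape[0], b:b + signal.shape[1]]
```

The disk average blurs LDOS features on the scale of the tip radius, which is the effect the averaging is meant to capture.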
\section{\label{s1}Introduction} Digital receivers are a class of electronic systems where operations like amplification, filtering, integration etc. are performed as a series of mathematical operations on embedded components like FPGAs, microprocessors or GPUs. Compared to their analog counterparts, digital receivers are immune to variations in gain and temperature. However, digital systems suffer from quantization noise, sampling-rate limitations and clock phase noise, which can be minimized by choosing high bit-width ADCs and low-drift clock sources. These characteristics make digital receivers an attractive option in applications where precision measurements are required \cite{wepman1996,jamin2014}. Atomic, molecular and optical (AMO) experiments are one such example. \par Digital receivers are being used in AMO experiments for process control \cite{luda2019} (e.g. temperature, current and wavelength control in lasers), synchronous detection \cite{stimpson2019} and in compact magnetic resonance spectroscopy systems \cite{takeda2007}. However, synchronous detection may not be feasible in many AMO experiments, including spin noise spectroscopy (SNS) \cite{MSwar2018, zapasskii2013, Crooker2004}. The typical strength of the raw SNS signal is $< 100\,\mathrm{nV}/\sqrt{\mathrm{Hz}}$, which is far smaller than in the previous studies \cite{luda2019,stimpson2019,takeda2007}. Therefore, the SNS signal has to be recorded continuously (or on trigger) to improve the signal-to-noise ratio (SNR). \par In this paper, we discuss the development and utilization of a versatile digital receiver to measure the spin noise (SN) in atomic vapor systems. We have performed a comparative study with the results of our previous work in \cite{MSwar2018}. We also demonstrate its utility in real-time precision magnetometry. This digital receiver will be used as a component of a novel, miniaturized magnetometer based on SNS techniques. \par The paper is organized as follows: We introduce spin noise spectroscopy and the digital receiver system developed for its measurement in section \ref{s2}.
The firmware architecture of the latter is described in section \ref{s3}. The methods adopted to mitigate the effects of electro-magnetic interference (EMI) in the measurements are described in section \ref{s4}. In section \ref{s5}, we present the SNS data obtained by using our digital receiver system (DRS) as well as a comparison with the results obtained from a swept-frequency spectrum analyzer (SFSA). Further, we demonstrate triggered data acquisition and time-resolved magnetometry using our DRS. We conclude in section \ref{s6} after a brief discussion on further applications and the future scope of the developed receiver. \section{Digital Receivers for Spin noise spectroscopy}\label{s2} In this section, we describe the spin noise spectroscopy technique as well as detection schemes by introducing our digital receiver. \subsection{Spectroscopy Technique} The study of the SN of an atomic ensemble has varied applications, ranging from precision magnetometry and non-perturbative optical detection to metrology and quantum sensing. The fluctuation in the spin population of an atomic system in time leads to temporal changes in the refractive index of the medium. A linearly polarized and far-detuned probe laser beam picks up these temporal variations of the refractive index as fluctuations of its polarization angle. We use a polarimetric detection system where the polarization fluctuation of the probe laser beam is detected in a balanced photodetector \cite{MswarBook}. The power spectrum of the polarization fluctuation (the second-order correlation function g$^{(2)}$) gives information about the spectral properties of the atomic spin ensemble. Under typical experimental conditions, we apply a uniform magnetic field perpendicular to the propagation direction of the probe beam. The atomic spins precess around the magnetic field at the Larmor frequency, which is proportional to the strength of the magnetic field. Therefore, the peak of the spectrum appears at the Larmor frequency.
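The mechanism described above — stochastic spin fluctuations whose power spectrum peaks at the Larmor frequency — can be illustrated with a toy model: white noise passed through a resonant filter, whose Welch power spectrum then shows the expected peak. All rates and the quality factor below are illustrative stand-ins, not measured values:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 10e6           # sample rate in Hz (illustrative)
f_larmor = 2.4e6    # Larmor frequency for the chosen field (illustrative)
Q = 24              # quality factor, standing in for the finite relaxation-time linewidth

# model the polarimeter output as white noise shaped by a resonance at f_larmor
b, a = signal.iirpeak(f_larmor, Q, fs=fs)
sn = signal.lfilter(b, a, rng.standard_normal(2 ** 20))

# g^(2)-type power spectrum via Welch averaging of many segments
f, psd = signal.welch(sn, fs=fs, nperseg=4096)
f_peak = f[np.argmax(psd)]   # sits at ~f_larmor
```

The peak position tracks the resonance frequency, just as the measured SN peak tracks the Larmor frequency with the applied field.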
Also, the width of the signal is inversely proportional to the transverse spin relaxation time ($\sim$100 kHz in our experimental conditions). Our SNS experimental set-up is shown in Fig.\ref{F1}. A probe laser beam, which is detuned by $\approx$ 10 GHz from the strongest optical transition in neutral rubidium (Rb) atoms, is used to probe the spin fluctuations of the atoms in the vapor cell heated to temperatures ranging between 350 K and 400 K. The spin fluctuations cause polarization fluctuations of the probe laser. The polarimetric detection scheme, employing a half wave plate (HWP), polarizing beam splitter (PBS), and a balanced photo-detector (BPD), measures this polarization fluctuation. A uniform magnetic field ($B_{\bot}$), produced using a pair of magnetic coils in Helmholtz configuration, is applied on the atomic vapor perpendicular to the propagation direction of the probe laser beam. The output signal of the BPD is recorded using the digital receiver described in this article and reported in section \ref{s5}. \begin{figure*}[!ht] \centering \includegraphics[trim={3.5cm 6cm 2.5cm 2cm},clip,scale=0.55,keepaspectratio]{sns_setup.png} \caption{A typical spin noise spectroscopy (SNS) set-up of rubidium (Rb) atomic vapor. L- plano-convex lens, VC- vapor cell (contains Rb atomic vapor), M- dielectric mirror, HWP- half wave plate, PBS- polarizing beam-splitter, BPD- balanced photodetector, $B_{\bot}$- orthogonal magnetic field.} \label{F1} \end{figure*} Previously, we used a commercially available SFSA\footnote{https://www.keysight.com/in/en/assets/7018-01953/data-sheets/5989-9815.pdf} to detect the SN signal from the balanced photo-detector. It has a superheterodyne stage in which the mixer is driven by a swept local oscillator (LO) signal, such that the radio-frequency (RF) signal is translated to a fixed intermediate frequency (IF). Since the LO has to be swept across a range of frequencies, the sweep time increases with the frequency span.
This reduces the dwell time at each frequency, resulting in a decreased sensitivity. From the experimental point of view, this leads to a poor SNR. Moreover, if the signal is expected to show variations on timescales shorter than the sweep time, it cannot be detected. SFSAs also have a low-frequency cut-off (in our case 100 kHz) below which output amplitude and frequency measurements are not possible. We have an ongoing experiment performing SNS on laser-cooled atoms to investigate spin dynamics in the quantum regime, where triggered recording of the SN signal over a short duration ($\sim$ a few ms) is required. In this case an SFSA cannot be used. There have been recent experiments reporting measurements of SNS in quantum dots \cite{crooker2010} as well as in atomic vapors \cite{lucivero2016}, where the use of non-reconfigurable and somewhat expensive digital receivers is reported. However, the digital receiver described in this paper allows us to overcome the aforementioned limitations of the SFSA, with certain trade-offs. \subsection{Digital Receivers for SNS} We developed a digital receiver capable of operating in two modes, as listed below. \begin{enumerate}[label=\Roman*] \item A fast Fourier transform spectrometer (FFTS) to probe the entire frequency range of interest, \item A real time data recorder (RTDR) with trigger capabilities. \end{enumerate} Both the aforementioned modes are implemented on the STEMlab 125-14\footnote{https://www.redpitaya.com/f130/STEMlab-board} development board. This board was selected for our application because it has two 14 bit analog-to-digital converter (ADC) channels, with each channel providing a dynamic range better than 80 dB. It has an analog bandwidth of 62.5 MHz, and is DC coupled. The heart of the board is a Xilinx Zynq 7010 System on Chip (SoC), with integrated programmable logic (PL) cells and an ARM-microprocessor-based processing system (PS).
The signal processing algorithms in this work are implemented on the PL side, while user control and data transmission programs are implemented on the PS side. \section{Firmware Description}\label{s3} In this section, we describe the firmware architecture of the two operational modes of the DRS. \subsection{Fast Fourier Transform Spectrometers (FFTS)} The Fourier transform is used to find the spectral content of a time domain signal \cite{bracewell1986}. The fast Fourier transform (FFT) is an algorithm which reduces the complexity involved in calculating the Fourier transform from an $O(n^2)$ to an $O(n\log{n})$ problem by using the periodicity and symmetry properties of the discrete Fourier transform \cite{cooley1969,duhamel1990}. This in general reduces the number of operations required to obtain the spectrum and results in resource savings when implemented in embedded devices, e.g. FPGAs or microprocessors. The SNR is proportional to $\sqrt{\beta\tau}$, where $\beta$ is the bandwidth and $\tau$ is the integration time \cite{tiuri1964}. For a conventional spectrum analyzer, there are two time scales involved: $t_s$, the sweep time, and $t_d$, the dead time. So, if a sweep contains $N_s$ points, the amount of time available for obtaining the power at each frequency becomes ${t_s}/{N_s}$. In cases where the data is to be acquired using interfaces such as GPIB, USB, or ethernet, $t_d$ includes the time taken for the spectrum analyzer to transfer the data to the DAQ system, during which time no new acquisition can occur. In the case of an FFTS, the estimation of the power spectrum involves complex weighting and summation of all time domain samples of the burst used to perform the FFT. If a streaming algorithm is used, data acquisition, performing the FFT and data transfer can happen simultaneously, resulting in zero dead time. Thus for a single spectrum, with the same spectral resolution, an FFTS ideally provides a $\sqrt{N_s}$ improvement in the SNR as compared to an SFSA \cite{mugundhan2018}.
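The radiometer scaling $\mathrm{SNR} \propto \sqrt{\beta\tau}$ can be verified with a toy average: the relative ripple of the noise floor of an averaged power spectrum drops as $1/\sqrt{N}$ with the number of averaged spectra (a pure-numpy check, not the firmware path):

```python
import numpy as np

rng = np.random.default_rng(1)
NFFT = 4096

def noise_floor_ripple(n_spectra):
    """Average n_spectra power spectra of white noise; return std/mean of the floor."""
    acc = np.zeros(NFFT // 2)
    for _ in range(n_spectra):
        x = rng.standard_normal(NFFT)
        acc += np.abs(np.fft.rfft(x)[1:NFFT // 2 + 1]) ** 2
    acc /= n_spectra
    return acc.std() / acc.mean()

r1 = noise_floor_ripple(1)      # ~1: a single spectrum is fully developed speckle
r100 = noise_floor_ripple(100)  # ~0.1: averaging 100 spectra gives a ~10x smoother floor
```

Longer on-chip integration therefore directly buys SNR, which is why the streaming FFTS averages spectra on the FPGA before transmission.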
\begin{figure*}[!t] \centering \includegraphics[width=\textwidth,scale=0.75,keepaspectratio]{FFT_spectrometer.png} \caption{Top level block diagram of the FFTS. PS and related interfaces are shown as green blocks. The red dashed lines indicate the flow of the master clock at 125 MHz, which is derived from the ADC data clock. The data and clock inputs from the ADC to the FPGA are Low Voltage Complementary Metal Oxide Semiconductor (LVCMOS) signals.} \label{F2} \end{figure*} The base firmware version of the STEMlab 125-14 provides a burst mode version of the FFTS. However, we required the capability to perform this operation in streaming mode and to average the spectra on the FPGA itself. This allows us to keep data rates below 30 MBps, beyond which loss-less data transfer via ethernet becomes difficult, due to bottlenecks in communication between the PL and the PS sides of the SoC. The block diagram of our FFTS implementation is shown in Fig. \ref{F2}. The analog signal is sampled by the on-board ADC at 125 MHz. The digitized data is captured synchronously on the FPGA, converted to 2's complement format and passed on to the spectrometer block. The spectrometer block is implemented using Simulink System Generator. The signed data is then polyphase filtered using an 8-tap FIR filter \cite{vaidyanathan1990}. The output of the filterbank is an 18 bit fixed-point number, with a binary point at the 17th bit. This is input to the biplex FFT block IP core \cite{emerson1976}, available from the CASPER signal processing library \cite{hickish2016,CASPERweb}. A 4096-point FFT is performed, resulting in a spectral resolution of $\approx$ 30.5 kHz. To avoid overflow during subsequent stages of the FFT, the output of each stage is scaled by a factor of 2. As the data to the FFT block is real-valued, the power spectrum is estimated by computing the magnitude of the positive half of the spectrum.
After the integration of a programmable number of spectra, the \texttt{dv} signal is asserted high and the power spectrum is presented to the subsequent blocks for transmission to the DAQ. For example, if 1000 such spectra are summed, the resulting integration time is $\approx$ 32 ms. The output spectrum is recast as an Advanced eXtensible Interface (AXI) stream and is written to a Block RAM (BRAM) through the \texttt{AXIS BRAM WRITER} IP block \cite{paveldemin}. Once the spectrum is written into the BRAM, a \texttt{finished} signal indicating this is asserted and posted to the \texttt{sts} register. The board exposes Ethernet only through the PS; therefore, the transfer of data to the DAQ is mediated by a C program executing on the PS. The \texttt{sts}, \texttt{cfg} and \texttt{AXI BRAM READER} blocks provide memory mapped access to the PL. The \texttt{cfg} register is used to provide a master reset and configuration information to the PL. The \texttt{sts} register provides information on the BRAM address pointer and holds the state of the \texttt{finished} signal. On the assertion of the \texttt{finished} signal, the PS starts reading the contents of the BRAM. The BRAM data is packetized as \texttt{UDP} packets with packet and spectrum count information and is transmitted to the DAQ using Linux socket functions. We characterize the developed spectrometer using continuous wave (CW) and SN signals. For the CW tests, signals at different frequencies and different powers are fed to the system and recorded. These tests are carried out to estimate the SNR of the spectrometer at different frequencies and to identify the linear regime. The power measured by the FFTS is found to vary linearly with the input power.
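The readout path above — spectra wrapped into UDP datagrams carrying packet and spectrum counters — can be mimicked on a workstation. The header layout below (two little-endian 32-bit counters followed by a float32 payload) is a hypothetical stand-in, not the actual firmware packet format:

```python
import socket
import struct

import numpy as np

def packetize(spectrum, pkt_count, spec_count):
    # hypothetical layout: packet counter, spectrum counter, then float32 payload
    return struct.pack("<II", pkt_count, spec_count) + spectrum.astype("<f4").tobytes()

def unpacketize(payload):
    pkt_count, spec_count = struct.unpack_from("<II", payload, 0)
    return pkt_count, spec_count, np.frombuffer(payload, dtype="<f4", offset=8)

spectrum = np.linspace(0.0, 1.0, 2048, dtype=np.float32)
pkt = packetize(spectrum, pkt_count=7, spec_count=42)

# loop the datagram through localhost, as the DAQ receive path would
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(5.0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(pkt, rx.getsockname())
data, _ = rx.recvfrom(65536)
tx.close()
rx.close()

n_pkt, n_spec, spec_rx = unpacketize(data)
```

Carrying the counters in every datagram lets the DAQ detect dropped packets, which matters because UDP itself gives no delivery guarantee.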
\subsection{Real Time Data Recorder (RTDR)} \begin{table*}[tp] \begin{center} \captionsetup{justification=raggedright} \caption{The different options available in the RTDR firmware.}\label{T1} \begin{tabular}{|l|l|} \hline \textbf{Option} & \textbf{Description} \\\hline baseband & Records signal in dc\,-\,625 kHz base-band; LO is disabled \\ IF & Records signal in an IF band spanning from $f_{lo}$ to $f_{lo}+625$ kHz \\ triggered & Records a signal burst for a programmable predefined time on rising edge of trigger pulse \\ continuous & Records data continuously\\\hline \end{tabular} \end{center} \end{table*} The FFTS discussed thus far provides a simple and compact measurement option for SNS. In scenarios where the SN signal is short-lived in time, a real-time triggered data acquisition and processing protocol is required. We therefore developed a low-bandwidth, raw voltage recorder. In Fig. \ref{F5}, we show a top-level block description of the same. Table \ref{T1} outlines the various options available in the data recorder. Here, we use both input channels, one fed from the BPD and the other from a signal generator (BK PRECISION Model no. 4040B), which is used as the local oscillator (LO). As in the FFTS firmware, the signals from the ADC are converted to 2's complement and multiplied to obtain an intermediate frequency (IF) signal. The IF signal is usually close to dc, and tracking its variations at the 125 MHz master clock rate would result in sub-optimal usage of resources. Hence, we use a cascaded integrator-comb (CIC) filter for down-sampling the signal \cite{hogenauer1981,lyons2005}. When the factor by which the signal has to be down-sampled is large, CIC filters, used as a front end for FIR filters, reduce the number of filter taps required for anti-aliasing. The response of the CIC filter is not uniform within the band of interest. To compensate for this, an FIR filter with a complementary response is cascaded to it.
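A CIC decimator of this kind is only a few lines: $N$ integrators at the input rate, decimation by $R$, then $N$ combs at the output rate. The order $N=3$ below is an illustrative choice, not the firmware's actual parameter; only the decimation factor $R=50$ matches the text:

```python
import numpy as np

def cic_decimate(x, R=50, N=3):
    """Single-differential-delay CIC decimator: N integrators, decimate by R, N combs."""
    y = np.asarray(x, dtype=np.int64)
    for _ in range(N):          # cascaded integrators at the input rate
        y = np.cumsum(y)
    y = y[::R]                  # rate reduction by R
    for _ in range(N):          # cascaded combs (first differences) at the low rate
        y = np.diff(y, prepend=0)
    return y / R ** N           # remove the DC gain of R^N

# sanity check: a DC input passes through unchanged after a short transient
y = cic_decimate(np.ones(5000, dtype=np.int64))
```

Because the structure needs no multipliers, it is cheap in FPGA fabric; the passband droop it introduces is what the compensating FIR stage corrects.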
The FIR filter is implemented as a half-band filter, resulting in an output data rate that is half of that of the input. A 256 tap Kaiser window, with $\beta=10$, is used to shape the filter pass band. When both filters are cascaded, the resulting combination has a uniform amplitude response over the band of operation and an out-of-band rejection $\geq$ 90 dB. \begin{figure*}[ht] \includegraphics[width=\textwidth,scale=0.75,keepaspectratio]{trig_rec_blk.png} \caption{Firmware description of the triggered raw voltage recorder. The dotted blue lines represent the flow of control signals and status signals to and from the memory mapped AXI registers. The green blocks represent signal processing elements of the design.} \label{F5} \end{figure*} The data from the multiplier enters the CIC filter at 125 MSPS. This is decimated by 50, resulting in a data rate of 2.5 MSPS at the output of the CIC filter and an aggregate data rate of 1.25 MSPS after the FIR filter, yielding a base-band bandwidth of 625 kHz. We use four counters in the firmware for timekeeping purposes: a trigger counter (TC), a free-running counter (FRC) at the 125 MHz rate, an overflow counter (OC) and a packet counter (PC). The TC keeps track of the triggers received by the RTDR, while the FRC keeps track of the time since power on. The 32 bit OC counts the number of FRC overflows. These counters allow us to obtain the time between two trigger events and also their occurrence times since the start of acquisition. The PC helps us ensure that no data is missed during packet framing and transmission. \begin{figure*}[ht] \begin{center} \includegraphics[width=\textwidth]{pin_vs_pout_new-eps-converted-to} \caption{Characterization of the DRS (in both RTDR and FFTS modes) with signals of various power levels fed across the frequency range of operation. The '+' markers indicate the data and the solid lines represent the first order polynomial fits. The bottom plot shows the residuals.
The black dashed lines encompass the linear range of the DRS.} \label{F13} \end{center} \end{figure*} \par The data, along with the counter values, is written to a BRAM, read out by the PS and transmitted over Ethernet. CW signals at various frequencies within the 0-62.5 MHz band were injected at different power levels from -90 to 10 dBm. As shown in Fig.\ref{F13}, the output power scales linearly with the input power, irrespective of the frequency. \section{Electromagnetic Interference (EMI) and its mitigation}\label{s4} \par Ambient EMI can hinder the detection of weak signals. Some common sources of unavoidable EMI are 50/60 Hz AC lines, switching regulators and LO harmonics. Strong EMI affects the dynamic range of the receiver system at frequencies $<$ 1 MHz. EMI can be mitigated during pre- and post-processing stages. We describe the techniques adopted during these stages in this section. \par During our measurements, we found that the strong 50 Hz AC signal and its harmonics were getting coupled into the system through the AC adapter of the FPGA evaluation board. To overcome this, we replaced the power adapter with a commercially available battery bank of similar specifications. \begin{figure} \centering \includegraphics[trim={0.5cm 0cm 0.5cm 0.5cm},width=\linewidth]{emi_mitigation_plot_log-eps-converted-to} \caption{The effect of EMI on the measured spectrum (green trace) and after its mitigation (blue trace). The effect of EMI mitigation is clearly visible through the absence of spikes and a reduction in the noise floor by $\approx$ 20 dB in the blue trace.} \label{F12} \end{figure} A second prominent source of EMI was the switching circuitry associated with the magnetic coil current driver. This gave rise to strong peaks at low kHz frequencies. The SNS setup is located in a laboratory environment with multiple sources of EMI. Therefore, we custom designed a mild-steel enclosure for the DRS.
In Fig.\ref{F12} we show the significant mitigation of EMI, by $\approx$ 20 dB, after adopting the aforementioned schemes. We performed further processing on the archived data to excise low-level narrow-band and impulsive broad-band EMI \cite{fridman2008},\cite{ford2014}. For EMI that was stationary in frequency, we used a combination of background and median subtraction. Impulsive, broadband EMI, when present, was clipped from each channel when its value exceeded a 3$\sigma$ threshold. \section{Results and Discussion}\label{s5} Our experimental set-up is shown schematically in Fig.\ref{F1}. A uniform transverse magnetic field ($B_{\bot}$) was produced by the current flowing through a pair of magnetic coils in Helmholtz configuration. The signal from the BPD\footnote{The BPD used was a Newport 1807 model having a cut-off frequency of $\approx$ 80 MHz.} was recorded either with our digital receiver system (DRS) or with a commercial spectrum analyzer (SFSA) for the purpose of performance comparison. \par In Fig.\ref{F8}, we show the SN spectrum of Rb atomic vapor with $B_{\bot}$ $\approx$ 5.1 G obtained using the SFSA (top panel) and the FFTS (bottom panel). These sets of data were recorded under similar experimental conditions. In each of the panels, we observe two peaks appearing at $\approx$ 2.4 MHz and $\approx$ 3.6 MHz, corresponding to the SN signals of $^{85}$Rb and $^{87}$Rb, respectively. The resolution bandwidth (RBW) of the SFSA was set equal to the channel width of the FFTS, which is 30 kHz. The integration time for the SFSA to obtain the data presented in Fig.\ref{F8} (top panel) was 45 seconds, whereas that for the FFTS presented in Fig.\ref{F8} (bottom panel) was 10 seconds. In both cases the spectrum was background subtracted and normalized to its peak value. The SNR of the SN signal is defined as the ratio of the strength of the strongest signal to the rms value of the background signal. The background signal is obtained by recording the SN signal at zero magnetic field.
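The SNR figure of merit just defined — strongest peak over the rms of a zero-field background record — reduces to a few lines. The synthetic spectra below (Gaussian peaks near the two Rb isotope frequencies on a noise floor) are placeholders for recorded data:

```python
import numpy as np

def snr(spectrum, background):
    # ratio of the strongest signal to the rms of the zero-field background record
    return spectrum.max() / np.sqrt(np.mean(background ** 2))

rng = np.random.default_rng(2)
f = np.linspace(0.0, 5e6, 4096)
background = 0.01 * rng.standard_normal(f.size)       # zero-field record (synthetic)
peaks = (0.5 * np.exp(-((f - 2.4e6) / 5e4) ** 2)      # line near the 85Rb frequency
         + 0.25 * np.exp(-((f - 3.6e6) / 5e4) ** 2))  # line near the 87Rb frequency
spectrum = peaks + 0.01 * rng.standard_normal(f.size)
```

The same two-line estimator applies to either instrument's output, which is what makes the DRS/SFSA comparison direct.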
Comparing these two spectra, we note that the SNR for the DRS is more than an order of magnitude better than that for the SFSA for the same integration time. This improvement in the SNR, along with the fact that our DRS is lightweight, portable, low-cost and low-power ($<$ 10 Watts) as compared to a commercial SFSA, makes it preferable for both laboratory and field measurements. \begin{figure}[!ht] \centering \includegraphics[trim={0.5cm 0.25cm 0.5cm 0.5cm},width=\linewidth]{spec_comparison-eps-converted-to} \caption{Spin noise (SN) spectrum acquired from the SFSA (top panel) and the FFTS (bottom panel). Note that the SNR is $\approx$ 50 for the FFTS for an integration time 5x lower than the SFSA. These two spectra were recorded under the same experimental conditions, at $B_{\bot} \approx 5.1$ G.} \label{F8} \end{figure} Since the SN spectrum peak position is the Larmor frequency $\nu_L$ $(=g_F \mu_B B_{\bot} /h$, where $g_F$ is the g-factor of the hyperfine levels, $\mu_B$ is the Bohr magneton, and $h$ is Planck's constant$)$, by precisely measuring the peak position of the spectrum we can estimate the strength of $B_{\bot}$. Therefore, this measurement technique can be used as a precision magnetometry tool. Since the DRS described in this article is easily field deployable, we highlight the applicability of this device for constructing a robust miniaturized magnetometer. As an example, the SN spectra recorded at various magnetic field strengths are shown in Fig.\ref{F7}. By fitting a Lorentzian to each spectrum, we can estimate the Larmor frequency and in turn the magnetic field. \begin{figure} \centering \includegraphics[trim={0.5cm 0.25cm 0.5cm 0.5cm},width=\linewidth]{spec_vs_b_without_editing-eps-converted-to} \caption{Background subtracted spin noise (SN) spectra measured for different magnetic field values.
The error in the measurements is $\approx$ 0.02 G, which is limited by the spectral resolution of $\approx$ 30 kHz of the FFTS.} \label{F7} \end{figure} Another advantage of this DRS is that the device is reconfigurable, which enables triggered real-time measurements of the SN spectra. In Fig.\ref{F8_realtime}, we show an example of real-time data acquisition. In each of the four panels we show the SN spectrum obtained using our DRS with an integration time as low as 100 ms after the TTL trigger pulse. The magnetic field strength was also changed after each trigger pulse. Hence, we can sample the magnetic field with a time resolution of 100 ms. The integration time of the data shown in Fig. \ref{F8_realtime} was 100 ms and the corresponding spectral resolution was $\approx$ 610 Hz, resulting in an SNR of $\approx$ 5. For the data shown in Fig.\ref{F7}, in contrast, the integration time was 1 s and the spectral resolution was 30 kHz, giving an SNR of $\approx$ 15. For testing and verification of the triggered mode acquisition, we generated an external TTL trigger pulse from a function generator and fed it into an SMA-to-GPIO board, which was connected to the Red-Pitaya using a ribbon cable and connector assembly. The acquisition time was set to $\approx$ 100 ms. The trigger rate was $\approx$ 10 per minute. The output from the BPD was fed to the DRS. The triggered acquisition works as follows: (a) The FPGA waits for a rising edge on the trigger input port, (b) On the rising edge of the trigger input, a \texttt{dv} signal is asserted, upon which the BRAM starts to store the data, (c) Once 256 such samples are written, the data is packetized and transferred to the DAQ server. The trigger count and an absolute time stamp are included in the header data of the Ethernet packet for assisting in data analysis.
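The field extraction described earlier — fit a Lorentzian to locate $\nu_L$, then invert $\nu_L = g_F \mu_B B_{\bot}/h$ — can be sketched with scipy. The data below are synthetic, and the assumed $g_F \approx 1/3$ is the $^{85}$Rb ground-state hyperfine g-factor:

```python
import numpy as np
from scipy.optimize import curve_fit

H_PLANCK = 6.62607015e-34    # Planck constant, J s
MU_B = 9.2740100783e-24      # Bohr magneton, J/T
G_F = 1.0 / 3.0              # approximate 85Rb ground-state g_F (assumption)

def lorentzian(f, a, f0, w, c):
    return a * w ** 2 / ((f - f0) ** 2 + w ** 2) + c

# synthetic spectrum: Lorentzian at 2.4 MHz plus noise, standing in for real data
rng = np.random.default_rng(3)
f = np.linspace(2.0e6, 2.8e6, 400)
spec = lorentzian(f, 1.0, 2.4e6, 1e5, 0.0) + 0.02 * rng.standard_normal(f.size)

popt, _ = curve_fit(lorentzian, f, spec, p0=(1.0, 2.3e6, 1e5, 0.0))
nu_L = popt[1]
B = H_PLANCK * nu_L / (G_F * MU_B)   # field in tesla (multiply by 1e4 for gauss)
```

With $\nu_L \approx 2.4$ MHz this inversion returns $B_\bot \approx 5.1$ G, consistent with the field quoted for the spectra in Fig.\ref{F8}.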
\begin{figure}[!ht] \centering \includegraphics[trim={0.5cm 0.5cm 1cm 0.5cm},clip,width=\linewidth]{sns_data_diff_lo_4pt_binned_diff_subplots2-eps-converted-to} \caption{Spin noise (SN) spectrum at various magnetic field strengths from the RTDR. This series of spectra has an SNR of $\approx$ 5, and the peak positions can be determined within an accuracy of $\approx$ 5$\%$.} \label{F8_realtime} \end{figure} Another application of our device is temporal and spatial correlation measurements in systems of laser-cooled atoms and ions. Intrinsically, the correlation signals are expected to be extremely narrow in the frequency domain and are promising candidates for various quantum technology applications. However, the measurement duration is typically limited to a few milliseconds. Therefore, it is difficult to perform SNS in cold atoms using a traditional SFSA, and our DRS emerges as a promising candidate for this purpose. In fact, we have already started using the DRS described in this article in cold atom measurements. Also, since in the RTDR configuration the DRS records the voltage samples directly, it gives us the flexibility to achieve high frequency resolutions, which are limited only by the timing jitter of the on-board clock. \begin{figure*}[!ht] \centering \includegraphics[trim={0.5cm 0cm 0.25cm 0.25cm},clip,width=\linewidth]{bfield_sweep_with_lin_fit_err_y_bfield-eps-converted-to} \caption{Waterfall plot of the DRS output when the magnetic field was swept from $\sim$ 3.35 G to 3.85 G. We have synchronously varied the coherent drive frequency (see text). The yellow patches represent the spin noise signal in the time-frequency plane, the green dashed line is a fit to the centroids of the yellow patches and its slope indicates the rate of variation of the magnetic field in time.} \label{F10} \end{figure*} \par Our DRS in the RTDR mode allows for a minimum time resolution of 800 ns when the data is treated in its raw form.
When spectral analysis is carried out with an $N$-point Fourier transform, the channel width and the time resolution are $\frac{1.25\,\mathrm{MHz}}{N}$ and $800\,\mathrm{ns} \times N$, respectively. Therefore, when the signal is inherently strong, higher time resolutions can be obtained at the cost of reduced frequency resolution. However, for the intrinsic SN signal from atomic vapor presented in this article, the SNR deteriorates for shorter integration times, which affects the precision of the measurements. Therefore, for the purpose of demonstration, we integrate the SN signal over a 100 ms time window and obtain a precision in the measured magnetic field of the order of 800 $\mu$G. To demonstrate the response of the detection system to fast-varying magnetic fields, we conducted an experiment \cite{comment} where we added a coherent drive \cite{fleischhauer2005} using a pair of Raman beams, which enhanced the SN signal strength a million-fold, improving the SNR. We then varied the magnetic field from 3.35 G to 3.85 G, synchronously with the coherent drive field frequency, and recorded the signal using our DRS. The results from this experiment are shown in Fig.~\ref{F10}, where each pixel along the time and frequency axes spans $\approx$ 800 $\mu$s and 610 Hz, respectively. We see that the DRS simultaneously tracks the ``step'' changes in the magnetic field, as well as its drifts on millisecond timescales. We highlight that a time resolved measurement of the magnetic field has applications ranging from geophysics \cite{prouty2013} to physiology \cite{bison2003,uchikawa1992}. The time stamp contained in the received data (see Section \ref{s3} B) can be used to determine the absolute time variation of the magnetic field strength. \section{Conclusions}\label{s6} \par We presented the development of a software-defined digital receiver system (DRS) with two operating modes for spin noise spectroscopy (SNS) experiments.
The highlight of this work is that we show the applicability of SNS to precision magnetometry and measure fast temporal variations in the magnetic field. This receiver is fully reconfigurable and was developed in a short time at low cost. \par The FFTS mode allows for user-programmable integration times. The RTDR mode was specifically developed for high spectral and temporal resolution studies of spin noise (SN) signals from both hot and cold atoms. Such a mode, where high time resolution voltage data can be recorded, does not exist in an SFSA. The FFTS can also be used to supplement the RTDR mode as follows: using the FFTS, the user can explore the spectrum over a wide range of frequencies, and once the peak location is known, the RTDR can be used to obtain a time-frequency resolved picture of the signal of interest. \par Future directions for the system development include using a direct digital synthesis (DDS) core to generate the LO signals internally, facilitating a two-channel implementation, and replacing the DAQ computer with an ARM-based microprocessor system, viz. a Raspberry Pi, for system miniaturization. \par The receiver described here was developed to be a part of a compact, portable SNS magnetometer. While we have demonstrated the application of the digital receiver using SNS experiments, it can be easily adapted for use in other experiments requiring correlation measurements. \section*{Acknowledgments} The authors acknowledge partial support provided by the Ministry of Electronics and Information Technology (MeitY), Govt. of India under the grant for ``Center for Excellence in Quantum Technology'' with Ref. No. 4(7)/2020-ITEA. The authors also acknowledge the Department of Science and Technology (DST), Govt. of India, Prof. Hema Ramachandran, Prof. Dibyendu Roy, the Electronics Engineering Group (EEG) and the central workshop facility, Raman Research Institute. We extend our thanks to the CASPER signal processing community and Dr.
Pavel Demin for maintaining many useful open-source IP cores. We are grateful to Xilinx for the donation of the Vivado Design Suite through their University Program. \bibliographystyle{IEEEtran}
\section{Introduction} The Resonating Valence Bond (RVB) state, defined as an equal weight superposition of (non-orthogonal) nearest neighbor (NN) singlet (or dimer) coverings, was first proposed by Anderson~\cite{Anderson1973} to describe a possible spin liquid ground state (GS) of the $S=1/2$ antiferromagnetic Heisenberg (HAF) model on the triangular lattice. Later on, it was also introduced as the parent Mott state of high-$T_c$ superconductors~\cite{RVB}. Several works~\cite{poilblanc2012topological,schuch2012resonating,wildeboer,yangfan} have demonstrated that NN RVB states defined on triangular and kagome lattices are gapped spin liquid states with $\mathbb{Z}_2$ topological order, and GSs of local parent Hamiltonians~\cite{schuch2012resonating,zhou2014}. Spin liquid behavior is expected in two-dimensional (2D) frustrated quantum magnets where magnetic frustration prohibits magnetic ordering at zero temperature. The spin-1/2 Heisenberg antiferromagnet on the kagome lattice (KHAFM) is believed to be the simplest archetypical model hosting a spin liquid GS with no Landau-Ginzburg spontaneous symmetry breaking. However, the precise nature of this spin liquid is still actively debated. While the HLSM theorem \cite{hastings2004lieb} excludes a unique GS separated from the first excitations by a finite gap (a so-called ``trivial'' spin liquid), a gapless spin liquid \cite{Ran2007,iqbal2013gapless,he2017signatures,liao2017gapless} or a gapped {\it topological} spin liquid (of the RVB type) \cite{yan2011spin,mei2017gapped,depenbrock:kagome-heisenberg-dmrg} are the two favored candidates. An important aspect is to understand the stability of the spin liquid GS against various perturbations, such as long-range interactions or different anisotropies.
Beyond being of interest in itself, it might also yield alternative ways to assess the nature of the ground state of the isotropic KHAFM by allowing one to adiabatically connect it to a limiting case which might be easier to study. An important case of such perturbations is the HAF model on the kagome lattice with anisotropy, which can be written as \begin{equation}\label{eq:heis1} H(\gamma) = \sum_{(ij) \in \triangleright }{\mathbf{S}_i \cdot \mathbf{S}_j} + \gamma \sum_{ (ij) \in \triangleleft }{\mathbf{S}_i \cdot \mathbf{S}_j} \end{equation} with $0\le\gamma\le1$ (where $S_z=\pm\tfrac12$). The Hamiltonian $H(\gamma)$, except at $\gamma=1$, explicitly breaks the inversion symmetry between the strong (or right-pointing) $\triangleright$ and the weak (or left-pointing) $\triangleleft$ triangles of the kagome lattice (Fig.~\ref{fig:kagome_1}). The anisotropic model~(\ref{eq:heis1}) (also referred to as the ``breathing'' HAF \cite{okamoto2013breathing}) has gained additional relevance because recent studies have shown a realization of \eqref{eq:heis1} for particular values of $\gamma$ in a vanadium-based compound~\cite{Aidoudi2011,Harrison2013,Bert2017}. Moreover, in the limit of strong anisotropy, $\gamma\to0$, it can be mapped to a simpler model with two spin-$\tfrac12$ degrees of freedom per site, similar to a Kugel-Khomskii model~\cite{mila:breathing}.
The Hamiltonian~\eqref{eq:heis1} has been studied using different numerical methods: In Ref.~\onlinecite{schaffer2017quantum}, Gutzwiller-projected generalized BCS wavefunctions have been used, finding a gapped $\mathbb Z_2$ topological phase throughout; in contrast to this, Ref.~\onlinecite{iqbal2018persistence}, supplementing the same ansatz with two Lanczos steps and anisotropic couplings in an enlarged unit cell, finds that around the isotropic point $\gamma=1$, a gapless $U(1)$ Dirac spin liquid (DSL) phase outperforms the gapped $\mathbb Z_2$ phase for sufficiently large systems, while for $\gamma\lesssim 0.25$, Valence Bond Crystal (VBC) order dominates. Finally, Ref.~\onlinecite{repellin2017stability} analyzes the model using iDMRG, supplemented by exact diagonalization, and finds a $\mathrm{U}(1)$ DSL for sufficiently large $\gamma$, which at $\gamma\lesssim 0.1$ transitions to a phase with nematic order (i.e., breaking lattice rotation symmetry). In the light of these conflicting results, the nature of the strongly anisotropic limit, and the question of whether it is adiabatically connected to the isotropic KHAFM, remain wide open. In this paper, we use an ansatz based on a systematic optimal cooling procedure, applied to the RVB state, to analyze the nature of the breathing KHAFM, focusing on the strong anisotropy limit. Our ansatz, which we term ``simplex RVB'', clearly outperforms the previous results obtained for the thermodynamic limit, and yields a gapped $\mathbb Z_2$ spin liquid rather than a VBC phase. Our ansatz differs from previous approaches in several ways: First, it implements a systematic and optimized cooling procedure -- in essence, an optimized imaginary time evolution scheme -- which can be systematically constructed from any Hamiltonian at hand. Second, it requires only a very small number of parameters with a clear physical interpretation; in our case, we use at most $3$ parameters.
Third, those parameters have a clear physical interpretation in terms of a variational RVB-type wavefunction: Their role is to create longer-range singlets with suitable amplitude and phase such as to systematically decrease the energy of the variational wavefunction. And lastly, the clear role of the variational parameters in the ansatz facilitates the analysis of its order. Our analysis reveals a gapped $\mathbb Z_2$ topological spin liquid phase for the whole range $0\le\gamma\le1$. In particular, in the strongly anisotropic limit, our results clearly improve on the energies previously obtained in the thermodynamic limit~\cite{iqbal2018persistence}, where a VBC phase was found, while at the same time they require a significantly smaller number of variational parameters. More specifically, our ansatz with $2$ parameters -- corresponding to only one optimized trotterized imaginary time evolution step on top of the RVB state -- already yields a slightly better energy than the VBC ansatz with $2$ Lanczos steps, while it clearly outperforms it with an additional parameter (half a Trotter step). This can be attributed to the fact that our ansatz, unlike Lanczos steps, captures the extensive nature of perturbations and thus correctly reproduces the perturbative expansion in the thermodynamic limit~\cite{vanderstraeten:peps-perturbations}. \begin{figure}[t!] \centering \includegraphics[width=16em]{kagome_1} \caption{A singlet covering of the RVB state on the kagome lattice. Arrowheads indicate the counter-clockwise orientation of singlets on edges. Gray indicates defect triangles of the singlet covering.} \label{fig:kagome_1} \end{figure} On a technical level, we use the formalism of Projected Entangled Pair States (PEPS)~\cite{verstraete2004renormalization} to implement the simplex RVB ansatz. The idea of the PEPS description is to specify the entanglement structure of the wavefunction as a network of local tensors.
The so-called bond dimension determines the efficiency of the PEPS description. The NN RVB state on the kagome lattice can be represented as a PEPS with bond dimension $D=3$ \cite{verstraete:comp-power-of-peps,schuch2012resonating}. While the bond dimension required for $p$ half Trotter steps -- corresponding to singlet coverings which contain long-range singlets with range $p+1$ -- grows as $D_p=3\times 2^{p-1}$, a number $p$ of steps (and thus a singlet span) large enough to yield competitive energies for the anisotropic limit can be reached with computationally accessible bond dimension. A key advantage of this explicit PEPS construction, which is obtained from the RVB PEPS by applying cooling steps, is that it gives us direct access to the relevant entanglement properties for determining the topological nature of the system, and thus allows for a direct and unambiguous identification of the quantum phase of the wavefunction. The outline of this paper is as follows. In Sec.~\ref{sec:simplexrvb} we motivate and formally define the simplex RVB ansatz, and give its PEPS construction. In Sec.~\ref{sec:results}, we present our numerical results: First, we discuss the optimal variational energies and corresponding parameters for our ansatz; second, we use PEPS techniques for analyzing the quantum phase of the system as well as the properties of its topological (anyonic) excitations (specifically, anyon masses and order parameters for anyon condensation and deconfinement); and third, we discuss the physical structure of the optimal wavefunction (this is possible due to the clear physical picture behind our variational parameters), as well as possible extensions to potentially further improve the ansatz. Finally, we summarize our results and give an outlook in Sec.~\ref{sec:summary}.
\section{Simplex RVB ansatz and PEPS}\label{sec:simplexrvb} The key idea behind the simplex RVB ansatz is motivated by the construction used in Ref.~\onlinecite{poilblanc2013simplex} to study the isotropic KHAFM. The motivation behind it is to transform a quantum state into the ground state of a given Hamiltonian by algorithmic cooling, that is, by applying local quantum gates (or potentially more broadly local modifications to the wavefunction) which systematically lower the energy where applied. Unlike the construction of the GS in terms of applications of a trotterized imaginary time evolution operator, where all evolution operators are chosen identical and close to the identity, in the simplex ansatz the step size of each evolution operator is optimized variationally such as to minimize the energy. In addition, we start from a well-chosen initial state which already by itself captures essential features of the low-energy physics of the system at hand. For what follows, it will be convenient to rewrite the Hamiltonian~\eqref{eq:heis1} as \begin{equation}\label{eq:hamiltonian} H = \tfrac{3}{2} \left( \sum_{ (ijk) \in \triangleright }\!\!\!\!{P_{(ijk)}} + \gamma \!\!\!\sum_{ (ijk) \in \triangleleft }\!\!\!\!{P_{(ijk)}} \right) - \tfrac{3(1+\gamma)}{8} \sum_{(ijk)}{\mathbb{I}}\ , \end{equation} where $P_{(ijk)}$ is a projector onto the spin-3/2 subspace of $\tfrac{1}{2}\otimes\tfrac12\otimes\tfrac12$. In order to obtain an approximation of the ground state of $H$, we can perform imaginary time evolution $\ket{\psi}=e^{-\beta H}\ket{\phi_\mathrm{init}}$ for sufficiently large $\beta$ and a suitable initial state $\ket{\phi_\mathrm{init}}$. 
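The projector form in Eq.~\eqref{eq:hamiltonian} rests on the single-triangle identity $\sum_{(ij)\in\triangle}\mathbf{S}_i\cdot\mathbf{S}_j = \tfrac32 P - \tfrac34\,\mathbb{I}$. As a sanity check, this can be verified numerically on one triangle (a minimal numpy sketch; the variable names are ours):

```python
import numpy as np

# Spin-1/2 operators
sx = np.array([[0.0, 0.5], [0.5, 0.0]])
sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
I2 = np.eye(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Heisenberg term on one triangle: S1.S2 + S2.S3 + S1.S3
H_tri = sum(kron3(s, s, I2) + kron3(I2, s, s) + kron3(s, I2, s)
            for s in (sx, sy, sz))

# Invert H_tri = (3/2) P - (3/4) I to obtain the spin-3/2 projector
P = (2.0 * H_tri + 1.5 * np.eye(8)) / 3.0

assert np.allclose(P @ P, P)               # P is a projector ...
assert abs(np.trace(P).real - 4) < 1e-9    # ... of rank 4 (the S=3/2 quadruplet)
```

Applying $Q(\alpha)=\mathbb{I}-\alpha P$ with $\alpha=1$ then projects a triangle completely onto its low-energy spin-1/2 sector, the limit relevant for the strong-anisotropy regime discussed below.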
If we trotterize $e^{-\beta H}$, this yields \begin{equation} \label{eq:trotter} \ket\psi = \left[Q^{\triangleleft}(\alpha_\triangleleft) Q^{\triangleright}(\alpha_\triangleright) \right]^{K}\ket{\phi_\mathrm{init}} \end{equation} with \begin{equation} Q^{\triangleright}(\alpha_\triangleright):= \prod_{(ijk)\in\triangleright}{ \left( \mathbb{I} -\alpha_\triangleright P_{(ijk)} \right)}\ , \end{equation} and accordingly for $Q^{\triangleleft}(\alpha)$. When applied to a suitable initial state, such as the nearest-neighbor RVB state (i.e., a superposition of all nearest-neighbor singlet coverings of the lattice), Eq.~\eqref{eq:trotter} has a natural interpretation: First, it is known~\cite{elser1993kagome,zeng1995quantum} that each NN singlet covering on the kagome lattice contains exactly 25\% of ``defect triangles'', that is, triangles which don't contain a singlet (Fig.~\ref{fig:kagome_1}). Those defect triangles have overlap with the spin-$3/2$ subspace and thus incur a higher energy than triangles holding a singlet (whose energy is locally optimal). The effect of $Q^{\triangleright}$ ($Q^{\triangleleft}$) is to decrease the weight of defect triangles on right-pointing (left-pointing) triangles. This is achieved by creating longer-range singlets: Since \begin{equation} \label{eq:perm-is-rotate} P_{(ijk)} = \frac{1}{3} \{\mathbb{I} + R_{(ijk)} + R_{(ijk)}^2 \} \end{equation} (where $R_{(ijk)}$ rotates the qubits), acting with $P_{(ijk)}$ on a defect triangle produces longer-range singlets, i.e. \begin{equation}\label{eq:P3_half_action} \cbox{21.0}{Z3Rot} \nonumber\quad . \end{equation} (Note, however, that the linear dependence between pairwise permutations and rotations for $3$ qubits -- which allows for the form \eqref{eq:perm-is-rotate} -- implies that this way of looking at the longer-range singlet patterns is not unique; a unique pattern within the singlet subspace can be singled out by avoiding crossings.) 
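Eq.~\eqref{eq:perm-is-rotate} can likewise be checked directly: with $R$ the cyclic rotation of the three qubits, $(\mathbb{I}+R+R^2)/3$ is a Hermitian rank-$4$ projector whose range is the fully symmetric (spin-$3/2$) subspace. A minimal numpy sketch, assuming the standard binary ordering of the three-qubit basis:

```python
import numpy as np

def idx(a, b, c):
    """Basis index of |a b c> in the standard binary ordering."""
    return 4 * a + 2 * b + c

R = np.zeros((8, 8))
S12 = np.zeros((8, 8))
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            R[idx(c, a, b), idx(a, b, c)] = 1.0    # cyclic rotation |abc> -> |cab>
            S12[idx(b, a, c), idx(a, b, c)] = 1.0  # swap of the first two qubits

P = (np.eye(8) + R + R @ R) / 3.0                  # Eq. (perm-is-rotate)

assert np.allclose(P @ P, P)             # projector
assert np.allclose(P, P.T)               # Hermitian, since R^T = R^{-1} = R^2
assert abs(np.trace(P) - 4) < 1e-9       # rank 4 = dim of the S=3/2 subspace
assert np.allclose(S12 @ P, P)           # its range is fully symmetric
```

The last assertion confirms that the $\mathbb Z_3$-invariant subspace of three qubits coincides with the fully symmetric one, which is why the cyclic-rotation form reproduces the spin-$3/2$ projector.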
Unfortunately, representing imaginary time evolution accurately using a Trotter expansion such as Eq.~\eqref{eq:trotter} is costly, as it requires a large number of Trotter steps which grows with the system size; in particular, in the context of tensor networks this incurs an exponentially growing bond dimension. One option here is to compress $e^{-\beta H}$ to a more compact tensor network~\cite{hastings:locally,molnar:thermal-peps}; however, in this process, translational invariance is either lost or obfuscated, and the parameters lose their physical interpretation. We therefore resort to a different approach: We restrict to a small number of ``Trotter'' layers in Eq.~\eqref{eq:trotter}, but we allow for independent parameters $\alpha_i$ for each step $i$ and optimize all those parameters such as to minimize the variational energy. This leads to the following ansatz, which we term \emph{simplex RVB}: \begin{equation}\label{eq:simplex-ansatz} \left| \text{RVB}(\bm{\alpha})\right\rangle = Q^{\triangleright}(\alpha_1) Q^{\triangleleft}(\alpha_2)\cdots Q^{*}(\alpha_p) \left| \text{NN RVB}\right\rangle, \end{equation} where $*\in \{\triangleright, \triangleleft\}$ is determined by the parity of $p$, and the $\alpha_i$, $i=1,\dots,p$, are the variational parameters. Note that we choose to apply $Q^\triangleright$ leftmost: This way, the ansatz yields the correct behavior in leading order perturbation theory around $\gamma=0$ (we discuss this in detail in Sec.~\ref{sec:results_energies}); in agreement with this, we observe that this ordering gives better energies, in particular in the limit of strong anisotropy. \begin{figure}[!t] \centering \includegraphics[width=0.95\columnwidth]{kagome_4} \caption{Construction of the tensor network for the simplex RVB. \textbf{(a)} Each local tensor contains three kagome spins. In particular, the NN RVB has a PEPS description of this form with bond dimension $D=3$.
\textbf{(b)} The operator $Q=\mathbb I-\alpha_i P$ acts on three physical spins. It can be considered as an operation controlled by a control qubit in state $\ket0-\alpha_i\ket1$. \textbf{(c)}~The tensor network for the simplex RVB for $p=3$, obtained by three applications of $Q$.} \label{fig:kagome} \end{figure} We now give the PEPS description of the simplex RVB ansatz. We start by reviewing the construction of the NN RVB state~\cite{verstraete:comp-power-of-peps,schuch2012resonating} which is comprised of triangular and on-site tensors. The triangular tensor is defined to be the sum of one configuration with a defect (containing no singlet) and three configurations without defect (containing one singlet each): \begin{equation}\label{eq:eps} \cbox{2.5}{eps}\ = \delta_{i2}\delta_{j2}\delta_{k2} + \varepsilon_{ijk}\ , \end{equation} where $\delta$ and $\varepsilon$ denote 3-dimensional Kronecker delta and fully antisymmetric tensors, respectively. The on-site tensor ensures that every site is paired with exactly one of its neighbors, \begin{equation}\label{eq:P} \cbox{2.5}{P}\ = \left( \delta_{i0}\delta_{s0} + \delta_{i1}\delta_{s1}\right)\delta_{j2} + \left( \delta_{j0}\delta_{s0} + \delta_{j1}\delta_{s1}\right)\delta_{i2}\ . \end{equation} The resulting tensor network, obtained by blocking the triangular and on-site tensors, has a three-site unit cell and is given in Fig.~\ref{fig:kagome}a. We implement each local action $(\mathbb{I}-\alpha P)$, which is not unitary, as a ``controlled'' gate on three qubits, controlled by a control qubit (this will be useful for extensions of the ansatz discussed later). The gate acts trivially when the control qubit is $\vert0\rangle$, while a projector onto the energetically favorable spin-1/2 subspace of the three qubits is applied if the control qubit is $\vert1\rangle$ (Fig.~\ref{fig:kagome}b). 
For the time being, we choose the control qubits in a product state $\vert0\rangle + \alpha\vert1\rangle$, leading to a gate $Q=(\mathbb{I} - \alpha P)$, as described previously. For illustration, the tensor network obtained through three applications of $Q$ to the NN RVB, starting with the right-pointing triangles, is shown in Fig.~\ref{fig:kagome}c. What is the bond dimension of the simplex RVB PEPS with $p$ layers? We work with the square unit cell shown in Fig.~\ref{fig:kagome}a, which contains three kagome spins. With this unit cell and the triangular tensor (\ref{eq:eps}), the NN RVB itself has $D=3$. The operators $Q^\triangleright(\alpha_i)$ on right-pointing triangles act within the unit cell and therefore incur no increase in the bond dimension. Operators $Q^\triangleleft(\alpha_i)$ on left-pointing triangles can be implemented with bond dimension $4$: E.g., they can be constructed by teleporting the left and bottom neighboring spins to the central site (cf.~Fig.~\ref{fig:kagome}a), applying $Q^\triangleleft(\alpha_i)$, and teleporting them back. $D_p$ is therefore multiplied by $4$ for every even $p$, i.e., $D_p=3,12,12,48,\dots$ for $p=1,2,3,4,\dots$\;. However, for even $p$ we can do better: There, the rightmost $Q^\triangleleft$ is applied directly to the NN RVB, in which case the state of the teleported spins is already known to the central tensor if the NN RVB index is $0$ or $1$, allowing us to compress the bond dimension for $p=2$ to $D_2=6$; thus, we obtain $D_p=3\times 2^{p-1}=3,6,12,24,\dots$\;\footnote{The same compression can also be obtained with the opposite blocking, noting that in expectation values, the final $Q^\triangleright$ appears as ${Q^\triangleright(\alpha_i)}^\dagger Q^\triangleright(\alpha_i)\propto Q^\triangleright(2\alpha_i-\alpha_i^2)$, which can be implemented with bond dimension $4$, i.e.\ $2$ per ket/bra layer.}.
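The bond-dimension bookkeeping above can be summarized in a few lines (the function name is ours; the counting rules are exactly those stated in the text):

```python
def bond_dims(p_max):
    """Naive vs. compressed PEPS bond dimensions after p layers of Q."""
    naive, compressed = [], []
    D = 3  # NN RVB with the three-site square unit cell
    for p in range(1, p_max + 1):
        if p % 2 == 0:                       # Q on left-pointing triangles: factor 4
            D *= 4
        naive.append(D)
        compressed.append(3 * 2 ** (p - 1))  # D_p = 3 x 2^(p-1) after compression
    return naive, compressed

print(bond_dims(4))
```

This reproduces the two sequences quoted above, $D_p = 3, 12, 12, 48, \dots$ without and $D_p = 3, 6, 12, 24, \dots$ with the compression.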
\section{Results}\label{sec:results} Let us now discuss our results obtained by using simplex RVB states as a variational ansatz for the breathing kagome Heisenberg Hamiltonian \eqref{eq:heis1}. The PEPS formalism enables the computation of expectation values of local observables and correlation functions directly in the thermodynamic limit, in contrast to methods restricted to finite systems. We use standard numerical methods for infinite PEPS (iPEPS)~\cite{jordan:iPEPS,haegeman2017diagonalizing}, which approximate the boundary by an infinite matrix product state (iMPS) of bond dimension $\chi$ (which determines accuracy and computational cost). This allows us to compute the variational energy of an iPEPS with high accuracy and thus to determine the variationally optimal state. In addition, the PEPS approach allows us to utilize the entanglement symmetries of the PEPS and the way in which the iMPS boundary orders relative to those symmetries to study the quantum phase and the topological properties of the optimized wavefunction~\cite{duivenvoorden2017entanglement,iqbal2018study}. In our calculations, we choose not to truncate the PEPS tensor before contraction but rather keep the exact simplex ansatz, which avoids truncation errors and gives us direct access to the entanglement symmetries relevant to study the nature of the order in the system; on the other hand, this limits our ansatz to at most $p=3$ applications of $Q$'s, the largest value which is computationally attainable. \subsection{Energies\label{sec:results_energies}} Let us start by giving the results on the optimal variational energy obtained within the simplex RVB ansatz family.
For all calculations, we have determined the optimal parameters $\{\alpha_i\}$ through a gradient search using the corner transfer matrix method with a boundary bond dimension $\chi = 36$, and subsequently extracted the energies of the optimized wavefunctions using boundary iMPS (i.e., the fixed point of the transfer operator) with $\chi=64$ (this only requires truncation along one direction, resulting in a better convergence of the energy in $\chi$). A table with the detailed energies, including a convergence analysis and error bounds, as well as a discussion of a potential extrapolation in $p$, are given in Appendix~\ref{app:energy-densities}. \begin{figure}[t] \centering \includegraphics[width=27em]{plots/main_energy.pdf} \caption{ \textbf{(a)} Energy densities for the simplex RVB ansatz with $p=1,2,3$, and extrapolated values for $p\rightarrow \infty$ (cf.\ Appendix~\ref{app:c1-coefficient}). \textbf{(b)} Comparison of energy densities for the simplex RVB ansatz with the data from variational Monte Carlo (VMC) \cite{iqbal2018persistence} and DMRG \cite{repellin2017stability, cecile:private}. Solid lines give quadratic fits to the data for $\gamma \in [0,0.3]$. The gray region is bounded by lines $-0.25-0.1354\gamma$ and $-0.25-0.133\gamma$, which are the slopes extracted from DMRG calculations for the full model for $N_v=12$ and the extrapolation $N_v\to\infty$, respectively.} \label{fig:main_energy} \end{figure} In Fig.~\ref{fig:main_energy}a, we plot the energy density $e$ (i.e., the energy per site) of the optimized simplex RVB wavefunction for the breathing kagome Hamiltonian~\eqref{eq:heis1} as a function of the anisotropy $\gamma$, for $p=1,2,3$. 
For better comparison in the strongly anisotropic limit, we plot in Fig.~\ref{fig:main_energy}b $e(\gamma)+0.1353\,\gamma$ for $0\le\gamma\le0.2$, where the subtracted linear offset corresponds to the behavior in first order perturbation theory, as obtained by extrapolating DMRG calculations~\cite{repellin2017stability,cecile:private} for the effective first-order model~\cite{mila:breathing} on finite cylinders. Beyond the $p=1,2,3$ simplex RVB results, we also show the data obtained for the VBC (and the energetically less favorable $\mathrm{U}(1)$ DSL) ansatz in Ref.~\onlinecite{iqbal2018persistence} using VMC. We find that already for $p=2$, our ansatz gives energies slightly below the VBC ansatz, and for $p=3$, it clearly outperforms it. This is particularly remarkable since the $p=2$ ($p=3$) simplex ansatz has only two (three) parameters, corresponding to effectively one (one and a half) imaginary time evolution steps, while the VMC ansatz of Ref.~\onlinecite{iqbal2018persistence} has $11$ parameters, including two Lanczos steps (cf.\ the discussion in the introduction). In addition, Fig.~\ref{fig:main_energy}b also shows energies obtained by extrapolating DMRG data for the full model \eqref{eq:heis1} for cylinders with $N_v=8,10,12$ to $N_v\to\infty$, which we find to be remarkably close to our $p=3$ data in the strong anisotropy regime $\gamma\le0.04$, given that our ansatz only depends on three parameters rather than about $10^8$. Since the extrapolation of the DMRG data is subtle (cf.\ Appendix~\ref{app:c1-coefficient}), the gray cone indicates the linear order extracted from the $N_v=12$ and $N_v\to\infty$ DMRG data, which we expect to provide reliable lower and upper bounds to the true slope for the full model. For better comparison in the strong anisotropy limit, we expand the energy density for small $\gamma$ as \begin{equation} e(\gamma)=-0.25 + c_1\gamma+ c_2\gamma^2 +\dots \, , \end{equation} where the $c_i$ depend on $p$.
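Extracting $c_1$ from such an expansion amounts to a small-$\gamma$ polynomial fit. A minimal sketch with synthetic energy data (the coefficients below are purely illustrative, not our actual data):

```python
import numpy as np

# Synthetic energy densities e(gamma) for illustration only; the actual values
# come from contracting the optimized simplex RVB wavefunctions.
gamma = np.linspace(0.0, 0.3, 7)
e = -0.25 - 0.1319 * gamma + 0.02 * gamma**2

# Quadratic fit e(gamma) ~ e0 + c1*gamma + c2*gamma^2 on the small-gamma window;
# np.polyfit returns coefficients from the highest power down.
c2, c1, e0 = np.polyfit(gamma, e, 2)
print(f"e(0) = {e0:.4f},  c1 = {c1:.4f}")
```

On exact quadratic input the fit recovers the coefficients to machine precision; for real data, the fit window and polynomial degree must be checked for stability, as discussed in Appendix~\ref{app:c1-coefficient}.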
The values $c_1$ for the slope at $\gamma=0$ for the different methods are given in Table~\ref{table:c1} (see Appendix~\ref{app:c1-coefficient} for details on the extraction). They confirm that for $p=2$, our ansatz performs slightly better than the VBC ansatz~\cite{iqbal2018persistence}, while for $p=3$, it clearly outperforms it. The DMRG data for the nematic spin liquid is the same as the gray cone in Fig.~\ref{fig:main_energy}b, that is, obtained from the DMRG data of Ref.~\onlinecite{repellin2017stability}~\cite{cecile:private} for $N_v=12$ cylinders and the $N_v\to\infty$ extrapolation, which should give lower and upper bounds to the true value. Finally, we give values for $c_1$ obtained by extrapolating to $p\to\infty$ in the inverse bond dimension $1/D_p\sim 1/2^p$, which we expect to be a reasonable fit in a gapped phase, and which yields a value competitive with the DMRG results. \begin{table}[t] \begin{center} \bgroup \def\arraystretch{1.4} \begin{tabular}{r @{\ }||@{\ } l l} Ansatz \hspace*{3em} & \hspace*{1.5em}$c_1$ \\ \hline\hline U(1) DSL (VMC) \cite{iqbal2018persistence} & $-0.119(1)$ \\ \hline VBC (VMC) \cite{iqbal2018persistence} & $-0.125545(20)$ \\ \hline Nematic SL (DMRG) & $-0.1354$ & (fit, $N_v=12$) \\ \cite{repellin2017stability,cecile:private} & $-0.133$ &(fit, $N_v\!\to\!\infty$) \\ \hline simplex RVB, $p=1$ & $-0.1242$ &(fit) \\ & $-0.1241978(4)$ & ($e^\star_\triangleleft$)\\ & $-0.1243(3)$ & (\cite{iqbal2018persistence}, extrap.\ $N_v$) \\ \hline \dots, $p=2$ & $-0.1261$ & (fit) \\ & $-0.126217(7)$ & ($e^{\star}_{\triangleleft}$) \\\hline \dots, $p=3$ & $-0.1319$ & (fit) \\ & $-0.13225(5)$ & ($e^{\star}_{\triangleleft}$) \\\hline \dots, $p=\infty$ & $-0.1345$& (extrap.\ fit) \\ & $-0.1349$ &(extrap.\ $e^\star_\triangleleft$) \end{tabular} \egroup \end{center} \caption{ \label{table:c1} Coefficient of the linear term in the energy density $e\approx -0.25 +c_1\gamma$, obtained with different methods (see text).
The simplex RVB values labelled $e^\star_\triangleleft$ have been obtained using first-order perturbation theory, cf.~text.} \end{table} An alternative way to extract $c_1$ is by using a perturbative expansion. Perturbation theory predicts that for small $\gamma$, the energy per site is given by \begin{equation} e(\gamma)\approx -0.25+c_1\gamma=\frac1N \min_{\ket\psi\in\mathcal G} \bra\psi H(\gamma)\ket{\psi}\ , \end{equation} where $\mathcal G$ denotes the ground state manifold at $\gamma=0$, that is, the subspace of all states with spin $\frac12$ on the right-pointing triangles. Within our variational family, this corresponds to fixing $\alpha_1=1$ (which explains why we want to have $Q^\triangleright$ leftmost in Eq.~(\ref{eq:simplex-ansatz}) if we want to correctly reproduce the perturbative limit), and letting $\mathcal G$ be the set of simplex RVBs for a given $p$ with fixed $\alpha_1=1$. We thus find that within our ansatz family, we can determine $c_1$ perturbatively as \begin{align*} c_1 &= \frac1\gamma \left[\frac1N \min_{\ket\psi\in\mathcal G} \bra\psi \textstyle\sum_{\triangleright }{\mathbf{S}_i \cdot \mathbf{S}_j} + \gamma \textstyle\sum_{\triangleleft }{\mathbf{S}_i \cdot \mathbf{S}_j} \ket\psi + 0.25 \right] \\ &=\frac1N\min_{\ket\psi\in\mathcal G} \bra\psi \textstyle\sum_{\triangleleft }{\mathbf{S}_i \cdot \mathbf{S}_j} \ket\psi \ , \end{align*} that is, by minimizing the energy density $e^\star_\triangleleft$ on the left-pointing triangles within the simplex RVB family with $\alpha_1=1$. This optimization involves one fewer variational parameter and does not require fitting, and can thus be carried out to significantly higher precision; we report the corresponding values in Table~\ref{table:c1} alongside the values obtained from fitting $e(\gamma)$, which are in excellent agreement. \begin{figure}[!t] \centering \includegraphics[width=26em]{plots/simplex_heisenberg.pdf} \caption{ Restored inversion symmetry at the Heisenberg point.
\textbf{(a)} Convex hull of energy densities for $\gamma=1$ vs.\ energy difference between left- and right-pointing triangles for the $p=2,3$ simplex ansatz states. The inversion symmetry, which is not explicitly contained in the ansatz, is essentially perfectly restored in the energy. \textbf{(b)} Optimal energies for $p=2,3$ in the symmetric gauge, where $\delta$ measures the distance to the Heisenberg point, with quadratic fits. The slope at $\delta=0$ is essentially zero, reconfirming that inversion symmetry of the energy is restored at the Heisenberg point. } \label{fig:leftright} \end{figure} As an additional check for the quality of the optimal variational state, we have considered the energetics of left- and right-pointing triangles at and around the Heisenberg point. We find that even though the $p=3$ ansatz treats the inequivalent triangles differently (in particular, $Q$ acts twice on the right-pointing and only once on the left-pointing triangles), the energy splitting between the triangles vanishes for the optimal energy wavefunction (Fig.~\ref{fig:leftright}a); we observe the same effect also for the optimal wavefunction with $p=2$. Alternatively, we can consider the optimal energy density in a symmetric gauge, $H(\delta) = (1-\delta)\sum_{ \triangleright }{\mathbf{S}_i \cdot \mathbf{S}_j} + (1+\delta) \sum_{ \triangleleft }{\mathbf{S}_i \cdot \mathbf{S}_j}$, in the vicinity of the Heisenberg point.
We obtain fits $e^{p=2}_{\mathrm{GS}}(\delta) \approx - 0.4283 +0.001\delta -0.083\delta^2$ and $e^{p=3}_{\mathrm{GS}}(\delta) \approx - 0.4333 -0.001\delta -0.071\delta^2 $ for the simplex RVB ansatz with $p=2$ and $p=3$, respectively (Fig.~\ref{fig:leftright}b), which essentially show a zero slope at $\delta=0$ and are thus symmetric around $\delta=0$ to very good accuracy, as required by symmetry considerations~\footnote{In fact, the two tests probe the same property: Equal energies for both triangles are equivalent to zero slope at $\delta=0$, since otherwise, the wavefunction for some non-zero $\delta$ would constitute a better ansatz even at $\delta=0$.}. \subsection{Order, correlations, and quantum phase}\label{ssec:qph} The PEPS description of the NN RVB has a \z{2} symmetry on the entanglement degrees of freedom. Such a symmetry has been shown to be essential to explain the topological features of PEPS models, as well as to understand and analyze the breakdown of topological order in such systems~\cite{chen:topo-symmetry-conditions,schuch:peps-sym,schuch2013topological,haegeman2015shadows,duivenvoorden2017entanglement,iqbal2018study}. This is accomplished by considering the boundary iMPS obtained when contracting the 2D PEPS (i.e., the fixed point of the transfer operator), and analyzing how it orders relative to those symmetries: The specific type of order is directly related to the quantum phase displayed by the bulk wavefunction. From those symmetries, we can construct half-infinite string operators which on the one hand create anyonic bulk excitations in a given anyon sector $a$ (with $a=s,v,sv$ for spinon, vison, and the composite fermion, respectively), but at the same time form (string) order parameters which detect the ordering of the boundary state.
By computing expectation values of these string operators either in one layer (denoted $\langle a\rangle$) or in both layers (denoted $\langle a a^\dagger\rangle$), we can construct order parameters which probe the condensation and confinement of anyons, and thus the proximity to a topological phase transition; specifically, non-zero ``deconfinement fractions'' $\langle aa^\dagger\rangle$, together with vanishing ``condensate fractions'' $\langle a\rangle\equiv 0$, are indicative of the topological phase. At the same time, for vanishing order parameters we can study the rate at which the corresponding expectation value for finite strings with two dual endpoints decays to zero as their separation increases, giving rise to corresponding length scales for condensation (mass gap) and confinement. We have computed anyonic order parameters for the vison, as well as correlation lengths for all anyons, for the optimized simplex RVB for $p=1,2,3$ as a function of the anisotropy $\gamma$. Since we do not truncate local tensors during optimization, the entanglement symmetries of our tensors remain easily accessible, facilitating the analysis. The corresponding data is shown in Fig.~\ref{fig:cf_gamma}. For the anyonic order parameters (Fig.~\ref{fig:cf_gamma}a), we find that $\langle v\rangle=0$ and $\langle vv^\dagger\rangle>0$, which implies that the system is in a $\mathbb Z_2$ topologically ordered phase for the given $p=1,2,3$. The $\langle vv^\dagger\rangle$ for the different $p$ all show only a small $\gamma$-dependence, with no indication of a phase transition at some intermediate $\gamma$ building up at larger $p$. On the other hand, at least for $\gamma$ close to $1$, $\langle vv^\dagger\rangle$ clearly decreases with $p$, leaving open the possibility of a critical phase around the Heisenberg point. Next, let us analyze the correlation lengths, shown in Fig.~\ref{fig:cf_gamma}b for $p=3$.
We find that the dominant correlation length is given by spinon correlations, as known for the NN RVB state~\cite{haegeman2015shadows}. In addition to the different anyon correlations, the figure also shows data for spin-spin ($\langle \mathbf{S}_i.\mathbf{S}_{i+r} \rangle$) and dimer-dimer ($\langle \mathbf{D}_i.\mathbf{D}_{i+r} \rangle$) correlations, computed for the spin and dimer pairs indicated in Fig.~\ref{fig:cf_gamma}d. Again, all lengths change smoothly with $\gamma$, and while we observe a minor increase of correlations with $\gamma$, there is no sign of a phase transition. Note that the similar behavior of spinon and leading trivial (including dimer-dimer) correlations and their relative scale is consistent with previous observations~\cite{iqbal:rvb-perturb} which could be explained as arising from correlations between pairs of spinons~\footnote{It is worth noting that the values we obtain for $\xi_s$ and $\xi_v$ for the NN RVB wavefunction are in remarkable agreement with their earlier estimates in Ref.~\onlinecite{poilblanc2013simplex} which had been extracted from the finite-size scaling of the energy splitting for the corresponding ground states.}. \begin{figure}[t] \centering \includegraphics[width=27em]{plots/cf_gamma.pdf} \caption{\textbf{(a)} Deconfinement fraction $\langle vv^{\dagger} \rangle$ and condensate fraction $\langle v \rangle$ of visons as a function of $\gamma$. \textbf{(b)} Different correlation lengths for $p=3$, computed with $\chi=192$: For the trivial ($0$), spinon ($s$), vison ($v$), and fermionic ($sv$) sector, as well as the spin-spin ($\mathbf S.\mathbf S$) and dimer-dimer ($\mathbf D.\mathbf D$) correlations shown in (d). \textbf{(c)}~Comparison of $\xi_s$ and $\xi_{\mathbf{D}.\mathbf{D}}$ for $p=1,2,3$. \textbf{(d)} Spin-spin and dimer-dimer correlation functions considered in (b,c,e). 
\textbf{(e)} $\langle \mathbf{D}_i.\mathbf{D}_{i+r} \rangle$ for $r=1,2$ and $p=1,2,3$.} \label{fig:cf_gamma} \end{figure} In Fig.~\ref{fig:cf_gamma}c, we compare the spinon and dimer correlation lengths $\xi_\bullet^p$ for the different $p=1,2,3$. We find a surprising behavior: While the curves for $p=1$ and $p=2$ show qualitatively similar behavior (with increased correlation length for $p=2$), and display a decrease of correlations with growing $\gamma$, the $p=3$ curve exhibits the opposite behavior. Even more notably, while at the Heisenberg point, correlations keep increasing with $p$ (consistent with a gapless phase), in the small $\gamma$ regime the correlations decrease again, speaking against a long-range ordered or critical system. The behavior for $p=1,2$ can be qualitatively explained from the way the $Q(\alpha)$ act (cf.\ the next section where the optimal $\alpha$ are discussed): $Q^\triangleright(\alpha_1)$ acts on the strong right triangles. As it decreases the energy of the latter, $\alpha_1$ will increase with growing anisotropy. At the same time, the $Q$'s create longer-range singlets, which should give rise to longer-range correlations: Correlation functions are obtained from overlaps of singlet configurations, weighted by the number $\ell$ of singlets involved as $2^{-\ell/2}$, and longer-range singlets allow two points at a given distance to be connected with smaller $\ell$ and thus larger weight~\footnote{Note that this is only a qualitative argument, as it does not take into account cancellations due to different phases, or the scaling of the number of loop patterns with the distance.}. The additional increase in correlation length for $p=2$ can be explained by the presence of the $Q^\triangleleft(\alpha_2)$ layer which gives rise to additional longer-range singlets before the application of $Q^\triangleright(\alpha_1)$. But why does the behavior change for $p=3$?
As we discuss in more detail in the next section, the role of the topmost $Q$'s is to adjust the energy, as they shift the weight between spin $\frac12$ and $\tfrac32$ right before applying the Hamiltonian; the optimal value of the corresponding $\alpha_i$ -- and thus the amount of correlations they create -- is thus governed by immediate energetic considerations (i.e., the overlap with the spin $\tfrac12$ space). The lower-lying $Q$ layers, on the other hand, are not directly relevant for the energetics -- rather, their job is to set up the underlying wavefunction by creating longer-range singlets in a way where the topmost layers can produce the best possible energies for left- and right-pointing triangles simultaneously. Thus, it is only with the lower layers $i\ge3$ that the $Q$'s primarily serve the purpose of creating the right type of long-range singlets and correlations, rather than just tuning the value of the energy. It therefore seems plausible that the $p=3$ behavior of the correlations is closer to the true behavior at large $p$, and we expect this tendency to continue as we further increase $p$. Finally, let us discuss the possibility of a nematically ordered phase, as proposed in Ref.~\cite{repellin2017stability} for the strongly anisotropic limit; here, the nematic order was found to break rotational symmetry around the center of the triangles, leading to different Heisenberg energies along inequivalent links. By construction, our ansatz keeps all symmetries of the Hamiltonian and thus cannot explicitly break this symmetry. On the other hand, if nematic order were favored we would expect the system to form a long-range ordered state, that is, an equal weight superposition of all three nematically ordered states. 
This long-range order should be reflected in a diverging correlation length, and thus, the absence of any such divergence in the dimer-dimer correlations (which in fact rather decrease) speaks against the presence of a nematically ordered phase. To further strengthen this point, we have also considered the dimer-dimer correlations $\langle \bm D_0.\bm D_r\rangle$ at a given short distance $r$, shown in Fig.~\ref{fig:cf_gamma}e -- if nematic order were favorable, we would expect to see such correlations build up at short distance already at low $p$. However, Fig.~\ref{fig:cf_gamma}e shows no significant increase of $\langle \bm D_0.\bm D_r\rangle$ at small $\gamma$, and it in fact decreases for $p=3$. More importantly, the observed values of $\langle \bm D_0.\bm D_r\rangle$ are of the order of $10^{-4}$ even for $r=1$, while for the nematic order parameter found in Ref.~\cite{repellin2017stability}, we would expect them to be on the order of $0.03$. Overall, we find that our results show no indications of a nematic phase in the strong anisotropy limit $\gamma\ll 1$. As a final test for the $\mathbb Z_2$ topological spin liquid nature of the simplex RVB state for $p=3$ for all values of $\gamma$, we study the deconfinement order parameter for visons and the spinon correlation length as we interpolate from the NN RVB state (which is known to be a gapped $\mathbb Z_2$ topological spin liquid) to the optimal simplex RVB wavefunction along the path $\bm{\alpha}(\theta)=\theta \times \bm{\alpha^{\star}}_\gamma$, from $\theta=0$ (NN RVB) to $\theta=1$ (optimized simplex RVB), where $\bm{\alpha^{\star}}_\gamma$ denotes the optimal parameter values for a given $\gamma$. The result is shown in Fig.~\ref{fig:interpolation}. Again, we find that both quantities change smoothly, re-confirming the topological $\mathbb Z_2$ spin liquid nature of the optimal wavefunction for the whole range of $\gamma$.
\begin{figure}[t] \centering \includegraphics[width=24em]{plots/interpolation.pdf} \caption{ Curves in (a) and (b) share the same color for each $\gamma$ along the path $\bm{\alpha}(\theta) = \theta\times \bm{\alpha^{\star}}_\gamma$ for the $p=3$ simplex ansatz. (a) The deconfinement fraction of visons. (b) The length scale of spinon excitations with $\chi=144$.} \label{fig:interpolation} \end{figure} \subsection{Structure of optimal wavefunction \mbox{and possible generalizations}} The fact that our simplex RVB ansatz encompasses only very few parameters with a clear interpretation allows us to directly study how the structure of the optimal wavefunction changes as we vary $\gamma$ and increase $p$. We recall that our ansatz [Eq.~\eqref{eq:simplex-ansatz}] was of the form \begin{equation} \left| \text{RVB}(\bm{\alpha})\right\rangle = Q^{\triangleright}(\alpha_1) Q^{\triangleleft}(\alpha_2)\cdots Q^{*}(\alpha_p) \left| \text{NN RVB}\right\rangle, \end{equation} where $Q^{\bullet}(\alpha_i)$, $\bullet\in\{\triangleleft,\triangleright\}$, projects onto the spin-$1/2$ subspace of the corresponding triangles for $\alpha_i=1$ and acts trivially for $\alpha_i=0$ -- that is, it lowers the energy of those triangles as $\alpha_i$ approaches $1$. At the same time, it increases the number of longer-range singlets, as it acts by permuting the singlets; following Eq.~\eqref{eq:perm-is-rotate}, one can argue that the largest number of singlets is permuted at $\alpha=3$, though this has to be taken with care due to the large number of linear dependencies of different long-range singlet patterns, as well as cancellations in the singlet range growth from permutations on adjacent sites. Indeed, the fact that the $Q$'s arise from trotterizing the imaginary time evolution implies that a certain amount of such long-range singlets is \emph{required} to obtain a good variational wavefunction.
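The two stated limits of the $Q$ gates (identity at $\alpha_i=0$, projector onto the spin-$\tfrac12$ subspace of a triangle at $\alpha_i=1$) are captured by the parameterization $Q(\alpha)=P_{1/2}+(1-\alpha)P_{3/2}$; this single-triangle numpy sketch is our own illustration of that reading, not code from the paper:

```python
import numpy as np

# Single-site spin-1/2 operators
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def site_op(op, site, n=3):
    """Embed a single-site operator at position `site` of an n-site triangle."""
    out = np.eye(1, dtype=complex)
    for j in range(n):
        out = np.kron(out, op if j == site else np.eye(2))
    return out

# Total spin operator S_tot^2 on the three sites of one triangle;
# its eigenvalues S(S+1) are 3/4 (spin 1/2) and 15/4 (spin 3/2)
Stot = [sum(site_op(s, i) for i in range(3)) for s in (sx, sy, sz)]
S2 = sum(S @ S for S in Stot)

# Projector onto the total spin-1/2 subspace (rank 4, since
# 1/2 x 1/2 x 1/2 = 1/2 + 1/2 + 3/2)
evals, evecs = np.linalg.eigh(S2)
cols = [k for k in range(8) if abs(evals[k] - 0.75) < 1e-8]
P_half = sum(np.outer(evecs[:, k], evecs[:, k].conj()) for k in cols)

def Q(alpha):
    """Q(alpha) = P_{1/2} + (1 - alpha) P_{3/2}: identity at alpha = 0,
    projector onto spin 1/2 at alpha = 1, and a sign flip on the
    spin-3/2 subspace for alpha > 1."""
    return P_half + (1 - alpha) * (np.eye(8) - P_half)
```

In particular, for $\alpha>1$ the spin-$\tfrac32$ amplitude changes sign, matching the discussion of the optimal $\alpha_3>1$ below.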
\begin{figure}[t] \centering \includegraphics[width=23em]{plots/simplex_params_splitted.pdf} \caption{Optimal values of variational parameters for the simplex RVB ansatz for $p=1,2,3$. } \label{fig:params_conv} \end{figure} Fig.~\ref{fig:params_conv} shows the optimal values of $\{\alpha_i\}_{i=1}^p$ for $p=1,2,3$, as a function of the breathing anisotropy parameter $\gamma$. For $p=1$, we find that for maximum anisotropy $\gamma=0$, $\alpha_1=1$ -- this is expected, as it forces all strong triangles to have spin $1/2$ and thus minimum energy, and the state of the weak triangles is irrelevant for $\gamma=0$. As we increase $\gamma$, we observe that the value of $\alpha_1$ decreases, increasing the probability of the weak triangles to have spin $1/2$. Remarkably, however, we see that the optimal value of $\alpha_1$ even at the symmetric point $\gamma=1$ is significantly above $0$: This can be understood from the fact that $Q$ in the simplex RVB ansatz acts not only by shifting the weights of defects between inequivalent triangles in the RVB, but at the same time creates energetically favorable longer-range singlets. Note, however, that those singlets are created by decreasing, rather than increasing, $\alpha_1$, suggesting that for $p=1$, the amount of longer-range singlets at the Heisenberg point is smaller than for $\gamma\ll1$. For $p=2$, the behavior of $\alpha_1$ closely resembles that for $p=1$, but with a smaller change (i.e., a larger value of $\alpha_1$) towards $\gamma=1$. The correspondingly lower energy gain on the left-pointing triangles is compensated by $Q^\triangleleft(\alpha_2)$, which lowers the energy of the left-pointing triangles prior to the application of $Q^\triangleright(\alpha_1)$, while in addition allowing for the creation of next-nearest-neighbor (NNN) singlets, thus being energetically favorable. For $p=3$, we make a similar observation: The curves for $\alpha_1$ and $\alpha_2$ are again quite close to the $p=2$ case.
Now, the value of $\alpha_2$ for small $\gamma$ has increased -- giving the weak left-pointing triangles a lower energy, but also creating longer-range singlets. The energy gain of the weak triangles is now compensated by biasing the system towards strong triangles using $\alpha_3$. An interesting point to note is that $\alpha_3>1$, unlike $\alpha_1$ and $\alpha_2$: That is, in the first layer applied to the NN RVB, it is now favorable to flip the sign of the spin-$3/2$ space or -- to the extent the picture of Eq.~(\ref{eq:perm-is-rotate}) as creating longer-range singlets is correct -- to create a larger fraction of longer-range singlets at the expense of not lowering the energy of the strong $\triangleright$ triangles as much as possible. Indeed, the latter interpretation is plausible, given that longer-range singlets are overall energetically favorable, and the immediate energetics is taken care of by $Q^\triangleright(\alpha_1)$. If we were to follow this reasoning, we would expect further layers to also have $\alpha_i>1$, $i\ge3$; this suggests that the qualitative change in the behavior of order parameters and correlations occurring at $p=3$ will persist for larger values of $p$. Given the observation that the lowest layer (i.e., $Q^\bullet(\alpha_p)$) significantly biases the NN RVB towards configurations with no defects on one kind of triangle, it seems plausible that a modification of the NN RVB layer in a way which biases it towards configurations with fewer defects on the suitable triangles should further improve the ansatz.
This can be done following the idea of the original simplex RVB paper~\cite{poilblanc2013simplex}: We modify the right-pointing triangular tensor \eqref{eq:eps} of the NN RVB as \begin{equation}\label{eq:eps2} \cbox{2.5}{eps}\ = (1-\beta) \delta_{i2}\delta_{j2}\delta_{k2} + \varepsilon_{ijk}\ , \end{equation} where a parameter $\beta>0$ ($\beta<0$) effectively shifts the amplitude of defect configurations towards left-pointing (right-pointing) triangles. Importantly, this modification does not lead to an increase in the bond dimension. Subsequently, we can apply the $Q$'s as before, \begin{equation}\label{eq:simplex-ansatz-2} Q^{\triangleright}(\alpha_1) Q^{\triangleleft}(\alpha_2)\cdots Q^{*}(\alpha_p) \left| \text{NN RVB} (\beta)\right\rangle\ , \end{equation} to obtain an enhanced simplex RVB ansatz with one additional parameter. We have tested this ansatz and found that it does not lead to better variational energies, except in the case $p=1$. We attribute this to two facts: As discussed, the energetics is predominantly taken care of by the top $Q$ layers (in particular $\alpha_1$ and $\alpha_2$), rather than the low-lying $\beta$; on the other hand, the lower layers mostly serve the purpose of creating longer-range singlets, whereas $\beta$, while changing the weight of the spin-$1/2$ space on the corresponding triangles, does not give rise to longer-range singlets (and in fact reduces the spinon correlation in the system). As mentioned earlier, we can consider the $Q$ operators as gates which are controlled by a ``control qubit'' $\ket{0}+\alpha_i\ket{1}$. This enables us to interpret the simplex ansatz as a variational optimization over $p$ control qubits chosen from the manifold of product states (Fig.~\ref{fig:kagome}c). In principle, we can further enrich the simplex ansatz \eqref{eq:simplex-ansatz} by allowing the control qubits to be in a general $p$-qubit state, i.e., correlating the $\alpha_i$ of the different layers.
This provides a significantly enlarged simplex manifold, even though it does not allow one to increase the range of the singlets. However, we have found that for the computationally feasible values of $p$, the optimization over the manifold of general $p$-qubit control states does not lead to an improvement in energy as compared to the optimization over the manifold of $p$-qubit product states. \section{Summary and outlook}\label{sec:summary} In this paper, we have introduced a simple yet powerful ansatz for the kagome Heisenberg antiferromagnet (KHAFM) with breathing anisotropy, termed simplex RVB. Our ansatz is physically motivated from algorithmic cooling, and effectively consists of $p/2$ imaginary time evolution layers with optimized step sizes applied to the NN RVB, approaching the true ground state as $p\to\infty$. It yields simple few-parameter families of wavefunctions with a clear physical interpretation in terms of longer-range singlets, which are energetically favorable. The ansatz has a simple PEPS representation, which makes it amenable to numerical simulations and an in-depth analysis of its order. We have analyzed the optimal simplex RVB ansatz for $p=1,2,3$ for the breathing KHAFM, with a focus on the strong anisotropy limit, and found that already for $p=2$ it improves over existing VMC results, while for $p=3$ it clearly outperforms them, even though it requires significantly fewer parameters. We also find that for $p=3$, our energies are rather close to extrapolated DMRG energies. It is thus probable that with just a few more layers, the simplex RVB will be fully competitive with DMRG simulations, which is remarkable given the small number of parameters. We have investigated the nature of the order in the optimized simplex RVB for the breathing Heisenberg model, using a wide range of probes based on the explicit PEPS description and the underlying entanglement symmetries.
We find that for the whole parameter regime, our ansatz yields a gapped $\mathbb Z_2$ topological spin liquid for the accessible values of $p$. In the strongly anisotropic regime, we find that the correlations saturate as we increase $p$, thus exhibiting no signs of the long-range order which one would expect, e.g., for a nematically ordered phase. On the other hand, at the Heisenberg point, our results show a clear tendency to larger correlation lengths as $p$ increases, which is consistent with a critical DSL phase at the Heisenberg point; both the improvement in energy and the growth of correlations with $p$ point to the relevance of long-range fluctuating singlets for the kagome Heisenberg antiferromagnet. In order to discriminate between a gapped and a gapless spin liquid, in particular at the Heisenberg point, it would be interesting to compare the simplex RVB ansatz with a variant where one starts from a gapless $\mathrm{U}(1)$ spin liquid rather than the gapped NN RVB. One idea which fits well with the PEPS picture is to replace the $\mathbb Z_2$-invariant PEPS tensors by $\mathrm{U}(1)$-invariant ones (which we expect to give a critical wavefunction), e.g.\ by omitting those $\mathbb Z_2$-invariant configurations which break $\mathrm{U}(1)$. We describe and test such an ansatz in Appendix~\ref{app:gaplessrvb}. However, while the resulting ansatz indeed yields a gapless spin liquid, we find that the corresponding wavefunction is energetically unfavorable for the Heisenberg model. The reason for this can be found in the general approach of the construction: Since different $\mathbb Z_2$ configurations map to different singlet patterns, removing configurations amounts to omitting certain singlet patterns and thus breaks the lattice symmetry, which induces doping with visons and ultimately closes the vison gap. However, as we have observed, the dominating correlation (and thus gap) at the Heisenberg point is given by spinon correlations.
Thus, a suitable ansatz would have to drive the system into criticality through doping with spinons. To this end, we would have to resort to a different approach and allow for longer-range singlets, e.g.\ by introducing teleportation bonds in the PEPS~\cite{wang2013constructing}, which break the $\mathbb Z_2$ symmetry. We leave the study of such an ansatz for future work. \vspace*{3.5ex} \begin{acknowledgments} We thank Ji-Yao Chen, Henrik Dreyer, and Frank Pollmann for valuable discussions, and C\'ecile Repellin for providing us with the DMRG data of Ref.~\cite{repellin2017stability}. MI and NS acknowledge support by the European Union's Horizon 2020 programme through the ERC Starting Grant WASCOSYS (Grant No.~636201) and from the DFG (German Research Foundation) under Germany's Excellence Strategy (EXC-2111 -- 390814868). Computations have been carried out on the TQO cluster of the Max-Planck-Institute of Quantum Optics. DP acknowledges support by the TNSTRONG ANR-16-CE30-0025 and TNTOP ANR-18-CE30-0026-01 grants awarded by the French Research Council. This work was granted access to the HPC resources of the CALMIP supercomputing center under the allocation P1231. \end{acknowledgments}
\section{Introduction} \label{sec:intro} One of the mysteries of nature is the origin of mass scales. At least in QCD, we have an answer: the hadronic mass scale can arise when the gauge coupling evolves to large values such that the fundamental constituents, the quarks, condense into bound states. From dimensional transmutation, the proton mass can even be expressed in terms of the Planck mass $m_{Pl}$ via $m_{proton}\simeq m_{Pl}\exp (-8\pi^2/g^2)$, which gives the right answer for $g^2\sim 1.8$. Another mass scale begging for explanation is that associated with weak interactions: $m_{weak}\simeq m_{W,Z,h}\sim 100$ GeV. In the Standard Model (SM), the Higgs mass is quadratically divergent, so one expects $m_h$ to blow up to the highest mass scale $\Lambda$ for which the SM is the viable low energy effective field theory (EFT). Supersymmetrization of the SM eliminates the Higgs mass quadratic divergences, so any remaining divergences are merely logarithmic\cite{Witten:1981nf,Kaul:1981wp}: the minimal supersymmetric Standard Model, or MSSM\cite{WSS}, can be viable up to the GUT or even Planck scales. In addition, the weak scale emerges as a derived consequence of the visible sector SUSY breaking scale $m_{soft}$. So the concern for the magnitude of the weak scale is transferred to a concern for the origin of the soft breaking scale. In gravity-mediated SUSY breaking models\footnote{In days of yore, gauge mediated SUSY breaking (GMSB) models\cite{Dine:1995ag} were associated with dynamical SUSY breaking in that they allowed much lighter gravitinos. In GMSB models, the trilinear soft term $A_0$ is expected to be tiny, leading to too light a Higgs boson mass unless soft terms are in the 10-100 TeV regime\cite{Arbey:2011ab,Draper:2011aa,Baer:2012uya}. Such large soft terms then lead to highly unnatural third generation scalars.
For this reason, we focus on DSB in a gravity-mediation context\cite{Bose:2012gq}.}, it is popular to impose spontaneous SUSY breaking (SSB) at tree level in the hidden sector, for instance via the SUSY breaking Polonyi superpotential\cite{Polonyi:1977pj}: $W=m_{hidden}^2 (\hat{h}+\beta )$ where $\hat{h}$ is the lone hidden sector field. For $\beta =(2-\sqrt{3})m_P$ (with $m_P$ the reduced Planck mass $m_P\equiv m_{Pl}/\sqrt{8\pi}$ and $m_{hidden}\sim 10^{11}$ GeV), one determines $m_{soft}\sim m_{3/2}\sim m_{weak}$. Thus, the exponentially-suppressed hidden sector mass scale must be put in by hand, so SSB can apparently only accommodate, but not explain, the magnitude of the weak scale.\footnote{A related problem is how the SUSY conserving $\mu$ parameter is {\it also} generated at or around the weak scale. A recent explanation augments the MSSM by a Peccei-Quinn (PQ) sector plus a $\mathbb{Z}_{24}^R$ discrete $R$-symmetry\cite{Lee:2011dya} which generates a gravity-safe accidental approximate $U(1)_{PQ}$ that solves the strong $CP$ and SUSY $\mu$ problems, and leads to an axion decay constant $f_a\sim m_{hidden}$ whilst $\mu\sim m_{weak}$\cite{Baer:2018avn}. A recent review of 20 solutions to the SUSY $\mu$ problem is given in Ref. \cite{Bae:2019dgg}.} A more attractive mechanism follows the wisdom of QCD and seeks to generate the SUSY breaking scale from dimensional transmutation, which automatically yields an exponential suppression. This is especially attractive in string models where the Planck scale is the only mass scale available.
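The mass-scale arithmetic behind these estimates is easy to check explicitly. The following sketch is our own illustration (the rounded Planck-scale values are standard inputs, not taken from the text): the proton-mass example from the introduction, the Polonyi-type soft scale, and the hidden coupling needed if $m_{soft}$ itself arises by transmutation as $m_P\exp(-8\pi^2/g^2)$:

```python
import math

m_Pl = 1.2e19   # Planck mass in GeV (approximate)
m_P  = 2.4e18   # reduced Planck mass m_Pl/sqrt(8*pi) in GeV (approximate)

def transmute(M, g2):
    """Dimensional transmutation: M * exp(-8 pi^2 / g^2)."""
    return M * math.exp(-8 * math.pi**2 / g2)

# Intro example: the proton mass from the Planck mass at g^2 ~ 1.8
m_proton = transmute(m_Pl, 1.8)           # comes out of order 1 GeV

# Polonyi-type gravity mediation: m_soft ~ m_hidden^2 / m_P
m_soft = (1e11)**2 / m_P                  # a few TeV, of order the weak scale

# If the soft scale itself arises by transmutation,
# m_soft ~ m_P exp(-8 pi^2 / g^2), then TeV-scale soft terms need
g2_TeV = 8 * math.pi**2 / math.log(m_P / 1e3)
```

The last line illustrates how the mass-scale selection problem becomes a selection problem for the hidden sector coupling: an $O(1)$ change in $g^2$ moves $m_{soft}$ by many orders of magnitude.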
Then one could arrange for dynamical SUSY breaking (DSB)\cite{Dimopoulos:1981au,Witten:1981nf,Dine:1981za} (for reviews, see \cite{Poppitz:1998vd,Shadmi:1999jy,Dine:2010cv}) wherein SUSY breaking arises non-perturbatively.\footnote{The DSB scenario has been made more plausible in recent years with the advent of {\it metastable} DSB\cite{Intriligator:2006dd,Dine:2010cv}.} Some possibilities include hidden sector gaugino condensation\cite{Ferrara:1982qs}, where a hidden sector gauge group such as $SU(N)$ becomes confining at the scale $\Lambda_{\GC}$ and a gaugino condensate occurs with $\langle\lambda\lambda\rangle\sim \Lambda_{\GC}^3$ leading to SUSY breaking with soft terms $m_{soft}\sim \Lambda_{\GC}^3/m_P^2$. The associated hidden mass scale\cite{Affleck:1984mf} is given by \be m_{hidden}^2\sim m_P^2\exp (-8\pi^2/g_{hidden}^2) \ee where then $m_{hidden}^2\sim \Lambda_{\GC}^3/m_P$. Another possibility is non-perturbative SUSY breaking via instanton effects which similarly leads to an exponential suppression of mass scales\cite{Affleck:1983rr}. Of course, now the mass scale selection problem has been transferred to the selection of an appropriate value of $g_{hidden}^2$. A solution to the origin of mass scales also arises within the string landscape picture\cite{Bousso:2000xa,Susskind:2003kw}. This picture makes use of the vast array of string vacua found in IIB flux compactifications\cite{Douglas:2006es}. Some common estimates from vacuum counting\cite{Ashok:2003gk} are $N_{vac}\sim 10^{500}-10^{272,000}$\cite{Denef:2004ze,Taylor:2015xtz}. The landscape then provides a setting for Weinberg's anthropic solution to the cosmological constant problem\cite{Weinberg:1987dv}: the value of $\Lambda_{\cc}$ is expected to be as large as possible such that the expansion rate of the early universe allows for galaxy condensation, and hence the {\it structure formation} that seems essential for the emergence of life. 
Can similar reasoning be applied to the origin of the weak scale, or better yet, the origin of the SUSY breaking scale? This issue was initially explored in Refs. \cite{Susskind:2004uv}, \cite{Douglas:2004qg} and \cite{ArkaniHamed:2005yv}. Here, one assumes a fertile patch of the landscape of vacua where the MSSM is the visible sector low energy EFT. The differential distribution of vacua is expected to be of the form \be dN_{vac}[m_{hidden}^2,m_{weak},\Lambda_{\cc}] = f_{\SUSY}\cdot f_{\EWSB}\cdot f_{\cc}\cdot dm_{hidden}^2 \ee where $f_{\SUSY}(m_{hidden}^2)$ contains the distribution of SUSY breaking mass scales expected on the fertile patch and $f_{\EWSB}$ contains the anthropic weak scale selection criteria. Denef and Douglas have argued that cosmological constant selection acts independently and hence does not affect landscape selection of the SUSY breaking scale\cite{Denef:2004ze}. For SSB, the SUSY breaking $F_i$- and $D_\alpha$-terms are expected to be uniformly distributed across the landscape, the former as complex numbers and the latter as real numbers\cite{Douglas:2004qg}. This would lead, in the case of spontaneous SUSY breaking, to a power law distribution of soft terms \be f_{\SUSY}^{\SSB}\sim m_{soft}^n \ee where $n=2n_F+n_D-1$, $n_F$ is the number of hidden sector SUSY breaking $F$-fields, and $n_D$ is the number of hidden sector $D$-breaking fields contributing to the overall SUSY breaking scale. Such a distribution would tend to favor SUSY breaking at the highest possible mass scales for $n\ge 1$. Also, Broeckel {\it et al.}\cite{Broeckel:2020fdz} analyzed the distributions of SUSY breaking scales from vacua for KKLT\cite{Kachru:2003aw} and LVS\cite{Balasubramanian:2005zx} flux compactifications and found for the KKLT model that $f_{\SUSY}\sim m_{soft}^2$ while the LVS model gives $f_{\SUSY}\sim \log (m_{soft})$\cite{Baer:2020dri}.
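The exponent $n=2n_F+n_D-1$ is just the statement that $m_{soft}$ plays the role of a radius in $d=2n_F+n_D$ real dimensions, for which a uniform draw gives a radial density $\propto r^{d-1}$. A Monte Carlo sketch of this counting (our own illustration, in arbitrary units with each $F$- and $D$-component drawn uniformly):

```python
import numpy as np

rng = np.random.default_rng(0)

def msoft_samples(n_F, n_D, n_samples=200_000):
    """Draw hidden-sector F-terms (complex) and D-terms (real) uniformly and
    return m_soft ~ sqrt(sum |F_i|^2 + sum D_a^2), kept inside the unit ball."""
    d = 2 * n_F + n_D                      # real dimensions being scanned
    x = rng.uniform(-1, 1, size=(n_samples, d))
    r = np.linalg.norm(x, axis=1)
    return r[r <= 1.0]

# With density ~ m^n and n = 2 n_F + n_D - 1, one has P(m_soft < t) = t^(n+1)
for n_F, n_D in [(1, 0), (1, 1), (2, 0)]:
    n = 2 * n_F + n_D - 1
    r = msoft_samples(n_F, n_D)
    print(n_F, n_D, np.mean(r < 0.5), "vs", 0.5 ** (n + 1))
```

The draw towards the largest soft terms for $n\ge1$ is visible directly: most of the sampled volume sits at large radius.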
For the anthropic selection, an initial guess was to take $f_{\EWSB}=(m_{weak}/m_{soft})^2$, corresponding to a simple fine-tuning factor which invokes a penalty for soft terms which stray too far beyond the measured value of the weak scale. As emphasized in Ref. \cite{Baer:2016lpj} and \cite{Baer:2017uvn}, this breaks down in a number of circumstances: 1. soft terms leading to charge-or-color-breaking (CCB) vacua must be vetoed, not just penalized; 2. soft terms for which EW symmetry doesn't even break also ought to be vetoed (we label these as noEWSB vacua); 3. for some soft terms, the larger they get, the {\it smaller} becomes the derived value of the weak scale. To illustrate this latter point, we write the {\it pocket universe} (PU)\cite{Guth:1999rh} value of the weak scale in terms of the pocket-universe $Z$-boson mass $m_Z^{\PU}$ and use the MSSM Higgs potential minimization conditions to find: \be (m_Z^{\PU})^2/2=\frac{m_{H_d}^2+\Sigma_d^d -(m_{H_u}^2+\Sigma_u^u)\tan^2\beta} {\tan^2\beta -1}-\mu^2\simeq -m_{H_u}^2-\Sigma_u^u-\mu^2 \ee where $m_{H_{u,d}}^2$ are Higgs soft breaking masses, $\mu$ is the superpotential Higgsino mass arising from whatever solution to the SUSY $\mu$ problem is invoked, and $\tan\beta\equiv v_u/v_d$ is the ratio of Higgs field vevs. The $\Sigma_u^u$ and $\Sigma_d^d$ contain over forty one-loop radiative corrections, listed in the Appendix of Ref. \cite{rns}. The soft term $m_{H_u}^2$ must be driven to negative values at the weak scale in order to break EW symmetry. If its high scale value is small, then it is typically driven deep negative so that compensatory fine-tuning is needed in the $\mu$ term. If $m_{H_u}^2$ is too big, then it doesn't even run negative and EW symmetry is unbroken. The landscape draw to large soft terms pulls $m_{H_u}^2$ large enough that EW symmetry barely breaks, corresponding to a natural value of $m_{H_u}^2$ at the weak scale.
(This can be considered a landscape selection mechanism for tuning the high scale value of $m_{H_u}^2$ to such large values that its weak scale value becomes natural.) Also, for large negative values of the trilinear soft term $A_t$, large cancellations occur in $\Sigma_u^u (\tilde t_{1,2})$, leading to more natural $\Sigma_u^u$ values and a large $m_h\sim 125$ GeV due to large stop mixing in its radiative corrections. Also, large values of the first/second generation soft scalar masses $m_0(1,2)$ drive the stop soft terms to small values under RG running, thus also making the spectra more natural\cite{Baer:2019cae}. The correct anthropic condition, we believe, was set down by Agrawal, Barr, Donoghue and Seckel (ABDS) in Ref. \cite{Agrawal:1997gf}. In that work, they show that, for variable values of the weak scale, nuclear physics is disrupted if the pocket-universe value of the weak scale $m_{weak}^{\PU}$ deviates from our measured value $m_{weak}^{\OU}$ by a factor of $2-5$. For values of $m_{weak}^{\PU}$ outside this range, nuclei and hence atoms as we know them would not form. To be in accord with this {\it atomic principle}, we specifically require $m_{weak}^{\PU}<4 m_{weak}^{\OU}$. In the absence of fine-tuning of $\mu$, this requirement is then the same as requiring the electroweak fine-tuning measure\cite{ltr,rns} $\Delta_{\EW}<30$. Thus, we require \be f_{\EWSB}=\Theta (30-\Delta_{\EW}) \label{eq:fewsb} \ee as the anthropic condition while also vetoing CCB and noEWSB vacua. For the case of {\it dynamical} SUSY breaking, the SUSY breaking scale is expected to be of the form $m_{hidden}^2\sim m_P^2\exp (-8\pi^2/g_{hidden}^2)$ where in the case of gaugino condensation, $g_{hidden}$ is the coupling constant of the confining hidden sector gauge group.
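Because $m_{soft}\sim m_P\exp(-8\pi^2/g_{hidden}^2)$ depends exponentially on the coupling, a uniform scan of $g_{hidden}^2$ over its confining range spreads $m_{soft}$ across many decades, with no decade strongly preferred. A small Monte Carlo sketch of this (our own illustration, with $m_P$ rounded):

```python
import numpy as np

rng = np.random.default_rng(1)
m_P = 2.4e18   # reduced Planck mass in GeV (approximate)

# Uniform scan of the hidden-sector coupling over the confining regime
g2 = rng.uniform(1.0, 2.0, size=100_000)
m_soft = m_P * np.exp(-8 * np.pi**2 / g2)   # m_soft ~ m_hidden^2 / m_P, in GeV

decades = np.log10(m_soft)
print(f"m_soft spans about {decades.max() - decades.min():.0f} decades")
# The per-decade occupancy varies only by the mild factor (g2_max/g2_min)^2,
# in sharp contrast to a power-law draw f ~ m^n, which concentrates
# essentially all weight in the top decade of the scanned range.
```

This is the sense in which a uniform scan of $g_{hidden}^2$ translates into a roughly decade-by-decade uniform (i.e., $\sim 1/m_{soft}$) distribution of soft terms.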
It is emphasized by Dine {\it et al.}\cite{Banks:2003es,Dine:2004is,Dine:2015xga} and by Denef and Douglas\cite{Denef:2004cf} that the coupling $g_{hidden}^2$ is expected to scan uniformly on the landscape. According to Fig.~\ref{fig:mvsgs}, for $g_{hidden}^2$ values in the confining regime $\sim 1-2$, we expect a uniform distribution of soft breaking terms on a log scale: {\it i.e.} each possible decade of values for $m_{soft}$ is as likely as any other decade. Thus, with $m_{soft}\sim m_{hidden}^2/m_P\sim \Lambda_{\GC}^3/m_P^2$, we would expect \be f_{\SUSY}^{\DSB}\sim 1/m_{soft} \ee which provides a uniform distribution of $m_{soft}$ across the decades of possible values\footnote{Dine\cite{Banks:2003es,Dine:2004is,Dine:2005iw,Dine:2015xga} actually finds $f_{\SUSY}\sim 1/[m_{soft}\log (m_{soft})]$ which is also highly uniform across the decades. We have checked that Dine's distribution gives even softer mass distributions than the $1/m_{soft}$ which we use.}. Such a distribution of course favors the {\it lower} range of soft term values. \begin{figure}[tbh] \begin{center} \includegraphics[height=0.4\textheight]{mu_g2.png} \caption{Expected SUSY breaking scale $m_{hidden}$ vs. hidden sector coupling $g^2$ from dynamical SUSY breaking. \label{fig:mvsgs}} \end{center} \end{figure} \section{Results} \label{sec:results} Next, we will present the results of calculations of the string landscape probability distributions for Higgs and sparticle masses under the assumption of $f_{\SUSY}^{\DSB}=1/ m_{soft}$ along with Eq. \ref{eq:fewsb} for $f_{\text{EWSB}}$. Our results will be presented within the gravity-mediated three extra parameter non-universal Higgs model NUHM3 with parameter space given by\cite{nuhm2,nuhm22,nuhm23,nuhm24,nuhm25,nuhm26} \be m_0(1,2),\ m_0(3),\ m_{1/2},\ A_0,\ \tan\beta,\ \mu,\ m_A\ \ \ \ \text{(NUHM3)}. 
\ee We adopt the Isajet\cite{isajet} code for calculation of the Higgs and superparticle mass spectrum\cite{Baer:1994nc} based on 2-loop RGE running\cite{Martin:1993zk} along with sparticle and Higgs masses calculated at the RG-improved 1-loop level\cite{Pierce:1996zz}. To compare our results against similar calculations presented in Ref. \cite{Baer:2017uvn} (but using $f_{\SUSY}=m_{soft}^n$), we will scan over the same parameter space \begin{itemize} \item $m_0(1,2):\ 0.1 - 60$ TeV, \item $m_0(3):\ 0.1 - 20$ TeV, \item $m_{1/2}:\ 0.5 - 10$ TeV, \item $A_0:\ -50 -\ 0$ TeV, \item $m_A:\ 0.3 - 10$ TeV, \end{itemize} using the $f_{\SUSY}^{\DSB}$ distribution for soft terms with $\mu = 150$ GeV while $\tan\beta:3-60$ is scanned uniformly. The goal here was to choose upper limits on our scan parameters which lie beyond the upper limits imposed by the anthropic selection from $f_{\text{EWSB}}$. Lower limits are motivated by current LHC search limits, but must also stay away from the singularity in the $f_{\SUSY}^{\DSB}$ distribution. Our final results will hardly depend on the chosen value of $\mu$ so long as $\mu$ is within a factor of a few of $m_{W,Z,h}\sim 100$ GeV. We expect the different classes of soft terms to scan independently, as discussed in Ref. \cite{Baer:2020vad}. We will compare the $f_{\SUSY}^{\DSB}$ results against the $f_{\SUSY}^{\SSB}$ results from Ref. \cite{Baer:2017uvn} using an $n=2$ power-law draw. In Fig. \ref{fig:m0mhf}, we first show probability distributions for various soft SUSY breaking terms for $f_{\SUSY}^{\DSB}$ and also for $f_{\SUSY}^{\SSB}=m_{soft}^2$. In frame {\it a}), we show the distributions versus first/second generation soft breaking scalar masses $m_0(1,2)$. We see that the old SSB $n=2$ result gives a distribution peaked at $m_0(1,2)\sim 25$ TeV with a tail extending to over 40 TeV.
This distribution reflects the mixed decoupling/quasi-degeneracy landscape solution to the SUSY flavor and CP problems\cite{Baer:2019zfl}. In contrast, the distribution from $f_{\SUSY}^{\DSB}$ peaks at the lowest allowed $m_0(1,2)$ values albeit with a tail extending out beyond 10 TeV. Thus, we would expect relatively light, LHC accessible, squarks and sleptons from gravity-mediation with DSB in a hidden sector. In frame {\it b}), we show the distribution in third generation soft mass inputs: $m_0(3)$. Here also the soft terms peak at the lowest values, but this time the tail extends only to $\sim 4$ TeV (lest $\Sigma_u^u (\tilde t_{1,2})$ becomes too large). In contrast, the SSB $n=2$ distribution peaks around 7 TeV. In frame {\it c}), the distribution in unified gaugino soft term $m_{1/2}$ is shown. Here again, gaugino masses peak at the lowest allowed scales for DSB while the $n=2$ distribution peaks just below 2 TeV. Finally, in frame {\it d}), we show the distribution in trilinear soft term $-A_0$. Here, the DSB distribution peaks at $-A_0\sim 0$ leading to little mixing in the stop sector and consequently lower values of $m_h$\cite{Carena:2002es,Baer:2011ab}. In contrast, the $n=2$ distribution has a double peak structure with peaks at $\sim -4$ and $-7$ TeV with a tail extending to $\sim -15$ TeV: thus, we expect large stop mixing and higher $m_h$ values from the SSB with $n=2$ case. \begin{figure}[H] \begin{center} \includegraphics[height=0.22\textheight]{m012_unilog.png} \includegraphics[height=0.22\textheight]{m03_unilog.png}\\ \includegraphics[height=0.22\textheight]{mhf_unilog.png} \includegraphics[height=0.22\textheight]{A0_unilog.png} \caption{Probability distributions for NUHM3 soft terms {\it a}) $m_0(1,2)$, {\it b}) $m_0(3)$, {\it c}) $m_{1/2}$ and {\it d}) $A_0$ from a $f_{\SUSY}^{\DSB}=1/m_{soft}$ distribution of soft terms in the string landscape with $\mu =150$ GeV. 
For comparison, we also show probability distributions for $f_{\SUSY}^{\SSB}\sim m_{soft}^2$. \label{fig:m0mhf}} \end{center} \end{figure} In Fig. \ref{fig:higgs}, we show distributions in light and heavy Higgs boson masses. In frame {\it a}), we show the $m_h$ distribution. For the DSB case, we see a peak at $m_h\sim 118$ GeV with almost no probability extending to $\sim 125$ GeV. This is in obvious contrast to the data and to the $n=2$ distribution which we see has a sharp peak at $m_h\sim 125-126$ GeV (as a result of large trilinear soft terms). In frame {\it b}), we see the distribution in pseudoscalar Higgs mass $m_A$. In the DSB case, $dP/dm_A$ peaks in the $\sim 300$ GeV range, leading to significant mixing in the Higgs sector and consequently to possibly observable deviations in the Higgs couplings (see Ref. \cite{Bae:2015nva}). In contrast, the SSB $n=2$ distribution peaks at $m_A\sim 3.5$ TeV with a tail extending to $\sim 8$ TeV. In the latter case, we would expect a decoupled Higgs sector with a very SM-like lightest Higgs scalar $h$ (as indeed the ATLAS/CMS data seem to suggest). \begin{figure}[H] \begin{center} \includegraphics[height=0.22\textheight]{Higgs_unilog.png} \includegraphics[height=0.22\textheight]{mA_unilog.png}\\ \caption{Probability distributions for light Higgs scalar mass {\it a}) $m_h$ and pseudoscalar Higgs mass {\it b}) $m_A$ from a $f_{\SUSY}^{\DSB}\sim 1/m_{soft}$ distribution of soft terms in the string landscape with $\mu =150$ GeV. For comparison, we also show probability distributions for $f_{\SUSY}^{\SSB}\sim m_{soft}^2$. \label{fig:higgs}} \end{center} \end{figure} In Fig. \ref{fig:mass}, we show predictions for various sparticle masses from the DSB and SSB $n=2$ cases. In frame {\it a}), we show the distribution in gluino mass $m_{\tilde g}$. For the DSB case, the distribution peaks around the $\sim$ TeV range while LHC search limits typically require $m_{\tilde g}\gtrsim 2.2$ TeV.
In fact, almost all parameter space of DSB is then excluded. Had we lowered the lower scan cutoff on $m_{1/2}$, the distribution would shift lower, making matters worse. The SSB $n=2$ distribution peaks at $m_{\tilde g}\sim 4-5$ TeV with a tail extending to $\sim 6$ TeV; hardly any probability is excluded by the LHC $m_{\tilde g}\gtrsim 2.2$ TeV limit. In frame {\it b}), we show the distribution in first generation squark mass $m_{\tilde u_L}$ (as a typical example of first/second generation matter scalars). The distribution from DSB peaks in the $0-3$ TeV range with a tail extending beyond 10 TeV. Coupled with the gluino distribution, most probability space would be excluded by LHC search limits from the $m_{\tilde g}$ vs. $m_{\tilde q}$ plane. The SSB $n=2$ distribution peaks above 20 TeV with a tail extending beyond 40 TeV. In frame {\it c}), we show the distribution in lighter top squark mass $m_{\tilde t_1}$. Here, we see DSB peaks around 1 TeV with a tail to $\sim 2.5$ TeV. LHC searches require $m_{\tilde t_1}\gtrsim 1.1$ TeV so that about half of probability space is excluded. For the SSB $n=2$ case, the peak shifts to $m_{\tilde t_1}\sim 1.6$ TeV so the bulk of p-space is allowed by LHC searches. Finally, in frame {\it d}), we show the distribution in heavier stop mass $m_{\tilde t_2}$. The DSB distribution peaks around $\sim 1.5$ TeV whilst the SSB $n=2$ distribution peaks around $4$ TeV. Thus, substantially heavier $\tilde t_2$ squarks are expected from SSB as compared to DSB. 
\begin{figure}[H] \begin{center} \includegraphics[height=0.22\textheight]{gl_unilog.png} \includegraphics[height=0.22\textheight]{ul_unilog.png}\\ \includegraphics[height=0.22\textheight]{t1_unilog.png} \includegraphics[height=0.22\textheight]{t2_unilog.png} \caption{Probability distributions for {\it a}) $m_{\tilde g}$, {\it b}) $m_{\tilde u_L}$, {\it c}) $m_{\tilde t_1}$ and {\it d}) $m_{\tilde t_2}$ from a $f_{\SUSY}^{\DSB}\sim 1/m_{soft}$ distribution of soft terms in the string landscape with $\mu =150$ GeV. For comparison, we also show probability distributions for $f_{\SUSY}^{\SSB}\sim m_{soft}^2$. \label{fig:mass}} \end{center} \end{figure} \section{Conclusions} \label{sec:conclude} One of the mysteries of particle physics is the origin of mass scales, especially in the context of string theory where only the Planck scale $m_P$ appears. Here, we investigated the origin of the weak scale which is presumed to arise from the scale of SUSY breaking. The general framework of dynamical SUSY breaking presents a beautiful example of the exponentially suppressed SUSY breaking scale (relative to the Planck scale) arising from non-perturbative effects such as gaugino condensation or SUSY breaking via instanton effects. The SUSY breaking scale from DSB is expected to be uniformly distributed on a log scale within a fertile patch of the string landscape with the MSSM as the low energy EFT. In this case, the probability distribution $f_{\SUSY}^{\DSB}\sim 1/m_{soft}$. Such a distribution, coupled with the ABDS anthropic window, typically leads to Higgs masses $m_h$ well below the measured 125 GeV value and many sparticles such as the gluino expected to lie below existing LHC search limits. Thus, the LHC data seem to falsify this approach. That would leave the alternative option of spontaneous SUSY breaking where instead the soft SUSY breaking distribution is expected to occur as a power law or log distribution. 
These latter cases lead to landscape probability distributions for $m_h$ that peak at $m_h\sim 125$ GeV with sparticles typically well beyond current LHC reach, but within reach of hadron colliders with $\sqrt{s}\gtrsim 30$ TeV. For perturbative, or spontaneous, SUSY breaking, apparently the magnitude of the SUSY breaking scale is set anthropically, much like the cosmological constant is: those vacua with too large a SUSY breaking scale lead to either CCB or noEWSB vacua, or vacua with such a large weak scale that it lies outside the ABDS allowed window, in violation of the atomic principle. {\it Acknowledgements:} We thank X. Tata for comments on the manuscript. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC-0009956 and U.S. Department of Energy (DoE) Grant DE-SC-0017647. The computing for this project was performed at the OU Supercomputing Center for Education \& Research (OSCER) at the University of Oklahoma (OU).
\section{Introduction} The non local character of Quantum Mechanics (\textit{QM}) has been the object of a great debate starting from the famous Einstein-Podolsky-Rosen (\textit{EPR}) paper \cite{EPR}. Consider, for instance, a quantum system made of two photons \emph{a} and \emph{b} that are in the polarization entangled state \begin{equation} |\psi>=\frac{1}{\sqrt{2}}\left(|H,H>+e^{i\phi}|V,V>\right)\label{eq:1} \end{equation} where \textit{H} and \textit{V} stand for horizontal and vertical polarization, respectively, and $\phi$ is a constant phase coefficient. The two entangled photons are created at point \textit{O}, propagate in space far away from each other (see Fig.\ref{fig:fotoni entangled}) and reach, at the same time, points \textit{A} and \textit{B}, which are equidistant from \textit{O}, as schematically drawn in Fig.\ref{fig:fotoni entangled}. \begin{SCfigure}[50] \centering \includegraphics[width=0.5\textwidth]{Figura1.jpg} \hspace{0.05in} \caption{\textit{O}: point where a pair of entangled photons (\emph{a} and \emph{b}) are created; \textit{A} and \textit{B}: points equidistant from \textit{O} ($d_{A}=d_{B}$) where the polarization of the entangled photons is measured.} \label{fig:fotoni entangled} \end{SCfigure}Suppose the polarization of the two photons is measured at the same time at points \textit{A} and \textit{B}. According to \textit{QM}, a measurement of horizontal polarization of photon \emph{a} (or \emph{b}) leads to the collapse of the entangled state to $|H,H>$; then photon \emph{b} (or \emph{a}) must also collapse to the horizontal polarization. This behaviour suggests the existence of a sort of ``action at a distance'' qualitatively similar to that introduced in the past to describe interactions between either electric charges or masses.
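The collapse argument above fixes the \textit{QM} coincidence statistics completely; as a minimal numerical sketch (Python with NumPy, function names purely illustrative and not part of any analysis code), one can verify that the state of Eq.~\eqref{eq:1} with $\phi=0$ yields the polarization correlation $E(\alpha,\beta)=\cos 2(\alpha-\beta)$:

```python
import numpy as np

def entangled_state(phi=0.0):
    # |psi> = (|HH> + e^{i phi} |VV>)/sqrt(2); basis order: HH, HV, VH, VV
    return np.array([1.0, 0.0, 0.0, np.exp(1j * phi)]) / np.sqrt(2)

def pol_vec(angle, transmitted=True):
    # Polarizer eigenvector at `angle` (transmitted) or its orthogonal (blocked)
    if transmitted:
        return np.array([np.cos(angle), np.sin(angle)])
    return np.array([-np.sin(angle), np.cos(angle)])

def joint_prob(psi, a, b, out_a, out_b):
    # Probability of outcomes out_a/out_b for analyzers at angles a and b
    proj = np.kron(pol_vec(a, out_a), pol_vec(b, out_b))
    return abs(np.vdot(proj, psi)) ** 2

def correlation(psi, a, b):
    # E(a, b) = P(++) + P(--) - P(+-) - P(-+)
    return sum((1 if oa == ob else -1) * joint_prob(psi, a, b, oa, ob)
               for oa in (True, False) for ob in (True, False))
```

For instance, analyzers at relative angle $\pi/8$ give $E=\cos(\pi/4)\simeq0.71$, the value entering the maximal violation of CHSH-type inequalities.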
However, according to Maxwell's electromagnetic theory and to Einstein's General Relativity, it is now commonly accepted that interactions between electric charges or masses are not instantaneous but occur through exchange of signals (photons or gravitons). This means that classical physical phenomena are described by local models. Many physicists are dissatisfied with the non local character of \textit{QM} and alternative local models have been proposed to explain quantum correlations. The simplest way to explain the non local aspects of \emph{QM} as well as its probabilistic behaviour is to assume a lack of complete information about the actual state of the system, in analogy with what happens for thermodynamic systems. Then, one could explain the probabilistic behaviour predicted by \textit{QM} as due to an incomplete knowledge of all influences affecting the system (\textit{hidden variables}). In particular, the polarizations of the entangled photons \emph{a} and \emph{b} would be already well defined (by their hidden variables) when they are created at point \emph{O}. However, as shown by Bell \cite{Bell} and other authors \cite{CHSH,Clauser_PhysRevD_1974}, the correlations between entangled particles predicted by any theory based on local hidden variables must satisfy some inequalities that are not satisfied by \textit{QM.} The existence of these inequalities makes it possible to perform experiments that decide unambiguously between hidden variables theories and \textit{QM.} Many experiments of this kind have been performed before and after the famous Aspect experiments \cite{Feedman_PhysRevLett_1972,Aspect,Zeilinger_PLA_1986,Tittel_PhysRevLett_1998,Weihs_PhysRevLett_1998,Aspect_Nature_1999,Pan_Nature_2000,Grangier_Nature_2001,Rowe_Nature_2001,Matsukevich_PRL_2008}. All the experiments (except a few old ones \cite{Faraci_LettNuovoCim_1974,Clauser_NuovoCim_1976}, see chap.
11 of \cite{Selleri_FisicaNovecento_2003} for a detailed bibliography) demonstrated that the Bell inequalities are violated. Although the locality loophole and the detection loophole have not yet been completely closed using a single experimental apparatus \cite{Genovese_PhysRep_2005}, experiments have separately closed both the locality loophole \cite{Aspect,Weihs_PhysRevLett_1998,Zeilinger_PLA_1986,Aspect_Nature_1999,Tittel_PhysRevLett_1998} and the detection loophole \cite{Rowe_Nature_2001,Grangier_Nature_2001,Matsukevich_PRL_2008}. Then, it is reasonable to think that hidden variables alone cannot justify the experimentally observed correlations (except if implausible combinations of loopholes are supposed to exist). For many classical systems, correlations between two events are often explained as the consequence of communications and, thus, quantum correlations between entangled particles could also be due to some communication. However, the Aspect experiments and many other \emph{EPR} experiments were performed in space-like conditions and, thus, if quantum correlations were due to communications, the communication velocity should exceed the light velocity \emph{c}. For this reason, after the Aspect experiment, Bell said ``\emph{in these EPR experiments there is the suggestion that behind the scenes something is going faster than light}''\cite{Davies_ghost_1993}. Subsequently, well defined models for \textit{QM} based on the presence of superluminal communications have been proposed \cite{Eberhard_1989,Bohm_undivided_1991}. The possibility of the existence of particles going faster than light (\emph{tachyons}) was proposed some years ago by several authors \cite{Bilaniuk_AmJPh_1962,Feinberg_PhysRev_1967,Bilaniuk_PhysToday_1969}. Tachyons are known to lead to causal paradoxes \cite{moller_theory_1955} (the present at a given point can be affected by the future at the same point).
Consider, for instance, a first person who leaves his home and is wetted by rain. He could send a tachyon to inform a second person, who sends a reply tachyon that is received by the first person before he leaves home. Then, he could decide to take an umbrella so as not to be wetted by the rain. Obviously such a behaviour is unrealistic. However, no causal paradox arises if tachyons are supposed to propagate in a preferred frame where the tachyon velocity $v_{t}=\beta_{t}c$ ($\beta_{t}>1$) is the same in all directions (see, for instance, \cite{Kowalczynski_IntJThPhys_1984,Reuse_AnPhys_1984,Caban_PhysRevA_1999,maudlin_quantum_2001,Cocciaro_riserratevi_2012}). Note that even in this case Special Relativity predicts that a tachyon emitted at point \emph{A} at the local time $t=0$ can reach point \emph{B} at time $t<0$. However, this is not a paradoxical result, since the time ordering between separated points of space has no direct physical meaning. Indeed, the time ordering between events occurring in different space points is conventional and depends on the conventional procedure used to synchronize distant clocks. We assume here the conventionalist thesis on the synchronization of distant clocks \cite{Anderson_PhysRep_1998}. It is true that the debate on this topic cannot yet be considered closed \cite{jammer_concepts_2006}, but we believe that the conventionalist thesis is the correct one. Consider, now, two photons that are in the entangled state of Eq.\eqref{eq:1} and suppose that photon \emph{a} passes through a polarizing filter with horizontal polarization axis. According to the superluminal models of \textit{QM}, when photon \emph{a} passes through the polarizing filter, it collapses to the horizontally polarized state; then a tachyon is sent to the entangled photon \emph{b}, which collapses to the horizontally polarized state only after this communication has been received.
Therefore, the \textit{QM} correlations between entangled photons can be recovered only if there has been sufficient time for communication between the two entangled photons. Consider, for instance, an ideal experiment performed in the tachyon preferred frame (\emph{PF}) $S'$ where two polarizing filters lie at points \emph{A} and \emph{B} at the same optical distances $d'_{A}=d'_{B}$ from source \emph{O} of the entangled photons as shown in Fig.\ref{fig:fotoni entangled}. Under these conditions photons \emph{a} and \emph{b} reach both polarizers at the same time (in the \emph{PF}) and, thus, if the tachyon velocity in the \emph{PF} has a finite value, no communication is possible and correlations between entangled particles should differ appreciably from the predictions of \emph{QM}. Of course, there is always communication if the tachyon velocity is $v_{t}\rightarrow\infty$; then no experiment satisfying \textit{QM} can invalidate the superluminal models provided $v_{t}\rightarrow\infty$. In such a case superluminal models are completely equivalent to \emph{QM}, as occurs for the Bohm model \cite{Bohm_1,Bohm_2}. It has been recently shown \cite{Bancal_NatPhys_2012,gisin_quantum_2012} that, if \emph{QM} correlations are due to superluminal signals with finite velocity $v_{t}$, then superluminal signalling becomes possible, that is, communications at faster-than-light velocity must be possible at a macroscopic level and cannot be confined to ``hidden'' variables. From the experimental point of view, equality $d'_{A}=d'_{B}$ can be only approximately verified within a given uncertainty $\Delta d'$. Consequently, photons \emph{a} and \emph{b} could reach the polarizers at two different times ($\Delta t'=\nicefrac{\Delta d'}{c}$) and, thus, they could communicate only if the tachyon velocity exceeds a lower bound $v_{t,min}=\nicefrac{d'_{AB}}{\Delta t'}$ where $d'_{AB}$ is the distance between points \emph{A} and \emph{B} in the \emph{PF}.
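The lower bound just defined is a simple ratio; a short sketch (Python, with purely illustrative numbers rather than the actual experimental values) shows how it scales:

```python
C = 299792458.0  # speed of light in m/s

def v_t_min_over_c(d_AB, delta_d):
    """Lower bound on the tachyon velocity in units of c:
    v_t,min = d'_AB / Delta t', with Delta t' = Delta d' / c,
    which reduces to d'_AB / Delta d'."""
    delta_t = delta_d / C
    return (d_AB / delta_t) / C

# e.g. detectors 1 km apart with optical paths equalized to within 1 mm
bound = v_t_min_over_c(1.0e3, 1.0e-3)  # a bound of about one million c
```

The bound thus grows linearly with the separation $d'_{AB}$ and inversely with the path-length uncertainty $\Delta d'$, which is the trade-off exploited in the experiments discussed below.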
The possible results of such an experiment are:\emph{ i)} a lack of quantum correlations is observed; \emph{ii)} quantum correlations are always satisfied. In the first case (\emph{i)}) one can conclude that orthodox \emph{QM} is not correct and that quantum correlations are due to exchange of superluminal messages. By suitably changing distances $d'_{A}$ and $d'_{B}$ one could obtain a measurement of the tachyon velocity $v_{t}$. In the second case (\emph{ii)}), due to the experimental uncertainty, one cannot invalidate the tachyon model of \emph{QM} but can only establish a lower bound $v_{t,min}$ for the tachyon velocity. So far we have assumed that the experiment is carried out in the preferred frame, but the velocity vector \textbf{\emph{$\vec{V}$}} of the \emph{PF} is unknown and, thus, the experiment cannot be performed in this frame. It will be shown in Section \ref{sec:The-main-features} that this drawback can be bypassed by performing the experiment on the Earth with the \emph{A}-\emph{B} axis aligned along the West-East direction. Such an experiment could provide both the tachyon velocity $v_{t}$ and the velocity vector \textbf{\emph{$\vec{V}$}} of the \emph{PF}. A long-distance (10.6 km) \textit{EPR} experiment to detect possible effects of superluminal quantum communications has been performed by Scarani et al. \cite{scarani_PLA_2000} using energy-time entangled photons. No deviation from the predictions of \emph{QM} was observed and, thus, the authors obtained only a lower bound for the tachyon velocities in the \emph{PF}. The experimental results were analyzed under the assumption that the preferred frame is the frame of the cosmic microwave background radiation. With this assumption, the authors obtained a lower bound $v_{t,min}=1.5\times10^{4}\, c$. Subsequently, similar long-distance measurements have been performed by Salart et al.
\cite{Salart_nature_2008}, improving some features of the previous experiment and using detectors aligned close to the West-East direction (at angle $\alpha=5.8\text{\textdegree}$). The authors found a lower bound for the tachyon velocity for many different possible directions of the velocity \textbf{\emph{$\vec{V}$}} of the \emph{PF}. More recently \cite{Cocciaro_PLA_2011}, we performed \emph{EPR} measurements on polarization entangled photons in a laboratory experiment with small distances $d_{A}\thickapprox d_{B}\thickapprox1\,\mathrm{m}$ and with the \emph{AB} axis in Fig.\ref{fig:fotoni entangled} precisely aligned along the West-East direction ($\left|\alpha\right|<0.2\text{\textdegree}$). In our experiment, too, no deviation from the predictions of \emph{QM} was found and, thus, we obtained only a lower bound for the tachyon velocity. The choice of aligning the measurement points \emph{A} and \emph{B} just along the West-East direction allowed us to obtain a lower bound for the tachyon velocity for any possible orientation of the velocity of the preferred frame. The experiments in \cite{Salart_nature_2008} and in \cite{Cocciaro_PLA_2011} are somewhat complementary. Indeed, the Salart et al. experiment used very large distances ($d_{A}\thickapprox d_{B}\thickapprox10\,\mathrm{km}$) but somewhat large acquisition times ($\delta t\thickapprox360\,\mathrm{s}$), while our experiment used shorter distances ($d_{A}\thickapprox d_{B}\thickapprox1\,\mathrm{m}$) but much shorter acquisition times ($\delta t\thickapprox4\,\mathrm{s}$). With these features, the Salart et al. experiment was much more sensitive to tachyons propagating in a \emph{PF} moving at velocities much smaller than the light velocity, whilst our experiment was more sensitive to a \emph{PF} travelling at higher velocities (see Fig.\ref{fig:4}).
As will be shown in Section \ref{sec:The-main-features}, an \emph{EPR} experiment similar to our previous experiment but with entangled photons that propagate in air at much larger distances (of order of 1 km) and with much smaller acquisition times (of order of 0.1 s) could greatly increase the range of detectable tachyon velocities. To achieve this goal, we plan to perform such an experiment inside the long galleries of the \emph{EGO} structure (European Gravitational Observatory) \cite{EGO}. In this paper, we will analyse the main features of the proposed experiment. The possible results of this experiment are either the detection of possible discrepancies between experiment and \emph{QM} due to a finite-velocity superluminal communication or the increase by about two orders of magnitude of the current lower bounds for the tachyon velocities. The main features of the experiment are discussed in Section \ref{sec:The-main-features}. The critical points and the main sources of experimental uncertainty are discussed in Section \ref{sec:Critical-points-and}. Finally, the conclusions are given in Section \ref{sec:Conclusions}. \section{\label{sec:The-main-features}The main features of the experimental method.} Consider the geometry of Fig.\ref{fig:fotoni entangled}. We start by considering the ideal case where the experiment is performed in the preferred frame $S'$ with tachyons that propagate with the same velocity $v_{t}=\beta_{t}c$ ($\beta_{t}>1$) along any direction \cite{Salart_nature_2008}. For simplicity, in the following, we will call ``tachyon velocity'' the reduced velocity $\beta_{t}=\frac{v_{t}}{c}$. Two polarizing filters lie at points \emph{A} and \emph{B} aligned along an \emph{x}'-axis and at optical distances $d'_{A}$ and $d'_{B}$ from the source \emph{O} of the entangled photons (the prime denotes the parameters measured in the \emph{PF}).
The two entangled photons will reach both polarizers at the same times $t'_{A}=t'_{B}$ if $d'_{A}=d'_{B}$ and, thus, no superluminal communication can be possible if the tachyon velocity has a finite value. Due to the experimental uncertainty $\Delta d'$, distances $d'_{A}$ and $d'_{B}$ can never be exactly equalized and $\left|\Delta t'\right|=\left|t'_{A}-t'_{B}\right|=\nicefrac{\Delta d'}{c}\neq0$. Under these conditions, a superluminal communication between points \emph{A} and \emph{B} will not be possible only if the time that a tachyon spends to go from \emph{A} to \emph{B} (or from \emph{B} to \emph{A}) is greater than $\left|\Delta t'\right|$, that is if the tachyon velocity $\beta_{t}$ is smaller than \begin{equation} \beta_{t,min}=\frac{d'_{AB}}{\left|\Delta ct'\right|},\label{eq:2} \end{equation} where $d'_{AB}$ is the distance between points \emph{A} and \emph{B}. \emph{QM} correlations will always be established if the tachyon velocity $\beta_{t}$ is higher than $\beta_{t,min}$ of Eq.\eqref{eq:2}, but appreciable differences between \emph{QM} correlations and experimentally observed correlations are expected if $\beta_{t}<\beta_{t,min}$. In this latter case, the experimental correlations should satisfy the Bell inequality. Therefore, to detect possible discrepancies between experiment and \emph{QM} due to the finite velocity of tachyons, one has to increase the lower limit in Eq.\eqref{eq:2} as much as possible to satisfy condition $\beta_{t}<\beta_{t,min}$. This goal could be achieved either by reducing $\Delta d'=\left|\Delta ct'\right|$ or by increasing the distance $d'_{AB}$ between the polarizers. If discrepancies between \emph{QM} and experiment were found for given values of $\Delta d'$ and $d'_{AB}$, one could conclude that the tachyon velocity is lower than the lower bound in Eq.\eqref{eq:2}.
In such a case, one could measure the tachyon velocity $\beta_{t}$ by changing distances $d'_{A}$, $d'_{B}$ and $d'_{AB}$ to reduce $\beta_{t,min}$ of Eq.\eqref{eq:2} until condition $\beta_{t,min}<\beta_{t}$ is verified and \emph{QM} correlations are re-established. The tachyon velocity $\beta_{t}$ would correspond to this critical value of $\beta_{t,min}$. The results above were obtained assuming that the experiment is performed in the tachyons' \emph{PF}. Of course, this is not possible because we do not know the direction and the magnitude of the \emph{PF} velocity $\vec{V}=\vec{\beta}c$. However, this apparent difficulty can be overcome if one takes advantage of the rotational motion of the Earth around its axis and aligns points \emph{A} and \emph{B} along the West-East \emph{x}-axis on the Earth as shown in Fig.\ref{fig:2}. Suppose that points \emph{A} and \emph{B} are precisely aligned along the \emph{x}-axis and that optical distances $d{}_{A}$ and $d{}_{B}$ of \emph{A} and \emph{B} from the entangled photon source \emph{O} have the same values in the Earth frame. In this condition, the two entangled photons reach the two polarizing filters at the same times \textbf{\emph{$t_{A}$}} and \textbf{\emph{$t_{B}$}} in the Earth frame, but the arrival times \textbf{\emph{$t'_{A}$}} and \textbf{\emph{$t'_{B}$}} can be different in the tachyon \emph{PF}. Due to the rotation of the Earth with the angular velocity \textbf{\emph{$\omega$}}, angle \textbf{\emph{$\theta$}} between the velocity \textbf{\emph{$\vec{V}$}} of the \emph{PF} and the \emph{x}-axis (West-East direction) changes periodically with time \emph{t} according to the simple law: \begin{equation} \theta(t)=\arccos\left[\sin\chi\cos\omega\left(t-t_{0}\right)\right],\label{eq:3} \end{equation} where $t_{0}$ is the unknown time which gives $\varphi\left(t\right)=0$ in Fig.\ref{fig:2}.
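Eq.~\eqref{eq:3} can be checked numerically; the sketch below (Python; the sidereal-day value and the function names are illustrative assumptions) confirms that $\theta$ oscillates between $\nicefrac{\pi}{2}-\chi$ and $\nicefrac{\pi}{2}+\chi$, and that $\theta=\nicefrac{\pi}{2}$ one quarter of a sidereal day after $t_{0}$:

```python
import math

T_SIDEREAL = 86164.1  # sidereal day in seconds (illustrative value)

def theta(t, chi, t0=0.0):
    """Angle between the PF velocity V and the West-East x-axis, Eq. (3):
    theta(t) = arccos[sin(chi) * cos(omega * (t - t0))], omega = 2 pi / T."""
    omega = 2.0 * math.pi / T_SIDEREAL
    return math.acos(math.sin(chi) * math.cos(omega * (t - t0)))

chi = math.pi / 6
theta_min = theta(0.0, chi)              # = pi/2 - chi
theta_max = theta(T_SIDEREAL / 2, chi)   # = pi/2 + chi
theta_perp = theta(T_SIDEREAL / 4, chi)  # = pi/2: V orthogonal to the x-axis
```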
Angle \textbf{\emph{$\theta$}} oscillates periodically between a minimum value \textbf{\emph{$\nicefrac{\pi}{2}-\chi$}} (\textbf{\emph{$\varphi(t)$}}=0 in Fig.\ref{fig:2}) and a maximum value \textbf{\emph{$\nicefrac{\pi}{2}+\chi$}} (\textbf{\emph{$\varphi(t)$}}=\textbf{\emph{$\pi$}} in Fig.\ref{fig:2}). Then, whatever the orientation of the velocity vector \textbf{\emph{$\vec{V}$}} of the \emph{PF}, there are two times \textbf{\emph{$t_{1}$}} and \textbf{\emph{$t_{2}$}} during each sidereal day at which \textbf{\emph{$\vec{V}$}} becomes perpendicular to the West-East \emph{x}-axis. At these two times, according to Special Relativity, the distances of \emph{A} and \emph{B} from \emph{O} are equal also in the \emph{PF} ($d'_{A}=d'_{B}$) and, thus, the arrival of the entangled photons at points \emph{A} and \emph{B} is simultaneous also in the \emph{PF}. Then, deviations of correlations from the predictions of \emph{QM} should be observed at the special times \textbf{\emph{$t_{1}$}} and \textbf{\emph{$t_{2}$}} if the optical paths of the entangled photons were exactly equal. Of course, due to the experimental uncertainty $\Delta d$ on the equalization of the optical paths, deviations from the predictions of \emph{QM} could only be observed if the tachyon velocity $\beta_{t}$ is lower than a lower bound $\beta_{t,min}$.
\begin{figure} \begin{centering} \includegraphics[scale=0.3]{Figura2} \par\end{centering} \caption{\label{fig:2}(a) geometry of the experiment with segment \emph{AB} oriented along the West-East direction on the Earth; (b) detail of the geometric parameters that characterize the experiment. \emph{z} is the North-South axis and \emph{x} is the West-East axis. \textbf{\emph{$\vec{V}$}} is the velocity vector of the \emph{PF} with respect to the laboratory, \textbf{\emph{$\theta$}} denotes the angle of \textbf{\emph{$\vec{V}$}} with the \emph{x}-axis, \textbf{\emph{$\varphi$}} is the azimuthal angle and \textbf{\emph{$\chi$}} is the polar angle. The polarizing filters that collect the entangled photons lie at points \emph{A} and \emph{B} aligned along the \emph{x}-axis at the same optical distances from the source \emph{O} of the entangled photons.} \end{figure} Furthermore, the need to perform the experiment in the Earth frame also introduces another source of uncertainty, because vector \textbf{\emph{$\vec{V}$}} becomes orthogonal to the West-East axis only at two well defined times \textbf{\emph{$t_{1}$}} and \textbf{\emph{$t_{2}$}}. However, to detect a statistically significant number of coincidences of entangled photons, a sufficiently long acquisition time $\delta t$ is needed. Even if this acquisition time is centred around the special times \textbf{\emph{$t_{1}$}} and \textbf{\emph{$t_{2}$}}, the velocity vector \textbf{\emph{$\vec{V}$}} does not remain exactly perpendicular to the \emph{x}-axis during the whole acquisition time. Therefore, the finite acquisition time leads to an additional uncertainty on the equality of the optical paths, with a consequent decrease of the value of $\beta_{t,min}$.
In conclusion, two main parameters determine the lower limit $\beta_{t,min}$ obtainable with measurements in the Earth frame: 1) the accuracy $\Delta d$ on the equalization of the optical paths; 2) the finite acquisition time $\delta t$. The lower bound $\beta_{t,min}$, defined by Eq.\eqref{eq:2}, can be rewritten in terms of physical parameters measured in the Earth frame and becomes \cite{Salart_nature_2008,Cocciaro_PLA_2011}: \begin{equation} \beta_{t,min}=\sqrt{1+\frac{\left(1-\beta^{2}\right)\left[1-\bar{\rho}^{2}\right]}{\left[\bar{\rho}+\beta\sin\chi\sin\frac{\pi\delta t}{T}\right]^{2}}},\label{eq:4} \end{equation} where $\bar{\rho}=\frac{\Delta d}{d_{AB}}$, \emph{T} is the sidereal day, $\delta t$ is the acquisition time, $\chi$ is the polar angle between the North-South axis of the Earth and the velocity $\vec{V}$ of the \emph{PF} (see Fig.\ref{fig:2}) and $\beta$ is the reduced velocity ($\beta=\nicefrac{V}{c}$) of the \emph{PF}. A quick way to obtain \eqref{eq:4} is shown in the Appendix. In typical experimental conditions \cite{Salart_nature_2008,Cocciaro_PLA_2011}, the acquisition time $\delta t$ is much smaller than the sidereal day \emph{T}; $\beta_{t,min}$ is a decreasing function of both $\bar{\rho}$ and $\delta t$ and reaches its minimum value for $\chi=\nicefrac{\pi}{2}$. Our following considerations and figures will be restricted to $\delta t\ll T$ and to the unfavourable condition $\chi=\nicefrac{\pi}{2}$. $\beta_{t,min}$ is also a decreasing function of $\beta$: it assumes its maximum value $\beta_{t,min}=\frac{1}{\bar{\rho}}$ for $\beta=0$ and approaches the minimum value $\beta_{t,min}=1$ for $\beta\rightarrow1$. 
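As a numerical illustration, Eq.\eqref{eq:4} can be evaluated directly. The following Python sketch (the parameter values are illustrative examples, not measured data) reproduces the limiting behaviours quoted above, $\beta_{t,min}=\nicefrac{1}{\bar{\rho}}$ for $\beta=0$ and $\beta_{t,min}\rightarrow1$ for $\beta\rightarrow1$:

```python
import math

def beta_t_min(rho_bar, beta, chi=math.pi / 2, dt=0.0, T=86164.0):
    """Lower bound on the reduced tachyon velocity, Eq. (4).

    rho_bar : relative uncertainty Delta_d / d_AB on path equalization
    beta    : reduced velocity V/c of the preferred frame (PF)
    chi     : polar angle between the Earth axis and the PF velocity
    dt      : acquisition time [s]
    T       : sidereal day [s]
    """
    denom = rho_bar + beta * math.sin(chi) * math.sin(math.pi * dt / T)
    return math.sqrt(1.0 + (1.0 - beta**2) * (1.0 - rho_bar**2) / denom**2)

# Limiting cases (chi = pi/2, dt -> 0), with rho_bar = 1.6e-4 as an example:
print(beta_t_min(1.6e-4, beta=0.0))  # -> 1/rho_bar = 6250 at beta = 0
print(beta_t_min(1.6e-4, beta=1.0))  # -> 1.0 in the beta -> 1 limit
```

The function also shows the monotonic decrease with $\beta$ discussed in the text.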
The typical plot of $\beta_{t,min}$ versus the reduced velocity $\beta$ of the \emph{PF} for $\chi=\nicefrac{\pi}{2}$ is drawn in Fig.\ref{fig:3}\begin{SCfigure}[50] \centering \includegraphics[width=0.4\textwidth]{Figura3.jpg} \hspace{0.05in} \caption{The typical plot of $\beta_{t,min}$ versus $\beta$ is shown for the unfavourable case $\chi=\nicefrac{\pi}{2}$ and for some values of the experimental parameters $\bar{\rho}$ and $\delta t$. Curves \emph{a}, \emph{b} and \emph{c} correspond to the fixed acquisition time $\delta t=10^{-1}\times\frac{T}{\pi}$ and to decreasing values of $\bar{\rho}$ ($a:\bar{\rho}=10^{-3},\, b:\bar{\rho}=10^{-5},\, c:\bar{\rho}=10^{-6}$). Curves \emph{c}, \emph{d} and \emph{e} correspond to a fixed value $\bar{\rho}=10^{-6}$ and to decreasing values of $\delta t$ ($c:\delta t=10^{-1}\times\frac{T}{\pi},\, d:\delta t=10^{-3}\times\frac{T}{\pi},\, e:\delta t=10^{-7}\times\frac{T}{\pi}$). Note that curve \emph{e} satisfies condition \eqref{eq:5} and, thus, it depends on $\beta$ only for a \emph{PF} moving at improbable relativistic velocities.} \label{fig:3} \end{SCfigure} for some values of the experimental parameters $\bar{\rho}$ and $\delta t$. $\beta_{t,min}$ keeps an almost constant value $\beta_{t,min}\thickapprox\frac{1}{\bar{\rho}}$ for $\beta\ll\beta_{0}=\nicefrac{\bar{\rho}T}{(\pi\delta t)}$, then it decreases rapidly as $\beta$ increases above $\beta_{0}$. Eq.\eqref{eq:4} and Fig.\ref{fig:3} make clear the optimal strategy to increase $\beta_{t,min}$ as much as possible. First of all, we must reduce $\bar{\rho}$, minimizing the uncertainty $\Delta d$ and increasing the distance $d_{AB}$ as far as possible. Then, we must use an acquisition time $\delta t$ small enough to make the $\delta t$ contribution in Eq.\eqref{eq:4} negligible. 
This contribution is always negligible if: \begin{equation} \delta t\ll\bar{\rho}\frac{T}{\pi}.\label{eq:5} \end{equation} If condition \eqref{eq:5} is satisfied, the lower limit $\beta_{t,min}$ becomes insensitive to the acquisition time $\delta t$. In the Geneva experiment \cite{Salart_nature_2008} the very small value $\bar{\rho}=5.4\times10^{-6}$ was obtained using Telecom optical fibres connecting two Telecom stations at a distance of about $20\,\mathrm{km}$, aligned approximately along the West-East direction. The experiment was performed using energy-time entangled photons. The two Telecom stations (\emph{A} and \emph{B}) were not exactly aligned along the West-East axis but made an angle $\gamma=5.8\text{\textdegree}$ with this axis and, thus, the experiment was poorly sensitive to a tachyon \emph{PF} travelling with velocity $\vec{V}$ lying in a cone of aperture $\approx6\text{\textdegree}$ around the North-South axis. Indeed, due to the Earth rotation, the angle between the \emph{AB} axis and the velocity vector $\vec{V}$ oscillates with time between the minimum value $\nicefrac{\pi}{2}-\chi-\gamma$ and the maximum value $\nicefrac{\pi}{2}+\chi-\gamma$. Then, the \emph{AB} axis can become orthogonal to $\vec{V}$ only if $\chi\geq\gamma$. In this long-path experiment the main effect that limited a further reduction of $\bar{\rho}$ was the appreciable optical dispersion of the photon wave-packet in the long Telecom fibres. On the other hand, due to losses in the optical fibres, the photon count rate was relatively small and a somewhat long measurement time ($\delta t\approx360\,\mathrm{s}$) was needed to obtain a statistically significant number of coincidences. In these conditions the experiment was greatly sensitive to a tachyon \emph{PF} travelling at speeds much smaller than the light velocity and much less sensitive to a relativistic \emph{PF} (see curve \emph{II} in Fig.\ref{fig:4}). 
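For the Geneva parameters, one can check numerically how strongly condition \eqref{eq:5} was violated. A minimal Python sketch, using $T\approx86164\,\mathrm{s}$ for the sidereal day:

```python
import math

T = 86164.0        # sidereal day [s]
rho_bar = 5.4e-6   # Geneva experiment (Salart et al.)
dt = 360.0         # Geneva acquisition time [s]

# Condition (5) requires dt << rho_bar * T / pi for the bound to become
# insensitive to the acquisition time.
dt_limit = rho_bar * T / math.pi
print(dt_limit)    # ~0.15 s: dt = 360 s exceeds it by more than 1000x
```

This is consistent with the statement that the Geneva bound was much less sensitive to a relativistic \emph{PF}.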
The Pisa experiment \cite{Cocciaro_PLA_2011} was performed with entangled photons propagating in air over much shorter distances (about $2\,\mathrm{m}$), and polarization correlations were measured instead of energy-time correlations. In these conditions photon losses were minimized and the typical acquisition time was $\delta t\approx4\,\mathrm{s}$. An interferometric method was used to minimize the uncertainty on the equalization of the photon optical paths, and the main source of uncertainty $\Delta d$ was the $220\,\mu\mathrm{m}$ thickness of the absorbing layer of the polarizing filters. The obtained value of parameter $\bar{\rho}$ was $\bar{\rho}=1.6\times10^{-4}$, about 30 times higher than the value characterizing the experiment of \cite{Salart_nature_2008}. Then, our experiment is much less sensitive than that in \cite{Salart_nature_2008} to tachyons propagating in a low-speed \emph{PF} but is more sensitive to tachyons propagating in a high-speed \emph{PF} (see curve \emph{I} in Fig.\ref{fig:4}). In this sense, our experiment was somewhat complementary to that of the Geneva group. In the present paper we propose a new experiment exploiting the propagation of polarized entangled photons in air over distances about 850 times longer than in our previous experiment. The experiment should be performed inside the long tunnels of the European Gravitational Observatory (\emph{EGO}) that host the \emph{VIRGO} experiment on the detection of gravitational waves \cite{VIRGO}. Using an interferometric method and feedback procedures, we will hold the uncertainty $\Delta d$ on the equality of the optical paths of the entangled photons well below the basic uncertainty due to the thickness of the absorbing layer of the polarizing filters ($220\,\mu\mathrm{m}$). The small uncertainty and the long optical paths will allow us to decrease our previous value $\bar{\rho}=1.6\times10^{-4}$ down to the much smaller value $\bar{\rho}=1.9\times10^{-7}$. 
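The quoted value $\bar{\rho}=1.9\times10^{-7}$ is consistent with a simple scaling of the previous Pisa parameters: keeping the same intrinsic uncertainty $\Delta d$ while lengthening the optical paths by a factor of about 850 reduces $\bar{\rho}=\Delta d/d_{AB}$ by the same factor. A back-of-the-envelope check (not a measurement):

```python
rho_pisa = 1.6e-4   # previous Pisa value of Delta_d / d_AB
scale = 850         # proposed paths ~850 times longer, same Delta_d
rho_new = rho_pisa / scale
print(rho_new)      # ~1.9e-7, matching the value quoted in the text
```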
As shown in the discussion above, the lower bound for the tachyon velocities is also determined by the acquisition time $\delta t$, which must be long enough that the number of detected coincidences is statistically relevant. $\delta t$ can be reduced by increasing the brightness of the source of the entangled photons and by minimizing the losses. In a series of papers, the Kwiat group \cite{Kwiat_OptExpr_2005,Kwiat_OptExpr_2007,Kwiat_OptExpr_2009} developed a very efficient and simple method to obtain highly bright sources of entangled photons with a high degree of entanglement. Using this technique and a suitable optical configuration, we plan to increase by more than 1000 times the number of measured coincidences and to reduce the acquisition time to less than $\delta t=0.1\,\mathrm{s}$. The lower bounds obtained in the previous experiments, together with the new lower bound that should be reached with the experiment proposed here, are shown in Fig.\ref{fig:4}.\begin{SCfigure}[50] \centering \includegraphics[width=0.4\textwidth]{Figura4.jpg} \hspace{0.05in} \caption{The three curves \emph{I} (red), \emph{II} (green) and \emph{III} (blue) correspond to the experimental results for the lower bound $\beta_{t,min}$ versus $\beta$ in different experiments. \emph{I}: our previous results ($\bar{\rho}=1.6\times10^{-4}$, $\delta t=4\,\mathrm{s}$); \emph{II}: the Geneva group results ($\bar{\rho}=5.4\times10^{-6}$, $\delta t=360\,\mathrm{s}$); \emph{III}: the predicted results for the experiment proposed here ($\bar{\rho}=1.9\times10^{-7}$, $\delta t=0.1\,\mathrm{s}$). The grey region represents the new region of tachyon velocities $\beta_{t}$ that would become accessible with the new experiment.} \label{fig:4} \end{SCfigure} Curve \emph{I} (red) corresponds to our previous experimental results, curve \emph{II} (green) to the results of the Geneva group and curve \emph{III} (blue) to the results expected with the proposed experiment. 
The grey region corresponds to the new region of tachyon velocities that could become accessible. In the next section, the critical points of the proposed experiment and the main sources of experimental uncertainty will be discussed. \section{\label{sec:Critical-points-and}The experiment: critical points and main sources of experimental uncertainty} As stated above, the polarizing filters should be aligned along the West-East direction, but the \emph{EGO} tunnels are oriented at 19\textdegree{} and 109\textdegree{} with respect to this direction, respectively. Therefore, the polarizing filters must be placed in two different tunnels and one of the entangled photons must be deviated from one \emph{EGO} tunnel to the other through the small \emph{CD} tube ($20\,\mathrm{cm}$ diameter, $100\,\mathrm{m}$ length) built to connect them. The source of the entangled photons (point \emph{O}) will be placed in the 19\textdegree{} tunnel together with polariser $P_{A}$ at distance $\approx800\,\mathrm{m}$, while polariser $P_{B}$ will be placed at point \emph{B} in the other \emph{EGO} tunnel at the same distance from \emph{O}, as shown schematically in Fig.\ref{fig:5} ($\overline{OA}=\overline{OC}+\overline{CD}+\overline{DB}$).\begin{SCfigure}[50] \centering \includegraphics[width=0.4\textwidth]{Figura5c.jpg} \hspace{0.05in} \caption{A Google Earth view of the \emph{EGO} structure with the two orthogonal tunnels is shown together with the West-East line. Point \emph{O} represents the position of the source of entangled photons, while \emph{A} and \emph{B} denote the positions of the two polarizing filters in the two \emph{EGO} arms. The full line represents the West-East direction. 
The entangled photons that travel to the right of point \emph{O} will be deviated from the tunnel containing source \emph{O} to the other tunnel, passing through a suitable \emph{CD} tube ($20\,\mathrm{cm}$ diameter) that will be built to connect the two arms of the \emph{EGO} structure.} \label{fig:5} \end{SCfigure} The experimental apparatus is schematically shown in Fig.\ref{fig:6} \begin{figure} \centering{}\includegraphics[scale=0.3]{Figura6}\caption{\label{fig:6} Schematic view of the experimental apparatus. To simplify the drawing, the optical path followed by the left entangled photon has been represented by a straight line and the optical prisms that deviate the beam from one tunnel to the other (see Fig.\ref{fig:5}) are not shown. A $405\,\mathrm{nm}$ laser beam impinges on two adjacent \emph{BBO} plates with orthogonal optical axes and produces two $810\,\mathrm{nm}$ entangled beams preferentially emitted at the symmetric angles $\alpha_{A}=-\alpha_{B}\approx2\text{\textdegree}$. $C,\, C{}_{1},\, C_{A}$ and $C_{B}$ are optical compensators, $R_{A}$ and $R_{B}$ are two right-angle prisms; $L_{A},\, L'_{A},\, L_{B},\, L'_{B}$ are large-diameter ($18\,\mathrm{cm}$) plano-convex lenses, $P_{A}$ and $P_{B}$ are polarizing filters at the same distance from source \emph{O}, $F_{A}$ and $F_{B}$ are optical interferometric filters with $10\,\mathrm{nm}$ line-width, $O_{A}$ and $O_{B}$ are aspheric lenses and $D_{A}$ and $D_{B}$ are single-photon counters. \emph{V}to\emph{O} and \emph{O}to\emph{V} are converters from voltage pulses to optical pulses and vice versa.} \end{figure} where, for simplicity of drawing, the path \emph{OCDB} shown in Fig.\ref{fig:5} has been represented by a straight line and the prisms needed to deviate the photons from one \emph{EGO} arm to the other are not represented. 
A blue diode laser beam ($\lambda=405\,\mathrm{nm}$) is polarized at 45\textdegree{} with respect to the vertical axis by polariser $P_{0}$, passes through a tilting-plate optical compensator (\textit{C}) with vertical extraordinary axis and the quartz compensator ($C_{1}$) with horizontal extraordinary axis and, finally, impinges at normal incidence on two thin ($0.5\,\mathrm{mm}$) adjacent non-linear optical crystals (\textit{BBO}) cut for type-I phase matching \cite{Kwiat_PhysRevA_1999}. The optic axes of the adjacent \emph{BBO} crystals are tilted at the angle 29.2\textdegree{} and lie in planes perpendicular to each other, the first plane being horizontal. The pump beam induces down-conversion at $\lambda=810\,\mathrm{nm}$ in each crystal \cite{Kwiat_PhysRevA_1999} with a maximum of emission at the two symmetric angles $\alpha_{A}=-\alpha_{B}\approx2\text{\textdegree}$ with respect to the pump laser beam. The down-converted photons are created in the maximally entangled state $\left(|H,H\rangle+e^{i\phi}|V,V\rangle\right)/\sqrt{2}$, where the phase $\phi$ can be adjusted by tilting the optical compensator \textit{C}. $R_{A}$ and $R_{B}$ are two right-angle prisms, $C_{A}$ and $C_{B}$ are optical \emph{BBO} plates with tilt angle 29.2\textdegree{}, $P_{A}$ and $P_{B}$ are thin near-infrared polarizing films (LPNIR, Thorlabs), $F_{A}$ and $F_{B}$ are interference filters ($\lambda=810\,\mathrm{nm}\pm5\,\mathrm{nm}$) and $D_{A}$ and $D_{B}$ are single-photon counters (Perkin Elmer SPCM-AQ4C). $L_{A},L_{B},L'_{A}$ and $L'_{B}$ are large plano-convex optical lenses ($18\,\mathrm{cm}$ diameter) that ensure that all the entangled photons emitted within a cone of $\approx0.35\text{\textdegree}$ aperture angle (around the two maximum emission directions $\alpha_{A}=-\alpha_{B}\approx2\text{\textdegree}$) can be collected by the aspheric objectives $O_{A}$ and $O_{B}$ and sent to detectors $D_{A}$ and $D_{B}$. 
The centres of polarisers $P_{A}$ and $P_{B}$ are aligned along an \textit{x}-axis in the West-East direction within $\approx0.1\text{\textdegree}$. The role of $P_{0}$, \emph{C}, $P_{A}$, $P_{B}$, $F_{A}$ and $F_{B}$ is shown in \cite{Kwiat_PhysRevA_1999}; the role of $C_{1}$, $C_{A}$ and $C_{B}$ is explained in \cite{Kwiat_OptExpr_2005,Kwiat_OptExpr_2007,Kwiat_OptExpr_2009} and is summarized in Section \ref{sub:Minimization-of-the}. The output electric pulses of the detectors ($20\,\mathrm{ns}$ width) are transformed into optical pulses, sent via optical fibres to the central region close to the photon source, transformed again into electric pulses and sent to electronic counters and to a coincidence circuit connected to a \textit{PC}. A LabVIEW program controls every experimental feature. The results shown in curve \emph{III} of Fig.\ref{fig:4} were obtained assuming that the uncertainty $\Delta d$ can be kept much lower than the thickness of the polarizing layers ($\approx220\,\mu\mathrm{m}$) and that the acquisition time can be small enough ($\delta t\approx0.1\,\mathrm{s}$). Below we describe how both these conditions can be satisfied. \subsection{\label{sub:Minimization-of-the}Minimization of the acquisition time $\delta t$.} The minimum acquisition time $\delta t$ is determined by the condition that the number of detected entangled photons must be statistically significant during time $\delta t$ ($>1000$ measured coincidences). Then, low values of $\delta t$ can be obtained only if: \emph{a}) the number of entangled photons collected by lenses $L_{A}$ and $L_{B}$ is large enough; \emph{b}) the losses of the entangled photons on the path from the source to the detectors are as small as possible. 
Momentum conservation would require the entangled photons to be emitted at two well-defined angles ($\alpha_{A}=-\alpha_{B}\approx2\text{\textdegree}$) but, due to the finite thickness of the \emph{BBO} plates ($0.5\,\mathrm{mm}$), a $\approx\pm0.35\text{\textdegree}$ spread around these directions occurs. Then, a large fraction of the entangled photons can be collected only if the aperture angle of lenses $L_{A}$ and $L_{B}$ with respect to the photon source is $\approx0.35\text{\textdegree}$. However, due to the great optical anisotropy of the \emph{BBO} plates, the relative phase between entangled photons ($\phi$ in Eq.\eqref{eq:1}) depends greatly on the emission direction. This means that the entanglement of the photons is satisfactory (close to 100\% fidelity) only if the aperture angle is $\ll0.05\text{\textdegree}$. Furthermore, the number of entangled photons greatly increases with the power of the pump laser diode and, thus, a high-power laser diode should be used. However, high-power laser diodes ($>50\,\mathrm{mW}$) are characterized by a small coherence length, typically $\approx0.2\,\mathrm{mm}$. In these conditions, the \emph{V}-polarized down-converted photons produced in the first \emph{BBO} plate ($0.5\,\mathrm{mm}$ thick) are essentially uncorrelated with the \emph{H}-polarized photons produced in the second adjacent plate and, thus, the entanglement is very poor. This latter drawback could be avoided using laser diodes with high coherence length but, in this case, the laser power would be smaller than $50\,\mathrm{mW}$. The Kwiat group \cite{Kwiat_OptExpr_2005,Kwiat_OptExpr_2007,Kwiat_OptExpr_2009} showed that the drawbacks described above can be bypassed using suitable optical compensators ($C_{1}$, $C_{A}$ and $C_{B}$ in Fig.\ref{fig:6}). 
$C_{1}$ is a quartz plate that introduces a retardation between the \emph{V} and \emph{H} components of the pump beam virtually equal to the difference between the emission times of down-converted photons from the two \emph{BBO} plates. In this way, the loss of entanglement due to the low coherence of the laser beam is virtually eliminated. $C_{A}$ and $C_{B}$ are two anisotropic plates (\emph{BBO}) with a suitable thickness to compensate the phase differences between entangled photons propagating along different directions. Using these compensation methods with a $280\,\mathrm{mW}$ laser beam and collecting entangled photons within a 0.35\textdegree{} aperture angle, the Kwiat group obtained an ultra-bright and high-fidelity (99\%) source of entangled photons with $1.02\times10^{6}$ detected coincidences/s. In our experiment we also require that all the entangled photons emitted within the 0.35\textdegree{} aperture angle reach photodetectors $D_{A}$ and $D_{B}$ at a great distance ($\approx800\,\mathrm{m}$) from the source. Furthermore, it is convenient to keep the diameter of the entangled beam sufficiently small when it passes through the $20\,\mathrm{cm}$-diameter connecting tube. Both these conditions are satisfied using large-diameter ($18\,\mathrm{cm}$) plano-convex lenses with long focal length $f=10\,\mathrm{m}$ and slightly focusing the pump beam on the \emph{BBO} plates to obtain a source of entangled photons with a small diameter \emph{D} ($D\lesssim0.5\,\mathrm{mm}$). Lenses $L_{A}$ and $L_{B}$ are put at a distance from the source slightly greater than \emph{f} to produce real images of the source approximately at distance $d_{i}\approx400\,\mathrm{m}$ from $L_{A}$ and $L_{B}$. In this condition, the image produced by lens $L_{B}$ occurs in the centre of the connecting tube. The image diameter is $D_{i}\approx D\nicefrac{d_{i}}{f}=2\,\mathrm{cm}$, much smaller than the $20\,\mathrm{cm}$ diameter of the connecting tube. 
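The quoted image size follows from elementary thin-lens magnification; a minimal numerical check with the values given in the text:

```python
D = 0.5e-3   # source diameter [m]
f = 10.0     # focal length of L_A and L_B [m]
d_i = 400.0  # image distance [m] (source placed slightly beyond the focus)

# Transverse magnification ~ d_i / f when the object sits near the focal
# plane, so the image diameter is D_i ~ D * d_i / f.
D_i = D * d_i / f
print(D_i)   # 0.02 m = 2 cm, well inside the 20 cm connecting tube
```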
Furthermore, due to the large diameter of the lenses, the enlargement produced by diffraction is negligible. In these conditions all the entangled photons emitted in the cones of aperture angle 0.35\textdegree{} are collected by the large-diameter lenses $L_{A},\, L'_{A}$ and $L_{B},\, L'_{B}$ and real images of the source with diameter $D'_{i}\approx D=0.5\,\mathrm{mm}$ are generated on the surfaces of polarisers $P{}_{A}$ and $P{}_{B}$ at a distance slightly greater than $f=10\,\mathrm{m}$ from lenses $L'_{A}$ and $L'_{B}$. The optical rays exiting from these images are collected by the aspheric objectives $O_{A}$ and $O{}_{B}$, which focus the entangled photons on two $60\,\mu\mathrm{m}$-diameter optical fibres connected to detectors $D_{A}$ and $D{}_{B}$. Using the Zemax optical design program we have verified that this geometric arrangement ensures that all the entangled photons emitted by the source within the 0.35\textdegree{} aperture angle are collected by the detectors. The previous analysis was performed disregarding any effect due to the presence of air. Density fluctuations of air produce wander of the photon beams and beam size variations that could reduce the collection efficiency. To estimate the relevance of these effects we consider the experimental results obtained by Resch et al. \cite{Resch_OptExpr_2005}, who performed measurements with entangled photons propagating between two buildings across the Vienna sky at a distance of $7.8\,\mathrm{km}$. Due to air turbulence, the authors observed large beam size variations (displacement of the centre of the beam and changes of diameter) of about $25\,\mathrm{cm}$ at distance $7.8\,\mathrm{km}$ from the source. Making a linear extrapolation of the Resch et al. results to our experimental lengths ($\approx0.8\,\mathrm{km}\ll7.8\,\mathrm{km}$), we expect beam size variations to be reduced to less than $2.6\,\mathrm{cm}$ in our experiment. 
The diameter of lenses $L'_{A}$ and $L'_{B}$ is $18\,\mathrm{cm}$, whilst the diameter of the entangled beams impinging on lenses $L'_{A}$ and $L'_{B}$ is expected to be $<14\,\mathrm{cm}$; then, the $2.6\,\mathrm{cm}$ displacements should not appreciably affect the collection efficiency. Another effect that could reduce the collection efficiency is air absorption at the wavelengths of the entangled photons ($\lambda=810\,\mathrm{nm}\pm5\,\mathrm{nm}$). However, air is known to exhibit a transmission window at these wavelengths (see Fig.8 in \cite{Gisin_RevModPhys_2002}) and, thus, losses due to absorption are not expected to be dramatic. Precise in loco measurements would be needed to evaluate the losses due to absorption. However, one can obtain a very rough upper estimate of these losses again using the results by Resch et al. obtained with $810\,\mathrm{nm}$ entangled photons propagating across the Vienna sky. In their experiment with the $7.8\,\mathrm{km}$ distance they observed that only a 1.4\% fraction of the photons emitted by the source actually reached the detectors. Note that the average beam displacements in their experiment were very large ($25\,\mathrm{cm}$) and greater than the diameter of their collecting telescopes; beam displacements are therefore expected to appreciably reduce the photon counts in their experiment. However, even assuming that the losses were entirely due to absorption, one would obtain the absorption coefficient $\alpha=5.5\times10^{-4}\,\mathrm{m}^{-1}$, which would correspond to a 65\% collection efficiency for a beam that propagates over an $800\,\mathrm{m}$ distance. 
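The rough absorption estimate above is a simple Beer-Lambert computation; a minimal sketch under the stated worst-case assumption that the entire 1.4\% transmission over $7.8\,\mathrm{km}$ is due to absorption:

```python
import math

# Worst case: attribute the whole 1.4% transmission over 7.8 km
# (Resch et al.) to absorption alone, T = exp(-alpha * L).
alpha = -math.log(0.014) / 7800.0    # absorption coefficient [1/m]
eff_800 = math.exp(-alpha * 800.0)   # transmission over one 800 m arm

print(alpha)    # ~5.5e-4 1/m, the value quoted in the text
print(eff_800)  # ~0.65, i.e. the 65% collection efficiency quoted
```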
Taking into account the entangled photon intensities reported in \cite{Resch_OptExpr_2005}, we can conclude that, using a $120\,\mathrm{mW}$ laser diode together with suitable compensation procedures and a proper optical design, the number of measured coincidences should be $\gg10^{5}$ coincidences/s, that is, more than $10^{4}$ coincidences during the acquisition time $\delta t=0.1\,\mathrm{s}$. \subsection{Equalization of the optical paths.} The optical apparatus in Fig.\ref{fig:6} provides the collection of a large number of entangled photons (99\% fidelity) emitted within the 0.35\textdegree{} aperture angle during the short acquisition time $\delta t=0.1\,\mathrm{s}$. However, a high value of the lower bound $\beta_{t,min}$ can be obtained only if parameter $\bar{\rho}$ also has a very small value, that is, if the uncertainty on the equalization of the optical paths \emph{OA} and \emph{OB} is small enough. In our experiment, the thickness of the sensitive layer of the polarizing filters is $220\,\mathrm{\mu m}$ and this leads to an intrinsic uncertainty $\Delta d\lesssim220\,\mathrm{\mu m}$, which defines the minimum obtainable value of $\bar{\rho}$ in Eq.\eqref{eq:4}. Then, it is sufficient to require that the difference between the optical paths of the entangled photons be much smaller than $\Delta d\lesssim220\,\mathrm{\mu m}$. This goal will be reached using the interferometric method described below. Initially, polarisers $P_{A}$ and $P_{B}$ will be positioned at approximately the same distance (within one centimetre) using \emph{GPS} and, successively, the distances will be equalized with the interferometric method with an estimated accuracy better than $10\,\mathrm{\mu m}$. \begin{figure} \centering{}\includegraphics[scale=0.2]{Figura7}\caption{\label{fig:7} (a): system used to obtain two laser beams impinging at $\pm2\text{\textdegree}$. 
1 and 4 are beam splitters, 2 and 3 are mirrors and 5 is a rigid plate mounted on an \emph{x}-\emph{y} carriage that allows its insertion and removal. Beam splitter 4 is needed to collect the beams reflected by mirrors $M_{A}$ and $M_{B}$, as shown in panel (b). (b): schematic view of the interferometer. The laser beams are reflected by mirrors $M_{A}$ and $M_{B}$, go back and produce an interference pattern in the surface corresponding to the central plane of the \emph{BBO} plate. Then, the beams are reflected by beam splitter 4 and a magnified image of the interference pattern is produced by the objective.} \end{figure} To obtain the path equalization, we will use an $810\,\mathrm{nm}$ laser diode with low coherence length ($0.1-0.2\,\mathrm{mm}$). Using the simple optical system in Fig.\ref{fig:7}(a), we will obtain two beams that impinge with equal phases on the central point of the \emph{BBO} plates with incidence angles $\pm2\text{\textdegree}$. In such a way the outgoing beams follow the same paths as the entangled beams. Then, the \emph{BBO} plates will be removed using a translator and polarisers $P_{A}$ and $P_{B}$ will be replaced by two mirrors $M_{A}$ and $M_{B}$. The laser beams reflected by mirrors $M_{A}$ and $M_{B}$ will go back and will again meet the position of the central plane of the \emph{BBO} plates (now removed), where they form an interference pattern with a $\approx10\,\mathrm{\mu m}$ fringe spacing. The transmitted beams will be collected by beam splitter 4 in Fig.\ref{fig:7}(b) and a magnified image of the fringes will be produced by the objective. Fringes will be visible only if the difference of the optical paths is lower than the coherence length of the laser beam ($0.1-0.2\,\mathrm{mm}$), with maximum visibility when the optical paths are equalized. 
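The $\approx10\,\mathrm{\mu m}$ fringe spacing is consistent with the standard two-beam interference relation $\Lambda=\lambda/(2\sin\alpha)$ for beams crossing at half-angle $\alpha$; this relation is a textbook result, not an additional measurement:

```python
import math

lam = 810e-9               # equalization laser wavelength [m]
alpha = math.radians(2.0)  # half-angle between the two crossing beams

# Two-beam interference fringe spacing Lambda = lambda / (2 sin(alpha))
spacing = lam / (2.0 * math.sin(alpha))
print(spacing)  # ~1.2e-5 m, consistent with the ~10 um value in the text
```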
Translating one of the mirrors with a motorized micrometer carriage and doing real-time measurements of the fringe visibility, the optical paths can be equalized within $\pm10\,\mathrm{\mu m}$ (for a detailed analysis see \cite{Cocciaro_PLA_2011}). Satisfactory preliminary tests of the proposed optical scheme have been carried out in our laboratory over distances of about $10\,\mathrm{m}$. Once the optical paths are equalized, mirrors $M_{A}$ and $M_{B}$ should be replaced by polarisers $P_{A}$ and $P_{B}$, using the procedures described in \cite{Cocciaro_PLA_2011} that ensure that the surfaces of the polarizing layers coincide with those of the mirrors within $\pm10\,\mathrm{\mu m}$. The above procedure leads to a satisfactory equalization (within $\pm10\,\mathrm{\mu m}$) at a given time. However, due to the long optical paths characterizing our experiment, variations of the air refractive index with temperature can produce the optical path variation: \begin{equation} \Delta L_{ott}=\left(\frac{\partial n^{*}}{\partial T}\right)L_{0}\Delta T,\label{eq:6} \end{equation} where $L_{0}$ is the initial path length, $n^{*}$ is the group refractive index of air at the wavelength $\lambda=810\,\mathrm{nm}$, $\nicefrac{\partial n^{*}}{\partial T}=9.47\times10^{-7}\,\mathrm{K}^{-1}$ \cite{CiddorEquation} and $\Delta T$ is the difference between the mean temperatures along the paths of the two entangled photons. Substituting the path length $L_{0}=800\,\mathrm{m}$ in Eq.\eqref{eq:6}, we get $\Delta L_{ott}=0.76\,\mathrm{mm}$ for $\Delta T=1\,\mathrm{K}$. Other contributions are expected from changes of air pressure and humidity, from thermal expansion and from Earth tides. The resulting expected variations of the optical paths are much greater than the accuracy required on the equalization of the optical paths. Then, a suitable feedback procedure to keep the path difference constant is needed. 
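The thermal drift quoted after Eq.\eqref{eq:6} is a one-line computation; a minimal check with the values from the text:

```python
dn_dT = 9.47e-7  # group-index temperature coefficient of air at 810 nm [1/K]
L0 = 800.0       # single-arm path length [m]
dT = 1.0         # mean temperature difference between the two arms [K]

# Eq. (6): optical path variation due to a temperature imbalance
dL = dn_dT * L0 * dT
print(dL)        # ~7.6e-4 m = 0.76 mm, far above the 10 um accuracy
```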
To reach this goal, polariser $P_{B}$ will be placed on a carriage driven by an electric motor and a proper feedback voltage will be sent to the motor. To obtain the feedback signal, we plan to build an interferometer with reference laser beams ($\lambda\approx780\,\mathrm{nm}$) propagating along the arms on paths parallel to those of the entangled photons but spatially separated, at an average distance of $20-30\,\mathrm{cm}$ (see Fig.\ref{fig:8}).\begin{SCfigure}[50] \centering \includegraphics[width=0.4\textwidth]{Figura8.jpg} \hspace{0.05in} \caption{Schematic view of the paths of the entangled photons and of the reference laser beams.} \label{fig:8} \end{SCfigure} Since the paths are sufficiently close, we expect the optical path variations due to the above effects to be similar for the entangled photons and for the reference beams. If this assumption is correct, one can measure the path variations of the reference beams to obtain a feedback signal to translate polariser $P_{B}$. Of course, the validity of this assumption has first to be checked. If the assumption were not verified, more complex feedback schemes would be needed, for example those used in \cite{Peng_PhysRevLettl_2005}, where the reference beams follow the same paths as the entangled photons. Finally, due to the long optical paths, special care has to be devoted to controlling the laser beam pointing and possible drifts of the entangled beam trajectories. \section{\label{sec:Conclusions}Conclusions} In this paper, we propose a long-distance \emph{EPR} experiment exploiting the long paths that characterize the \emph{EGO} structures to test the superluminal models of \emph{QM}. Such an experiment should increase by about two orders of magnitude the current lower bounds for the tachyon velocities. 
This goal can be reached using special compensation methods to obtain high-intensity sources of entangled photons and interferometric methods to minimize the uncertainty on the optical paths. An important feature of this experiment is that the measuring points \emph{A} and \emph{B} are exactly aligned along the West-East axis. This alignment ensures that the experimental apparatus is sensitive to any possible orientation of the velocity vector of the preferred frame of tachyons. Therefore, if the quantum-mechanical correlations between entangled photons were entirely or partially due to superluminal communications and if the tachyon velocity in the \emph{PF} were lower than the $\beta_{t,min}$ value shown in curve \emph{III} in Fig.\ref{fig:4}, the measured correlations should exhibit appreciable deviations from the predictions of \emph{QM}. In the latter case, the actual velocity of the tachyons and the actual velocity of the \emph{PF} could also be obtained from further experimental measurements. If no deviation from the predictions of \emph{QM} were found, our experiment would provide a lower bound for the tachyon velocities two orders of magnitude higher than the current ones. In our previous laboratory experiment \cite{Cocciaro_PLA_2011} we tested only the simplest model of superluminal quantum communications, where all correlations between entangled photons are due to superluminal communications alone and no correlation exists if there is no communication. In such a special case, the model can be tested by orienting the polarizing axes of polarisers $P_{A}$ and $P_{B}$ both at a $\nicefrac{\pi}{4}$ angle with respect to the horizontal direction {[}\emph{H} in Eq.\eqref{eq:1}{]} and measuring the coincidences between photons passing through both polarisers. However, according to Eberhard \cite{Eberhard_1989}, this is only one of the possible superluminal models; more complex models are also possible. 
In particular, some correlation between entangled particles could already be present at the beginning (hidden variables), and only a part of the quantum correlations could be due to superluminal communications. In that case, some correlations between entangled particles remain even if the particles do not have sufficient time to communicate. According to the Bell theorem, correlations due to hidden variables alone cannot reproduce the \emph{QM} correlations entirely and, in particular, they have to satisfy the Bell inequalities. Therefore, a general test of any kind of possible superluminal model can only be made by measuring a Bell-like inequality. The Bell inequalities, and the other equivalent inequalities proposed in the literature, require measurements of coincidences between entangled photons passing through polarisers $P_{A}$ and $P_{B}$ for at least four different orientations of their axes. In our experiment, one-channel polarisers (polarizing filters) are used and special B.C.H.S.H. inequalities must be considered \cite{Aspect_2002}. In particular, for any hidden variables model, it has been shown that the quantity: \begin{equation} M=\frac{N(a,b)-N(a,b')+N(a',b)+N(a',b')-N(a',\infty)-N(\infty,b)}{N(\infty,\infty)}\label{eq:7} \end{equation} must satisfy the inequality \begin{equation} -1\leq M\leq0.\label{eq:8} \end{equation} Here \emph{N} represents the number of measured coincidences between photons passing through polarisers $P_{A}$ and $P_{B}$ for given orientations of the polarisers: \emph{a} and $a'$ are two different orientations of polariser $P_{A}$, \emph{b} and $b'$ are two different orientations of polariser $P_{B}$, and the symbol $\infty$ corresponds to a removed polariser. 
The maximum deviation of the \emph{QM} correlations from condition \eqref{eq:8} occurs if the polarization directions \emph{a}, $a'$, \emph{b} and $b'$ are represented by the angles $\theta_{a}=0\text{\textdegree},\theta_{a'}=45\text{\textdegree},\theta_{b}=22.5\text{\textdegree}$ and $\theta_{b'}=67.5\text{\textdegree}$ with respect to the vertical axis. With this choice, a positive value of \emph{M} is predicted by \emph{QM}, in disagreement with inequality \eqref{eq:8}. From Eq.\eqref{eq:7} we see that seven different coincidence measurements have to be performed to test inequality \eqref{eq:8}. The measurements will be performed as follows: polarisers $P_{A}$ and $P_{B}$ will be oriented along the first combination (\emph{a},\emph{b}) occurring in Eq.\eqref{eq:7} and coincidences will be detected during an entire sidereal day; then the polariser orientations will be changed to the second combination (\emph{a},$b'$) and the measurements will be repeated until all seven combinations have been considered. Finally, the value of the quantity \emph{M} at the different sidereal times \emph{t} will be obtained by substituting in Eq.\eqref{eq:7} the coincidences measured at the same sidereal times of different sidereal days. In this way, our experiment will provide a complete test of any possible model of \emph{QM} superluminal communications.
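As an illustration, the value of \emph{M} predicted by \emph{QM} for these angles can be computed from the standard coincidence probabilities for one-channel polarisers and a maximally entangled photon pair, $N(\alpha,\beta)/N(\infty,\infty)=\frac{1}{4}\left[1+\cos2(\theta_{\alpha}-\theta_{\beta})\right]$ and $N(\alpha,\infty)/N(\infty,\infty)=\frac{1}{2}$. The following is a numerical sketch assuming an ideal source and lossless polarisers:

```python
import math

def qm_coincidence(theta_1, theta_2):
    """QM coincidence probability (per emitted pair) for one-channel
    polarisers at angles theta_1, theta_2 and a maximally entangled pair."""
    return 0.25 * (1.0 + math.cos(2.0 * (theta_1 - theta_2)))

# Angles (radians) giving the maximal QM violation of -1 <= M <= 0
a, ap = math.radians(0.0), math.radians(45.0)
b, bp = math.radians(22.5), math.radians(67.5)

single = 0.5  # N(a', inf)/N(inf, inf): probability with one polariser removed
M = (qm_coincidence(a, b) - qm_coincidence(a, bp)
     + qm_coincidence(ap, b) + qm_coincidence(ap, bp)
     - single - single)

print(M)  # (sqrt(2) - 1)/2 ~ 0.2071 > 0, violating the B.C.H.S.H. bound
```

The result, $M=(\sqrt{2}-1)/2\approx0.207>0$, is the maximal violation of inequality \eqref{eq:8} predicted by \emph{QM}.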
\section{Introduction} The $T\bar{T}$ deformation, introduced in \cite{Smirnov:2016lqw},\cite{Cavaglia:2016oda}, is a solvable irrelevant operator deformation of quantum field theories in two dimensions. The deforming operator is defined by the coincidence limit of the following bi-local operator formed from the energy momentum tensor: \begin{equation} T\bar{T}(x)\equiv \lim_{y\rightarrow x}\epsilon^{\alpha\beta} \epsilon^{\mu\nu} T_{\mu\alpha}(x)T_{\nu\beta}(y). \end{equation} It was shown in \cite{Zamolodchikov:2004ce} that this coincidence limit defines a local operator up to total derivatives. Upon deforming a quantum field theory by such an operator, the result is a one-parameter family of quantum field theories whose partition functions $Z$ solve the following flow equation: \begin{equation} \partial_{\mu}Z=\int \textrm{d}^{2}x \sqrt{g} \langle T\bar{T}(x)\rangle Z,\label{int_flow_eqn} \end{equation} provided the initial condition \begin{equation} Z|_{\mu=0}=Z_{0}, \end{equation} where $Z_{0}$ is the partition function of the undeformed theory. With this definition of the deformation, we can in principle discuss the $T\bar{T}$ flow of theories living on general curved backgrounds, for which the expectation value of the deforming operator is given by the following expression: \begin{equation} \langle T\bar{T}(x)\rangle= \lim_{x\rightarrow y} \frac{1}{Z[g]}\frac{\epsilon^{\alpha\beta}\epsilon^{\mu\nu}}{\sqrt{g}(x)\sqrt{g}(y)}\frac{\delta^{2}Z[g]}{\delta g^{\mu\alpha}(x) \delta g^{\nu\beta}(y)}.\label{operator_defn} \end{equation} In practice, it isn't guaranteed that the quantity above is computable, so one must seek specific contexts wherein it is. In particular, when the state of interest lacks translation invariance, one cannot exploit the factorization argument of \cite{Zamolodchikov:2004ce} to establish the unambiguous definition of this operator. 
Curvature-dependent terms in the OPE could make it ambiguous which local operator is obtained from the coincidence limit of the bilocal operator above. Many of these difficulties can be evaded in the classical limit of the theory under consideration, and this was the method used in \cite{Bonelli:2018kik} to obtain the deformation of various classical Lagrangians. When the undeformed theory is a conformal field theory (CFT), the $T\bar{T}$ flow coincides with a renormalization group flow. Given that the operator of interest is irrelevant, it triggers a flow from the CFT in the infrared (IR) to some other theory in the ultraviolet (UV). The RG flow essentially runs backwards from that deformed theory to the CFT along the UV critical surface. In \cite{McGough:2016lol}, it was argued that the $T\bar{T}$ deformation of a holographic conformal field theory is the field theory dual to gravity in AdS$_{3}$ with a finite radial cutoff surface. This proposal has undergone many checks (\cite{Kraus:2018xrn}, \cite{bbc2019}, \cite{Donnelly:2018bef}, and references therein). There have also been higher dimensional generalizations, which were proposed and studied in \cite{Taylor:2018xcy},\cite{Hartman:2018tkw}, and \cite{Shyam:2018sro}. Additionally, the authors of \cite{Gross:2019ach} have studied the generalization of this deformation in quantum mechanics. The supersymmetric generalization of such an operator has been studied in \cite{Baggio:2018rpv}, \cite{Chang2019}, \cite{Chang:2019kiu}, \cite{Coleman:2019dvf}, \cite{Cribiori:2019xzp} and \cite{Jiang:2019hux}. From another point of view, one can define the theory on a cutoff surface of AdS$_{3}$ as one whose partition function solves the bulk radial Wheeler-de Witt (WdW) equation. Based on this intuition, an integral transformation was found in \cite{Freidel:2008sh} that relates the CFT partition function with a solution to the radial WdW equation. 
This relation between the $T\bar{T}$ deformation and solutions to the WdW equation was also speculated in \cite{Caputa:2019pam}. The prescription is to integrate over the dyads of the space on which the undeformed theory lives with a certain Gaussian kernel in order to obtain this wavefunction. Explicitly it reads: \begin{equation}\label{deformation} Z[f] = \int \mathcal{D}e \, \exp \left[ - \frac{1}{\mu} \int \textrm{d}^{2}x \epsilon^{\alpha \beta} \epsilon_{ab} (e - f)^a_\alpha (e - f)^b_\beta \right] \, Z_0[e] \,. \end{equation} Here, $f^{a}_{\mu}$ is the dyad, or frame field, associated to the geometry on which the $T\bar{T}$ deformed theory lives, while $e^{a}_{\mu}$ is the dyad on the geometry the undeformed theory inhabits. Note that these are functional integrals over dyads on spaces with fixed topology. Defining the partition function of the deformed theory in this way ensures that the equation \eqref{int_flow_eqn} with the $T\bar{T}$ operator defined as \eqref{operator_defn} is satisfied as a functional identity. In other words, the equation \begin{equation} \partial_{\mu}Z = \int_{x}\textrm{d}^{2}x \epsilon^{ab}\epsilon_{\mu\nu} :\frac{\delta^{2}Z}{\delta f^{a}_{\mu}(x)\delta f^{b}_{\nu}(x)}:,\label{fo_fe} \end{equation} which is a rewriting of the flow equation in first order variables, follows directly as an identity from \eqref{deformation}. \footnote{Note that in the first order variables, we define a mixed index stress tensor \begin{equation} T^{\mu}_{a}= T^{\mu\nu} f_{\nu a}, \end{equation} in terms of which \begin{equation} T\bar{T}(x) = \frac{1}{2}\epsilon_{\mu\nu}\epsilon^{ab} T^{\mu}_{a}T^{\nu}_{b}(x) = \textrm{det}T(x) \end{equation}} More specifically, the coincidence limit in the definition \eqref{operator_defn} can be taken provided a normal ordering prescription is followed when taking the functional derivatives. 
This is what the semicolons in the expression \eqref{fo_fe} denote: \begin{equation} \epsilon^{ab}\epsilon_{\mu\nu} :\frac{\delta^{2}}{\delta f^{a}_{\mu}(x)\delta f^{b}_{\nu}(x)}: = \epsilon^{ab}\epsilon_{\mu\nu} \frac{\delta^{2}}{\delta f^{a}_{\mu}(x)\delta f^{b}_{\nu}(x)}+ \frac{2}{\mu}\delta^{(2)}(0). \end{equation} For details of this calculation, we refer the reader to section 2.2 of \cite{mazenc2019t}. All this, in turn, means that the problem of having to carefully define the operator \eqref{operator_defn} is traded for that of having to define the functional integral \eqref{deformation} over dyads. This means that the above functional integral definition holds even when the undeformed theory doesn't possess conformal symmetry. Furthermore, we will see that ambiguities associated with operator ordering find an analogous manifestation in the choice of the normalization factor of the frame-field path integral measure in \eqref{deformation}. The connection between this transformation and the $T\bar{T}$ deformation was proposed in \cite{McGough:2016lol}. This perspective was further elucidated in \cite{tolley2019t}, where it was noted that the action appearing in the exponent of \eqref{deformation} is that of ghost-free massive gravity in two dimensions. The aforementioned fact that \eqref{deformation} can be used to define the deformed partition function of theories without conformal symmetry is a reflection of the fact that massive gravity can be coupled to quantum field theories without conformal symmetry. In particular, we will apply it to two-dimensional Yang--Mills with gauge group $U(N)$, whose partition function has a particularly simple dependence on the background geometry. In doing so, we shall obtain a deformed partition function on an arbitrary curved background, where the Hamiltonian we infer for the deformed theory is identical to the one obtained from the classical analysis of \cite{Conti2018}. 
On the other hand, we will see what subtleties accompany the use of this method of defining $T\bar{T}$ deformed theories. We should mention that this is indeed similar to the prescription of \cite{Dubovsky:2017cnj},\cite{Dubovsky:2018bmo}, where it was argued that the $T\bar{T}$ deformation arises from coupling the undeformed theory to a particular dilaton-gravity theory in two dimensions known as Jackiw--Teitelboim gravity. This perspective allowed the authors of these articles to derive the CDD phases, which encode the deformation of the S-Matrix, as well as the torus partition function. The precise connection between these approaches is covered in \cite{mazenc2019t}. \section*{Comparison to earlier work} The deformed Lagrangian and Hamiltonian densities for two-dimensional Yang--Mills theory were first presented in \cite{Conti2018}. These authors argued that the effect of the $T\bar{T}$ deformation could then be incorporated via a simple redefinition of the quadratic Casimir eigenvalues. They also constructed a flow equation for the deformed partition function written in terms of area derivatives, which matches that of \cite{Cardy2018} for the case of the torus. \cite{Santilli2019} expanded upon this work by studying the deformed partition function for the case of YM$_2$ on the two-sphere. By performing the large $N$ analysis, they determined that the Douglas--Kazakov phase transition induced by unstable instantons persists in the deformed theory for a range of deformation parameter values. Here, in section \ref{ym}, we present another means of deriving the deformed partition function which matches the result of \cite{Conti2018} and \cite{Santilli2019}. Thanks to the Gaussian nature of the integral in Eq.\,(\ref{deformation}), it can be performed exactly to obtain a result valid on a general background, provided an appropriate measure is chosen for the path integration. 
This allows us to generalise the above results to arbitrary, possibly curved manifolds. Another advantage of our approach is that it provides a way to obtain the deformed Hamiltonian, previously obtained via classical analysis, directly in the quantum theory. We then derive a flow equation satisfied by the partition function on a general background, which matches that obtained in \cite{Cardy2018} when specialised to the case of the torus. It turns out that the integral kernel definition offers an interesting perspective on the presence of contact terms appearing in the generalised flow equation, which we comment on. Specifically, we make concrete the connection between operator ordering ambiguities in the flow equation and the choice of normalisation in the integral transform method. To be clear, the assumption made here is that the deformation of the Hamiltonian that the classical analysis of \cite{Conti2018} provides indeed retains its form even in the quantum theory on a general background. At this point, there aren't any complementary methods to check whether this is the case other than on the torus or the cylinder. Then, in section \ref{e&s}, we use our result to explore further YM$_2$ phenomena and determine how they are altered in the $T\bar{T}$ deformed theory. In particular, we look at the effect of the deformation on the string theoretic interpretation of YM$_2$, an analysis of which is currently lacking in the literature. As argued by \cite{Gross:1993}, YM$_2$ admits an effective string description in the large $N$ regime. We determine to what extent this description remains valid in the deformed theory. Finally, we use our main result of the deformed partition function on general background to compute entanglement entropy for an arbitrary state. 
We then specialise to the case of the Hartle--Hawking vacuum state in order to compare with entanglement entropy calculations for the undeformed theory in the existing literature \cite{Donnelly:2014}, \cite{GROMOV201460}. \section{2D Yang Mills}\label{ym} In order to demonstrate the applicability of the integral kernel definition of $T\bar{T}$ to general quantum field theories, we consider as a test case 2-dimensional Yang--Mills theory (YM$_2$). There are a number of reasons as to why YM$_2$ is ideal for these purposes. On one hand, it's tractable. The theory is semi-topological, depending only on the total area of the background manifold and the Euler characteristic characterizing its topology. This allows it to be solved exactly by topological field theory methods. Another consequence is that it has no local degrees of freedom. This renders it UV finite, so divergences are not an issue. Nevertheless, it has a number of interesting non-trivial features. For one, YM$_2$ admits a string theory interpretation in the large $N$ limit \cite{Gross:1993}. When the background manifold is a sphere, it also exhibits a (third order) phase transition induced by unstable instantons \cite{Douglas:1993}. Most importantly for our purposes, the partition function of YM$_2$ in representation basis has a very simple exponential dependence on the area. When written in the Vielbein formalism, it is quadratic in these frame fields. As such, the integral in Eq.\,(\ref{deformation}) is Gaussian and may be evaluated exactly, provided the appropriate measure for the path integration over zweibeins is chosen. \subsection{$T\bar{T}$ deforming $Z_{YM}$} For Yang--Mills theory living on a 2-dimensional manifold $\mathcal{M}$ of Euler characteristic $\chi$, the partition function $Z_{YM}$ admits the following group theory expansion \cite{Cordes:1994}: \begin{equation}\label{Z0} Z_{YM} = \sum_\mathcal{R} (\text{dim} \, \mathcal{R})^\chi \, e^{- \frac{\lambda C_2}{2 N} A}\,. 
\end{equation} Here, the sum runs over all equivalence classes of irreducible representations $\mathcal{R}$ of the gauge group, which we will take to be $U(N)$. $C_2(\mathcal{R})$ is the quadratic Casimir eigenvalue associated with $\mathcal{R}$, $\lambda$ is the dimensionful 't Hooft coupling $\lambda = g_{YM}^2 N$, and $A$ is the total area of $\mathcal{M}$. To make contact with the form of the kernel, which is expressed in terms of zweibeins, we will write the area as an integral over the 2D volume 2-form, $A = \int \textrm{d}^{2}x f$, where $f \equiv \det f^a_\alpha(x)$. More explicitly, we have $\det f^a_\alpha(x) = f_0^+ f_1^- - f_1^+ f_0^- \equiv f^+ \wedge f^-$, where $f^\pm$ are the usual light-cone combinations $f^\pm = (f^0 \pm f^1)/\sqrt{2}$. Because it is quadratic in the frame fields, the YM$_2$ partition function presents a scenario for which the Gaussian integral involved in the kernel definition can be computed exactly. Of course, this relies on choosing a measure on the space of zweibeins that respects both the linearity property, i.e. \begin{equation} \mathcal{D}e=\mathcal{D}(e+f) \,, \end{equation} and diffeomorphism invariance. As explained in Appendix A of \cite{tolley2019t}, such a measure can indeed be found, and it is defined with respect to the following supermetric on the space of zweibeins: \begin{equation} \delta s^{2}= -\int \textrm{d}^{2}x\, \epsilon^{\mu\nu} \epsilon_{ab} \delta e^{a}_{\mu}(x) \delta e^{b}_{\nu}(x)= - 2 \int \textrm{d}^{2}x\, \det[\delta e^{a}_{\mu}(x)] \,. 
\end{equation} Having chosen this measure, from eq.\,(\ref{deformation}), we can then obtain the $T\bar{T}$ deformed partition function $Z$: \begin{equation} \begin{split} Z[f] & = \int \mathcal{D}e \, K[e,f] \, Z_0[e]\\ & = \sum_\mathcal{R} (\text{dim} \, \mathcal{R})^\chi \int \mathcal{D}e \, e^{- \frac{1}{\mu} \int\textrm{d}^{2}x\, (e-f)^+ \wedge (e-f)^-} e^{- \frac{\lambda C_2}{2 N} \int \textrm{d}^{2}x\, e^+ \wedge e^-}\\ & = \sum_\mathcal{R} (\text{dim} \, \mathcal{R})^\chi \mathcal{N}_{\mathcal{R}} \sqrt{\frac{\mu \pi}{1+ \mu \lambda C_2/2N}} \, e^{- \frac{\lambda}{2N} \left( \frac{C_2}{1+ \mu \lambda C_2/2N} \right) \int \textrm{d}^{2}x\, f^+ \wedge f^-} \,, \end{split} \end{equation} where $\mathcal{N}_{\mathcal{R}}$ is a normalisation constant coming from our agnosticism regarding the functional measure. Note that since we are doing one Gaussian integral per representation, this constant could possibly depend on the quadratic Casimir $C_{2}(\mathcal{R})$, in addition to depending on $\mu$. We will see that it is important to be able to fix the normalization separately for each Gaussian integral. In agreement with the claims of \cite{Conti2018}, we see that the quadratic Casimir eigenvalues are indeed dressed by the deformation as: \begin{equation} C_2 \rightarrow \frac{C_2}{1 + \mu \lambda C_2 / 2 N} \,. \end{equation} In order to normalise $Z[f]$, we look at the topological limit. In a topological theory, the energy-momentum tensor should vanish, such that the action of the $T\bar{T}$ deformation is trivial. We will thus fix $\mathcal{N}_\mathcal{R}$ by requiring that the integral transform of the topological theory yields the undeformed topological theory, $Z_0[A = 0] \overset{T\bar{T}}{\longrightarrow} Z = Z_0[A = 0]$. This instructs us to set: \begin{equation} \mathcal{N}_{\mathcal{R}} = \sqrt{\frac{1 + \mu \lambda C_2/2N}{\mu \pi}} \,. 
\end{equation} Then the $T\bar{T}$ deformed partition function for YM$_2$ living on an arbitrary manifold is: \begin{equation}\label{mainresult} Z = \sum_\mathcal{R} (\text{dim} \, \mathcal{R})^\chi \, e^{- \frac{\lambda A}{2N} \left(\frac{C_2}{1 + \mu \lambda C_2 / 2 N} \right)} \,. \end{equation} This result is in agreement with the existing literature. The classical analysis of \cite{Conti2018} found that the Hamiltonian density of $T\bar{T}$ deformed YM$_2$ is given by: \begin{equation} \mathcal{H} = \frac{\mathcal{H}_0}{1 + \mu \mathcal{H}_0} = \frac{\lambda C_2/2N}{1 + \mu \lambda C_2/2N} \,, \end{equation} from which they argued that the effect of the $T\bar{T}$ deformation should be incorporated via a redefinition of the quadratic Casimir eigenvalues: \begin{equation} C_2(\mathcal{R}) \rightarrow \frac{C_2(\mathcal{R})}{1 + \mu \lambda C_2 (\mathcal{R})/2N} \,. \end{equation} Replacing the Casimir eigenvalues in Eq.\,(\ref{Z0}) with the dressed versions indeed yields Eq.\,(\ref{mainresult}). Note that the $T\bar{T}$ deformation of YM$_{2}$ is invariant under area preserving diffeomorphisms, just like the undeformed theory. This fact will pay dividends in our computation of the entanglement entropy in section \ref{ees}. As another consistency check, the $T\bar{T}$ deformed partition function Eq.\,(\ref{mainresult}) can be shown to satisfy the flow equation: \begin{equation} \partial_\mu Z = A \partial_A^2 Z \,, \end{equation} first derived by Cardy in \cite{Cardy2018} for the special case of flat background geometry. To see why our deformed partition function on an arbitrary manifold satisfies the flat-space Cardy result, we now turn to a derivation of the flow equation for generalised background geometry. We will find that the presence of contact terms in this generalised flow equation, reflecting operator ordering ambiguities, has an intimate connection with the choice of normalization in the integral transform method. 
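Both the dressing of the Casimir and the flow equation can be checked numerically. The sketch below (toy parameter values, not a computation in the full functional integral) first verifies the one-dimensional analogue of a single Gaussian mode of the transform, $\int \textrm{d}e\, e^{-(e-f)^{2}/\mu - c e^{2}} = \sqrt{\pi\mu/(1+\mu c)}\, e^{-c f^{2}/(1+\mu c)}$ with $c = \lambda C_{2}/2N$, which already exhibits the dressed exponent, and then checks by finite differences that a single representation term $e^{-Ac/(1+\mu c)}$ of Eq.\,(\ref{mainresult}) satisfies $\partial_{\mu}Z = A\,\partial_{A}^{2}Z$:

```python
import math

def kernel_integral(mu, c, f, n=200_000, L=20.0):
    """Midpoint-rule integral of exp(-(e-f)^2/mu - c*e^2) over e:
    a one-dimensional analogue of one Gaussian mode of the transform."""
    de = 2 * L / n
    total = 0.0
    for i in range(n):
        e = -L + (i + 0.5) * de
        total += math.exp(-(e - f)**2 / mu - c * e**2) * de
    return total

mu, c, f = 0.3, 0.7, 1.2   # toy values for mu and c = lam*C2/(2N)
numeric = kernel_integral(mu, c, f)
closed = math.sqrt(math.pi * mu / (1 + mu * c)) * math.exp(-c * f**2 / (1 + mu * c))
print(abs(numeric - closed))        # agrees to quadrature accuracy

# Flow equation check for a single representation term of Eq. (mainresult)
def Z(A, m):
    return math.exp(-A * c / (1 + m * c))

h = 1e-5
dZ_dmu = (Z(1.3, mu + h) - Z(1.3, mu - h)) / (2 * h)
h2 = 1e-4
d2Z_dA2 = (Z(1.3 + h2, mu) - 2 * Z(1.3, mu) + Z(1.3 - h2, mu)) / h2**2
print(abs(dZ_dmu - 1.3 * d2Z_dA2))  # vanishes up to finite-difference error
```

The same dressed combination $c/(1+\mu c)$ appears in both checks, which is the mode-by-mode origin of the Casimir redefinition above.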
\subsection{Flow Equation}\label{flowequation} As mentioned before, the method used in most studies of the $T\bar{T}$ deformation of quantum field theories is to obtain quantities such as the partition function from solving a flow equation. In this section, we will see how this is in fact equivalent to the method described above by deriving the flow equation that the deformed partition function defined in Eq.\,(\ref{deformation}) satisfies. First, a note of caution: the flow equation we describe here should not be seen as a renormalization group flow equation. Instead, it is an equation describing the response of the partition function to tuning the coupling of one irrelevant operator in the theory. The full renormalization group flow equation, or Callan--Symanzik equation, would encode the dependence of the theory on all scales that are present. \begin{comment} Starting from: \begin{equation*} Z[f]=\int \mathcal{D}e \, \exp \left[ - \frac{1}{\mu} \int \epsilon^{\alpha \beta} \epsilon_{ab} (e - f)^a_\alpha (e - f)^b_\beta \right] \, Z_0[e] \,, \end{equation*} we ask what effect taking the derivative with respect to $\mu$ has, and find: \begin{equation} \partial_{\mu}Z[f]=\int \mathcal{D}e \, \left(\frac{1}{\mu^2}\int \epsilon^{\alpha\beta}\epsilon_{ab}(e-f)^{a}_{\alpha}(e-f)^{b}_{\beta}\right)\exp \left[ - \frac{1}{\mu} \int \epsilon^{\alpha \beta} \epsilon_{ab} (e - f)^a_\alpha (e - f)^b_\beta \right] \, Z_0[e] \,. 
\end{equation} The right hand side can equivalently be written as: \begin{equation*} \int \mathcal{D}e \, \int_{x}\left(\frac{1}{\mu^{2}}\epsilon^{\alpha\beta}\epsilon_{ab}(e-f)^{\alpha}_{a}(e-f)^{\beta}_{b}\right)\exp \left[ - \frac{1}{\mu} \int \epsilon^{\alpha \beta} \epsilon_{ab} (e - f)^a_\alpha (e - f)^b_\beta \right] \, Z_0[e] \,= \end{equation*} \begin{equation*} \frac{1}{4}\lim_{\varepsilon \rightarrow 0}\bigg(\int_{x}\textrm{det}f(x)g_{\varepsilon}(x,y)\int_{y}\textrm{det}f(y)\left(\frac{\epsilon^{ab}\epsilon_{\mu\nu}}{\textrm{det}f(x)}\frac{\delta}{\delta f^{a}_{\mu}(x)}\left(\frac{1}{\textrm{det}f(y)}\frac{\delta Z[f]}{\delta f^{b}_{\nu}(y)}\right)\right) \end{equation*} \begin{equation} +\int_{x}\frac{g_{\varepsilon}(x,x)}{\textrm{det}f(x)}\left(f^{b}_{\nu}(x)\frac{\delta Z[f]}{\delta f^{b}_{\nu}(x)}\right)+\frac{2}{\mu}\int_{x}g_{\varepsilon}(x,x)Z[f]\bigg).\label{fl} \end{equation} We introduced a family of functions $g_{\varepsilon}(x,y)$, which limit to the Dirac delta function as $\varepsilon\rightarrow 0$ in order to regularize the $\delta(0)$ type divergences. In order for the regularization to work, we need $g_{\epsilon}(x,x)$ to be finite for $\varepsilon\neq0$. The heat kernel on the space of interest is one example of such a function. The fact that there are a number of different choices for $g_{\varepsilon}$ leads to an ambiguity in the coefficients of the first derivative and contact term in Eq.\,(\ref{fl}). Another way to put it is that there is an ordering ambiguity associated with taking second functional derivatives at coincident points. \end{comment} In our case of interest, the undeformed partition function depends only on the area, $Z_0[A(e)] = Z_0[A]$. By applying the integral transform, we know now that the deformed partition function also depends only on the area, $Z[A(f)] = Z[A]$. We now take our final result Eq.\,(\ref{mainresult}), and ask what flow equation it satisfies. 
We find this to be: \begin{equation} \partial_{\mu}Z=A\partial^{2}_{A}Z. \end{equation} Note that the form of this equation is sensitive to our choice of the normalization constants $\mathcal{N}_\mathcal{R}$. It is also sensitive to the fact that we choose them separately for each integral. For instance, if we had chosen the same normalization constant for every Gaussian integral, say $\mathcal{N}=1/\sqrt{\pi \mu}$, then we would have obtained the partition function: \begin{equation} Z'=\sum_{\mathcal{R}} \frac{(\textrm{dim} \mathcal{R})^{\chi}}{\sqrt{1+\frac{\mu \lambda C_{2}}{2N}}} e^{-\frac{\lambda A}{2N}\left(\frac{C_{2}}{1+\mu \lambda C_{2}/2N}\right)}\,, \end{equation} which satisfies the flow equation: \begin{equation} \partial_{\mu}Z'=A\partial^{2}_{A}Z'+\frac{1}{2}\partial_{A}Z'. \end{equation} The extra first derivative term on the right-hand side comes from an alternative ordering prescription for the second area derivative term. In particular, \begin{equation} A\partial^{2}_{A}Z'+\frac{1}{2}\partial_{A}Z'=\sqrt{A}\partial_{A}(\sqrt{A}\partial_{A}Z'). \end{equation} This is entirely analogous to the problem of finding a quantization for a phase space function of the form $xp^{2}$ in some mechanical system. In all, we see that the ordering ambiguity in the flow equation language is intimately tied to the choice of normalization when using the integral transformation \eqref{deformation} to define the deformed partition function. \section{Entanglement Entropy and the String Expansion}\label{e&s} Our result for the deformed partition function on a general background gives us a starting point to explore a number of YM$_2$-related phenomena under the $T\bar{T}$ deformation. We begin by computing entanglement entropy for the theory in a general state. Then, specialising to the case of the theory on a sphere, we make contact with existing calculations of entanglement entropy for the Hartle--Hawking state of the undeformed theory in the literature. 
YM$_2$ also admits a string theoretic description in the large $N$ limit, which is particularly well understood for the case that the background manifold is a sphere. We can then ask whether some semblance of this picture survives in the deformed theory. The sphere partition function will serve as a starting point for this analysis. \subsection{Entanglement Entropy}\label{ees} By making use of the replica trick, we can obtain the entanglement entropy in deformed 2-dimensional Yang--Mills theory directly from Eq.\,(\ref{mainresult}). Further, due to the simple dependence on the area and Euler characteristic of the background manifold, we can actually obtain a general expression valid for any state of the theory which can be prepared via Euclidean path integral \cite{willnicosyd}. Only at the end will we specialise to the Hartle--Hawking vacuum state, for which some results are already known in the case of undeformed YM$_2$. To set the stage for the entanglement entropy calculation, we will take our Yang-Mills theory to live on the 2-dimensional background manifold $\mathcal{M}$, and imagine preparing the state $\ket{\Psi}$ by performing a Euclidean path integral over $\mathcal{M}$. Now, we spatially partition the system into a region of interest $\mathcal{A}$ and its complement $\bar{\mathcal{A}}$. Curiously, because the $T\bar{T}$ deformation of YM$_2$ is still invariant under area preserving diffeomorphisms, it does not actually matter here how we choose to partition our system. To $\mathcal{A}$, we can associate a reduced density matrix $\rho_\mathcal{A} = \Tr_{\bar{\mathcal{A}}} \dyad{\Psi}$ obtained by tracing over the degrees of freedom in $\bar{\mathcal{A}}$. Then we can quantify the amount of entanglement between $\mathcal{A}$ and the rest of the system via an application of the von Neumann entropy formula. 
This gives the entanglement entropy: \begin{equation} S_\mathcal{A} = - \Tr_\mathcal{A} \rho_\mathcal{A} \log \rho_{\mathcal{A}} \,.\label{entropy} \end{equation} In practice, computing $S_\mathcal{A}$ directly from Eq.\,(\ref{entropy}) is impossible in all but the most trivial cases, since it involves taking the logarithm of an operator in a potentially infinite system. Instead, we will arrive at $S_\mathcal{A}$ in a slightly more circuitous manner. The key observation is that Eq.\,(\ref{entropy}) can be written equivalently as: \begin{equation}\label{entropy2} S_\mathcal{A} = - \partial_n \left( \Tr \rho_\mathcal{A}^n \right) \, \rvert_{n = 1} \,. \end{equation} At first, this might not seem to be a simplification; after all, now we're tasked with computing moments of the reduced density matrix. Luckily, though, the replica trick gives us a way to do just that (see \cite{HEEbook} for a review). More precisely, it gives us a way to relate the trace of $\rho_\mathcal{A}^n$ to the partition function on a replicated manifold. The replica trick instructs us to first take our spacetime of interest, the background manifold $\mathcal{M}$, and to replicate it such that there are $n$ total copies. For each copy, we partition into subsystems $\mathcal{A}$ and $\mathcal{\bar{A}}$ by making a cut along $\mathcal{A}$. We will denote the upper boundary of the cut on the $i^{th}$ copy as $\mathcal{A}^+_i$ and the lower boundary as $\mathcal{A}^-_i$. The replicated manifold $\mathcal{M}_n$ can then be constructed by ``glueing'' the $n$ sheets together cyclically along the cuts. That is, we identify $\mathcal{A}^+_{i-1} \leftrightarrow \mathcal{A}^-_{i}$, $\mathcal{A}^+_{i} \leftrightarrow \mathcal{A}^-_{i+1}$, \textit{etc}. The resultant manifold $\mathcal{M}_n$ is the $n$-fold branched cover over $\mathcal{M}$. On each copy of the background $\mathcal{M}$, we compute the reduced density matrix $\rho_\mathcal{A}$ via (Euclidean) path integral. 
The glueing operation by which we construct $\mathcal{M}_n$ is then morally equivalent to taking the trace over the $n$ copies of $\rho_\mathcal{A}$. This can equivalently be seen as computing the partition function $Z_n$ on the $n$-fold cover $\mathcal{M}_n$ (up to normalisation). The formal result is that: \begin{equation} \Tr \rho_\mathcal{A}^n = \frac{Z_n}{(Z_1)^n} \,, \end{equation} where $Z_1$ is the partition function of the original theory on $\mathcal{M}$. The one missing ingredient is the deformed partition function on the $n$-fold cover, $Z_n$. This is easy enough to obtain since Eq.\,(\ref{mainresult}) depends only on the total area and Euler characteristic. On $\mathcal{M}_n$, these quantities are given by \cite{Donnelly:2014}: \begin{subequations} \begin{equation} A_n = A n \,, \end{equation} \begin{equation} \chi_n = 2 n + 2(1-n) m \,, \end{equation} \end{subequations} where $A$ is the total area of a single copy of the replicated manifold and $m$ is the number of partitions into which we've subdivided our system. Here, $m = 1$, so we simply have $\chi_n = 2$. Finally, before applying Eq.\,(\ref{entropy2}), let us write $Z_n$ in a slightly more suggestive form by introducing the (normalised) probability distribution for the representations: \begin{equation}\label{probability} P(\mathcal{R}) = \frac{1}{Z_1} (\text{dim} \mathcal{R})^\chi e^{- \frac{\lambda C_2/2N}{1 + \mu \lambda C_2/2N} A} \,. \end{equation} In terms of $P(\mathcal{R})$, we have: \begin{equation} Z_n = \sum_\mathcal{R} (\text{dim} \mathcal{R})^2 e^{- \frac{\lambda C_2/2N}{1 + \mu \lambda C_2/2N} A n} = \sum_\mathcal{R} (\text{dim} \mathcal{R})^{2 - n \chi} (Z_1)^n P(\mathcal{R})^n \,. 
\end{equation} Then applying Eq.\,(\ref{entropy2}) gives: \begin{equation} \begin{split} S_\mathcal{A} & = - \partial_n \left( \frac{Z_n}{(Z_1)^n} \right) \bigg\rvert_{n = 1}\\ & = - \partial_n \left( \sum_\mathcal{R} (\text{dim} \mathcal{R})^{2 - n \chi} P(\mathcal{R})^n \right) \bigg\rvert_{n = 1}\\ & = \sum_\mathcal{R} P(\mathcal{R}) (\text{dim} \mathcal{R})^{2-\chi} \left[ \chi \log (\text{dim} \mathcal{R}) - \log P(\mathcal{R}) \right] \,. \end{split} \end{equation} Again, because YM$_2$ is semi-topological, the actual bipartition into $\mathcal{A}$ and $\bar{\mathcal{A}}$ is immaterial. We can then say that the entanglement entropy of $T\bar{T}$ deformed 2-dimensional Yang--Mills in a completely arbitrary state $\ket{\Psi}$ prepared via Euclidean path integral is: \begin{equation}\label{arbitraryee} S = \sum_\mathcal{R} P(\mathcal{R}) (\text{dim} \mathcal{R})^{2-\chi} \left[ \chi \log (\text{dim} \mathcal{R}) - \log P(\mathcal{R}) \right] \,, \end{equation} with the probability distribution of representations $P(\mathcal{R})$ given by Eq.\,(\ref{probability}). Now to make contact with existing results in the literature, let us restrict to the case that the background manifold is the two-sphere $\mathbb{S}^2$, for which $\chi = 2$. Slicing the sphere in angular time and taking the path integral over this hemisphere geometry gives the Hartle--Hawking vacuum state $\ket{HH}$. The corresponding entanglement entropy for $T\bar{T}$ deformed YM$_2$ in this state is: \begin{equation}\label{HHee} S_{HH} = \sum_\mathcal{R} \left[ 2 P(\mathcal{R}) \log (\text{dim} \mathcal{R}) - P(\mathcal{R}) \log P(\mathcal{R}) \right] \,, \end{equation} where again, $P(\mathcal{R})$ is given by Eq.\,(\ref{probability}) with $\chi = 2$. 
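The final differentiation step can likewise be checked numerically. The sketch below uses a hypothetical truncated set of irrep dimensions and Boltzmann-like weights (toy values only, standing in for the actual sum over $\mathcal{R}$) and compares the closed form of Eq.\,(\ref{arbitraryee}) against a finite-difference $n$-derivative:

```python
import numpy as np

# Hypothetical truncated irrep data: dimensions and Casimir-type exponents
dims = np.array([1.0, 2.0, 3.0])
weights = np.array([0.1, 0.4, 0.9])   # stand-ins for the deformed exponents
chi = 2                               # Euler characteristic of the sphere
A = 1.0                               # total area (arbitrary units)

# Normalised probability distribution, P(R) ~ (dim R)^chi * exp(-E A)
w = dims ** chi * np.exp(-weights * A)
P = w / w.sum()

# Closed form: S = sum_R P (dim R)^(2-chi) [chi log(dim R) - log P]
S_closed = np.sum(P * dims ** (2 - chi) * (chi * np.log(dims) - np.log(P)))

# Replica form: S = -d/dn [ sum_R (dim R)^(2 - n chi) P^n ] at n = 1
f = lambda n: np.sum(dims ** (2 - n * chi) * P ** n)
eps = 1e-6
S_replica = -(f(1 + eps) - f(1 - eps)) / (2 * eps)

# S_closed and S_replica agree up to the finite-difference error
```

With $\chi = 2$ the prefactor $(\text{dim}\,\mathcal{R})^{2-\chi}$ drops out and the two terms of Eq.\,(\ref{HHee}) are recovered directly.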
Meanwhile, for the undeformed theory, entanglement entropy in the Hartle--Hawking state is given by \cite{Donnelly:2014,GROMOV201460}: \begin{equation}\label{HHee0} (S_0)_{HH} = \sum_\mathcal{R} \left[ 2 P_0(\mathcal{R}) \log (\text{dim} \mathcal{R}) - P_0(\mathcal{R}) \log P_0(\mathcal{R}) \right] \,, \end{equation} where now \begin{equation} P_0(\mathcal{R}) = \frac{1}{(Z_0)_1} (\text{dim} \mathcal{R})^2 e^{- \frac{\lambda C_2}{2N} A} \, \end{equation} is the probability distribution of representations in the undeformed theory. We see that Eqs.\,(\ref{HHee}) and (\ref{HHee0}) take the same structural form. To understand why this is, we first need to understand the origin of the terms in these formulae. In both expressions for $S_{HH}$, the second term, $-\sum_\mathcal{R} P(\mathcal{R}) \log P(\mathcal{R})$, is essentially the classical entropy. The only observables one can measure in YM$_2$ are gauge invariant functions of the (non-Abelian) electric field $E^a(x)$. The Gauss law constraint sets $E^a(x)$ to be constant over the whole circle, so measurements of gauge invariant observables constructed out of $E^a$ made in region $\mathcal{A}$ are always going to be correlated with those made in $\bar{\mathcal{A}}$. This is another way to understand why the manner in which we partitioned the circle into $\mathcal{A}$ and $\bar{\mathcal{A}}$ didn't matter. Since the measurements are always correlated, tracing over $\bar{\mathcal{A}}$ should introduce no additional uncertainty. One would then expect that the only contribution to the entropy would be the statistical entropy coming from the classical uncertainty in the outcome of measurements of electric field observables. There is another source of entropy, though, as evidenced by the first term in Eqs.\,(\ref{HHee}) and (\ref{HHee0}). This term comes from counting ``edge modes'' -- additional degrees of freedom coming from states at the endpoints which transform non-trivially under gauge transformations.
The question of whether such non-gauge invariant degrees of freedom should be included in the definition of entanglement entropy is discussed at length in \cite{Donnelly:2014}, with an argument made in the affirmative. Here, we adopt this stance as well, and so each of the two endpoints of region $\mathcal{A}$ is taken to contribute a factor $\log (\text{dim} \mathcal{R})$, leading to the $2 \sum_\mathcal{R} P(\mathcal{R}) \log (\text{dim} \mathcal{R})$ term in entanglement entropy. Note that aside from these edge mode terms, there are no terms in the entanglement entropy coming from local degrees of freedom. In fact, even in the original undeformed theory, entanglement entropy was already finite. This lack of UV divergences makes sense, given that YM$_2$ has no local propagating degrees of freedom\footnote{The existence of local gauge invariant degrees of freedom is obstructed by YM$_2$'s extended symmetry group of area-preserving diffeomorphisms. The theory can still have degrees of freedom; to see them, though, one needs to look at spacetimes with non-trivial topology and non-local quantities like Wilson loops \cite{Cordes:1994}.}. In this sense, the effect of $T\bar{T}$ on entanglement entropy is somewhat obscured by the triviality of our theory. The deformation has been argued to act as an effective UV cutoff, rendering quantities like entanglement entropy finite. In the context of the $T\bar{T}$ deformation of large $c$ conformal field theories, this was shown to be the case in \cite{Donnelly:2018bef}. This result was further generalized in \cite{Lewkowycz:2019xse} and a similar effect is seen in higher dimensional generalizations \cite{Grieninger:2019zts}. For YM$_2$, which is already UV finite, the effect of the deformation on entanglement entropy is then minimal. Eqs.\,(\ref{HHee}) and (\ref{HHee0}) maintain the same structural form, including terms coming from classical statistical entropy as well as entropy associated with counting edge modes. 
The only difference is that the probability distributions associated with the irreducible representations $\mathcal{R}$ are shifted due to the deformation's dressing of the Casimir eigenvalues. \subsection{Fate of the string picture} One aspect of 2-dimensional Yang--Mills which makes it an exceptionally interesting theory in spite of its lack of local degrees of freedom and semi-topological nature is the fact that it admits a dual string interpretation. As first shown by Gross and Taylor \cite{Gross:1993}, YM$_2$ \textit{is} a string theory in the large $N$ limit. More precisely, expanding the partition function in a power series in $1/N$ results in an expression whose coefficients match order-by-order with those expected from a sum over maps from a 2D covering space $\Sigma$ to a 2D target space $\mathcal{M}$, each map being weighted by the Nambu-Goto action. Provided the identification of $1/N$ with the string coupling $g_s$ and $\lambda$ with the string tension $1/2 \pi \alpha'$, we have the moral equivalence: \begin{equation} \log Z_{YM} [N, \lambda, A] = Z_{string}\left[ g_s = \frac{1}{N}, \alpha' = \frac{1}{\pi \lambda} \right]\,. \end{equation} The intuitive picture is that we have a closed string worldsheet $\Sigma$ wrapping $n$ times around the manifold $\mathcal{M}$. Branch points on $\Sigma$ correspond to interactions where strings can either split apart or join together. In addition to these elementary branch point singularities, manifolds with non-vanishing $\chi$ also admit so-called $\Omega$ and $\Omega^{-1}$ point singularities, the number of which is fixed by the Euler characteristic of $\mathcal{M}$. Only recently have these singularities been given an interpretation\footnote{The authors of \cite{Donnelly:2019} have argued that $\Omega$ and $\Omega^{-1}$ points are related to positive and negative index singularities, respectively, of the modular flow.} in the context of the string picture \cite{Donnelly:2019}. 
In the case of the two-sphere $\mathcal{M} = \mathbb{S}^2$, for which $\chi = 2$ and there are two $\Omega$ points, the interpretation is particularly nice. This is the case to which we'll restrict our analysis. For a single chiral sector\footnote{The closed string Hilbert space $\mathcal{H}$ is a subspace of the tensor product of chiral and anti-chiral sectors $\mathcal{H}^+ \otimes \mathcal{H}^-$. These correspond to strings winding in opposite directions.} of YM$_2$ living on $\mathbb{S}^2$, the partition function can be expanded in string basis as \cite{Donnelly2017}: \begin{equation}\label{undeformedstrings} Z_{YM} = \sum_n \frac{1}{n!} \sum_k \frac{1}{k!} \left( - \frac{n \lambda A}{2} \right)^k \sum_r \frac{(-1)^r}{r!} \left( \frac{\lambda A}{N} \right)^r \sum_{\sigma \in S_n} \sum_{p_1...p_r \in T_2} N^{K_\sigma} N^{K_{p_1...p_r \sigma}} \,. \end{equation} As promised, we are summing over an $n$-sheeted covering, with the $1/n!$ accounting for redundancy in summing over homomorphisms differing only by a trivial relabeling. The $\sum_k \frac{1}{k!} \left( - \frac{n \lambda A}{2} \right)^k = \exp (- n \lambda A/2)$ is the Nambu-Goto action of the string worldsheet wrapping $n$ times about the sphere and describes the ``free" part of the theory. Specifically, $nA$ is the area of the string worldsheet with no foldings, and $\lambda$ is proportional to the string tension. The $r$ string interactions are encoded in $\sum_r \frac{(-1)^r}{r!} \left( \frac{\lambda A}{N} \right)^r$. Here, the $1/r!$ accounts for the indistinguishability of the interactions, a factor of the string coupling $1/N = g_s$ accompanies each interaction, and $\lambda A$ is a ``modulus factor" obtained from integrating over all possible places where an interaction could take place\footnote{The $(-1)^r$ factor is thought to relate to the fermionic nature of interaction points \cite{Donnelly2017}, but a precise identification remains unclear.}. 
Finally, $N^{K_\sigma}$ accounts for the $K_\sigma$ closed strings in the initial state $\ket{\sigma}$ emitted from one $\Omega$ point, while $N^{K_{p_1...p_r \sigma}}$ accounts for the $K_{p_1...p_r \sigma}$ closed strings in the final state $\ket{p_1...p_r \sigma}$ absorbed at the other $\Omega$ point. We sum over all permutations $\sigma \in S_n$ for the initial state as well as all sequences of transpositions $p_1...p_r \in T_2$ leading to the final state. The overall picture, then, is that we have the following evolution in Euclidean time: An initial state $\ket{\sigma}$ of $K_\sigma$ closed strings is emitted from one $\Omega$ point. The strings undergo $r$ interactions which locally cut and re-glue them, acting as a series of transpositions $p_1 p_2...p_r \equiv p \in T_2$ which take $\ket{\sigma} \rightarrow \ket{p \sigma}$. The interactions may change individual winding numbers, but preserve total winding number. The resultant final state $\ket{p \sigma}$ of $K_{p \sigma}$ strings is absorbed at the other $\Omega$ point. A natural question would be what happens to this string picture upon deforming the theory with $T\bar{T}$. After all, it has been shown that deforming a theory of free massless bosons with $T\bar{T}$ results in the Nambu-Goto action in static gauge \cite{Cavaglia:2016oda}. In this result as well as in other contexts (\textit{e.g.}, \cite{Kraus:2018xrn}, \cite{Dubovsky:2018bmo}, \cite{Giveon:2017nie}), it has become apparent that deforming QFTs with $T\bar{T}$ results in theories which are in a sense non-local. It would then be interesting to see what happens starting from a theory of strings. There are actually two convenient ways to go about obtaining the string expansion of the $T\bar{T}$ deformed partition function. One would be to start from the undeformed partition function in string basis and apply the kernel integral transform. 
While computationally simple, the resultant combinatorics make interpreting the expression rather difficult. On the other hand, given that we can identify the form of the deformed Hamiltonian density based on the transformation of the quadratic Casimirs under the deformation, \begin{equation} \mathcal{H}_0 \rightarrow \mathcal{H} = \frac{\mathcal{H}_0}{1 + \mu \mathcal{H}_0} = \frac{(\lambda/2N) \hat{C}_2}{1 + (\mu \lambda/2N) \hat{C}_2} \,, \end{equation} we could also obtain $Z$ as the Euclidean evolution between initial and final states inserted at the $\Omega$ points: \begin{equation}\label{evolution} \begin{split} Z & = \matrixel{\Omega}{e^{-\beta \mathcal{H}}}{\Omega} \\ & = \matrixel**{\Omega}{e^{- \frac{(\lambda A/2N)\hat{C}_2}{1+ (\mu \lambda/2N)\hat{C}_2}}}{\Omega} \\ & = \sum_k \frac{(-1)^k}{k!} \left(\frac{\lambda A}{2N}\right)^k \sum_\ell \frac{(-1)^\ell}{\ell!} \left( \frac{\mu \lambda}{2N} \right)^\ell \frac{(k + \ell - 1)!}{(k - 1)!} \matrixel{\Omega}{\hat{C}_2^{k+\ell}}{\Omega} \,. \end{split} \end{equation} In order to evaluate the matrix element, we note that the state $\ket{\Omega}$ may be written in string basis\footnote{In the representation basis, $\ket{\Omega} = \sum_\mathcal{R} \text{dim} \, \mathcal{R} \ket{\mathcal{R}}$. The bases are related by the Frobenius relation $\ket{\mathcal{R}} = \sum_{\sigma \in S_n} \chi_\mathcal{R}(\sigma)/n! \ket{\sigma}$, where $\chi_\mathcal{R}(\sigma)$ is the character of the permutation group associated to representation $\mathcal{R}$. See \cite{Donnelly2017} for more technical details.} as: \begin{equation} \ket{\Omega} = \sum_n \frac{1}{n!} \sum_{\sigma \in S_n} N^{K_\sigma} \ket{\sigma} \,, \end{equation} with $\sigma$ a permutation in the symmetric group $S_n$ and $K_\sigma$ the number of closed strings in the initial state $\ket{\sigma}$. Meanwhile, the quadratic Casimir operator can be decomposed in terms of ``free" and ``interacting" parts: $\hat{C}_2 = Nn + 2\hat{C}_{int}$. 
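As an aside, the combinatorial core of the expansion in Eq.\,(\ref{evolution}) is the scalar identity $e^{-ac/(1+bc)} = \sum_k \frac{(-ac)^k}{k!} \sum_\ell \frac{(-bc)^\ell}{\ell!} \frac{(k+\ell-1)!}{(k-1)!}$, with the convention that the factorial ratio reduces to $\delta_{\ell 0}$ at $k=0$; here $a$ and $b$ play the roles of $\lambda A/2N$ and $\mu\lambda/2N$, and $c$ that of the Casimir eigenvalue. A minimal numerical sketch (arbitrary small parameters chosen so the $\ell$-sum converges, with $c$ absorbed into $a$ and $b$):

```python
import math

def deformed_exp_series(a, b, kmax=60, lmax=60):
    """Truncated double series sum_k sum_l (-a)^k/k! * (-b)^l/l! * (k+l-1)!/(k-1)!,
    which should resum to exp(-a/(1+b)) for |b| < 1."""
    total = 0.0
    for k in range(kmax):
        for l in range(lmax):
            if k == 0:
                # (k+l-1)!/(k-1)! degenerates to the Kronecker delta at k = 0
                ratio = 1.0 if l == 0 else 0.0
            else:
                # (k+l-1)!/(k-1)! = l! * binomial(k+l-1, l)
                ratio = math.factorial(l) * math.comb(k + l - 1, l)
            total += ((-a) ** k / math.factorial(k)) \
                     * ((-b) ** l / math.factorial(l)) * ratio
    return total

a, b = 0.4, 0.25          # arbitrary toy values, not from the paper
approx = deformed_exp_series(a, b)
exact = math.exp(-a / (1 + b))
```

The identity is just the binomial expansion of $(1+b)^{-k}$ inserted into the exponential series, which is how the factorial factor in Eq.\,(\ref{evolution}) arises.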
The leading term counts the total winding number $n$ while the interaction term implements a transposition $p \in T_2$, with the factor of 2 accounting for double counting. That is, \begin{equation} \hat{C}_2 \ket{\sigma} = Nn\ket{\sigma} + 2 \sum_{p \in T_2} \ket{p \sigma} \,. \end{equation} Then, making use of the fact that the inner product is $\braket{\Omega}{\sigma} = N^{K_\sigma}$, we have: \begin{equation} \begin{split} \matrixel{\Omega}{\hat{C}_2}{\Omega} & = \sum_n \frac{1}{n!} \sum_{\sigma \in S_n} N^{K_\sigma} \matrixel{\Omega}{\hat{C}_2^{k+\ell}}{\sigma}\\ & = \sum_n \frac{1}{n!} \sum_{\sigma \in S_n} N^{K_\sigma} \sum_{r = 0}^k \frac{k!}{r! (k-r)!} (Nn)^{k-r} \sum_{s = 0}^\ell \frac{\ell!}{s! (\ell - s)!} (Nn)^{\ell - s} 2^{r+s} \matrixel{\Omega}{\hat{C}^{r+s}_{int}}{\sigma} \\ & = \sum_n \frac{1}{n!} \sum_{\sigma \in S_n} N^{K_\sigma} \sum_{r = 0}^k \frac{k!}{r! (k-r)!} (Nn)^{k-r} \sum_{s = 0}^\ell \frac{\ell!}{s! (\ell - s)!} (Nn)^{\ell - s} 2^{r+s} \sum_{p_1...p_{r+s} \in T_2} N^{K_{p_1...p_{r+s} \sigma}} \,. \end{split} \end{equation} Plugging into Eq.\,(\ref{evolution}) and regrouping terms, we arrive at the following expression for the string basis expansion of the $T\bar{T}$ deformed partition function: \begin{equation}\label{deformedstrings} \begin{split} Z = \sum_n \frac{1}{n!} \sum_k \frac{1}{k!} \left( - \frac{n \lambda A}{2} \right)^k \sum_r & \frac{(-1)^r}{r!} \left( \frac{\lambda A}{N} \right)^r \sum_\ell \frac{1}{\ell !} \left( - \frac{n \lambda \mu}{2} \right)^\ell \sum_s \frac{(-1)^s}{s!} \left( \frac{\lambda \mu}{N} \right)^s \\ & \frac{(k + r + \ell + s - 1)!}{(k + r - 1)!} \sum_{\sigma \in S_n} \sum_{p_1...p_{r+s} \in T_2} N^{K_\sigma} N^{K_{p_1...p_{r+s} \sigma}} \,. \end{split} \end{equation} The overall form of Eq.\,(\ref{deformedstrings}) bears a structural resemblance to its undeformed counterpart Eq.\,(\ref{undeformedstrings}). Nevertheless, the interpretation of this expression is problematic. 
The $1/N$ expansion can still be interpreted as a sum over maps from an $n$-sheeted covering space $\Sigma$ to a target space $\mathcal{M}$, however the weighting of these maps is a lot more complicated. In the undeformed theory, each map was simply weighted by the (free) Nambu-Goto action. The remaining terms in the expansion were then interpreted as string interactions. After deforming the theory with $T\bar{T}$, though, there is a non-trivial coupling between the ``free" and ``interacting" sectors. Unfortunately, this precludes a straightforward interpretation. It somewhat looks like at each interaction site indexed by $r$, at which strings can split and rejoin, there is the possibility for a second kind of sub-interaction, indexed by $s$. This sub-interaction would also allow for strings to split and rejoin, but would occur over an area with scale set by $\mu$. This picture is tenuous, however, and we leave the precise interpretation of this expression and the fate of the string picture to future investigation. \section{Discussion} In this article, we have advocated for Eq.\,(\ref{deformation}) as an alternative definition for the action of the $T\bar{T}$ deformation which should apply for any 2-dimensional quantum field theory on general background. This integral transform was first put forth in the context of holography in \cite{Freidel:2008sh} as a means of relating the partition function of a 2D conformal field theory with the 3D gravity bulk wave function satisfying the radial Wheeler-DeWitt equation. It was then noted in \cite{McGough:2016lol} that this integral transform looks like the $T\bar{T}$ deformation for CFTs in the sense that the flow equation usually used to define the $T\bar{T}$ deformation of CFTs takes the form of the WDW equation in three dimensions with a negative cosmological constant. 
These authors thus argued that via this transform, one can obtain the $T\bar{T}$ deformed QFT partition function $Z_{QFT}[f]$ from that of an undeformed CFT $Z_{CFT}[e]$. Motivated by the observation that taking the $\mu$ derivative of $Z[f]$ as defined by Eq.\,(\ref{deformation}) effectively pulls down the $T\bar{T}$ operator, we have proposed that this integral transform be extended as a definition of the deformation for general 2D QFTs. That it involves an integral over frame fields is reminiscent of the proposal in \cite{Dubovsky:2018bmo}, which was used to obtain the torus partition function. The precise connection between the two approaches is dealt with in more detail in \cite{mazenc2019t}. We have supported our proposal by applying the integral transform in the test case of 2-dimensional Yang--Mills theory, whose partition function has a very simple Gaussian dependence on the frame fields. The resultant $T\bar{T}$ deformed partition function matches that obtained via other methods for all cases where the result was previously known, but can now be extended to any background manifold. Implicit in the kernel definition is an ambiguity regarding normalisation. For the test case of YM$_2$ investigated in this article, we chose a normalisation based on physical arguments that the deformation should do nothing to the theory in the topological limit, for which the stress-tensor vanishes. Consequently, the deformed partition function obeyed the standard Cardy flow equation \cite{Cardy2018} with no first derivative term. However, in formulating a more general flow equation for the theory on arbitrary background, we found that such terms could arise depending on the order in which functional derivatives were taken at coincident points. This ordering ambiguity in the flow equation formalism translates to the normalisation ambiguity in the integral transform definition of the deformation. 
Further development of the kernel method should hopefully make this connection more precise. From Eq.\,(\ref{mainresult}), the entanglement entropy of deformed YM$_2$ in any arbitrary state could be computed. Specialising to the case of the Hartle--Hawking vacuum state, the entanglement entropy Eq.\,(\ref{HHee}) was found to take the same structural form as in the undeformed theory, Eq.\,(\ref{HHee0}). In both, there was a term arising from classical statistical entropy as well as a term coming from the counting of edge modes. The only difference lay in the probability distributions associated with the irreducible representations, which were shifted due to the deformation's dressing of the quadratic Casimir eigenvalues. This was in a sense to be expected. The $T\bar{T}$ deformation has been argued to act as an effective UV cutoff on entanglement entropy \cite{Donnelly:2018bef}. However, YM$_2$, being a semi-topological theory with no local degrees of freedom, is already UV finite. Thus, it is reasonable that the deformation should have a minimal effect on entanglement entropy. Because the deformed Yang--Mills partition function has such a simple dependence on the geometry, there are unfortunately not many observables which can be computed from it. In this article, we have calculated the entanglement entropy of the deformed theory in an arbitrary state, but it would be interesting to see what other observables one can in principle compute. In the undeformed theory, another set of interesting observables considered are correlation functions of Wilson loops. Perhaps such observables can be computed in the deformed theory as well. Despite its simplicity, Yang--Mills does have a number of interesting associated phenomena which can be re-examined under the prism of $T\bar{T}$. One of these, the third order Douglas--Kazakov phase transition, was investigated in \cite{Santilli2019}. 
By performing the large $N$ analysis, it was found that not only does the transition persist for a range of deformation parameter values, but the interpretation of the transition as being induced by unstable instantons remains valid. Another interesting aspect of large $N$ Yang--Mills which had not yet been examined in the deformed theory was its dual interpretation as a string theory. Upon performing the large $N$ string expansion of the deformed partition function, we arrived at Eq.\,(\ref{deformedstrings}). While this expression superficially looks quite similar to Eq.\,(\ref{undeformedstrings}), its interpretation is troubled by the factorial factor which acts to couple the ``free" and ``interacting" terms of the theory. It is possible that this is simply an exotic string theory with a very complex weighting. If we take this perspective, then it looks like we can have two types of interactions which, while both splitting and rejoining strings, occur over two different fundamental length scales. Or it could well be that $T\bar{T}$ deformed Yang--Mills is simply no longer a string theory. We invite the reader to investigate this question further. \section*{Acknowledgements} We would like to thank W. Donnelly, N. Vald\'{e}s, S. Timmerman, E. Mazenc and R. Soni for many fruitful discussions. We would also like to thank A. Tolley for clarifications regarding the choice of the path integral measure. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade. \bibliographystyle{utphys}
\section*{Abstract} In the present paper, an asymptotic model is constructed for the short-time deformation of an articular cartilage layer modeled as a transversely isotropic, transversely homogeneous (TITH) biphasic material. It is assumed that the layer thickness is relatively small compared with the characteristic size of the normal surface load applied to the upper surface of the cartilage layer, while the bottom surface is assumed to be firmly attached to a rigid impermeable substrate. In view of applications to articular contact problems, it is assumed that the interstitial fluid cannot escape through the articular surface. \section{Introduction} \label{sec:intro} Articular cartilage is a thin tissue which covers the diarthrodial joints of the bones. Its structural functions are to facilitate the transmission of forces between the bones, to minimize the contact stress peaks, and to minimize friction by means of self-pressurized lubrication. Understanding it is of great interest, since accurate modeling may lead to correct patient-specific diagnosis of degenerative pathologies and provide operative tools for repair and replacement engineering (see \cite{ateshian2015toward}). A cartilage layer itself is a complex arrangement of a solid matrix saturated by an interstitial fluid mainly composed of water and mobile ions. Collagen fibrils and proteoglycans are considered the most relevant solid constituents of cartilage and are heterogeneously distributed along the depth from the subchondral bone to the contact surface. This complex architecture gives rise to the anisotropic and inhomogeneous electro-mechanical features of the thin structure and is responsible for its nonlinear response to external stimuli. One approach to the analysis consists in treating the solid phase as a fibril-reinforced material and modeling the full complex layer through a finite element analysis (e.g. \cite{li1999nonlinear, korhonen2003fibril, wilson2005fibril}).
A major concern related to the use of the latter arises, in contact problems, when modeling thin layers as interphases between structures whose sizes exceed that of the layer by at least one order of magnitude. An extremely fine mesh is required for both the thin layer and the neighboring bone regions, which can easily give rise to ill-conditioning and numerical instability of the method, if not simply to an enormous increase of the computational effort (see e.g. \cite{wilson2005role, day1994zero, capdeville2008shallow} and references therein). Homogenization procedures are then required in order to provide mathematically workable mechanical laws, and they are often obtained, following a long tradition, via multi-scale approaches. Not only the pleasant circumstance of his 70th anniversary, but also his great contribution to the field, encourages us to mention at least a few works of Professor Federico J.~Sabina in the realm of fiber-reinforced materials with transversely isotropic constituents \cite{rodriguez2001closed, bravo2001closed, guinovart2001closed, guinovart2005recursive,sabina2001closed,berger2006unit,guinovart2005closed} or laminated materials \cite{bravo2008homogenization,camacho2009magnetoelectric} subjected to coupled elastic, thermal, electrical, and magnetic fields. With a homogenized constitutive law in hand, analytical methods to tackle the mechanics of thin layers have been developed. Mainly, they consist in reducing the problem to a boundary value problem in which the finite-thickness layer is replaced by a zero-thickness interface \cite{bovik1994modelling, movchan1995mathematical, klarbring1998asymptotic, mishuris2004imperfect, benveniste2006general, sussmann2011combined}; such methods are sometimes even used to improve and render more efficient the extrapolation of experimental data (e.g. \cite{ochsner2007new,argatov2014small}).
Their applicability must be examined case by case, since inaccurate assumptions may even lead to a non-uniqueness of solutions that does not derive from the original mathematical ansatz, as proved by Dalla Riva and Mishuris in \cite{dalla2015existence}; nevertheless, these analytical models may eventually be implemented in asymptotic finite-element computations. A very recent work has been published by Cerfontaine et al. \cite{cerfontaine20153d} on the construction of a zero-thickness homogeneous element which includes the hydro-mechanical coupling. The debate on the appropriate constitutive model for articular cartilage, when treated as a continuum medium, is wide, but applications essentially fall into two families. Either the cartilage material is considered monophasic, so that its observed delayed response requires a viscoelastic constitutive law \cite{parsons1977viscoelastic, armstrong1986analysis}, or its phenomenology is attributed to flow-dependent viscoelasticity. The former finds application, for instance, in dynamic \cite{simon1984creep,argatov2013accounting} and impact \cite{GarciaAltieroHaut1998,argatov2015impact} problems for articular cartilage; the latter leads to the development of a biphasic tissue model within the setting of mixture theory \cite{mow1980biphasic}. It is noticeable that, in terms of the response of the impacting body, the two models can be mathematically connected and give nearly the same results \cite{argatov2013mathematical}. The present work falls within the second framework described above. This approach is particularly suited, for instance, for underlining that the fluid, which occupies about 80\% of the structure volume, is mainly responsible for load bearing at early times of deformation, and it allows one to distinguish between the stresses in the solid structure and the pressure of the interstitial fluid.
With the purpose of studying the contact problem for the diarthrodial joint, analytical solutions have been derived for biphasic isotropic homogeneous \cite{ateshian1994asymptotic,wu1997improved,argatov2010axisymmetric,argatov2011elliptical,argatov2011contact,quinonez2011analytical}, elastic and viscoelastic \cite{Barber1990,Eberhardt1990joint,perez2008modified,Lin2010surrogate,argatov2011frictionless,argatov2012development}, and transversely isotropic \cite{rahman2001type,argatov2014small} models. Nevertheless, it has been shown that a depth-dependent variation of the solid matrix stiffness and permeability may play a crucial role in determining the internal behavior of the layer. For instance, it affects the homogeneity of the stress fields and improves the superficial fluid support under contact loading (see \cite{schinagl1997depth, krishnan2003inhomogeneous, federico2005transversely, federico2008anisotropy, ateshian2009modeling, chegini2010time}). Interest in inhomogeneous structures developed in mechanics in the second half of the last century, driven by aerospace and geomechanical applications; the main motivation is the necessity of identifying the response features of composite, possibly functionally graded, materials. A number of analytical studies of inhomogeneous structures have been provided for special material variation functions and, for monophasic layers, for arbitrary inhomogeneity in axisymmetric configurations. An extended bibliography was examined by Tokovyy and Ma in \cite{tokovyy2015analytical}. To the best of our knowledge, the present work is the first study which, by means of asymptotic analysis, provides an analytical solution for the deformation problem of a biphasic transversely isotropic, transversely homogeneous (TITH) thin layer.
An infinitely extended thin porous solid matrix is considered to be linear elastic, the interstitial fluid is inviscid, and the problem is stated within the framework developed by Ateshian et al. in \cite{ateshian1994asymptotic} for an isotropic homogeneous layer and by Argatov and Mishuris in \cite{argatovcontact,argatov2015asymptotic} for the transversely isotropic case. The fluid flow is impeded through both surfaces, and the structure is supposed to be firmly attached to a rigid substrate, thus neglecting the influence of the deformability of the substrate, for which an approach was proposed in \cite{argatov2014small}. An arbitrary load is applied to the external surface in the absence of friction. Whereas the formulation remains completely general, a special in-depth exponential variation of the stiffness and permeability is assumed. The leading terms of the Laplace transforms of the displacement field and the fluid pressure are retrieved. The boundary conditions used in the deformation problem are chosen with a view toward contact problems, for which analytical results have already been provided in \cite{argatovcontact}. In this context, explicit formulae are given for the dependent variables only along the external surface, which is of main interest in contact problems. Numerical benchmarks are studied and compared with the above-mentioned existing solutions. \section{TITH Solid Matrix and Biphasic Model} \label{sec:model} The cartilage layer is modeled as a transversely isotropic, transversely homogeneous (TITH) porous linear elastic solid matrix, saturated by a fluid with zero viscosity. The particular boundary value problem investigated here describes a thin layer completely constrained at the bottom by a flat impermeable surface. A vertical load is applied at its top by a rigid impermeable punch. No friction arises between the punch and the layer upper surface.
The in-plane coordinates are $(x',y')$, while the vertical coordinate $z'$ is directed downward and set to 0 at the top. The solid matrix equilibrium and constitutive equations, coupled with Terzaghi's principle, are written as follows: \begin{gather} \begin{split} \pder{}{z'}\left(A_{44}\pder{\VV'}{z'}\right)+A_{13}\pder{\nabla_{y'} w'}{z'}+\pder{}{z'}\left(A_{44}\nabla_{y'} w'\right)+A_{66}\Delta_{y'}\VV'\qquad\qquad\qquad\\+(A_{11}-2A_{66}-A_{12})\nabla_{y'}\nabla_{y'}\cdot\VV'+(A_{66}+A_{12})\mathcal{H}_{y'}\VV'=\nabla_{y'}p ,\end{split}\label{eq:equi1} \\ \pder{}{z'}\left(A_{33}\pder{w'}{z'}\right)+\pder{}{z'}\left(A_{13}\nabla_{y'}\cdot\VV'\right)+A_{44}\pder{\nabla_{y'}\cdot\VV'}{z'}+A_{44}\Delta_{y'}w'=\pder{p}{z'}.\label{eq:equi2} \end{gather} In this notation $\mathcal{H}_{y'}$ indicates the Hessian matrix operator whose $ij$-components are $\pmder{}{x'_i}{x'_j}$. The continuity equation for the fluid and Darcy's law are collected as \begin{equation} \pder{}{z'}\left(K_3\pder{p}{z'}\right)-\pder{\nabla_{y'}\cdot\VV'}{t'}-\pmder{w'}{t'}{z'}+K_1\Delta_{y'}p=0. \label{eq:darc} \end{equation} If the thickness of the layer is $h$, the constraint at the bottom surface leads to the boundary conditions \begin{gather} \VV'|_{z'=h}=0,\\ w'|_{z'=h}=0, \label{eq:BCorigd} \end{gather} while the impermeability of both the bottom and the upper surface implies \begin{gather} \left.\pder{p}{z'}\right|_{z'=h}=0,\\ \left.\pder{p}{z'}\right|_{z'=0}=0. \end{gather} The frictionless contact between the rigid punch and the top of the layer allows one to state that \begin{equation} \left.\pder{\VV'}{z'}+\nabla_{y'}w'\right|_{z'=0}=\mathbf{0}. \end{equation} The top surface itself must also be in equilibrium and satisfy Terzaghi's principle, that is \begin{equation} \left.A_{13}\nabla_{y'}\cdot\VV'+A_{33}\pder{w'}{z'}-p\right|_{z'=0}=-q. \end{equation} As for the initial conditions, every variable is set to 0 at $t'=0$.
\section{Asymptotic Analysis} The thinness of the layer suggests the use of perturbation analysis to solve the system of second-order partial differential equations described in Section~\ref{sec:model}. The thickness $h$ is assumed to be represented as \begin{equation} h=\E h_*, \end{equation} where $\E$ is a small positive parameter and $h_*$ is a length independent of $\E$ with the same order of magnitude as the characteristic in-plane length of the loaded layer. Thus, it is useful: \begin{itemize} \item to introduce the new independent variables \begin{equation} z=\frac{z'}{h},\quad t=\frac{t'}{h^2},\quad x_i=\frac{x'_i}{h_*}\quad(i=1,2), \label{eq:nindv} \end{equation} so that $z\in[0,1]$; \item to set the new unknown variables \begin{equation} w=\frac{w'}{h},\quad \VV=\frac{\VV'}{h}; \label{eq:nunkv} \end{equation} \item to express the elastic parameters $A_{jk}$ and the hydraulic resistivities $K_j$ ($j,k=1,2,3$) as functions of the new stretched vertical coordinate $z=\frac{z'}{h}=\frac{z'}{\E h_*}$. \end{itemize} The asymptotic expansion of the unknowns is written as follows: \begin{equation} \begin{aligned} \VV&=\E^0 \VV_0+\E^1 \VV_1+\E^2 \VV_2...,\\ w&=\E^0 w_0+\E^1 w_1+\E^2 w_2...,\\ p&=\E^0 p_0+\E^1 p_1+\E^2 p_2...\,.
\end{aligned} \label{eq:asyexp} \end{equation} Substituting \eqref{eq:nindv} and \eqref{eq:nunkv} into Eqs.~\eqref{eq:equi1}--\eqref{eq:darc} leads to a new set of differential equations governing the problem \begin{align} \begin{split} \pder{}{z}\left(A_{44}\pder{\VV}{z}\right)+\E\left(A_{13}\pder{\nabla_{y} w}{z}+\pder{}{z}\left(A_{44}\nabla_{y} w\right)-\nabla_{y}p\right)\qquad\qquad\\+\E^2\left( (A_{11}-2A_{66}-A_{12})\nabla_{y}\nabla_{y}\cdot\VV+(A_{66}+A_{12})\mathcal{H}_{y}\VV\right)=\mathbf{0},\end{split} \label{eq:nasy1} \\ &\pder{}{z}\left(A_{33}\pder{w}{z}\right)-\pder{p}{z}+\E\left(\pder{}{z}\left(A_{13}\nabla_{y}\cdot\VV\right)+A_{44}\pder{\nabla_{y}\cdot\VV}{z}\right)+\E^2A_{44}\Delta_{y}w=0, \label{eq:nasy2}\\ &\pder{}{z}\left(K_3\pder{p}{z}\right)-\pmder{w}{t}{z}+\E\left(-\pder{\nabla_{y}\cdot\VV}{t}\right)+\E^2K_1\Delta_{y}p=0.\label{eq:nasy3} \end{align} In the same way, the boundary conditions take the form \begin{align} \VV|_{z=1}=\mathbf{0}, \quad w|_{z=1}=0,\\ \left.\pder{p}{z}\right|_{z=1}=0, \quad \left.\pder{p}{z}\right|_{z=0}=0,\\ \left.\pder{\VV}{z}+\E\nabla_{y}w\right|_{z=0}=\mathbf{0},\\ A_{33}\pder{w}{z}-p+q+\E\left.A_{13}\nabla_{y}\cdot\VV\right|_{z=0}=0. \label{eq:bc4asy} \end{align} Using the expansions \eqref{eq:asyexp} to solve the system \eqref{eq:nasy1}--\eqref{eq:nasy3} and taking into account the boundary conditions, it is easy to verify that the trivial terms of the expansions are \begin{gather} \VV_0=\VV_2=\mathbf{0},\quad w_0=w_1=p_1=0,\\ p_0=q, \label{eq:trivterm} \end{gather} so that \begin{equation} \VV_1=\nabla_{y}q\dinteg{1}{z}{\frac{z}{A_{44}}}{z}.
\label{eq:vv1impl} \end{equation} The $\E^2$-terms of Eqs.~\eqref{eq:nasy2} and \eqref{eq:nasy3} yield \begin{gather} \pder{}{z}\left(A_{33}\pder{w_2}{z}\right)+\pder{}{z}\left(A_{13}\nabla_{y}\cdot \VV_1\right)-\pder{p_2}{z}+A_{44}\pder{\nabla_{y}\cdot \VV_1}{z}=0 ,\label{eq:n22}\\ \pder{}{z}\left(K_3\pder{p_2}{z}\right)-\pmder{w_2}{t}{z}-\pder{\nabla_{y}\cdot \VV_1}{t}+K_1\Delta_{y}q=0 .\label{eq:n32} \end{gather} The boundary condition \eqref{eq:bc4asy} becomes \begin{equation} \left.A_{33}\pder{w_2}{z}+A_{13}\nabla_{y}\cdot \VV_1-p_2\right|_{z=0}=0. \end{equation} Integrating Eq.~\eqref{eq:n22} once between 0 and $z$ and applying the latter boundary condition leads to the following equation: \begin{equation} A_{33}\pder{w_2}{z}=p_2-\Delta_{y}q\frac{z^2}{2}-A_{13}\nabla_{y}\cdot \VV_1 .\label{eq:w2fromp2} \end{equation} Thanks to Eq.~\eqref{eq:vv1impl} and the equation above, Eq.~\eqref{eq:n32} can be expressed exclusively in terms of $p_2$ as \begin{equation} \begin{split} \ppder{p_2}{z}+\frac{1}{K_3}\pder{K_3}{z}\pder{p_2}{z}-\frac{1}{K_3 A_{33}}\pder{p_2}{t}=\\ \frac{1}{K_3 A_{33}}\pder{\Delta_y q}{t}\left((A_{33}-A_{13})\dinteg{1}{z}{\frac{z}{A_{44}}}{z}-\frac{z^2}{2}\right)-\frac{K_1}{K_3}\Delta_y q, \end{split} \label{eq:n2+3} \end{equation} where the unknowns are kept on the left-hand side. \section{Laplace transformation in the case of a specific type of inhomogeneity} \label{1otSection4} Some assumptions on the variation of the five parameters $A_{13}$, $A_{33}$, $A_{44}$, $K_1$, and $K_3$ must be made in order to simplify what follows. In the present work we consider that they vary exponentially along the $z$-axis while the product $K_3 A_{33}$ remains constant.
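The expression for $\VV_1$ in Eq.~\eqref{eq:vv1impl} can be verified symbolically: up to the common factor $\nabla_y q$, it must satisfy the first-order balance $\partial_z(A_{44}\,\partial_z\VV_1)=\nabla_y q$ together with $\partial_z\VV_1|_{z=0}=0$ and $\VV_1|_{z=1}=0$. A minimal sympy sketch; the exponential form of $A_{44}$ used here is only a test instance, not required by the check:

```python
import sympy as sp

z, t = sp.symbols('z t', real=True)
a44, alpha = sp.symbols('a_44 alpha', positive=True)

A44 = a44 * sp.exp(alpha * z)        # test instance of a smooth A_44(z)

# V1 up to the common factor grad_y q, as in Eq. (vv1impl)
V1 = sp.integrate(t / (a44 * sp.exp(alpha * t)), (t, 1, z))

# O(eps) balance: d/dz (A44 dV1/dz) must equal 1 (times grad_y q)
balance = sp.simplify(sp.diff(A44 * sp.diff(V1, z), z))
print(balance)                                  # -> 1

print(sp.simplify(sp.diff(V1, z).subs(z, 0)))   # top condition:    -> 0
print(sp.simplify(V1.subs(z, 1)))               # bottom condition: -> 0
```

The same check goes through for any smooth, strictly positive $A_{44}(z)$.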
The latter feature is supported by experimental observations, which suggest that the axial mechanical stiffness --- hence $A_{33}$ --- increases with depth from the surface toward the bone \cite{schinagl1997depth,klein2007depth,wang2001analysis}, while the fact that a decreasing porosity causes an overall reduction in permeability was shown in \cite{federico2008anisotropy}, where, under this hypothesis, the classical results of \cite{maroudas1968permeability} were fitted and justified. The exponential depth-dependency used later on is \begin{equation}\left\{\begin{aligned} &A_{33}=a_{33}\me^{2\gamma z},\quad A_{44}=a_{44}\me^{\alpha z},\quad A_{13}=a_{13}\me^{\alpha_{13}z},\\ &K_3=k_3\me^{-2\gamma z},\quad K_1=k_1\me^{-\gamma_1 z}, \end{aligned}\right. \label{eq:paramdef} \end{equation} where $\gamma>0$, $\alpha$, $\alpha_{13}$, and $\gamma_1$ are specified constants. Certainly the choice of exponential functions is not completely general; nevertheless, given that we deal with a very thin layer, the possibility of fitting experimental data does not appear particularly compromised, at least in the case of monotonic variations of the parameters under examination. The latter expressions are substituted into Eq.~\eqref{eq:n2+3} and the time variable $t$ is changed into the dimensionless one by the formula $$\tau=K_3 A_{33} t.$$ Recalling that every unknown is set to 0 for $-\infty<t<0^-$, the Laplace transformation is applied to yield \begin{equation} \begin{split} \ppder{P}{z}-2\gamma\pder{P}{z}-s P=\\ s\Delta_y Q\left(\frac{a_{33}\me^{2\gamma z}-a_{13}\me^{\alpha_{13}z}}{a_{44}}\dinteg{1}{z}{\frac{z}{\me^{\alpha z}}}{z}-\frac{z^2}{2}\right)-\frac{k_1}{k_3}\me^{(2\gamma-\gamma_1)z}\Delta_y Q, \end{split} \label{eq:Ln2+3} \end{equation} where $P(s)$ and $Q(s)$ are the Laplace transforms of $p_2(\tau)$ and $q(\tau)$, respectively, and $s$ is the transform parameter.
For the terms that multiply an exponential function of $z$ we introduce the following abbreviation: \begin{equation} \Phi^{(M)}_i=\me^{M_i z}(b_{i1}z+b_{i0})\quad(i=1,2,3,4), \end{equation} whose coefficients are collected in Table~\ref{tab:phiM}. \begin{table} \begin{center} \begin{tabular}{c} $\begin{array}{|c||c|c|c|} \hline i&M_i&b_{i1}\alpha^2a_{44}&b_{i0}\alpha^2a_{44}\\ \hline\hline 1&\alpha_{13}-\alpha&\alpha a_{13}&a_{13}\\ \hline 2&\alpha_{13}&0&-a_{13}(1+\alpha)\me^{-\alpha}\\ \hline 3&2\gamma-\alpha&-\alpha a_{33}&-a_{33}\\ \hline 4&2\gamma&0&a_{33}(1+\alpha)\me^{-\alpha}\\ \hline \end{array}$ \end{tabular} \caption{Parameters of $\Phi^{(M)}_i$}\label{tab:phiM} \end{center} \end{table} It follows that the latter second-order ordinary differential equation can be rewritten as \begin{equation} \ppder{P}{z}-2\gamma\pder{P}{z}-s P=\Delta_y Q \sum_{i=1}^6\Upsilon_i(s,z) .\label{eq:LMn2+3} \end{equation} Here we have introduced the notation \begin{equation} \begin{aligned} &\Upsilon_i(s,z)=s\Phi^{(M)}_i(z),\quad i=1,2,3,4,\quad\\ &\Upsilon_5(s,z)=-s\frac{z^2}{2},\quad \Upsilon_6(s,z)=-\frac{k_1}{k_3}\me^{(2\gamma-\gamma_1)z}. \end{aligned}\label{eq:Upsilon} \end{equation} Setting $\sigma(s)=\sqrt{\gamma^2+s}$, the homogeneous solution of Eq.~\eqref{eq:LMn2+3} is \begin{equation} P_h=\me^{\gamma z} (C_1 \sinh\sigma z+C_2\cosh\sigma z), \label{eq:HomogP} \end{equation} where the two constants $C_1(s)$ and $C_2(s)$ must be determined to fulfill the boundary conditions $\pder{P}{z}=0$ at $z=0$ and $z=1$.
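That $P_h$ indeed solves the homogeneous part of Eq.~\eqref{eq:LMn2+3} can be confirmed by direct differentiation; a quick symbolic sketch (assuming sympy):

```python
import sympy as sp

z = sp.symbols('z', real=True)
gamma, s = sp.symbols('gamma s', positive=True)
C1, C2 = sp.symbols('C_1 C_2')

sigma = sp.sqrt(gamma**2 + s)
Ph = sp.exp(gamma * z) * (C1 * sp.sinh(sigma * z) + C2 * sp.cosh(sigma * z))

# homogeneous part of Eq. (LMn2+3): P'' - 2*gamma*P' - s*P
lhs = sp.diff(Ph, z, 2) - 2 * gamma * sp.diff(Ph, z) - s * Ph
print(sp.simplify(lhs))   # -> 0
```

The cancellation rests on $\sigma^2=\gamma^2+s$, which is exactly how $\sigma(s)$ was defined.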
Within Appendices \ref{1otAppendix_A}, \ref{1otAppendix_B} and \ref{1otAppendix_C}, we deal with the solution of Eq.~\eqref{eq:LMn2+3} by splitting it into three parts (see Eq.~\eqref{eq:Upsilon}) as follows: one part containing the so-called $\Phi^{(M)}_i$-terms, corresponding to $\Upsilon_i(s,z)$, $i=1,2,3,4$; one containing the $z^2$-term, corresponding to $\Upsilon_5(s,z)$; and the last one, which involves the permeability, specifically the ratio $\frac{K_1(z)}{K_3(z)}$. This term, corresponding to $\Upsilon_6(s,z)$, will be referred to as the $k$-term. It is useful to calculate (with the same subdivision) the integral $$\xi=\dinteg{1}{z}{p_2\me^{-2\gamma z}}{z},$$ which arises from Eq.~\eqref{eq:w2fromp2} once $w_2$ is recovered and the boundary condition $w_2=0$ at $z=1$ is applied: \begin{equation} w_2=\dinteg{1}{z}{\dfrac{\me^{-2\gamma z}\left(p_2-\Delta_{y}q\frac{z^2}{2}-A_{13}\nabla_{y}\cdot \VV_1\right)}{a_{33}}}{z}. \label{eq:Iw2fromp2} \end{equation} In particular, we are interested in the evaluation of the pressure and the vertical displacement at the load application surface ($z=0$), because these results are especially important for contact problems. The asymptotic expansion for the fluid pressure there results from Eq.~\eqref{eq:asyexp} and Eq.~\eqref{eq:trivterm}: \begin{gather} p_0\approx q(\tau)+\frac{h^2}{h_\ast^2}p_{02}(\tau),\label{eq:p0tot}\\ p_{02}=\sum_{i=1}^4 p_{0i}^{(M)}+p_0^{(2)}+p_0^{(k)}, \label{eq:p0sum} \end{gather} where the terms $p_{0i}^{(M)}$, $p_0^{(2)}$ and $p_0^{(k)}$ must be calculated as derived in Appendices \ref{1otAppendix_A}, \ref{1otAppendix_B} and \ref{1otAppendix_C}. Let us assume that both $\alpha_{13}$ and $\gamma$ are zero. In this case Table~\ref{tab:phiM} becomes Table~\ref{tab:phiM00}.
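The coefficients of Table~\ref{tab:phiM} encode the identity $\frac{a_{33}\me^{2\gamma z}-a_{13}\me^{\alpha_{13}z}}{a_{44}}\int_1^z t\,\me^{-\alpha t}\,\mathrm{d}t=\sum_{i=1}^4\Phi^{(M)}_i(z)$ that underlies Eq.~\eqref{eq:LMn2+3}, and this can be cross-checked numerically; a sketch with arbitrary test values for the parameters:

```python
import math

# arbitrary test values of the material/inhomogeneity parameters
a13, a33, a44 = 0.3, 0.7, 0.4
alpha, alpha13, gamma = 0.6, 0.9, 0.25

def integral(z, n=20000):
    """Trapezoidal evaluation of int_1^z t*exp(-alpha*t) dt."""
    h = (z - 1.0) / n
    f = lambda t: t * math.exp(-alpha * t)
    return h * (0.5 * f(1.0) + sum(f(1.0 + k * h) for k in range(1, n)) + 0.5 * f(z))

def phi_sum(z):
    """Sum of the four Phi-terms built from Table 1 (coefficients b_i1, b_i0)."""
    c = 1.0 / (alpha**2 * a44)
    rows = [  # (M_i, b_i1, b_i0), the table entries divided by alpha^2 a44
        (alpha13 - alpha,    alpha * a13 * c,   a13 * c),
        (alpha13,            0.0,              -a13 * (1 + alpha) * math.exp(-alpha) * c),
        (2 * gamma - alpha, -alpha * a33 * c,  -a33 * c),
        (2 * gamma,          0.0,               a33 * (1 + alpha) * math.exp(-alpha) * c),
    ]
    return sum(math.exp(M * z) * (b1 * z + b0) for M, b1, b0 in rows)

for z in (0.0, 0.3, 0.8):
    lhs = (a33 * math.exp(2 * gamma * z) - a13 * math.exp(alpha13 * z)) / a44 * integral(z)
    print(abs(lhs - phi_sum(z)) < 1e-8)
```

The agreement is exact up to quadrature error, since the $\Phi^{(M)}_i$ simply expand the product of the exponentials with the closed-form primitive of $t\,\me^{-\alpha t}$.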
To study the homogeneous case, $\alpha$ must also tend to zero, so that \begin{table} \begin{center} \begin{tabular}{c} $\begin{array}{|c||c|c|c|} \hline i&M_i&b_{i1}\alpha^2a_{44}&b_{i0}\alpha^2a_{44}\\ \hline\hline 1&-\alpha&\alpha a_{13}&a_{13}\\ \hline 2&0&0&-a_{13}(1+\alpha)\me^{-\alpha}\\ \hline 3&-\alpha&-\alpha a_{33}&-a_{33}\\ \hline 4&0&0&a_{33}(1+\alpha)\me^{-\alpha}\\ \hline \end{array}$ \end{tabular} \caption{Parameters of $\Phi^{(M)}_i$ for $\alpha_{13}=\gamma=0$}\label{tab:phiM00} \end{center} \end{table} \begin{equation} -\sum_{i=1}^4b_{i0}\Delta_yq=\Delta_yq\frac{a_{33}-a_{13}}{a_{44}}\lim_{\alpha\rightarrow 0}\frac{1-(1+\alpha)\me^{-\alpha}}{\alpha^2}=\frac{1}{2}\frac{a_{33}-a_{13}}{a_{44}}\Delta_yq, \end{equation} \begin{equation} \sum_{i=1}^4\sum_{n=0}^\infty{\mathrm{Res}\left\{\me^{s\tau}\Omega_i^{(M)}(s);s_{n}\right\}}\ast\Delta_yq=\frac{a_{33}-a_{13}}{a_{44}}2\sum_{n=0}^\infty(-1)^n\me^{-n^2\pi^2\tau}\ast\Delta_yq. \end{equation} The last two equations imply that the homogeneous solution regarding the $\Phi^{(M)}_i$-terms is written as \begin{equation} \sum_{i=1}^4 p_{0i}^{(Mh)}=\frac{1}{2}\frac{a_{33}-a_{13}}{a_{44}}\Delta_yq+\frac{a_{33}-a_{13}}{a_{44}}2\sum_{n=0}^\infty(-1)^n\me^{-n^2\pi^2\tau}\ast\Delta_yq. \end{equation} Analyzing the same limits for the part of the solution generated by the $z^2$-term, we get \begin{equation} p_0^{(2h)}=-2\sum_{n=0}^\infty(-1)^n\me^{-n^2\pi^2\tau}\ast\Delta_yq, \end{equation} and for the $k$-term, we obtain \begin{equation} p_0^{(kh)}=\frac{k_1}{k_3}\ast\Delta_yq.
\end{equation} Collecting the three formulas above and substituting them into Eqs.~\eqref{eq:p0tot} and \eqref{eq:p0sum}, the following complete expression for the $\varepsilon^2$-approximation of the fluid pressure at $z=0$ is obtained: \begin{equation} p_0^h\approx q(\tau)+\frac{h^2}{h_\ast^2}p_{02}^h(\tau)\label{eq:p0htot}, \end{equation} \begin{equation} \begin{split} p_{02}^h&=\frac{1}{2}\frac{a_{33}-a_{13}}{a_{44}}\Delta_yq(\tau)\\ &+\frac{k_1}{k_3}\dinteg{0}{\tau}{\Delta_yq(\theta)}{\theta}\\ &+2\left(\frac{a_{33}-a_{13}}{a_{44}}-1\right)\sum_{n=0}^\infty(-1)^n\dinteg{0}{\tau}{\me^{-n^2\pi^2(\tau-\theta)}\Delta_yq(\theta)}{\theta}. \end{split} \label{eq:p0hsum} \end{equation} This is exactly the same expression obtained by Argatov and Mishuris in \cite{argatovcontact}. Equation~\eqref{eq:Iw2fromp2} shows how to obtain the $\varepsilon^2$-term of the asymptotic expansion of the vertical displacement $w_2$ --- which is actually also the only non-zero term of the approximation of $w$ (see Eq.~\eqref{eq:trivterm}) --- starting from $p_2$ and $\VV_1$. Restricting attention to the coordinate $z=0$, that equation can be written as \begin{equation} \begin{split} \frac{w_0}{\varepsilon^2}\approx w_{02}&=\frac{1}{a_{33}}\left(\sum_{i=1}^4\xi_{0i}^{(M)}+\xi_0^{(2)}+\xi_0^{(k)}\right)+\frac{\Delta_yq}{2a_{33}}\dinteg{0}{1}{\me^{-2\gamma z}z^2}{z}\\ &+\frac{a_{13}\Delta_yq}{a_{33}a_{44}}\dinteg{0}{1}{\me^{(\alpha_{13}-2\gamma)z}\left(\dinteg{z}{1}{\me^{-\alpha\tilde{z}}\tilde{z}}{\tilde{z}}\right) }{z}.
\end{split} \end{equation} By making use of Eq.~\eqref{eq:xi0QMt}, Table~\ref{tab:phiM}, and Eqs.~\eqref{eq:xi02t} and \eqref{eq:xi0kt}, it is easy to notice that the only terms which do not vanish are \begin{equation}\begin{split} w_{02}&=\frac{\xi_{03}^{(M)}+\xi_{04}^{(M)}+\xi_0^{(k)}+b_{11}M_1(\me^{M_1-2\gamma}-1)\me^{A_2\tau}\ast\Delta_yq(\tau)}{a_{33}}\\ &=\frac{\me^{-\alpha}(\alpha^2+2\alpha+2)-2}{\alpha^3a_{44}}\Delta_yq\\ &+\frac{a_{13}(\alpha_{13}-\alpha)(\me^{\alpha_{13}-\alpha-2\gamma}-1)}{\alpha a_{33}a_{44}}\Delta_yq\ast\me^{(\alpha_{13}-\alpha)(\alpha_{13}-\alpha-2\gamma)\tau} \\ &+(\alpha-2\gamma)\frac{\me^{-\alpha}-1}{\alpha a_{44}}\Delta_yq\ast\me^{\alpha(\alpha-2\gamma)\tau}\\ &+\frac{k_1}{a_{33}k_3}\frac{\me^{-\gamma_1}-1}{\gamma_1}\ast\Delta_yq(\tau). \end{split} \end{equation} When all the exponents defined in Eq.~\eqref{eq:paramdef} are set to zero, the previous equation takes the form \begin{equation} w_{02}= -\frac{1}{3a_{44}}\Delta_yq(\tau)-\frac{k_1}{a_{33}k_3}\dinteg{0}{\tau}{\Delta_yq(\theta)}{\theta}. \end{equation} As for Eq.~\eqref{eq:p0hsum}, in analysing the behavior of a homogeneous transversely isotropic layer, Argatov and Mishuris \cite{argatovcontact} obtained the same result. Recovering all the original variables from Eqs.~\eqref{eq:nindv}, \eqref{eq:nunkv}, and \eqref{eq:asyexp}, and writing the load as $q=q(x',y',t')$, we find \begin{equation}\begin{split} w'_{02} &=\frac{\me^{-\alpha}(\alpha^2+2\alpha+2)-2}{\alpha^3a_{44}}h^3\Delta_{y'}q\\ &+\frac{a_{13}(\alpha_{13}-\alpha)(\me^{\alpha_{13}-\alpha-2\gamma}-1)}{\alpha a_{44}}h k_3\dinteg{0}{t'}{\me^{(\alpha_{13}-\alpha)(\alpha_{13}-\alpha-2\gamma)\frac{a_{33}k_3}{h^2}(t'-\theta)}\Delta_{y'}q}{\theta} \\ &+(\alpha-2\gamma)\frac{\me^{-\alpha}-1}{\alpha a_{44}}h k_3 a_{33}\dinteg{0}{t'}{\me^{\alpha(\alpha-2\gamma)\frac{a_{33}k_3}{h^2}(t'-\theta)}\Delta_{y'}q}{\theta}\\ &+\frac{\me^{-\gamma_1}-1}{\gamma_1}h k_1 \dinteg{0}{t'}{\Delta_{y'}q(\theta)}{\theta}.
\end{split} \label{eq:displ} \end{equation} \section{Numerical examples} In this Section, we present some numerical examples with the main purpose of highlighting the effect of the inhomogeneity on the response of the cartilage layer to an applied load. For this reason, every benchmark is compared to the results obtained from the solution by Argatov and Mishuris \cite{argatovcontact} for a transversely isotropic homogeneous model with the same average permeability and mechanical stiffness; since the averages chosen below are isotropic, this reference model is simply an isotropic homogeneous (IH) one. The thickness of the layer is taken to be $h=10$\,mm (for ease of scaling). The applied distributed load $q=q_{t'} q_r$ is axisymmetric with respect to the radial coordinate $r$; this symmetry is not required by the assumptions and is adopted only for clarity of presentation. The load results from the product of two factors: $q_{t'}=q_{t'}(t')$ assigns the behavior in time, while $q_r=q_r(x',y')=q_r(r)$ describes the spatial distribution. A total force $F=125$\,N is distributed according to the law \begin{equation} q_r(r)=\me^{-\left(\frac{1.73 r}{10 h}\right)^2}\left(\frac{1.73}{10 h}\right)^2\frac{F}{\pi}, \label{eq:spaF} \end{equation} so that 100\,N are loaded within a radius of about $10h$. A homogeneous and isotropic Poisson's ratio $\nu=0$ is considered, so that the stiffness parameters (see Eq.~\eqref{eq:paramdef}) result as follows: \begin{equation} \left\{\begin{aligned} A_{33}&=H_{A3}(z')=a_{33}\me^{2\gamma z'/h},\\ A_{13}&=\frac{\nu}{1-\nu}H_{A1}(z')=a_{13}\me^{\alpha_{13}z'/h},\\ A_{44}&=\frac{1-2\nu}{2(1-\nu)}H_{A1}(z')=a_{44}\me^{\alpha z'/h}, \end{aligned}\right. \end{equation} where $H_{A1}$ and $H_{A3}$ are respectively the planar and the vertical aggregate moduli. For fixed $\nu$, the behavior of $A_{44}$ and $A_{13}$ must be the same and due only to the variation of $H_{A1}$, so that $\alpha_{13}=\alpha$.
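The homogeneous limit stated above for $w_{02}$ can also be checked numerically: letting all the exponents of Eq.~\eqref{eq:paramdef} tend to zero for a constant unit $\Delta_y q$ must recover $-1/(3a_{44})-(k_1/(a_{33}k_3))\,\tau$. A Python sketch (the material constants are arbitrary test values, and the convolution of a constant input with an exponential kernel is evaluated in closed form):

```python
import math

# arbitrary dimensionless material constants for the check
a13, a33, a44 = 0.3, 0.7, 0.4
k1, k3 = 1.0, 1.0

def conv_const(c, tau):
    """Convolution of a unit-constant input with e^{c tau}: int_0^tau e^{c(tau-s)} ds."""
    return tau if abs(c) < 1e-14 else (math.exp(c * tau) - 1.0) / c

def w02(tau, alpha, alpha13, gamma, gamma1):
    """Dimensionless surface term w_02 for Delta_y q = 1."""
    t1 = (math.exp(-alpha) * (alpha**2 + 2 * alpha + 2) - 2) / (alpha**3 * a44)
    c2 = (alpha13 - alpha) * (alpha13 - alpha - 2 * gamma)
    t2 = (a13 * (alpha13 - alpha) * (math.exp(alpha13 - alpha - 2 * gamma) - 1)
          / (alpha * a33 * a44)) * conv_const(c2, tau)
    c3 = alpha * (alpha - 2 * gamma)
    t3 = ((alpha - 2 * gamma) * (math.exp(-alpha) - 1) / (alpha * a44)) * conv_const(c3, tau)
    t4 = (k1 / (a33 * k3)) * (math.exp(-gamma1) - 1) / gamma1 * tau
    return t1 + t2 + t3 + t4

eps, tau = 1e-3, 2.0
num = w02(tau, eps, eps, eps / 2, eps)                 # all exponents -> 0
hom = -1.0 / (3 * a44) - (k1 / (a33 * k3)) * tau       # homogeneous formula
print(num, hom)   # the two values agree to better than 0.1%
```

Note that the first term requires a small but not too small exponent (here $10^{-3}$) to avoid catastrophic cancellation in floating point.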
The examples consider a cartilage layer which is on average isotropic both in aggregate modulus and in permeability. Using the operator $\Avg{\cdot}$ to express the average along the depth: \begin{equation} \left\{\begin{aligned} \Avg{H_{A1}}=\Avg{H_{A3}}=\Avg{H_{A}}&=0.5\text{MPa},\\ \Avg{K_1}=\Avg{K_3}=\Avg{K}&=2\cdot 10^{-15}\frac{\text{m}^2}{\text{Pa s}}. \end{aligned}\right. \label{eq:setqua} \end{equation} The parameter used to describe both the inhomogeneity of the permeability $K_3$ and that of the stiffness $A_{33}$ is $\gamma$. However, one can use a more intuitive quantity, called the \textit{ratio of inhomogeneity} $R_I$, which quantifies how much $A_{33}$ grows from the articular surface to the bone (i.e., $\me^{2\gamma}$) and, at the same time, the ratio between $K_3$ at $z'=0$ and at $z'=h$. Thus, we put \begin{equation} \gamma=0.5\log R_I. \label{eq:setg} \end{equation} In order to study the effects of the inhomogeneity, we set the remaining parameters as functions of the same $R_I$ as follows: \begin{equation} \left\{\begin{aligned} \gamma_1&=2\log R_I,\\ \alpha=\alpha_{13}&=0.7 \log R_I . \end{aligned}\right. \label{eq:setexp} \end{equation} \begin{figure} \centering \includegraphics[scale=.75]{Figures/RA3param} \caption{Permeability and aggregate modulus versus the isotropic homogeneous ones, plotted along the depth $z'/h$ for a \textit{ratio of inhomogeneity} $R_I=3$ in a) and c), following the settings of Eqs.~\eqref{eq:setqua}, \eqref{eq:setg} and \eqref{eq:setexp}; plots b) and d) illustrate the depth-dependent anisotropy as ratios between those quantities and the equivalent inhomogeneous isotropic ones.} \label{fig:RA3param} \end{figure} As shown by Federico and Herzog in \cite{federico2008anisotropy} via a micromechanical approach, the anisotropy of permeability can be explained by the fact that the collagen fibers, whose statistical orientation varies with the depth, are impermeable.
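With these settings, the prefactors of Eq.~\eqref{eq:paramdef} follow from the prescribed depth averages; computing them from closed-form averages of the exponentials is our reading of Eq.~\eqref{eq:setqua}, sketched below for $R_I=3$:

```python
import math

R_I = 3.0
gamma = 0.5 * math.log(R_I)        # Eq. (setg)
gamma1 = 2.0 * math.log(R_I)       # Eq. (setexp)
alpha = 0.7 * math.log(R_I)

avg_HA, avg_K = 0.5, 2e-15         # prescribed depth averages, Eq. (setqua)

def avg_exp(c):
    """Average of e^{c z} over the dimensionless depth z in [0, 1]."""
    return (math.exp(c) - 1.0) / c

# prefactors chosen so the depth averages match Eq. (setqua)
a33 = avg_HA / avg_exp(2 * gamma)    # HA3 = a33 * e^{2 gamma z}   [MPa]
ha1 = avg_HA / avg_exp(alpha)        # HA1 = ha1 * e^{alpha z}     [MPa]
k3 = avg_K / avg_exp(-2 * gamma)     # K3  = k3  * e^{-2 gamma z}
k1 = avg_K / avg_exp(-gamma1)        # K1  = k1  * e^{-gamma1 z}

print(math.exp(2 * gamma))           # -> R_I: growth of A33 from surface to bone
print(round(a33, 4), round(ha1, 4))  # prefactors in MPa
```

By construction the surface-to-bone ratio of $A_{33}$ equals $R_I$, while all depth averages reproduce the values in Eq.~\eqref{eq:setqua}.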
Consequently, since the fibers are nearly parallel to the surface in the upper part of the layer and nearly perpendicular to the tidemark in the lower one, $K_1>K_3$ for small $z'$ and vice versa. Since the same fibers are known to be responsible also for the mechanical properties, and particularly for the anisotropy and inhomogeneity of the cartilage stiffness \cite{federico2005transversely}, it is expected that $H_{A1}>H_{A3}$ in the upper part of the layer and, conversely, $H_{A1}<H_{A3}$ in the lower one. Reproducing these features is the reason for the choice of the parameters above. The effect of this characterization is visualized in Fig.~\ref{fig:RA3param} for $R_I=3$, where ${K}^{\text{iso}}$ and ${H}^{\text{iso}}_{A}$ (see Fig.~\ref{fig:RA3param}.b and Fig.~\ref{fig:RA3param}.d) are the equivalent inhomogeneous isotropic permeability and aggregate modulus defined as \begin{equation} \left\{\begin{aligned} {K}^{\text{iso}}(z')&=\frac{2}{3}K_1+\frac{1}{3}K_3,\\ {H}^{\text{iso}}_{A}(z')&=\frac{2}{3}{H}_{A1}+\frac{1}{3}{H}_{A3}. \end{aligned}\right. \end{equation} \begin{figure} \centering \includegraphics[clip=true, keepaspectratio, scale=.70 ]{Figures/const1} \caption{Vertical displacement of the TITH layer surface under a constant load and comparison with the IH layer behavior: a) is the evolution in time at $r=0$; b) shows the deformation profile along the radius for different times.} \label{fig:const1} \end{figure} According to Eq.~\eqref{eq:displ}, we first show the behavior of the present cartilage model for a constant load ($q_{t'}=1$) applied for 1200\,s, shorter than the characteristic time $t'={h^2}/({a_{33} k_3})$ within which the solution is valid. Both Fig.~\ref{fig:const1}.a and Fig.~\ref{fig:const1}.b show that $R_I$ strongly influences the response of the structure.
In particular, while in the case of a homogeneous isotropic layer the deformation increases under a constant load, above a certain value of $R_I$, which depends on the parameter settings, a swelling phenomenon appears during the initial phase. It derives (see the second and third terms in Eq.~\eqref{eq:displ}) from the contribution of $K_3$, an effect that vanishes in the IH case. Fig.~\ref{fig:const1}.b draws attention to the deformation profile of the contact surface. The obtained asymptotic solution implies that $w'$ depends only on $\Delta_{y'}q$, so that, since for our loading condition its zeroes remain fixed (see Eq.~\eqref{eq:spaF}), every benchmark calculated in this section yields a homothetic deformation profile. \begin{figure} \centering \includegraphics[clip=true, width=.9\textwidth]{Figures/constRI3ab} \caption{a) Lateral displacement of a TITH layer with $R_I=3$ under constant load at $t'=0$; b) difference of lateral displacement between a TITH layer with $R_I=3$ and an IH one under constant load at $t'=0$.} \label{fig:constRI3ab} \end{figure} Through Eq.~\eqref{eq:vv1impl} the lateral displacements are obtained. In Fig.~\ref{fig:constRI3ab}.a, the axisymmetric $v'$ is plotted under a constant load from $r=0$ to $r=20h$ at $t'=0$\,s. The largest displacement is obtained at the loaded surface at $r\approx 4h$, with a value of $v'\approx 0.33 h$, while the base is constrained (see Eq.~\eqref{eq:BCorigd}). In Fig.~\ref{fig:constRI3ab}.b, the difference between the solution for a TITH layer with $R_I=3$ and the one for an IH layer is shown. The maximum difference appears at the same $r\approx 4h$ but at $z'\approx 0.5h$, and is about $0.034h$; in terms of lateral displacements, the TITH structure with $R_I=3$ thus deforms less than the equivalent IH one.
In Fig.~\ref{fig:constRI3ab}, the instantaneous response is calculated. One can notice that, although the surface displacements may appear qualitatively similar at the loaded surface (Fig.~\ref{fig:const1}.b), and the same fittings are in principle possible through both a homogeneous and an inhomogeneous model by calibrating the material parameters, remarkable differences emerge between the two models when the interior of the layer is examined. \begin{figure} \centering \includegraphics[clip=true, keepaspectratio, scale=.70 ]{Figures/texpt} \caption{Displacement at $r=0$. Peaks of the three considered loads: 1s, 5s, 20s. The representation timescale is logarithmic. The TITH layer has $R_I=3$ and does not present significant residual displacements at $t'=200$s. Its peak displacements occur at the respective $t_P$ and are about $0.17h$, while the peaks for IH are $0.19h$.} \label{fig:texpt} \end{figure} \begin{figure} \centering \includegraphics[clip=true, keepaspectratio, scale=.70 ]{Figures/sine5th} \caption{Vertical displacement at $r=0$ under a sinusoidal load oscillating with a frequency of 1Hz. Three different inhomogeneous layers ($R_I=1/3,3,10$) are compared to an IH one.} \label{fig:sine5th} \end{figure} The second case that we consider deals with a load which reaches its peak at $t'=t_P$ and successively decreases to 0 asymptotically following the law: \begin{equation} q_{t'}=\frac{t'\me^{-(\frac{t'}{t_P}+1)}}{t_P}. \end{equation} The displacement of the point $r=0$ is depicted in Fig.~\ref{fig:texpt} for three different values of $t_P$ for the first 200\,s. The peaks of the displacement are the same in the three cases and occur approximately at the respective $t_P$. As in the case of a constant load, the deformation at $r=0$ becomes smaller for inhomogeneities with $R_I>1$. The difference between the IH model and the TITH ($R_I=3$) model consists mainly in the behavior at large $t'$.
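That this load history peaks exactly at $t'=t_P$ follows from a one-line differentiation; a quick symbolic sketch (assuming sympy):

```python
import sympy as sp

t, tP = sp.symbols("t t_P", positive=True)
q = (t / tP) * sp.exp(-(t / tP + 1))   # the load history q_{t'}

crit = sp.solve(sp.diff(q, t), t)      # stationary point(s) of q
print(crit)                            # -> [t_P]
print(sp.simplify(q.subs(t, tP)))      # peak value: exp(-2)
```

The peak value $\me^{-2}$ is independent of $t_P$, consistent with the three peaks in Fig.~\ref{fig:texpt} differing only through the response of the layer, not through the load amplitude.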
While the TITH layer returns to the undeformed configuration after the load removal, the homogeneous solution presents a residual deformation that depends on $t_P$, that is, on the rate at which the load is applied. Finally, in Fig.~\ref{fig:sine5th} the effect on $w'|_{r=0}$ of a sinusoidal load is plotted. The applied frequency is 1\,Hz, similar to the one experienced by knee articular cartilage during walking. As expected, since the period of 1\,s is small in comparison with the characteristic time, the short-term difference between differently inhomogeneous layers is exclusively in terms of amplitude, and no residual displacements are accumulated over subsequent cycles. The structure deforms as a monophasic elastic one; once again, $R_I>1$ produces a \textit{stiffer} response, while $R_I<1$ a softer one. \section{Discussion and conclusions} An analytical approach is provided for the solution of the deformation problem of a TITH biphasic thin layer. The mathematical analysis is conducted by means of the Laplace transformation and asymptotic analysis. The leading terms of the displacement and fluid pressure fields are retrieved through the solution of ordinary differential equations. Such equations are made particularly simple by the assumption of an exponential in-depth variation of the solid matrix elastic stiffness and permeability, with the only restriction of keeping the product $K_3 A_{33}$ constant along the transverse direction of the layer. This particular setting appears reasonable since experimental investigations on articular cartilage show that the aggregate modulus, contrary to the permeability, increases toward the subchondral bone (see \cite{wang2001analysis, maroudas1968permeability}). The aim of the present work is to present an explicit form for the deformation of the external cartilage surface which can be straightforwardly applied to the solution of contact problems.
This is achieved through the formula in Eq.~\eqref{eq:displ}. In addition to the contributions of $A_{33}$, $A_{44}$, $k_1$ and $k_3$ found by Argatov and Mishuris in \cite{argatovcontact}, the one of $A_{13}$ is shown, together with the effects of the variation parameters $\alpha_{ij}$ and $\gamma_i$ (see their definitions in Eq.~\eqref{eq:paramdef}). As discussed in Section~\ref{sec:intro}, the role of inhomogeneity and anisotropy in affecting the internal state of the cartilage layer during loading encouraged many authors to develop fully 3D models for its mechanical analysis. However, their applicability to the study of contact problems may be severely limited by the large difference in scales between the thin tissue and the bones interacting along the articular joint, and by the resulting numerical problems, possibly forcing the use of homogeneous elements as interphases. The simplicity of Eq.~\eqref{eq:displ}, on the contrary, suggests that the same constitutive equations can be used both for an insight into the layer (see Fig.~\ref{fig:constRI3ab}), for instance in experimental investigations, and for large-scale contact problems. The full-thickness layer can finally be substituted by a zero-thickness one through transmission conditions. An asymptotic-based finite element can then be implemented for assessing patient-specific problems for real diarthrodial joints and complex geometries, once the material parameters are experimentally estimated. Only in view of a future application to contact problems are the results presented extensively on the contact surface; for the reader who wishes to obtain the full-depth solution, it is enough to remove the restriction on the $z$-coordinate in the Laplace inversion shown in the Appendices. \section*{Acknowledgments} GV participated in the present work under the support of \textit{FP7-MC-ITN-2013-606878-CERMAT2}; IA and GM acknowledge \textit{H2020-MSCA-RISE-2014-644175-MATRIXASSAY}.
\bibliographystyle{abbrv}
\section{Introduction} To understand the emergence of cooperative behavior among selfish individuals, researchers have considered various mechanisms, such as network reciprocity~\cite{1,2}, voluntary participation~\cite{3,4}, aspiration~\cite{5,6,7}, social diversity~\cite{9,10}, migration~\cite{11,12,13}, chaotic payoff variations~\cite{14}, extortion~\cite{15,16,17}, and punishment~\cite{the1,the2,the3,the4}. The reproductive ability, also known as the teaching ability or the learning ability, has been extensively studied in evolutionary games~\cite{a1,a2,a3,a4,a5,a6}. Szolnoki et al. proposed an inhomogeneous teaching ability in which the probability that individual $i$ adopts a randomly chosen neighbor $j$'s strategy depends on the payoff difference and on a two-value pre-factor $\omega$ characterizing the teaching ability of neighbor $j$~\cite{a1}. They found that inhomogeneous teaching ability can promote cooperation for prisoner's dilemma games on lattices~\cite{a1} and complex networks~\cite{a2}. Guan et al. found that introducing an inhomogeneous teaching activity of individuals can remarkably promote cooperation in spatial public goods games~\cite{a3}. Szolnoki and Perc defined the teaching ability of a node $i$ as its collective influence, which is the product of its reduced degree and the total reduced degree of all nodes $j$ at a hierarchical depth $\ell$ from node $i$~\cite{a4}. It was found that there exists an optimal hierarchical depth for the determination of collective influence that favors cooperation. Chen et al. proposed an inhomogeneous learning ability in which the two-value pre-factor $\omega$ characterizes the strength of individual $i$'s own learning activity~\cite{a5}. They found that appropriate intermediate levels of learning activity can promote or sustain cooperation for prisoner's dilemma games in small-world networks and scale-free networks.
Wu {\it et al.} discovered that cooperation on square lattices is promoted (inhibited) in the case of synchronous (asynchronous) strategy updating if heterogeneous learning ability is considered~\cite{a6}. In many real-life situations, an individual tends to follow the majority in behavior or opinion within its interaction range. Recently, the consideration of conformity has attracted much attention in the study of evolutionary games. Szolnoki and Perc designated a fraction of the population as being driven by conformity rather than payoff maximization~\cite{inter1,inter2}. These conformists simply adopt whichever strategy is most common within their interaction range at any given time, regardless of the expected payoff. They showed that an appropriate fraction of conformists within the population introduces an effective surface tension around cooperative clusters and ensures smooth interfaces between different strategy domains. Motivated by the work of Szolnoki and Perc, we propose a conformity-driven reproductive ability in which the probability that individual $i$ adopts a randomly chosen neighbor $j$'s strategy depends on the payoff difference and on a pre-factor $\omega_{ij}$ characterizing the popularity of $j$'s strategy among $i$'s neighbors. The value of $\omega_{ij}$ is above (below) 0.5 if $j$'s strategy is the majority (minority) in $i$'s neighborhood. Different from previous works on teaching ability, the pre-factor in our model is determined not only by $j$ but also by $i$'s other neighbors. \section{Model}\label{sec:model} Our model is described as follows. Player $x$ can take one of two strategies: cooperation or defection, which are described by \begin{equation}\label{1} s_{x} =\left( \begin{array}{c} 1 \\ 0 \\ \end{array} \right)\mathrm{or}\left( \begin{array}{c} 0 \\ 1 \\ \end{array} \right), \end{equation} respectively. At each time step, each individual plays the prisoner's dilemma game with its nearest neighbors. 
An individual will punish the neighbors that hold different strategies. The accumulated payoff of player $x$ can thus be expressed as \begin{equation} \label{2} P_{x}=\sum_{y\in \Omega_{x}}s_{x}^{T}Ms_{y}, \end{equation} where the sum runs over the nearest-neighbor set $\Omega_{x}$ of player $x$ and $M$ is the rescaled payoff matrix given by \begin{equation}\label{3} M=\left( \begin{array}{cc} 1 & 0 \\ b & 0 \\ \end{array} \right). \end{equation} Here the parameter $b\,(>1)$ denotes the temptation to defect. Initially, cooperators and defectors are randomly distributed with equal probability 0.5. Players asynchronously update their strategies in a random sequential order~\cite{random1,random2,random3}. First, an individual $i$ is randomly selected and obtains the payoff $P_{i}$ according to the above equations. Next, individual $i$ chooses one of its nearest neighbors at random, and the chosen neighbor $j$ also acquires its payoff $P_{j}$. Finally, individual $i$ adopts neighbor $j$'s strategy with the probability~\cite{a1}: \begin{equation}\label{4} W(s_{i}\leftarrow s_{j})=\omega_{ij}\frac{1}{1+\exp[(P_i-P_j)/K]}, \end{equation} where $K$ characterizes the noise introduced to permit irrational choices and $\omega_{ij}$ characterizes the ability of $j$ to transfer its strategy to $i$. We define the reproductive ability $\omega_{ij}$ as \begin{equation}\label{5} \omega_{ij}=\frac{1}{1+\exp[(k_i/2-N_{s_j})/H]}, \end{equation} where $N_{s_j}$ is the number of players adopting strategy $s_{j}$ within the interaction range of player $i$ (including $j$ itself), $k_i$ is the degree of player $i$, and $H\,(>0)$ represents the steepness of the function. The more popular $j$'s strategy is in $i$'s neighborhood, the higher the reproductive ability of $j$. For $H=\infty$, the reproductive ability is constant and equal to 0.5; in this situation, our model reduces to the original homogeneous-ability case. 
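To make the update rule concrete, the pre-factor $\omega_{ij}$ and the adoption probability $W(s_{i}\leftarrow s_{j})$ defined above can be sketched in a few lines of Python. This is an illustration of ours, not code from the paper; the function names are hypothetical:

```python
import numpy as np

def omega(k_i, n_sj, H):
    """Conformity pre-factor: omega_ij = 1 / (1 + exp((k_i/2 - N_sj)/H))."""
    return 1.0 / (1.0 + np.exp((k_i / 2.0 - n_sj) / H))

def adoption_prob(P_i, P_j, k_i, n_sj, H, K=0.1):
    """Probability that i adopts j's strategy: Fermi rule scaled by omega_ij."""
    return omega(k_i, n_sj, H) / (1.0 + np.exp((P_i - P_j) / K))

# On a square lattice (k_i = 4): if j's strategy is held by 3 of i's 4
# neighbors, omega_ij > 0.5; if held by only 1 of them, omega_ij < 0.5.
print(omega(4, 3, 0.3), omega(4, 1, 0.3))
```

For $N_{s_j}=k_i/2$ the pre-factor equals exactly 0.5 for any $H$, consistent with the homogeneous limit $H=\infty$ mentioned above.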
Conversely, for $H=0$ the reproductive ability becomes step-like, so that $i$ refuses to adopt strategy $s_{j}$ if this strategy is the minority in $i$'s neighborhood. \section{Results}\label{sec: results} \begin{figure} \begin{center} \scalebox{0.4}[0.4]{\includegraphics{Graph1.eps}} \caption{(Color online) The fraction of cooperators $\rho_{c}$ as a function of the temptation to defect $b$ for different values of the steepness parameter $H$. For each value of $H$, $\rho_{c}$ decreases to 0 as $b$ increases. For small values of $H$ (e.g., $H=0.01$ or $H=0.3$), cooperators can occupy the whole system when $b$ is small. However, for large values of $H$ (e.g., $H=10$), full cooperation cannot be reached even when $b=1$.} \label{fig1} \end{center} \end{figure} \begin{figure} \begin{center} \scalebox{0.4}[0.4]{\includegraphics{Graph2.eps}} \caption{(Color online) The fraction of cooperators $\rho_{c}$ as a function of the steepness parameter $H$ for different values of the temptation to defect $b$: results from (a) simulations and (b) theoretical analysis.} \label{fig2} \end{center} \end{figure} Following previous studies~\cite{noise1,noise2}, we set the noise level to $K = 0.1$. The key quantity characterizing the cooperative behavior of the system is the fraction of cooperators $\rho_{c}$ in the steady state. In the following simulations, $\rho_{c}$ is obtained by averaging over the last $10^{3}$ Monte Carlo steps (MCS) of the entire $10^{5}$ MCS. Each MCS consists of on average one strategy-updating event per individual. Each data point is obtained by averaging over 100 different realizations. Unless otherwise specified, all simulations are performed on a $100 \times 100$ square lattice with periodic boundary conditions. \begin{figure*} \begin{center} \scalebox{0.78}[0.78]{\includegraphics{Graph3.eps}} \caption{(Color online) Snapshots of typical distributions of cooperators (blue) and defectors (red) at different time steps. 
Initially, we set cooperators (defectors) in the left (right) half of the square lattice. The temptation to defect is $b=1.05$. The steepness parameter is $H=0.3$ for (a)-(d) and $H=10$ for (e)-(h). For $H=0.3$, the cooperator cluster continually expands and the boundary between the two competing clusters remains smooth during the whole evolution. For $H=10$, the defector cluster continually expands and the boundary becomes jagged as time evolves.} \label{fig3} \end{center} \end{figure*} \begin{figure}[htbp] \begin{center} \scalebox{0.37}[0.37]{\includegraphics{Graph4.eps}} \caption{(Color online) The time evolution of the average number of cooperative neighbors $\langle n_{c} \rangle$ for players along the interfaces separating domains of cooperators and defectors. We define a player along the interface as one who has at least one neighbor with the opposite strategy. The temptation to defect is $b=1.05$. For each value of the steepness parameter $H$, $\langle n_{c} \rangle$ first decreases and then increases as time evolves. } \label{fig4} \end{center} \end{figure} Figure~\ref{fig1} shows the fraction of cooperators $\rho_{c}$ as a function of the temptation to defect $b$ for different values of the steepness parameter $H$. From Fig.~\ref{fig1}, one can see that, for each value of $H$, $\rho_{c}$ decreases to 0 as $b$ increases. For small values of $H$ (e.g., $H=0.01$ or $H=0.3$), full cooperation can be reached when $b$ is below a threshold value. However, for large values of $H$ (e.g., $H=10$), cooperators cannot take over the whole system even when $b=1$. Figure~\ref{fig2} shows the fraction of cooperators $\rho_{c}$ as a function of the steepness parameter $H$ for different values of the temptation to defect $b$. We see that, for relatively small values of $b$ (e.g., $b = 1.01$), $\rho_{c}$ decreases as $H$ increases. 
However, for larger values of $b$ (e.g., $b = 1.05$ or 1.09), there exists an optimal value of $H$ (about 0.2) leading to the highest cooperation level. The dependence of $\rho_{c}$ on $H$ can be qualitatively predicted analytically through a pair-approximation analysis~\cite{pair1,pair2}, the results of which are shown in Fig.~\ref{fig2}(b). To intuitively understand why a moderate value of $H$ best enhances cooperation, we plot spatial strategy distributions as time evolves for different values of $H$ when the temptation to defect is $b=1.05$. Initially we set a giant cooperator (defector) cluster in the left (right) half of the square lattice. From Figs.~\ref{fig3}(a)-(d), one can see that for a moderate value of $H$ (e.g., $H=0.3$), the cooperator cluster continually expands while the defector cluster gradually shrinks. Note that for $H=0.3$, the boundary between the two competing clusters remains smooth during the whole evolution. However, for a large value of $H$ (e.g., $H=10$), the defector cluster gradually invades the cooperator cluster and the original big cooperator cluster is divided into small clusters [see Figs.~\ref{fig3}(e)-(h)]. For $H=10$, the interfaces separating domains of cooperators and defectors become jagged. As pointed out in Refs.~\cite{border1,border2}, noisy borders are beneficial for defectors, while straight domain walls help cooperators to spread. For a very small value of $H$ (e.g., $H=0.01$), the cooperator and defector clusters remain almost unchanged (results not shown here). \begin{figure*} \begin{center} \scalebox{0.8}[0.8]{\includegraphics{Graph5.eps}} \caption{(Color online) The fraction of cooperators $\rho_{c}$ as a function of the steepness parameter $H$ for different values of the temptation to defect $b$ under different types of networks and different kinds of updating rules. Left panel: for square lattices and synchronous updating rule. 
Middle panel: for scale-free networks and asynchronous updating rule. Right panel: for scale-free networks and synchronous updating rule. The network size is set to 10000 and the average degree of the network is 4. Note that for scale-free networks, we use degree-normalized payoffs. } \label{fig5} \end{center} \end{figure*} Next, we study the average number of cooperative neighbors $\langle n_{c} \rangle$ for players along the interfaces separating domains of cooperators and defectors. A player is along the interface if it has at least one neighbor with the opposite strategy. Figure~\ref{fig4} shows the time evolution of $\langle n_{c} \rangle$ for different values of the steepness parameter $H$ when the temptation to defect is $b=1.05$. One can see that initially $\langle n_{c} \rangle$ decreases from 2 to about 1.6, and then increases to a stable value. For a small value of $H$ (e.g., $H=0.01$) and a large value of $H$ (e.g., $H=2$), the final value of $\langle n_{c} \rangle$ is below 2. However, for a moderate value of $H$ (e.g., $H=0.3$), $\langle n_{c} \rangle$ finally reaches about 3.1. Once the value of $\langle n_{c} \rangle$ exceeds 2, cooperation becomes the majority strategy in a player's neighborhood. In this case, the conformity-driven reproductive ability is beneficial for the expansion of cooperator clusters. In all the above studies, we use square lattices and asynchronous strategy updating. In fact, our finding that a moderate value of the steepness parameter $H$ can best promote cooperation is robust with respect to different kinds of network structures and different ways of strategy updating. Since the square lattice is a homogeneous interaction network, it is also interesting to consider heterogeneous interaction networks. We use the famous Barab\'{a}si-Albert scale-free networks to construct heterogeneous interactions~\cite{BA}. In the asynchronous updating rule, at each time step only a randomly selected player is allowed to update its strategy. 
In the synchronous updating rule, at each time step all players update their strategies simultaneously. We consider three cases: square lattices with synchronous strategy updating, scale-free networks with asynchronous strategy updating, and scale-free networks with synchronous strategy updating. From Fig.~\ref{fig5}, one can see that in all these cases, the cooperation level is highest at a moderate value of $H$ when the temptation to defect $b$ is fixed. \section{Conclusions and discussions}\label{sec: conclusion} To summarize, we have proposed a conformity-driven reproductive ability in which the probability that a player $i$ adopts a neighbor $j$'s strategy depends on their payoff difference and on a pre-factor $\omega_{ij}$ characterizing the popularity of $j$'s strategy among $i$'s neighbors. The value of $\omega_{ij}$ increases with the number of $i$'s neighbors holding the same strategy as $j$. Both numerical and theoretical results show that the cooperation level of the spatial prisoner's dilemma game can be greatly enhanced by moderately increasing the teaching ability of the neighbor with the majority strategy in the local community. In the case of the conformity-driven reproductive ability, the borders of cooperator clusters become smooth; thus cooperators along the borders can get more help to resist the invasion of defectors. Note that the concept of conformity is widely present in opinion dynamics~\cite{majority1,majority2}. We hope our work can attract more interest in the study of heterogeneous reproductive ability based on opinion dynamics. \begin{acknowledgments} This work was supported by the National Natural Science Foundation of China (Grants Nos. 61403083, 71301028 and 71671044), and the Excellent Youth Science Foundation of Fujian Province (Grant No. 2016J06017). \end{acknowledgments}
\section{Introduction} \IEEEPARstart{L}{oad} forecasting plays a key role in the management and dispatching of power systems. Load forecasting involves forecasting the load demand of a future time span. Load forecasting within an interval of an hour to a week is often referred to as short-term load forecasting (STLF). The accuracy of STLF significantly affects the economic operation and reliability of a power system: inadequate STLF may lead to insufficient reserve capacity and the allocation of expensive peaking units, or cause an unnecessarily large reserve capacity. Both outcomes increase operating costs. Therefore, accurate load forecasting plays an important role in energy market analysis and economic dispatch in the power industry. In future smart grids, reliable STLF is of great significance for operators to manage grids with higher efficiency and lower cost. Load demand is a non-stationary process that is affected by many factors, including weather conditions, seasonal effects, socioeconomic factors, and random effects \cite{1-hahn2009electric}, which makes load demand difficult to predict. At present, many methods have been proposed for STLF. Most of these methods are based on statistical methods or artificial intelligence algorithms. In the early days, the autoregressive moving average (ARMA) model \cite{2-huang2003short}, fuzzy logic \cite{3-rejc2011short}, expert systems \cite{4-kandil2002long} and other algorithms were widely used in load forecasting. In recent years, artificial intelligence methods such as neural networks and support vector machines \cite{5-de2011short,30-ceperic2013strategy} have been proposed. Several different types and variants of neural networks have also been proposed and applied to STLF, such as wavelet neural networks \cite{9-guan2012very,8-chen2009short}, extreme learning machines (ELMs) \cite{10-li2015short} and wavelet-based hybrid neural networks \cite{34-li2015novel}. 
At present, deep neural networks (DNNs) have achieved great success in many fields \cite{11-bahdanau2014neural,12-tompson2015efficient}. Model structure design and model depth play an important role in this success \cite{19-szegedy2015going,20-hu2018squeeze,15-he2016deep}. The application of DNNs to short-term load forecasting is a relatively new topic. In \cite{16-ryu2017deep,17-chen2018short}, deep artificial neural networks and deep residual networks are applied to load forecasting. In \cite{35-kong2017short}, a long short-term memory (LSTM) recurrent neural network is applied to the task of short-term load forecasting for individual residential households. In \cite{36-pramono2019deep}, a WaveNet-based model that employs dilated causal residual convolutional neural network (CNN) layers and an LSTM layer is applied to load forecasting. In this work, we propose a novel model structure for load forecasting. First, we propose the dense average connection, in which the outputs of all preceding layers are averaged as the input of the next layer in a feed-forward fashion. Based on the dense average connection, we build the dense average network. Second, we further improve the prediction accuracy by an ensemble method. Finally, we perturb the input of the model to varying degrees to verify the robustness of the model. The main contributions of this work are as follows: \begin{itemize} \item We propose the dense average connection, and based on this connection, we build the dense average network. On two public datasets, we evaluate the validity of the dense average network. The dense average network does not require external feature extraction and only uses the load, temperature and date information as input. \item We use an ensemble method to further improve the prediction performance. 
The experimental results show that compared with a single model, the ensemble method can not only improve the prediction accuracy but also reduce the standard deviation and peak value of the final prediction bias. \item To ensure the reliability of model prediction, we conduct a comprehensive analysis of the robustness of the model. We disturb the original load data and temperature data to different degrees. The experimental results show that the proposed model is very robust to data noise. \end{itemize} The remainder of this paper is organized as follows. In Section II, we introduce the model structure, the ensemble method, and implementation details. In Section III, we compare the proposed model with current methods on two public datasets to verify its validity, and we also apply the proposed model to a real dataset. Section IV summarizes the paper and outlines future work. We will release our experimental code and trained models later. \section{Methodology} This paper proposes the Dense Average Network (DaNet) for short-term load forecasting. We first construct features as input to the model from three aspects: historical load data, historical temperature data, and the date information of the historical load data. Secondly, we introduce the origin of the dense average connection. Based on the dense average connection, we build the dense average network. After that, we use an ensemble method to improve the accuracy of the load forecasting. Finally, in order to ensure the reliability of the model, we perform a robustness analysis on the model. \subsection{Model Input Variables } The actual load demand is often affected by many factors, such as the economy, weather, seasons, and holidays \cite{1-hahn2009electric}. Therefore, the features selected as the input of the model greatly influence the final prediction result. 
Meanwhile, we need to note that the raw data for the model input variables we construct should be easily accessible so that our method can be applied in most real-world scenarios. In this work, we mainly construct features from three aspects: historical load data, historical temperature data and date information. Specifically, the variables related to the input are listed in Table I. For the historical load data, we extract historical data from the past two days. Since the load data interval of the public datasets is 1 hour, the size of the load data is 48. The recent fluctuations of the data can often indicate its recent trends. For example, if the load data trend upward, it is likely that the load data will also trend upward in the near future. Therefore, to capture the recent fluctuations of the data, we extract the slope of the historical load data. The slope value {\em $S_{h}$} at time {\em h} is defined as \begin{equation} S_{h}=\left (L^{h}-L^{h-1} \right )/\left (h-\left ( h-1 \right ) \right )=L^{h}-L^{h-1} \end{equation} where {\em $L^{h}$} is the load value at time {\em h} and {\em $L^{h-1}$} is the load value at time {\em h-1}. When {\em $S_{h}$} \textgreater0, the load value trends upward; when {\em $S_{h}$} \textless0, the load value trends downward. For the historical temperature data, we obtain the temperature data corresponding to the historical load data. Therefore, the size of the temperature data is also 48. For the date information, we mainly extract two features: month and weekday. We do not extract season and holiday features because the information of the season is contained in the month, and the information of holidays is contained in the week. In our experiments, we find that adding season and holiday features does not increase the accuracy of the model prediction. In data processing, we apply one-hot encoding to month and weekday. 
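As a minimal sketch of the feature construction just described (the helper name and the zero-padding of the first slope entry are our assumptions, not specified in the paper), the input variables of Table I can be assembled as follows:

```python
import numpy as np

def build_inputs(load, temp, month, weekday):
    """Assemble the input variables of Table I from raw hourly data.

    load, temp: sequences covering at least the last 48 hours;
    month in 1..12, weekday in 0..6.
    """
    L = np.asarray(load[-48:], dtype=float)       # last two days of load
    T = np.asarray(temp[-48:], dtype=float)       # matching temperatures
    S = np.concatenate(([0.0], np.diff(L)))       # S_h = L^h - L^{h-1}; first entry padded (assumption)
    L_S = np.stack([S, L], axis=1).reshape(-1)    # [[S_1, L_1], ..., [S_48, L_48]], size 96
    M = np.eye(12)[month - 1]                     # one-hot month
    W = np.eye(7)[weekday]                        # one-hot weekday
    return L, T, S, L_S, M, W
```

The returned sizes (48, 48, 48, 96, 12, 7) match the "Size" column of Table I.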
\subsection{Dense Average Network Structure} In \cite{18-huang2017densely}, a novel connection is proposed for image recognition. Layer $\ell$ concatenates the outputs of all preceding layers (feature maps of the same size), which aims to achieve feature reuse and improve the information flow between layers. Let {\em $H_{\ell}(.)$} be the nonlinear transformation of layer $\ell$; then the output of layer $\ell$ is \begin{equation} x_{\ell}=H_{\ell}([x_{0},x_{1},...,x_{\ell-1}]) \end{equation} where [$x_{0}$,$x_{1}$,...,$x_{\ell-1}$] indicates that the input and the feature maps from layer 1 to layer $\ell-1$ are concatenated along the depth dimension. Based on the convolution operation, DenseNet \cite{18-huang2017densely} obtains superior results when processing two-dimensional data with spatial correlations, such as images. However, because the load data is one-dimensional, it is impossible to use DenseNet directly for load forecasting. Of course, we could also directly concatenate the outputs of all processing layers to form a large vector, but there are two major problems with this method. First, the number of parameters of the network model increases dramatically due to the concatenation of all processing layers. Second, as the network depth increases, this concatenation method causes the model to fail to train. We verify this in experiments. Therefore, a feasible idea is to use a combination method that keeps the input dimension of layer $\ell$ the same as the output dimension of layer $\ell-1$. 
\begin{table}[] \centering \caption{Model input variables and input related variables} \begin{tabular}{@{}lrl@{}} \toprule Input Variable & \multicolumn{1}{l}{Size} & Description of Input Variable \\ \midrule L & 48 & Load data for the last two days \\ T & 48 & Temperature of load data \\ S & 48 & Slope of load data \\ $L_{i}$ & 1 & The {\em i}-th element of L \\ $S_{i}$ & 1 & The {\em i}-th element of S \\ L\_S & 96 & {[}{[}$S_{1}$,$L_{1}${]},..,{[}$S_{48}$ ,$L_{48}${]}{]} \\ W & 7 & One-hot code for weekday \\ M & 12 & One-hot code for Month \\ \bottomrule \end{tabular} \label{tab:table1} \end{table} The first method we use is to add all the outputs of the processing layers. Unfortunately, this method causes gradient explosion. We analyze the essential problem of this method, which provides the idea for the method we ultimately adopt. Let $x_{0}$ be the input of the model; then the output of the model with $\ell$ layers is \begin{equation} \centering x_{\ell}=H_{\ell}(x_{\ell-1})+\sum _{i=0}^{\ell-1}x_{i} \end{equation} The gradient of the total loss of the neural network with respect to {\em$x_{0}$} under back propagation is \begin{equation} \frac{\partial L}{\partial x_{0}} = \frac{\partial L}{\partial x_{\ell}}\frac{\partial x_{\ell}}{\partial x_{0}}\\ =\frac{\partial L}{\partial x_{\ell}}(\frac{\partial H_{\ell}(x_{\ell-1})}{\partial x_{0}}+\sum _{i=1}^{\ell-1}\frac{\partial x_{i}}{\partial x_{0}}+1) \end{equation} where {\em L} is the loss function of the neural network. Since the outputs of all processing layers are combined by the addition operation, $\partial x_{i}/\partial x_{0}$ $>$1. Therefore, as the number of layers increases, $\sum _{i=1}^{\ell-1}\frac{\partial x_{i}}{\partial x_{0}}$ grows at least linearly, and gradient explosion occurs when building a deep model. 
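A toy scalar sketch (our illustration, taking a linear layer $H_{\ell}(x)=wx$ with $|w|<1$) makes this blow-up concrete: since the map is linear in $x_{0}$, the gradient $\partial x_{\ell}/\partial x_{0}$ equals the forward value at $x_{0}=1$, and it grows rapidly with depth even though each individual layer is contractive.

```python
def additive_forward(x0, w, depth):
    """Additive combination: x_l = H_l(x_{l-1}) + sum_{i<l} x_i,
    with a toy linear layer H_l(x) = w * x (scalar case)."""
    xs = [x0]
    for _ in range(depth):
        xs.append(w * xs[-1] + sum(xs))
    return xs[-1]

# d x_L / d x_0 equals additive_forward(1, w, L) because the map is linear.
for depth in (2, 4, 8):
    print(depth, additive_forward(1.0, 0.5, depth))
```

With $w=0.5$ the effective gradient roughly doubles per layer, illustrating why the additive combination cannot be trained at depth.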
We finally use the average operation to combine the outputs of all processing layers. The output of layer ${\ell}$ using this method is \begin{equation} x_{\ell}=\frac{1}{\ell+1}(H_{\ell}(x_{\ell-1})+\sum _{i=0}^{\ell-1}x_{i}) \end{equation} There are three benefits to using the average operation to combine the outputs of all processing layers. First, since the combined terms are averaged rather than summed, the problem of gradient explosion is largely alleviated, which allows the model to be trained to a large depth. Second, the average operation does not introduce new parameters. Finally, assuming that the outputs of the first ${\ell-1}$ processing layers follow similar distributions, their average keeps approximately the same distribution, so the output of layer ${\ell}$ also maintains an approximately stable distribution across processing layers. We call this connection the dense average connection. To facilitate the establishment of a deeper network, we construct a dense average block. As shown in Figure 1, each dense average block has four fully connected layers. To build a deep network, we simply stack multiple dense average blocks. Based on the dense average block, we build the dense average network. The structure of the dense average network is shown in Figure 2. Our model has a total of 3 inputs: load data, temperature data and date information. Among them, the temperature data and date data are one-dimensional, so we only use fully connected layers for feature extraction. The combination of load data and slope data is two-dimensional, so we can use two-dimensional convolutions to extract features. To fully extract features of different scales, we borrow the design idea of Inception \cite{19-szegedy2015going}. We use four convolution kernels of different sizes to extract features from the load data. 
The sizes of the four convolution kernels are $1\times 2,2\times 2,3\times 2,4\times 2$, and the stride is set to 1 in all convolution operations. To pay more attention to features with rich information and suppress features with less information, a squeeze-and-excitation (SE) block was proposed in \cite{20-hu2018squeeze} for feature recalibration. This mechanism works particularly well with convolution operations. The structure of the SE block is shown in Figure 3. The overall information of the feature map is obtained through an average pooling operation. After that, two hidden layers are used to generate weights. Finally, the generated weights are multiplied by the feature maps. In our model, we also add an SE block to each convolution layer. \begin{figure}[] \centering \includegraphics[ width=0.13\textwidth]{fig1.png} \caption{Structural diagram of the dense average block. Each dense average block consists of four fully connected layers. The blue circle in the figure represents the average operation.} \end{figure} The dense average network uses a total of 5 dense average blocks, and the depth of the model is 22 layers (excluding the input layer and output layer). The number of neurons or kernels in all hidden layers is set to 128. Except for the activation function of the last layer in the SE block, which is the sigmoid, all activation functions are set to ReLU \cite{21-dahl2013improving}. The forms of the ReLU and sigmoid are shown in (6) and (7). {\setlength\arraycolsep{2pt} \begin{eqnarray} ReLU(x)&=&max\left \{0,x \right \} \\ Sigmoid(x)&=&\frac{1}{1+e^{-x}} \end{eqnarray}} \begin{figure}[] \centering \includegraphics[ width=0.35\textwidth]{fig2.png} \caption{Model structure of the dense average network. 
Model input variables are all listed in Table I.} \end{figure} \begin{figure}[] \centering \includegraphics[ width=0.16\textwidth]{fig3.png} \caption{ Structure of the squeeze-and-excitation (SE) block.} \end{figure} \subsection{Ensemble Method Based on Dense Average Network} In the field of machine learning, a common approach to improving the prediction results of models is to use an ensemble of multiple models. In this work, we first train multiple different dense average networks; then we average the predicted results of the models to obtain the final result. This ensemble method is called bagging \cite{17-chen2018short}. In fact, the error of a model can be divided into two parts: bias and variance. Bagging reduces the error of the model by lowering its variance. We assume that {\em f(x)} is the designed model, {\em f(x;D)} is the prediction of the model trained on dataset {\em D}, and {\em y} is the label of the sample; then, the expectation of the mean square error of the model prediction is {\setlength\arraycolsep{2pt} \begin{eqnarray} E[(f(x;D)-y)^{2}] &=& E[(f(x;D)-E[f(x;D)])^{2}] \nonumber\\ &+&(E[f(x;D)]-y)^{2} \end{eqnarray}} Suppose that we have $m$ models, where the error of each model for each sample is {\em $e_{i}$}, the errors obey a multivariate normal distribution with zero mean, {\em $E[e^{2}_{i}]=v$} is the variance, and {\em $E[e_{i}e_{j}]=c$} is the covariance. Then, the expected mean square error of the ensemble prediction is {\setlength\arraycolsep{2pt} \begin{eqnarray} E[(\frac{1}{m}\sum_{i=1}^{m}e_{i})^{2}] &=&\frac{1}{m^{2}}E[\sum_{i=1}^{m}e_{i}^{2}+\sum_{i\neq j}e_{i}e_{j}] \nonumber\\ &=&\frac{1}{m}v+\frac{m-1}{m}c \end{eqnarray}} When the errors of the multiple models trained on the samples are fully correlated, that is, {\em c=v}, the mean square error of the ensemble model is still {\em v}. 
When the models' errors for the samples are completely uncorrelated, that is, {\em c = 0}, the mean square error of the ensemble model is reduced to {\em v/m}, i.e., it is inversely proportional to the ensemble size. Therefore, for bagging to produce good results, the error of each single model should be as small as possible while the differences between models should be significant. In all ensemble models, we train different models by randomly selecting 90\% of the training set. In the experiments, we find that using only 80\% of the training set noticeably degrades the performance of a single model. The number of ensemble models is 5. In Section 3.3, we specifically discuss the impact of the number of ensemble models on the final results. \subsection{Implementation Details} In all experiments, the loss function of the model is set to the mean absolute error, the training batch size is 256, and the optimizer is Adam \cite{23-kingma2014adam}. We set a learning rate schedule for the optimizer. Although the learning rate of each iteration of Adam is self-adaptive, we find in the experiments that the convergence of the model is more stable with a learning rate schedule. Adam's initial learning rate is 0.001, and the learning rate is divided by 10 every 600 epochs. The total number of training epochs is 1200. The parameters of all models are initialized using a truncated normal distribution with a mean of 0 and a standard deviation of 1. The models are implemented in the Python 3.6 environment using Keras 2.1.0 with TensorFlow 1.9.0 as the backend \cite{24-gulli2017deep,25-abadi2016tensorflow}. All experiments are run on a computer with a GTX 1080 graphics card. It takes about 40 minutes to train the dense average network on two years of data for 1200 epochs. 
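The ensemble-error formula derived above can be checked with a small Monte Carlo sketch (synthetic correlated errors; the values of $v$ and $c$ are illustrative only, not measurements from our models):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 200_000          # ensemble size, Monte Carlo samples
v, c = 1.0, 0.2            # per-model error variance and pairwise covariance
cov = np.full((m, m), c) + (v - c) * np.eye(m)
e = rng.multivariate_normal(np.zeros(m), cov, size=n)  # correlated model errors
mse_ensemble = np.mean(e.mean(axis=1) ** 2)            # error of the averaged prediction
mse_theory = v / m + (m - 1) / m * c                   # v/m + (m-1)c/m
print(mse_ensemble, mse_theory)
```

With these illustrative numbers both quantities come out close to 0.36, well below the single-model variance $v=1$, confirming that decorrelated errors drive the variance reduction.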
To evaluate prediction performance, three error indicators are used: mean absolute percentage error (MAPE), mean absolute error (MAE) and root mean squared error (RMSE): {\setlength\arraycolsep{2pt} \begin{eqnarray} MAPE&=&\frac{1}{N}\sum_{i=1}^{N}\left | \frac{y_{i}-\hat{y}_{i}}{y_{i}} \right |\times 100\% \\ MAE&=&\frac{1}{N}\sum_{i=1}^{N}\left | y_{i}-\hat{y}_{i} \right | \\ RMSE&=&\sqrt{\frac{1}{N}\sum_{i=1}^{N}(y_{i}-\hat{y}_{i})^2} \end{eqnarray}} where $N$ is the number of samples, $y_{i}$ is the actual load value, and $\hat{y}_{i}$ is the predicted load value. \section{Experiments} \subsection{Datasets} We use two public datasets, the ISO New England (ISO-NE) dataset and the North-American Utility (NAU) dataset, to verify the validity of the proposed model. Both datasets contain load and temperature data at a one-hour resolution. The ISO-NE dataset covers March 2003 to December 2014, and the NAU dataset covers January 1985 to October 1992. \subsection{Effectiveness of Dense Average Network} In this case, we compare the performance of the model built from dense average blocks (the dense average network, DaNet) with that of a model built from fully connected layers (an artificial neural network, ANN). The structure of DaNet is shown in Figure 2; the ANN is obtained for comparison by replacing every dense average block in Figure 2 with a fully connected layer. Since the dense average connection introduces no new trainable parameters, DaNet and the ANN have exactly the same number of parameters whenever they have the same number of layers and the same number of neurons per layer. For the sake of fairness, both models use the same training method and parameter initialization method.
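As a rough illustration of why the parameter counts match, the sketch below runs a hypothetical dense-average forward pass in which each block receives the element-wise average of all preceding same-width outputs. This interpretation of the dense average connection is our assumption (the exact block is defined in Figure 2); the point it demonstrates is that averaging adds no trainable parameters, so DaNet and the plain ANN share the same weights:

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [50, 128, 128, 128, 1]   # hypothetical sizes, identical for both models

def init_params(sizes):
    """One weight matrix and bias per layer; the same set serves ANN and DaNet."""
    return [(rng.standard_normal((m, n)) * 0.01, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward_ann(params, x):
    for w, b in params:                 # plain fully connected + ReLU
        x = np.maximum(x @ w + b, 0.0)
    return x

def forward_danet(params, x):
    outputs = [x]
    for w, b in params:
        # Assumed dense average connection: average all previous outputs
        # that have the current width before feeding the next layer.
        same_width = [o for o in outputs if o.shape == outputs[-1].shape]
        h = np.mean(same_width, axis=0)
        outputs.append(np.maximum(h @ w + b, 0.0))
    return outputs[-1]

params = init_params(layer_sizes)
n_params = sum(w.size + b.size for w, b in params)  # same count for both models
```

Because `forward_danet` only averages existing activations, `n_params` is exactly the parameter count of the plain ANN with the same layer sizes.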
\begin{figure} \centering \includegraphics[ width=0.38\textwidth]{fig4.png} \caption{Test loss values of dense average network (DaNet) and artificial neural network (ANN) on the ISO-NE dataset. We train each model separately 5 times. The results of the solid line are obtained by averaging the test loss values of the 5 models.} \end{figure} \begin{table*}[] \centering \caption{MAPE(\%), MAE, MAX, and SD for ensemble methods with different numbers of models. MAX: Maximum prediction bias on the test set; SD: Standard deviation of the prediction bias for the test set.} \begin{tabular}{@{}lrrrrrrrrrrrr@{}} \toprule \multicolumn{13}{c}{Ensemble Size} \\ \midrule & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{6} & \multicolumn{1}{c}{7} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{9} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{11} & \multicolumn{1}{c}{12} \\ \hline MAPE & 0.3842 & 0.3698 & 0.3655 & 0.3658 & \textbf{0.3617} & 0.3655 & 0.3662 & 0.3673 & 0.3691 & 0.3718 & 0.3747 & 0.3765 \\ MAE & 58.93 & 57.46 & 56.69 & 56.62 & \textbf{56.03} & 56.53 & 56.65 & 56.81 & 57.08 & 57.49 & 57.92 & 58.21 \\ MAX & 387.05 & 382.14 & 390.93 & 380.40 & 375.77 & 377.93 & 374.79 & 375.15 & \textbf{372.03} & 372.06 & 374.19 & 375.80 \\ SD & 56.52 & 54.30 & 54.38 & \textbf{53.91} & 54.15 & 54.63 & 54.95 & 55.03 & 55.32 & 55.56 & 55.97 & 56.15 \\ \bottomrule \end{tabular} \label{table2} \end{table*} We use the ISO-NE dataset for the performance comparison in this case: the year 2004 serves as the training set and the year 2005 as the test set, with the last month of the training set used as the validation set. We train each model 5 times and average the test loss values to obtain the final result.
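For reference, the error indicators used throughout the experiments can be computed with a few lines of numpy (the values below are toy numbers, not the ISO-NE data):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float)))

def rmse(y_true, y_pred):
    """Root mean squared error."""
    diff = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return np.sqrt(np.mean(diff ** 2))

# Toy check: actual loads 100 and 200, predictions off by +10 and -10.
print(mape([100, 200], [110, 190]))  # approximately 7.5
```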
From Figure 4, we can see that after the number of training epochs reaches 600, the fluctuation range of the test loss is greatly reduced for both DaNet and ANN. This result shows that, even when Adam is used as the optimizer, setting a learning rate schedule gives the models better convergence behavior. Additionally, after the learning rate is reduced, the fluctuation range of the solid red line is significantly smaller than that of the solid blue line, which means that DaNet has better convergence properties than ANN. Comparing the final test losses more specifically, the MAE is 67 for DaNet and 75 for ANN; DaNet thus reduces the test MAE by 10.7\% relative to ANN. The final results of the experiment show that, compared with ANN, DaNet has both better prediction results and better convergence performance. \subsection{Ensemble Scheme} In this case, we focus on the impact of the ensemble size on the prediction results. Specifically, we ask two questions: for the bagging ensemble method, how many models give the best prediction results? And do the standard deviation and the extreme values of the prediction errors of the ensemble model differ from those of the individual models? We pay attention to the standard deviation and the extreme deviation of the predictions because the energy management efficiency of the smart grid may be strongly affected by peak errors, and a predictor with low variance may be favored over one with a lower average error but a higher peak error. Underestimating energy demand can have a negative impact on demand response, making it more difficult to control overload conditions; overestimating demand, on the other hand, can lead to unexpected overproduction. In both cases, the greater the estimation error, the higher the administrative costs involved.
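The step-decay schedule described in the implementation details (initial rate 0.001, divided by 10 every 600 epochs, 1200 epochs in total) can be written as a simple function of the epoch index. A sketch, which could be wired into Keras via its `LearningRateScheduler` callback (that wiring is an assumption, not part of the original description):

```python
def step_decay(epoch, initial_lr=0.001, drop=10.0, epochs_per_drop=600):
    """Learning rate for a given (0-indexed) epoch: divide the initial
    rate by `drop` every `epochs_per_drop` epochs, as described in the text."""
    return initial_lr / (drop ** (epoch // epochs_per_drop))

# Over 1200 training epochs this yields two plateaus: 0.001, then 0.0001.
print(step_decay(0), step_decay(600), step_decay(1199))
```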
We use the ISO-NE dataset to analyze the performance of the ensemble model. The training set ranges from 2007 to July 2008, and the test set ranges from August 1, 2008 to August 31, 2008. We explore ensemble sizes from 1 to 12 and report, in addition to the MAPE and MAE, the maximum prediction bias and the standard deviation of the prediction bias. The experimental results are shown in Table II. The prediction performance is best when the ensemble size is 5, and all the ensemble models predict better than the single model. We find that the model with the lowest MAPE or MAE does not also achieve the lowest maximum prediction bias or standard deviation. We also find that the maximum prediction bias of the larger ensembles is significantly lower than that of a single model, which is of great significance for deploying the ensemble model in a real environment. In the following experiments, the ensemble size is set to 5. \subsection{Performance of the Proposed Model on the ISO-NE Dataset} In this case, we compare the proposed model with existing methods on the ISO-NE dataset. Because some methods choose different test set time ranges, we perform two comparisons. We first compare with three existing methods \cite{26-shamsollahi2001neural,9-guan2012very,10-li2015short}. In \cite{26-shamsollahi2001neural}, a prediction method based on ANN is proposed. In \cite{9-guan2012very}, a wavelet neural network method with data prefiltering is proposed. In \cite{10-li2015short}, a short-term load prediction method based on the wavelet transform, an extreme learning machine and an improved artificial bee colony algorithm is proposed. The training set is from January 1, 2007 to June 2008, with the last month being used as the validation set. The test set ranges from July 1, 2008 to July 31, 2008. Table III shows the final results of the experiment.
The numerical results for ISO-NE and WNN are obtained from \cite{9-guan2012very}, while the numerical results for WT-ELM-MABC are obtained from \cite{10-li2015short}. From Table III, we can see that both the single model and the ensemble model outperform ISO-NE, WNN and WT-ELM-MABC. Specifically, compared with WT-ELM-MABC, the single model improves the MAPE by 16\% and the MAE by 13\%, while the ensemble model improves the MAPE by 20\% and the MAE by 16\%. \begin{table}[] \caption{One-hour ahead forecasting MAPE(\%) and MAE of the proposed method and other methods on the ISO-NE dataset. + represents the results of the ensemble method. } \centering \begin{tabular}{@{}lrr@{}} \toprule & \multicolumn{1}{c}{MAPE} & \multicolumn{1}{c}{MAE} \\ \midrule ISO-NE & 0.81 & 138 \\ WNN & 0.49 & 84 \\ WT-ELM-MABC & 0.45 & 74.41 \\ Proposed & 0.38 & 64.75 \\ $Proposed^{+}$ & \textbf{0.36} & \textbf{62.43} \\ \bottomrule \end{tabular} \label{table3} \end{table} We also compare the proposed method with three other methods \cite{36-pramono2019deep,37-tian2018deep,35-kong2017short}. The training set ranges from 2004 to 2005, with the last month being used as the validation set. The test set is May 2006. Table IV shows the results of the experiment; the numerical results of the other methods come from \cite{36-pramono2019deep}. From Table IV, we can see that our method is still better than the other methods. Specifically, compared to Pramono et al.\ \cite{36-pramono2019deep}, the single model improves the MAPE by 23.91\%, the MAE by 22.29\%, and the RMSE by 21.81\%. The ensemble model improves the MAPE by 28.26\%, the MAE by 27.88\%, and the RMSE by 31.05\%. \begin{table}[] \caption{One-hour ahead forecasting MAPE(\%), MAE and RMSE of the proposed method and other methods on the ISO-NE dataset. + represents the results of the ensemble method. 
} \centering \begin{tabular}{@{}lccr@{}} \toprule & MAPE & MAE & RMSE \\ \midrule Tian et al & 0.66 & 89.07 & 141.97 \\ Kong et al & 0.48 & 65.12 & 100.50 \\ Wavenet & 0.57 & 78.02 & 125.11 \\ Pramono et al & 0.46 & 62.23 & 88.31 \\ Proposed & 0.35 & 48.36 & 69.05 \\ Proposed$^{+}$ & \textbf{0.33} & \textbf{44.88} & \textbf{60.89} \\ \bottomrule \end{tabular} \end{table} \subsection{Performance of the Proposed Model on the NAU Dataset} In this case, we compare the performance of the proposed model on the NAU dataset with five existing methods. In \cite{27-deihimi2012application}, the echo state network (ESN) is applied to power load forecasting. In \cite{28-reis2005feature}, the discrete wavelet transform is embedded into a neural network for short-term load prediction. In \cite{29-amjady2009short}, the load data are decomposed through the wavelet transform, and each component is predicted by combining a neural network and an evolutionary algorithm. In \cite{30-ceperic2013strategy}, particle swarm optimization is used for SVR hyperparameter optimization, and a parallel model consisting of 24 support vector regressors is used for day-ahead load prediction. The training set ranges from January 1, 1988 to October 12, 1990, with the last month being used as the validation set. The test set covers the period from October 12, 1990 to October 12, 1992. We also perturb the temperature in the original data, restricting the perturbation to the training set: as suggested in \cite{27-deihimi2012application}, Gaussian noise with a mean of zero and a standard deviation of 0.6 is added to the actual temperature data. The experimental results are shown in Table V. With actual temperatures, the single model matches WT-ELM-MABC, and with noisy temperatures it is slightly better. Our ensemble model achieves the best results for both the actual and the noisy temperatures.
We also note that the performance of the ensemble model is essentially unchanged between the actual and the noisy temperatures: the final test MAEs are 14.431 and 14.436, respectively, with little difference between the two. This shows that the proposed model is robust to temperature noise. \begin{table}[] \caption{One-hour ahead forecasting MAPE(\%) of the proposed method and other methods on the NAU dataset. + represents the results of the ensemble method.} \centering \begin{tabular}{@{}lcc@{}} \toprule & \begin{tabular}[c]{@{}c@{}}Actual \\ Temperature\end{tabular} & \begin{tabular}[c]{@{}c@{}}Noisy\\ Temperature\end{tabular} \\ \midrule ESN & 1.14 & 1.21 \\ M2 & 1.10 & 1.11 \\ WT-NN-EA & 0.99 & - \\ SSA-SVR & 0.72 & 0.73 \\ WT-ELM-MABC & 0.67 & 0.69 \\ Proposed & 0.67 & 0.68 \\ $Proposed^{+}$ & \textbf{0.64} & \textbf{0.64} \\ \bottomrule \end{tabular} \end{table} \subsection{Performance between Proposed Model and Machine Learning Models} In this case, we compare the proposed model with common machine learning models. Specifically, we compare the proposed model with random forest, the gradient boosting decision tree (GBDT) \cite{31-friedman2001greedy}, Xgboost \cite{32-chen2015xgboost} and Catboost \cite{33-dorogush2018catboost} on the NAU dataset. The training set covers 1988-1989, with the last month as the validation set. The test set is 1990. In the interest of fairness, all the models use the same input. For the machine learning models, after tuning the hyperparameters on the validation set, we also use the validation set for the final model training. As shown in Table VI, relative to Catboost, our single model improves the MAPE by 42\% and the MAE by 45\%, and our ensemble model improves the MAPE by 44\% and the MAE by 47\%. Compared with current machine learning algorithms, our model has better generalization ability.
\begin{table}[] \caption{One-hour ahead forecasting MAPE(\%) and MAE of the proposed method and machine learning methods on the NAU dataset. + represents the results of the ensemble method.} \centering \begin{tabular}{@{}lrr@{}} \toprule & \multicolumn{1}{c}{MAPE} & \multicolumn{1}{c}{MAE} \\ \midrule Random forest & 3.23 & 71.10 \\ GBDT & 1.98 & 45.35 \\ Xgboost & 1.34 & 30.16 \\ Catboost & 1.24 & 29.43 \\ Proposed & 0.72 & 16.28 \\ $Proposed^{+}$ & \textbf{0.69} & \textbf{15.59} \\ \bottomrule \end{tabular} \end{table} \subsection{Robustness Analysis of the Proposed Model} In an actual deployment environment, there will be slight deviations between the recorded values and the true values due to errors in the measurement equipment or in value recording. A model applied to short-term load forecasting should therefore be robust: small perturbations of the input should not cause excessive deviations of the output. To verify the reliability of the proposed model's predictions, we perturb the load data and the temperature data to varying degrees. Specifically, we use Gaussian distributions with a mean of 0 and standard deviations of (0, 0.3, 0.6, 0.9, 1.2, 1.5, 1.8, and 2.1) to generate eight sets of noise; the overall range of the generated noise is [-8.0, 10.57]. These 8 sets of noise are then added to the load data and temperature data of the training set only. We conduct the experiment on the ISO-NE dataset, with the training set ranging from January 2007 to June 2008, the last month as the validation set, and July 2008 as the test set. The experimental results are shown in Figure 5. We find that there is no significant difference between the MAPEs of all the models, even when the temperature and load data are disturbed to different degrees.
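The perturbation procedure can be sketched as follows, with hypothetical arrays standing in for the training-set load and temperature columns (the real experiment applies it to the ISO-NE training data):

```python
import numpy as np

def perturb(values, std, rng):
    """Add zero-mean Gaussian noise with the given standard deviation.
    Applied to the training set only, as in the robustness experiment."""
    return values + rng.normal(0.0, std, size=values.shape)

rng = np.random.default_rng(0)
stds = [0.0, 0.3, 0.6, 0.9, 1.2, 1.5, 1.8, 2.1]   # the eight noise levels

load_train = rng.standard_normal(1000)             # stand-ins for the real columns
temp_train = rng.standard_normal(1000)

noisy_sets = [(perturb(load_train, s, rng), perturb(temp_train, s, rng))
              for s in stds]
```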
Interestingly, the MAPE of the model does not increase as the noise added to the load and temperature values increases; on the contrary, adding noise can even improve the MAPE. For example, when the perturbation standard deviation of the temperature is 0 and that of the load value is 0.3, the MAPE is lower than that of the undisturbed model. The experimental results show that the proposed model is robust to data noise. \begin{figure} \centering \includegraphics[ width=0.38\textwidth]{fig5.png} \caption{MAPEs(\%) of models with varying degrees of perturbation of temperature and load data. The perturbation data are generated by Gaussian distributions with a mean value of 0 and a standard deviation of (0, 0.3, 0.6, 0.9, 1.2, 1.5, 1.8, and 2.1).} \end{figure} \section{Conclusion} In this paper, we first propose the dense average connection, which effectively mitigates the gradient explosion problem and makes it possible to build deep models. Based on the dense average connection, we build the dense average network for load forecasting; compared with existing methods on two public datasets, the dense average network achieves better prediction results. We also find that, compared with a single model, the ensemble model reduces the standard deviation and the peak value of the prediction bias. To verify the reliability of the model's predictions, we also disturb the input of the model to different degrees; the experimental results show that the proposed model is robust. In Section III, we found that appropriate disturbance of the load and temperature data does not significantly reduce the prediction performance and can even improve it. In future work, we will explore whether data disturbance can be used as a form of data augmentation to further improve load forecasting. \ifCLASSOPTIONcaptionsoff \newpage \fi \input{main.bbl} \end{document}
\section{Introduction} Cosmic strings are among the most important classes of linear topological defects, with a conical geometry outside the core \cite{Vile94}. The formation of this type of topologically stable structure during the cosmological expansion is predicted in many interesting models of high-energy physics. They have a number of interesting observable consequences, the detection of which would provide an important link between cosmology and particle physics. In quantum field theory, the conical topology of the spacetime due to the presence of a cosmic string causes a number of interesting physical effects. In particular, many authors have considered the vacuum polarization effects for scalar, fermionic and vector fields induced by a planar angle deficit. In addition to the deficit angle parameter, the physical origin of a cosmic string is characterized by the gauge field flux parameter describing a magnetic flux running along the string's core. The latter induces additional polarization effects for charged fields \cite{Dowk87}-\cite{Site12}. Though the gauge field strength vanishes outside the string's core, the nonvanishing vector potential leads to Aharonov-Bohm-like effects on scattering cross sections and on particle production rates around the cosmic string \cite{Alfo89}. For charged fields, the magnetic flux along the string core induces a nonzero vacuum expectation value of the current density. The latter, in addition to the expectation values of the field squared and the energy-momentum tensor, is among the most important local characteristics of the vacuum state for quantum fields. The azimuthal current density for scalar and fermionic fields, induced by a magnetic flux in the geometry of a straight cosmic string, has been investigated in Refs.~\refcite{Srir01}-\refcite{Brag14}.
Here we shall consider the effects of the finite temperature and nonzero chemical potential on the expectation values of the charge and current densities for a massive fermionic field in the geometry of a straight cosmic string for arbitrary values of the planar angle deficit. \section{Geometry and Fermionic Modes} The background geometry corresponding to a straight cosmic string lying along the $z$-axis can be written through the line element \begin{equation} ds^{2}=dt^{2}-dr^{2}-r^{2}d\phi ^{2}-dz{}^{2}\ , \label{ds21} \end{equation}% where $r\geqslant 0$, $0\leqslant \phi \leqslant \phi _{0}=2\pi /q$, $% -\infty <t<+\infty $. The parameter $q\geqslant 1$ codifies the planar angle deficit. In the presence of an external electromagnetic field with the vector potential $A_{\mu }$, the dynamics of a massive charged spinor field in curved spacetime is described by the Dirac equation, \begin{equation} (i\gamma ^{\mu }{\mathcal{D}}_{\mu }-m)\psi =0\ ,\ {\mathcal{D}}_{\mu }=\partial _{\mu }+\Gamma _{\mu }+ieA_{\mu }, \label{Direq} \end{equation}% where $\gamma ^{\mu }$ are the Dirac matrices in curved spacetime and $% \Gamma _{\mu }$ are the spin connections. For the geometry at hand the gamma matrices can be taken in the form \begin{equation} \gamma ^{0}=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1% \end{array}% \right) ,\;\gamma ^{l}=\left( \begin{array}{cc} 0 & \rho ^{l} \\ -\rho ^{l} & 0% \end{array}% \right) , \label{gamcurved} \end{equation}% where the $2\times 2$ matrices $\rho ^{l}$ are \begin{equation} \rho ^{1}=\left( \begin{array}{cc} 0 & e^{-iq\phi } \\ e^{iq\phi } & 0% \end{array}% \right) \ ,\ \rho ^{2}=-\frac{i}{r}\left( \begin{array}{cc} 0 & e^{-iq\phi } \\ -e^{iq\phi } & 0% \end{array}% \right) \ ,\ \rho ^{3}=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1% \end{array}% \right) \ . \label{betl} \end{equation} We shall admit the existence of a gauge field with a constant vector potential as \begin{equation} A_{\mu }=(0,0,A_{\phi},0)\ . 
\label{Amu} \end{equation}% The azimuthal component $A_{\phi}$ is related to an infinitesimally thin magnetic flux, $\Phi $, running along the string by $A_{\phi}=-q\Phi /(2\pi )$. The field operator can be expanded in terms of a complete set of normalized positive- and negative-energy solutions $\{\psi _{\sigma }^{(+)},\psi _{\sigma }^{(-)}\}$ of (\ref{Direq}), specified by a set of quantum numbers $\sigma$, as \begin{equation} \psi =\sum_{\sigma }[\hat{a}_{\sigma }\psi _{\sigma }^{(+)}+\hat{b}_{\sigma }^{+}\psi _{\sigma }^{(-)}] \ , \label{psiexp} \end{equation} where $\hat{a}_{\sigma }$ and $\hat{b}_{\sigma }^{+}$ are the annihilation and creation operators corresponding to particles and antiparticles, respectively. Here, we are interested in the effects of the cosmic string and the magnetic flux on the expectation values of the charge and current densities, assuming that the field is in thermal equilibrium at finite temperature $T$. The standard form of the density matrix for the thermodynamical equilibrium distribution at temperature $T$ is \begin{equation} \hat{\rho}=Z^{-1}e^{-\beta (\hat{H}-\mu ^{\prime }\hat{Q})},\;\beta =1/T \ \ , \ {\rm and} \ Z=\mathrm{tr}[e^{-\beta (\hat{H}-\mu ^{\prime }\hat{Q})}] \ , \label{rho} \end{equation} where $\hat{H}$ is the Hamilton operator, $\hat{Q}$ denotes a conserved charge and $\mu ^{\prime }$ is the corresponding chemical potential.
The thermal average of the creation and annihilation operators are given by: \begin{eqnarray} \mathrm{tr}[\hat{\rho}\hat{a}_{\sigma }^{+}\hat{a}_{\sigma ^{\prime }}] &=& \frac{\delta _{\sigma \sigma ^{\prime }}}{e^{\beta (\varepsilon _{\sigma }^{(+)}-\mu )}+1}, \notag \\ \mathrm{tr}[\hat{\rho}\hat{b}_{\sigma }^{+}\hat{b}_{\sigma ^{\prime }}] &=& \frac{\delta _{\sigma \sigma ^{\prime }}}{e^{\beta (\varepsilon _{\sigma }^{(-)}+\mu )}+1}, \label{traa} \end{eqnarray}% where $\mu =e\mu ^{\prime }$ and $\pm \varepsilon _{\sigma }^{(\pm )}$ with $ \varepsilon _{\sigma }^{(\pm )}>0$, are the energies corresponding to the modes $\psi _{\sigma }^{(\pm )}$. The expectation value of the fermionic current density given by $\left\langle j^{\nu }\right\rangle =e\,\mathrm{tr}[\hat{\rho}\bar{\psi} (x)\gamma ^{\nu }\psi (x)]$, can be expressed by \begin{equation} \left\langle j^{\nu }\right\rangle =\left\langle j^{\nu }\right\rangle _{0}+\sum_{\chi =+,-}\left\langle j^{\nu }\right\rangle _{\chi }, \label{C1} \end{equation}% where \begin{equation} \left\langle j^{\nu }\right\rangle _{0}=e\sum_{\sigma }\bar{\psi}_{\sigma }^{(-)}(x)\gamma ^{\nu }\psi _{\sigma }^{(-)}(x), \label{Cvev} \end{equation}% is the vacuum expectation value and \begin{equation} \left\langle j^{\nu }\right\rangle _{\pm }=\pm e\sum_{\sigma }\frac{\bar{\psi }_{\sigma }^{(\pm )}\gamma ^{\nu }\psi _{\sigma }^{(\pm )}}{e^{\beta (\varepsilon _{\sigma }^{(\pm )}\mp \mu )}+1}. \label{jpm} \end{equation}% Here, $\left\langle j^{\nu }\right\rangle _{\pm }$ is the part in the expectation value coming from the particles for the upper sign and from the antiparticles for the lower sign. We shall use the normalized fermionic modes found in Ref.~\refcite{Beze13} specified by the set of quantum numbers $\sigma =(\lambda ,k,j,s)$ with \begin{equation} \lambda \geqslant 0,\;-\infty <k<+\infty ,\;j=\pm 1/2,\pm 3/2,\ldots ,\;s=\pm 1. 
\label{range} \end{equation} These functions are expressed as \begin{equation} \psi _{\sigma }^{(\pm )}(x)=C_{\sigma }^{(\pm )}e^{\mp iEt+ikz+iqj\phi }\left( \begin{array}{c} J_{\beta _{j}}(\lambda r)e^{-iq\phi /2} \\ sJ_{\beta _{j}+\epsilon _{j}}(\lambda r)e^{iq\phi /2} \\ \pm \frac{k-is\epsilon _{j}\lambda }{E\pm m}J_{\beta _{j}}(\lambda r)e^{-iq\phi /2} \\ \mp s\frac{k-is\lambda \epsilon _{j}}{E\pm m}J_{\beta _{j}+\epsilon _{j}}(\lambda r)e^{iq\phi /2}% \end{array}% \right) \ , \label{psi+n} \end{equation}% where $J_{\nu }(x)$ is the Bessel function, $|C_{\sigma }^{(\pm )}|^{2}=\frac{q\lambda (E\pm m)}{16\pi ^{2}E}$ and \begin{eqnarray} E=\varepsilon _{\sigma }^{(\pm )}=\sqrt{\lambda ^{2}+k^{2}+m^{2}} \ , \ \beta _{j}=q|j+\alpha |-\epsilon _{j}/2\ ,\;\alpha =eA_{\phi}/q=-\Phi /\Phi _{0} \ , \end{eqnarray} with $\epsilon _{j}=\mathrm{sgn}(j+\alpha )$ and $\Phi _{0}=2\pi /e$ being the flux quantum. \section{Charge Density} We start with the charge density corresponding to the $\nu =0$ component of (\ref{C1}). In Ref.~\refcite{Beze13} we have explicitly shown that the formal expression for the vacuum expectation value of the charge density is given in terms of a divergent integral. In order to obtain a finite and well-defined result we introduced a cutoff function, with which the integral could be evaluated. The next steps were to subtract the Minkowskian part $(\alpha_0=0, \ q=1)$ and to remove the cutoff function. As a final result, a vanishing value for the renormalized charge density was obtained.
Substituting the mode functions (\ref{psi+n}) into (\ref{jpm}), for the contributions coming from the particles and antiparticles we get \begin{equation} \left\langle j^{0}\right\rangle _{\pm }=\pm \frac{eq}{8\pi ^{2}}\sum_{\sigma }\lambda \frac{J_{\beta _{j}}^{2}(\lambda r)+J_{\beta _{j}+\epsilon _{j}}^{2}(\lambda r)}{e^{\beta (E\mp \mu )}+1}, \label{j0pm} \end{equation} where we use the notation \begin{equation} \sum_{\sigma }=\int_{-\infty }^{+\infty }dk\int_{0}^{\infty }d\lambda \ \sum_{s=\pm 1}\sum_{j}\ . \label{Sumsig} \end{equation} In the case $\mu =0$ the contributions from the particles and antiparticles cancel each other, and the total charge density,% \begin{equation} \left\langle j^{0}\right\rangle =\left\langle j^{0}\right\rangle _{+}+\left\langle j^{0}\right\rangle _{-}, \label{j0tot} \end{equation}% vanishes. From (\ref{j0pm}) one can see that the charge density is an even periodic function of the parameter $\alpha $ with period equal to 1. Consequently, the charge density is a periodic function of the magnetic flux with period equal to the flux quantum. If we present this parameter as \begin{equation} \alpha =n_{0}+\alpha _{0},\;|\alpha _{0}|\leqslant 1/2, \label{alf0} \end{equation}% with $n_{0}$ being an integer, then the charge density depends on $\alpha _{0}$ alone.
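The stated periodicity can be made explicit. Under the simultaneous replacements $\alpha \rightarrow \alpha +1$ and $j\rightarrow j-1$ the combination $j+\alpha $ is unchanged, so that
\begin{equation}
\epsilon _{j}\rightarrow \epsilon _{j},\qquad \beta _{j}=q|j+\alpha |-\epsilon _{j}/2\rightarrow \beta _{j},
\end{equation}
while the set of half-odd-integer values of $j$ in (\ref{range}) is mapped onto itself. Hence, the summand in (\ref{j0pm}) is merely relabeled and the charge density is invariant under $\alpha \rightarrow \alpha +1$.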
Here in this paper we shall consider only the case $|\mu |<m$.\footnote{The analysis for the case $|\mu |>m$ is given in Ref.~\refcite{Mello}.} By using the expansion $(e^{y}+1)^{-1}=-\sum_{n=1}^{\infty }(-1)^{n}e^{-ny}$, the charge densities for particles and antiparticles can be presented in the form \begin{eqnarray} \left\langle j^{0}\right\rangle _{\pm } &=&\mp \frac{eq\beta }{4\pi ^{2}r^{4}% }\ \sum_{n=1}^{\infty }(-1)^{n}ne^{\pm n\beta \mu }\int_{0}^{\infty }dx\,x \notag \\ &&\times F(q,\alpha _{0},x)e^{-m^{2}r^{2}/2x-(1+n^{2}\beta ^{2}/2r^{2})x}\ , \label{j0pm2} \end{eqnarray}% where the notation% \begin{equation} F(q,\alpha _{0},x)=\sum_{j}\ \left[ I_{\beta _{j}}(x)+I_{\beta _{j}+\epsilon _{j}}(x)\right] , \label{Fq} \end{equation} is introduced. In Ref.~\refcite{Beze10b} we have shown that \begin{eqnarray} F(q,\alpha _{0},x) &=&\frac{4}{q}\left[ \frac{e^{x}}{2}% +\sum_{k=1}^{[q/2]}(-1)^{k}c_{k}\cos \left( 2\pi k\alpha _{0}\right) e^{x\cos (2\pi k/q)}\right. \notag \\ &&\left. +\frac{q}{\pi }\int_{0}^{\infty }dy\frac{h(q,\alpha _{0},2y)\sinh y% }{\cosh (2qy)-\cos (q\pi )}e^{-x\cosh {(2y)}}\right] \ , \label{Fq1} \end{eqnarray}% where $[q/2]$ means the integer part of $q/2$ and the notation% \begin{equation} h(q,\alpha _{0},x)=\sum_{\chi =\pm 1}\cos \left[ \left( 1/2+\chi \alpha _{0}\right) q\pi \right] \sinh \left[ \left( 1/2-\chi \alpha _{0}\right) qx% \right] , \label{h} \end{equation}% is assumed. 
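The series expansion of the Fermi factor used above converges for $y=\beta (E\mp \mu )>0$, which holds for all modes when $|\mu |<m$ since $E\geqslant m$. A quick numerical check, using only the standard library:

```python
import math

def fermi(y):
    """Left-hand side: 1 / (e^y + 1)."""
    return 1.0 / (math.exp(y) + 1.0)

def fermi_series(y, n_terms=200):
    """Right-hand side: -(sum over n >= 1 of (-1)^n e^(-n y))."""
    return -sum((-1) ** n * math.exp(-n * y) for n in range(1, n_terms + 1))

for y in (0.5, 1.0, 3.0):
    assert abs(fermi(y) - fermi_series(y)) < 1e-12
```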
Here and in what follows we use the notations% \begin{equation} c_{k}=\cos {(\pi k/q),\;s}_{k}=\sin {(\pi k/q).} \label{cksk} \end{equation} Substituting (\ref{Fq1}) into (\ref{j0pm2}), after integration over $x$, we find the expression% \begin{eqnarray} \left\langle j^{0}\right\rangle _{\pm } &=&\left\langle j^{0}\right\rangle _{% \mathrm{M}\pm }\mp \frac{2em^{4}\beta }{\pi ^{2}}\ \sum_{n=1}^{\infty }(-1)^{n}ne^{\pm n\beta \mu } \notag \\ &&\times \left[ \sum_{k=1}^{[q/2]}(-1)^{k}c_{k}\cos \left( 2\pi k\alpha _{0}\right) f_{2}(m\beta s_{n}(r/\beta ,k/q))\right. \notag \\ &&\left. +\frac{q}{\pi }\int_{0}^{\infty }dy\frac{\sinh \left( y\right) h(q,\alpha _{0},2y)}{\cosh (2qy)-\cos (q\pi )}f_{2}(m\beta c_{n}(r/\beta ,y))% \right] , \label{j0pm3} \end{eqnarray}% where% \begin{equation} \left\langle j^{0}\right\rangle _{\mathrm{M}\pm }=\mp \frac{e\beta m^{4}}{% \pi ^{2}}\sum_{n=1}^{\infty }(-1)^{n}ne^{\pm n\beta \mu }f_{2}(nm\beta ), \label{j0pmM} \end{equation}% is the corresponding charge density in Minkowski spacetime in the absence of the magnetic flux and the cosmic string ($\alpha _{0}=0$, $q=1$). Here we have introduced the notations% \begin{equation} f_{\nu }(x)=x^{-\nu }K_{\nu }(x), \label{fnu1} \end{equation}% with $K_{\nu }(x)$ being the MacDonald function and \begin{eqnarray} s_{n}(x,y) =\sqrt{n^{2}+4x^{2}\sin ^{2}(\pi y)}, \ c_{n}(x,y) =\sqrt{n^{2}+4x^{2}\cosh ^{2}y}\ . \label{sncn} \end{eqnarray} For the total charge density one gets \begin{eqnarray} \left\langle j^{0}\right\rangle &=&-\frac{4em^{4}\beta }{\pi ^{2}}\ \sum_{n=1}^{\infty }(-1)^{n}n\sinh (n\beta \mu )\left[ \frac{1}{2} f_{2}(m\beta n)\right. \notag \\ &&+\sum_{k=1}^{[q/2]}(-1)^{k}c_{k}\cos \left( 2\pi k\alpha _{0}\right) f_{2}(m\beta s_{n}(r/\beta ,k/q)) \notag \\ &&\left. +\frac{q}{\pi }\int_{0}^{\infty }dy\frac{h(q,\alpha _{0},2y)\sinh y }{\cosh (2qy)-\cos (q\pi )}f_{2}(m\beta c_{n}(r/\beta ,y))\right] \ . 
\label{j0} \end{eqnarray} We present in figure \ref{fig1} the total charge density as a function of the parameter $\alpha _{0}$ (left panel) and the charge density induced by the string and magnetic flux as a function of the temperature (right panel). The numbers near the curves correspond to the values of the parameter $q$. The graphs on the left panel are plotted for $T/m=1$, $ mr=0.25 $, and $\mu /m=0.5$. On the right panel, the full and dashed curves correspond to the values $\alpha _{0}=1/2$ and $\alpha _{0}=0$, respectively (note that for $q=1$, $\alpha _{0}=0$ one has $\left\langle j^{0}\right\rangle -\left\langle j^{0}\right\rangle _{\mathrm{M}}=0$). For the graphs on the right panel we have taken $\mu /m=0.5$ and $mr=1/8$. \begin{figure}[tbph] {\includegraphics[width=6.0cm]{Sahafig1a}} {\includegraphics[width=6.2cm]{Sahafig1b}} \caption{The total charge density as a function of the parameter $\protect \alpha _{0}$ (left panel) and the charge density induced by the string and magnetic flux as a function of the temperature (right panel). The numbers near the curves correspond to the values of the parameter $q$. The graphs on the left panel are plotted for $T/m=1$, $mr=0.25 $, and $\protect\mu /m=0.5$ . On the right panel, the full and dashed curves correspond to the values $ \protect\alpha _{0}=1/2$ and $\protect\alpha _{0}=0$, respectively. For the graphs on the right panel we have taken $\protect\mu /m=0.5$ and $mr=1/8$ \label{fig1}} \end{figure} At large distances and high temperatures the Minkowski contribution dominates, and the contributions induced by the cosmic string and magnetic flux are exponentially suppressed \cite{Mello}. In figure \ref{fig2}, we present the charge density as a function of the radial coordinate. The full and dashed lines correspond to the values $ \alpha _{0}=1/2$ and $\alpha _{0}=0$, respectively. The numbers near the curves present the values of the parameter $q$. The graphs are plotted for $T/m=1$ and $\mu /m=0.5$.
\begin{figure}[tbph] \begin{center} {\includegraphics[width=6.0cm]{Sahafig2}} \caption{The charge density versus the radial coordinate. The full and dashed lines correspond to the values $\protect\alpha _{0}=1/2$ and $\protect% \alpha _{0}=0$, respectively. The numbers near the curves present the values of the parameter $q$. The graphs are plotted for $T/m=1$ and $\protect\mu /m=0.5$.} \label{fig2} \end{center} \end{figure} \section{Azimuthal Current} Now we turn to the investigation of the current density. The only nonzero component corresponds to the azimuthal current ($\nu =2$ in (\ref{C1})). By taking into account the expression for the mode functions, from (\ref{jpm}) for the physical components of the current densities of the particles and antiparticles, $\left\langle j_{\phi }\right\rangle _{\pm }=r\left\langle j^{2}\right\rangle _{\pm }$, we get% \begin{equation} \left\langle j_{\phi }\right\rangle _{\pm }=\frac{eq}{4\pi ^{2}}\sum_{\sigma }\epsilon _{j}\frac{\lambda ^{2}}{E}\frac{J_{\beta _{j}}(\lambda r)J_{\beta _{j}+\epsilon _{j}}(\lambda r)}{e^{\beta (E\mp \mu )}+1}, \label{j2pm} \end{equation}% where the upper and lower signs correspond to the particles and antiparticles, respectively, and the collective summation is defined by (\ref% {Sumsig}). For the case $|\mu |<m$, by using the same expansion for the denominator as we did in the previous case, the current densities (\ref{j2pm}) read \begin{equation} \left\langle j_{\phi }\right\rangle _{\pm }=-\frac{eq}{2\pi ^{2}r^{3}} \sum_{n=1}^{\infty }(-1)^{n}e^{\mp n\beta \mu }\int_{0}^{\infty }dx\,xe^{-m^{2}r^{2}/(2x)-(1+n^{2}\beta ^{2}/(2r^{2}))x}G(q,\alpha _{0},x), \label{j2pm3} \end{equation} with the notation \begin{equation} G(q,\alpha _{0},x)=\sum_{j}\ \left[ I_{\beta _{j}}(x)-I_{\beta _{j}+\epsilon _{j}}(x)\right] .
\label{Gq} \end{equation} By using the integral representation for the modified Bessel function we can write \begin{eqnarray} G(q,\alpha _{0},x) &=&\frac{4}{q}\sideset{}{'}{\sum} _{k=1}^{[q/2]}(-1)^{k}s_{k}\sin \left( 2\pi k\alpha _{0}\right) e^{x\cos (2\pi k/q)} \notag \\ &&+\frac{4}{\pi }\int_{0}^{\infty }dy\ \frac{g(q,\alpha _{0},2y)\cosh y}{ \cosh (2qy)-\cos (q\pi )}e^{-x\cosh {(2y)}}, \label{Gq1} \end{eqnarray} with the notation \begin{equation} g(q,\alpha _{0},x)=\sum_{\chi =\pm 1}\chi \cos \left[ \left( 1/2+\chi \alpha _{0}\right) q\pi \right] \cosh \left[ \left( 1/2-\chi \alpha _{0}\right) qx \right] \ . \label{g} \end{equation} The prime on the summation sign in (\ref{Gq1}) means that, in the case where $q$ is an even number, the term with $k=q/2$ should be taken with the coefficient 1/2. Substituting (\ref{Gq1}) into (\ref{j2pm3}), after integrating over $x$, we obtain \begin{align} \left\langle j_{\phi }\right\rangle _{\pm }& =-\frac{4em^{4}r}{\pi ^{2}} \sum_{n=1}^{\infty }(-1)^{n}e^{\mp n\beta \mu } \notag \\ & \times \left[ \ \sideset{}{'}{\sum}_{k=1}^{[q/2]}(-1)^{k}s_{k}\sin \left( 2\pi k\alpha _{0}\right) f_{2}\left( m\beta s_{n}(r/\beta ,k/q)\right) \right. \notag \\ & +\left. \frac{q}{\pi }\int_{0}^{\infty }dy\ \frac{\cosh \left( y\right) g(q,\alpha _{0},2y)}{\cosh (2qy)-\cos (q\pi )}f_{2}(m\beta c_{n}(r/\beta ,y)) \right] , \label{j2pm4} \end{align} where the functions in the arguments of $f_{2}(x)$ are defined by (\ref{sncn}). For the case $q=1$, i.e. in the absence of conical defect, the above expression reduces to\footnote{The Minkowskian contribution $(q=1, \ \alpha_0=0)$ vanishes.} \begin{align} \left\langle j_{\phi }\right\rangle _{\pm }& =\frac{4em^{4}r}{\pi ^{3}}\sin (\alpha _{0}\pi )\sum_{n=1}^{\infty }(-1)^{n}e^{\mp n\beta \mu } \notag \\ & \times \int_{0}^{\infty }dy\,\cosh (2\alpha _{0}y)f_{2}\left( m\beta c_{n}(r/\beta ,y)\right) \ . 
\label{j2pmq1} \end{align} Taking into account the expression for the vacuum expectation value of the current density from \cite{Beze13}, the total current density reads \begin{align} \left\langle j_{\phi }\right\rangle & =-\frac{8em^{4}r}{\pi ^{2}}% \sideset{}{'}{\sum}_{n=0}^{\infty }(-1)^{n}\cosh (n\beta \mu ) \notag \\ & \times \left[ \ \sideset{}{'}{\sum}_{k=1}^{[q/2]}(-1)^{k}s_{k}\sin \left( 2\pi k\alpha _{0}\right) f_{2}\left( m\beta s_{n}(r/\beta ,k/q)\right) \right. \notag \\ & +\left. \frac{q}{\pi }\int_{0}^{\infty }dy\ \frac{\cosh \left( y\right) g(q,\alpha _{0},2y)}{\cosh (2qy)-\cos (q\pi )}f_{2}\left( m\beta c_{n}(r/\beta ,y)\right) \right] , \label{j2} \end{align}% where the prime on the sign of the summation over $n$ means that the term $ n=0$ should be taken with the coefficient 1/2. This term corresponds to the vacuum expectation value of the current density, $\left\langle j_{\phi }\right\rangle _{0}$. Now we would like to analyze the case of a massless field. Because of the condition $|\mu |\leqslant m$, we should also take $\mu =0$. By using the asymptotic expression for the MacDonald function for small argument, the summation over $n$ takes the form $\sideset{}{'}{\sum}_{n=0}^{\infty }(-1)^{n}\left( n^{2}+x^{2}\right) ^{-2}=\frac{\pi }{4x^{3}}\,\frac{1+\pi x\coth (\pi x)}{\sinh (\pi x)}$, which follows by differentiating the standard series $\sum_{n=-\infty }^{\infty }(-1)^{n}(n^{2}+x^{2})^{-1}=\pi /[x\sinh (\pi x)]$ with respect to $x^{2}$. So we get \begin{align} \left\langle j_{\phi }\right\rangle & =-\frac{eT}{2\pi r^{2}}\left[ \ % \sideset{}{'}{\sum}_{k=1}^{[q/2]}\frac{(-1)^{k}}{s_{k}^{2}}\sin \left( 2\pi k\alpha _{0}\right) h(2rTs_{k})\right. \notag \\ & \left. +\frac{q}{\pi }\int_{0}^{\infty }dy\ \frac{g(q,\alpha _{0},2y)}{% \cosh (2qy)-\cos (q\pi )}\frac{h(2rT\cosh y)}{\cosh ^{2}y}\right] \ , \label{j2pmm0} \end{align}% where we have introduced the function% \begin{equation} h(x)=\frac{1+\pi x\coth (\pi x)}{\sinh (\pi x)}.
\label{hx} \end{equation} We plot in figure \ref{fig3}, for a massless field with $\mu =0$, the azimuthal current density as a function of the parameter $\alpha _{0}$ (left panel) and as a function of the temperature (right panel). The numbers near the curves correspond to the values of $q$. In the graphs on the left panel we assume $rT=0.25$ and on the right $\alpha _{0}=0.25$. \begin{figure}[tbph] {\includegraphics[width=6.0cm]{Sahafig3a}} {\includegraphics[width=6.2cm]{Sahafig3b}} \caption{The azimuthal current density for a massless field with zero chemical potential as a function of $\protect\alpha _{0}$ (left panel) and as a function of the temperature (right panel). The numbers near the curves correspond to the values of $q$. For the graphs on the left panel $rT=0.25$ and for the right panel $\protect\alpha _{0}=0.25$.\label{fig3}} \end{figure} \section{Conclusion} \label{sec:Conc} In this paper, we have analyzed the combined effects of the planar angle deficit and the magnetic flux on the charge and current densities for a massive fermionic field at thermal equilibrium, considering a nonzero chemical potential. These densities are decomposed into the vacuum expectation values and finite-temperature contributions, coming from the particles and antiparticles. For the charge density the renormalized vacuum expectation value vanishes, and the expectation values for the particles and antiparticles in the case $|\mu |\leqslant m$ are given by (\ref{j0pm3}). The charge density is an even periodic function of the magnetic flux with period equal to the flux quantum. For zero chemical potential the contributions from the particles and antiparticles cancel each other and the total charge density, given by (\ref{j0}), vanishes. The only nonzero component of the expectation value for the current density corresponds to the current along the azimuthal direction.
This current vanishes in the absence of the magnetic flux and is an odd periodic function of the latter with period equal to the flux quantum. The azimuthal current density is an even function of the chemical potential. For zero chemical potential, the contributions to the total current density from the particles and antiparticles coincide. \section*{Acknowledgments} The authors thank the Brazilian agency CNPq for partial financial support.
\section{Introduction} \label{sec:introduction} Globally, nearly a third of women report experiencing violence perpetrated by an intimate partner \cite{devries_global_2013}, with far-reaching consequences for their health, their families, their opportunities, and their livelihoods \cite{ellsberg_intimate_2008}. This has prompted interest among funders, practitioners, and policy-makers in programs to reduce or prevent violence and in a research agenda to better understand empirically what types of programs work in reducing violence. Consequently, over the last two decades, there has been a surge in randomized evaluations of violence reduction programs, particularly in low- and middle-income countries \cite{abramsky_findings_2014, hidrobo_effect_2016, jewkes_impact_2008, pronyk_effect_2006, wagman_effectiveness_2015}. Most of these evaluations measure violence using a standardized instrument, based on the Conflict Tactics Scale (CTS) \cite{straus_measuring_1979, straus_revised_1996}, as well as a standardized outcome coding originally developed for large cross-sectional prevalence assessments. While this standardization allows for some comparability across studies, to date, little work has been done to understand whether this coding choice is optimal in the context of a randomized evaluation. In this paper, we explore the consequences of this coding choice with respect to statistical bias and efficiency and we compare it to several alternatives, first via simulation, and then by re-analyzing data from several recent trials. To do so, we return to the original conception of the CTS in the sociology literature to develop a generative model of violence. We also use potential outcomes and the Neyman-Rubin causal model \cite{splawa-neyman_application_1990, rubin_estimating_1974} to formalize reductions in violence in the context of randomized evaluations and to think structurally about how intervention effects operate.
We then use this theory to simulate data from hypothetical trials under different violence reduction regimes and then compare outcome coding strategies. Finally, we re-analyze several trials using alternative coding strategies and re-interpret them in light of the insights gleaned from simulations. \subsection{The Conflict Tactics Scale} Studies that attempt to measure intimate partner violence generally rely on some form of the Conflict Tactics Scale (CTS) to quantify a participant's experience of violence, with perhaps the most current implementation being the standardized questionnaire developed by the World Health Organization. The origins of the instrument are rooted in conflict theory, which posits that conflict is an inevitable part of human relationships, but that the tactics employed to deal with conflict vary. Among these tactics are those that involve physical force, coercion, or verbal aggression, which we may define as ``violent''. However, the instrument consciously disassociates these tactics from their personal or social meaning as ``violence'' by asking respondents to report the frequency with which they experienced specific acts rather than how often they experience ``violence''. This allows for comparable objective assessment of tactics used during conflict even when definitions of violence may vary from person to person or from group to group. The original CTS instrument was designed to assess both perpetration and experiences of violence and to capture family violence more broadly, including violence directed against children. However, in evaluations of intimate partner violence reduction programs, an abbreviated version is often used, as the focus is typically limited to violence experienced by women, although sometimes supplemented with male reports of perpetration. The violence items in the scale were chosen to capture different latent constructs.
In the original scale, these items loaded on two latent factors representing psychological aggression and physical assault. A revised scale in 1996 added items representing sexual coercion and also introduced a shortened version, which was the basis for the WHO questionnaire. In more recent literature the three latent factors into which items are grouped are more commonly referred to as emotional violence, physical violence, and sexual violence. Table \ref{tab:cts} below gives examples of the items in the scale from the WHO questionnaire. Items can sometimes be added or deleted or adapted to local context; however, the basic structure is largely the same. All items in the scale refer to the same defined recall period, usually 12 months, although this can vary in randomized evaluations where repeated assessments may be made and where interest is generally in violence experienced since the start of a violence reduction program. \begin{table}[t] \centering \footnotesize{ \caption{Example of CTS style questions for measuring violence adapted from the WHO Domestic Violence questionnaire.} \label{tab:cts} \begin{tabular}[t]{p{0.5cm}p{10cm}p{0.5cm}p{0.5cm}p{0.5cm}p{0.5cm}} \hline \hline \multicolumn{6}{p{14cm}}{\hspace{2em} No matter how well a couple gets along, there are times when they disagree on major decisions, get annoyed about something the other person does, or just have spats or fights because they're in a bad mood or tired or for some other reason. They also use many different ways of trying to settle their differences.
I'm going to read a list of some things that you and your (husband/partner) might have done when you had a dispute, and would first like you to tell me how often your (husband/partner) has done them in the past.} \vspace{1em} \Tstrut\Bstrut \\ \multicolumn{2}{l}{In the past 12 months, how often has your partner...} & \rotatebox{90}{Never} & \rotatebox{90}{Once} & \rotatebox{90}{A few times (2-4)} & \rotatebox{90}{Many times (5+)} \Tstrut\Bstrut\\ \hline 1. & insulted you or made you feel bad about yourself & 0 & 1 & 2 & 3 \Tstrut\\ 2. & belittled or humiliated you in front of others? & 0 & 1 & 2 & 3\\ 3. & did things to scare or intimidate you on purpose? & 0 & 1 & 2 & 3\\ 4. & threatened to hurt you or someone you care about? & 0 & 1 & 2 & 3\\ 5. & slapped you or thrown something at you that could hurt you? & 0 & 1 & 2 & 3 \Tstrut\\ 6. & pushed you or shoved you or pulled your hair? & 0 & 1 & 2 & 3\\ 7. & hit you with his fist or with something else that could hurt you? & 0 & 1 & 2 & 3\\ 8. & kicked you, dragged you or beaten you up? & 0 & 1 & 2 & 3\\ 9. & choked or burnt you on purpose? & 0 & 1 & 2 & 3\\ 10. & threatened you with or actually used a gun, knife or other weapon against you? & 0 & 1 & 2 & 3\\ 11. & physically forced you to have sex with him when you didn’t want to? & 0 & 1 & 2 & 3\\ 12. & used threats or intimidation to make you have sex when you did not want to? & 0 & 1 & 2 & 3\\ 13. & used physical force or threats to make you do something else sexual that you did not want to do? & 0 & 1 & 2 & 3 \Bstrut\\ \hline \end{tabular} } \end{table} To analyze the data from the CTS, traditionally the analyst collapses an individual's responses to the violence items into a single binary measure representing whether the respondent reports any act of violence during the recall period. The resulting prevalence outcome is then used as the basis for statistical inference. 
In randomized evaluations, the prevalence outcomes in the treatment and control groups are compared to assess the effect of the program. However, this coding strategy discards information about frequency and severity, information which could be useful in providing a more nuanced understanding of program impacts\footnote{Interestingly, in the revised scale Straus et al. also suggested a summary measure they called \textit{chronicity}, which was the sum of item scores among those reporting any violence.}. We believe this coding strategy has become the norm in the field for several reasons. First, it was among the violence coding strategies suggested in \textcite{straus_measuring_1979} due to concerns about ``skewness'' arising from a minority of highly violent relationships. Second, it is a historic legacy of the national and global prevalence surveys in the 1990s and 2000s, where a prevalence measure was a natural summary statistic. Third, once adopted in the original randomized evaluations, it became important for subsequent studies to report it for purposes of comparability. In this paper, we evaluate this coding strategy in terms of its efficiency (i.e., the consequences for statistical power and precision in answering substantive research questions) and compare it to alternative strategies. \section{Theory} \label{sec:theory} To better understand the implications of outcome coding choice in randomized evaluations, we first need a theoretical framework for describing how violence is distributed and how the effects of interventions operate. In this section, we develop a generative model for violence by returning to the original latent variable formulation of the CTS. We start in the simple case of defining a latent representation for a single act of violence and then move on to a model for multiple correlated acts which are more typical of how CTS-based instruments work.
We then define causal effects on violence in the context of randomized evaluations where effects can vary across items and individuals. We conclude with the definition of several alternative coding strategies and a discussion of their theoretical advantages and disadvantages. \subsection{Single act model} \begin{figure}[t] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \caption{Poisson} \includegraphics[width = \linewidth]{figures/single_act_zip.pdf} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \caption{Negative Binomial} \includegraphics[width = \linewidth]{figures/single_act_zinb.pdf} \end{subfigure} \caption{Distribution of observed acts vs. those simulated from (a) a zero-inflated Poisson with $\widehat{\lambda} = 2.36$ and $\widehat{\theta} = 0.84$ and (b) a zero-inflated Negative Binomial with $\widehat{\lambda} = 2.36$ and $\widehat{\theta} = 0.84$ based on maximum likelihood using data from Uganda.} \label{fig:single_act} \end{figure} Let $Y$ be the number of episodes of a specific violent act occurring over the follow-up period, for instance the number of times the respondent is slapped. We assume that $Y$ is an i.i.d. realization from a zero-inflated Poisson process of the form \begin{equation*} \begin{aligned} Y \sim \begin{cases} 0 & \text{with probability } \theta \\ \text{Poisson}(\lambda) &\text{with probability } 1 - \theta \end{cases} \end{aligned} \end{equation*} where the source of the excess zeros is a latent subpopulation of ``nonviolent'' couples. Here $\theta$ represents the probability that a woman is in a ``nonviolent'' relationship and $\lambda$ represents the average rate of violence among ``violent'' relationships. In addition to excess zeros, it is possible that further heterogeneity exists between ``violent'' couples. This may be due to a long tail of couples in which acts occur more frequently. In this case, another possible generative model is a zero-inflated Negative Binomial, i.e.
\begin{equation*} \begin{aligned} Y \sim \begin{cases} 0 & \text{with probability } \theta \\ \text{NegBin}(\lambda, \phi) &\text{with probability } 1 - \theta \end{cases} \end{aligned} \end{equation*} where the parameter $\phi$ captures the additional dispersion in violent acts. As is common in the CTS, we assume that $Y$, the true number of acts, is further categorized at the time of survey measurement \[ Y^* = \begin{cases} 0 & \text{if } Y = 0 \\ 1 & \text{if } Y = 1 \\ 2 & \text{if } 2 \leq Y \leq 4 \\ 3 & \text{if } Y \geq 5\end{cases} \] where 0 = ``Never'', 1 = ``Once'', 2 = ``A few times'', 3 = ``Many times''. In most surveys, we observe $Y^*$ while the true value $Y$ remains unknown. Figure \ref{fig:single_act} compares the distribution of observed and simulated acts using data on acts of slapping from a recent study in Uganda. We fit both Poisson and Negative Binomial models. Parameter values were estimated via maximum likelihood. We find that both models match the empirical data distribution quite well, but the more flexible Negative Binomial model better captures the distribution among violent couples. \subsection{Multiple act model} \label{sec:multi_act} In the CTS, violence is measured by multiple, potentially correlated, acts. We can generalize the single act model above by defining an i.i.d. vector $(Y_1, Y_2, \ldots, Y_{K})'$ where each element is now the number of reported acts of violence of each type given in Table \ref{tab:cts}.
These are jointly distributed zero-inflated Poisson or zero-inflated Negative Binomial random variables \[(Y_1, Y_2, \ldots, Y_{K})' \sim \text{ZIP}(\mathbf{\lambda}, \mathbf{\theta}, \mathbf{\Sigma}) \] \[(Y_1, Y_2, \ldots, Y_{K})' \sim \text{ZINB}(\mathbf{\lambda}, \mathbf{\phi}, \mathbf{\theta}, \mathbf{\Sigma}) \] where $\mathbf{\lambda}$, $\mathbf{\phi}$, and $\mathbf{\theta}$ are the vector analogs to $\lambda$, $\phi$, and $\theta$ defined previously but $\mathbf{\Sigma}$ is now a $K \times K$ variance-covariance matrix specifying the correlation structure between the types of violence. Figure \ref{fig:multi_act} below shows an example of the data generated in this case. In panel (c) the correlation matrix is typical of the relationship between acts in many contexts. The highest correlations are between acts of sexual violence and acts of less extreme physical violence. In general, physical acts are more highly correlated with each other than with sexual acts and vice versa. \begin{figure}[bp] \centering \begin{subfigure}[b]{0.9\textwidth} \centering \caption{Distribution of sum of $Y^*$} \includegraphics[width = \linewidth]{figures/multiple_act_pmf.pdf} \end{subfigure} \begin{subfigure}[b]{0.9\textwidth} \centering \caption{Distribution of each act} \includegraphics[width = \linewidth]{figures/multiple_act_pmf_act.pdf} \end{subfigure} \begin{subfigure}[b]{0.9\textwidth} \centering \caption{Correlation} \includegraphics[width = \linewidth]{figures/multiple_act_corr.pdf} \end{subfigure} \caption{Distribution of multi-act Zero-inflated Poisson.} \label{fig:multi_act} \end{figure} \subsection{Potential outcome model} \label{sec:po_model} Now that we have a model for the distribution of violence, we develop a framework for causal effects of an anti-violence program in a randomized experiment using the Neyman-Rubin causal model \cite{splawa-neyman_application_1990, rubin_estimating_1974}.
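The joint distributions above do not fully pin down how $\mathbf{\Sigma }$ induces dependence between acts. One concrete reading, offered here purely as an illustrative sketch of ours (the paper does not specify a copula), is a Gaussian copula in which a latent multivariate normal draw drives both the zero-inflation indicator and the Poisson count for each act:

```python
import numpy as np
from scipy.stats import norm, poisson

def draw_multi_act_zip(n, lam, theta, corr, seed=None):
    """Draw n samples of K correlated zero-inflated Poisson acts.

    lam, theta: per-act Poisson rates and zero-inflation probabilities.
    corr:       K x K latent correlation matrix (one possible reading of Sigma).
    """
    rng = np.random.default_rng(seed)
    lam, theta = np.asarray(lam), np.asarray(theta)
    K = lam.size
    # correlated latent normals -> uniforms via the Gaussian copula
    z = rng.standard_normal((n, K)) @ np.linalg.cholesky(corr).T
    u = np.clip(norm.cdf(z), 1e-12, 1 - 1e-12)  # guard the quantile transform
    y = np.zeros((n, K), dtype=int)
    for k in range(K):
        violent = u[:, k] > theta[k]            # occurs with prob. 1 - theta[k]
        # rescale the remaining uniform mass and invert the Poisson CDF
        v = (u[violent, k] - theta[k]) / (1.0 - theta[k])
        y[violent, k] = poisson.ppf(v, lam[k]).astype(int)
    return y
```

Acts driven by highly correlated latent coordinates then tend to co-occur, mimicking the kind of empirical correlation structure shown in panel (c) of figure \ref{fig:multi_act}.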
Consider a collection of units $i = 1,\ldots,n$ assigned either to a hypothetical violence reduction program ($z = 1$) or control ($z = 0$). The objective is to measure latent violence $Y$ at a specific time after assignment of each unit. Let $Y_i(z)$ be a potential outcome representing the value of $Y$ if unit $i$ is assigned treatment $z$ for $z = 0, 1$, i.e. $Y_i(1)$ is the latent violence for unit $i$ when assigned to the violence reduction program and $Y_i(0)$ is the latent violence for unit $i$ when assigned to the control. Then the causal effect of the program for unit $i$ is defined as a comparison between the potential outcomes, e.g. \[\tau_i = Y_i(1) - Y_i(0)\] representing the counterfactual contrast within the same individual if given the program versus not. We can extend this individual effect to sample or population summaries such as the mean difference or ratio, e.g. \[\mathbb{E}[\tau_i] = \mathbb{E}[Y_i(1)] - \mathbb{E}[Y_i(0)] \quad \text{or} \quad \tau_{\mathrm{ratio}} = \frac{\mathbb{E}[Y_i(1)]}{\mathbb{E}[Y_i(0)]}\] In practice, we never observe both potential outcomes for any individual. Rather, we assume we observe \[Y_i = Z_i Y_i(1) + (1 - Z_i) Y_i(0)\] where we observe $Y_i(1)$ for those assigned to the treatment group and $Y_i(0)$ for those assigned to control. However, because randomization guarantees that the distribution of potential outcomes will be independent of assignment $Z_i$, the average effect may be identified by a simple comparison of the observed outcomes in the treatment and control groups, e.g. \[\mathbb{E}[\tau_i] = \mathbb{E}[Y_i \mid Z_i = 1] - \mathbb{E}[Y_i \mid Z_i = 0]\] Based on our generative model above, we assume that violence under no intervention, $Y_i(0)$, follows a zero-inflated Poisson or Negative Binomial. To simulate effects in a trial, we also need to define possible changes in violence due to participation in an anti-violence program.
Programs can affect violence in a variety of ways: they could have broad and consistent effects for all participants or only benefit a handful of the most active; they may influence certain types of violence or violent ``profiles'' more than others; they could have more mixed effects producing small improvements but also backlash; or they could prevent new cases of violence but leave those already experiencing violence without much benefit. Each of these in turn may have different implications for outcome coding choice. While it is possible that a program can lead to the initiation of violence, we believe this is rare and therefore assume that anyone who would not have experienced violence in the absence of the program remains violence free under the program (i.e. $Y_i(1) = 0$ if $Y_i(0) = 0$). For those who do experience violence in the absence of the program (i.e. $Y_i(0) > 0$), we assume that program effects fit into one of four possible response types ($S$): \begin{enumerate} \item \textit{No effect} - the individual experiences the same violence regardless of whether they receive the program, i.e. $Y_i(1) = Y_i(0)$. \item \textit{Cessation} - when exposed to the program, all violence stops regardless of frequency, i.e. $Y_i(1) = 0$ for all $Y_i(0)$. \item \textit{Reduction} - when exposed to the program, violence is reduced by a fixed amount, i.e. $Y_i(1) < Y_i(0)$. \item \textit{Increase} - when exposed to the program, violence increases by a fixed amount, i.e. $Y_i(1) > Y_i(0)$.
\end{enumerate} \begin{table}[t] \centering \caption{Example potential outcomes for possible response types} \label{tab:po_example} \begin{threeparttable} \renewcommand{\TPTminimum}{\linewidth} \makebox[\linewidth]{ \begin{tabular}{cccccccc} \toprule ID & type ($S$) & Z & $Y_i(1)$ & $Y_i(0)$ & $Y_i^*(1)$ & $Y_i^*(0)$ & $Y^*$ \\ \midrule 1 & & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 1 & 1 & 3 & 3 & 2 & 2 & 2 \\ 3 & 2 & 1 & 0 & 5 & 0 & 3 & 0 \\ 4 & 3 & 0 & 2 & 4 & 2 & 2 & 2 \\ 5 & 4 & 1 & 3 & 1 & 2 & 1 & 2 \\ \bottomrule \\ \end{tabular}} \begin{tablenotes}[flushleft] \item \scriptsize{\textit{Notes:} Example potential outcome values for the response types defined in Section 2.3. The first line shows an individual who experiences no violence in the absence of the program and therefore is assumed to also experience no violence when assigned to the program, $Y_i(1) = Y_i(0) = 0$. The second line is a type 1 individual for whom the program has no effect, i.e. the level of violence they experience when given the program is the same as when not given it, $Y_i(1) = Y_i(0) = 3$. The third line is a type 2 individual for whom violence ceases when given the program, $Y_i(1) = 0$. The fourth line is a type 3 individual for whom violence is reduced when given the program, $Y_i(1) = 2 < Y_i(0) = 4$. Finally, the last line is a type 4 individual for whom violence increases as a result of the program, $Y_i(1) = 3 > Y_i(0) = 1$.} \end{tablenotes} \end{threeparttable} \end{table} Table \ref{tab:po_example} shows example values of $Y_i(1)$, $Y_i(0)$ as well as their survey-measured equivalents for each response type. The response types allow us to specify a variety of program effects in terms of their relative frequencies. For example, in a population where background violence prevalence is 40\%, among those with violence a particular program may have no effect for 50\%, lead to cessation entirely for 20\%, reduction, but not cessation, for 20\%, and an increase for 10\%.
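To make the four response types concrete, the mapping from $Y_i(0)$ to $Y_i(1)$ can be sketched directly. The function below is ours, not the paper's; the fixed step $x=1$ and the clipping of reductions at one act (to keep type 3 distinct from cessation) are illustrative choices.

```python
import numpy as np

def treated_outcome(y0, p_s, x=1, seed=None):
    """Map untreated counts Y(0) to treated counts Y(1) via the four
    response types: 1 = no effect, 2 = cessation, 3 = reduction, 4 = increase.

    p_s: proportions of the four types; y0 == 0 always stays 0 (no initiation).
    """
    rng = np.random.default_rng(seed)
    y0 = np.asarray(y0)
    s = rng.choice([1, 2, 3, 4], size=y0.shape, p=p_s)
    y1 = y0.copy()
    y1[(s == 2) & (y0 > 0)] = 0                   # cessation
    red = (s == 3) & (y0 > 0)
    y1[red] = np.maximum(y0[red] - x, 1)          # reduction, but not to zero
    y1[(s == 4) & (y0 > 0)] += x                  # increase
    return y1
```

For instance, with $p_s = (0, 1, 0, 0)$ every violent couple ceases, while with $p_s = (0, 0, 1, 0)$ every violent couple's count drops by the step size.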
In practice, we draw response types for violent couples from a multinomial distribution \[S \sim \operatorname{Multinomial}(p_s)\] where $p_s$ is a length-4 vector of relative proportions of each response type; then given $S$ the $Y_i(1)$ for each individual can be determined from \[Y_i(1) = \begin{cases} 0 & \text{if } Y_i(0) = 0 \\ Y_i(0) & \text{if } S = 1 \text{ and } Y_i(0) > 0 \\ 0 & \text{if } S = 2 \text{ and } Y_i(0) > 0 \\ Y_i(0) - x & \text{if } S = 3 \text{ and } Y_i(0) > 0 \\ Y_i(0) + x & \text{if } S = 4 \text{ and } Y_i(0) > 0 \end{cases}. \] Finally, we also allow for the possibility that programs affect different acts of violence differently. For instance, a program may lead to reductions and/or cessations of moderate acts only, or a consent-based program may affect sexual acts while leaving physical acts largely unchanged. \subsection{Outcome coding strategies} Analysts typically collapse responses to multiple acts into a single summary measure of violence. However, there are many possible strategies for doing so. Here, we consider two common outcome coding strategies. The first is based on the strategy discussed in section 1.1, which collapses the items in the CTS scale into a single binary measure representing whether the woman reported any acts of violence during the recall period. \[Y^*_{binary} = \begin{cases} 0 & \text{if all } Y^*_1 = 0, Y^*_2 = 0, \ldots, Y^*_{K} = 0 \\ 1 & \text{if any } Y^*_1 > 0 \text{ or } Y^*_2 > 0 \text{ or } \ldots \text{ or } Y^*_{K} > 0 \end{cases}\] Treatment effects based on this outcome represent the difference in the probability of reporting any violence. Depending on prior history, observed changes in this measure may consist of cessation of ongoing violence, prevention of new cases of violence, or a combination of the two. The second outcome coding strategy is a continuous measure that is still straightforward to construct, but less common: a simple sum of the $K$ items.
\[Y^*_{sum} = Y^*_1 + Y^*_2 + \ldots + Y^*_{K}\] Treatment effects based on this outcome represent differences in the number of acts of violence if the true number of acts is recorded, but are a little more difficult to interpret if the CTS categories are used, i.e. 0 = ``Never'', 1 = ``Once'', 2 = ``A few times'', 3 = ``Many times''. It treats all acts as essentially the same (regardless of severity), but can reflect greater gradation in the amount of violence reported. In simulations, we divide $Y^*_{sum}$ by the total possible score to normalize values between 0 and 1 and to make variance comparisons easier, as both $Y^*_{binary}$ and $Y^*_{sum}$ are then on the same scale. \subsection{Estimation} For both continuous and binary outcomes, we use a least squares regression\footnote{We could use a nonlinear model such as logit or probit for the binary outcome or a Poisson or negative binomial for the continuous; however, we do not because (1) we are principally concerned with coding rather than estimation, (2) least squares with robust SEs is still unbiased, and (3) often the difference in average partial effects is quite small.} estimator to estimate the average treatment effect of the form \[Y_i = \alpha + \tau Z_i + \varepsilon_i\] where $Z_i$ is the random assignment indicator and $\tau$ is the average treatment effect. We calculate robust standard errors using the ``HC2'' formulation from the \texttt{estimatr} package in R. \section{Simulation} \label{sec:data} To explore the effects of outcome coding choice on estimation, we conduct a series of finite-sample Monte Carlo simulations using the \href{https://declaredesign.org}{DeclareDesign} package in R. We generate potential outcome data according to the model defined above and set values of $\mathbf{\lambda}$, $\mathbf{\phi}$, and $\mathbf{\theta}$ based on empirical estimates or draw directly from the empirical distributions.
In all simulations we generate the full set of potential outcomes for a sample of size $N$, randomize half to a hypothetical program and half to control, apply outcome coding choices, and estimate program effects. We repeat this 1000 times and calculate the following performance statistics: \begin{itemize} \item \textit{Bias}: average difference between the estimate $\hat{\tau}_m$ in each simulation and the true value $\tau$. \[\frac{1}{M} \sum_{m=1}^M(\hat{\tau}_m - \tau)\] \item \textit{Root-mean-square error (RMSE)}: square root of the average squared distance between the estimate $\hat{\tau}_m$ in each simulation and the true value $\tau$. \[\sqrt{\frac{1}{M}\sum_{m=1}^M(\hat{\tau}_m - \tau)^2}\] \item \textit{Power}: in a null hypothesis significance testing framework, the probability of correctly rejecting the null when the null is false. \item \textit{Coverage}: the proportion of simulations in which the confidence interval for $\hat{\tau}_m$ contains the true value $\tau$. \end{itemize} We conduct a series of experiments in which we vary the treatment effect structures and then compare different outcome coding choices under consistent estimation strategies. Building on our potential outcomes model in Section \ref{sec:po_model}, we consider four possible violence reduction scenarios consisting of different assumed proportions of response types: (1) \textit{cessation only} - violence ceases for 30\% of individuals and there is no effect for the remaining 70\%, (2) \textit{cessation + reduction} - violence ceases for 10\% of individuals, is reduced but not ceased for 20\%, and 70\% are unaffected, (3) \textit{reduction only} - violence reduces, but does not cease, for 30\% of individuals and 70\% are unaffected, (4) \textit{cessation + reduction + increase} - violence ceases for 10\%, reduces, but does not cease, for 15\%, increases for 5\%, and 70\% are unaffected.
In all scenarios we assume 70\% of individuals are unaffected, which may seem high, but consider that most trials are powered to detect smaller effects than this\footnote{For example, for the cessation scenario the 30\% affected translates to a risk ratio of 0.7 for the binary measure.}. Finally, we also vary whether reductions affect all acts equally or only a subset of acts. \input{tables/table1} Table \ref{tab:b1_sims} shows the results of simulations of the multiple act model in section \ref{sec:multi_act} based on the empirical distribution of the Becoming One trial in Uganda. Both the sum and binary measures are unbiased and demonstrate good coverage for their respective estimands across all scenarios when we use the CTS coding as ``truth'' and ignore the latent true number of acts. If we do consider the latent number of acts, then bias and poor coverage are possible. The RMSE is also always lower for the continuous measure than for the binary (after dividing by 30 to convert to a similar range), reflecting less variability in estimates from simulation to simulation. For power, the results are more mixed: the binary measure is higher powered when cessation (or no effect) is the only possible effect of the program. This makes intuitive sense, as the extra information provided by the continuous measure is irrelevant if effects are all or nothing. When there is some portion of the sample for whom violence is reduced, but does not cease, and no one's violence is increased by the program, the continuous sum is better powered. When all response types are possible, either measure can be higher powered. 
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/sims.pdf} \caption{Simulated power differences between binary and sum outcome codings across contexts.} \label{fig:dhs_results} \end{figure} To determine whether these findings are affected by different untreated distributions of violence, we examined additional $Y(0)$ outcome distributions based on empirical estimates in a variety of settings from the Demographic and Health Surveys. We chose nine representative countries, with 12-month prevalences ranging from X\% in Ukraine to X\% in Papua New Guinea. Figure \ref{fig:dists} in the appendix plots the distributions for each act in each country. In addition to variability in overall prevalence, these settings reflect heterogeneity in the types of violence which predominate, with some countries having significantly more sexual violence than others or differences in the distribution of act frequencies. Figure \ref{fig:dhs_results} plots the differences in power between the binary and continuous sum measures. Despite the heterogeneity in prevalence and distribution of untreated violence, the findings are broadly similar. When the primary action is cessation, the binary measure is generally more powerful. When there is reduction but no increase, the continuous sum is higher powered. When there is a subset for whom violence increases, results are mixed, although in these nine examples the binary is often higher powered. In appendix Table \ref{tab:dhs_sims_power}, we include additional simulation results in which effects are concentrated on only physical, only sexual, or only moderate acts. When only physical acts are affected by the program, results are largely unchanged from above. When only sexual acts are affected by the program, the continuous sum measure now dominates the binary even when cessation is the only action. 
This is at least partly because there are only 3 acts of sexual violence (compared to 7 for physical violence) in the CTS-based scale, and these acts are correlated with physical acts. When only moderate acts are affected by the program, the continuous measure often, but not always, is higher powered. Again, this depends on whether there is a subset of couples who only experience more moderate acts and how large this subset is relative to the others, as well as how often those who experience normatively more severe acts also experience moderate ones. \section{Application} \label{sec:application} In this section, we assemble data from recent trials of anti-violence programs and examine whether outcome coding choice materially affected the interpretation of program impacts. Seven trials contributed individual data which were re-analyzed: Bandebereho (Rwanda), Becoming One (Uganda), Indashyikirwa (Rwanda), MAISHA CRT01 (Tanzania), MAISHA CRT02 (Tanzania), Stepping Stones (South Africa), and Unite for Better Life (Ethiopia). In most trials the binary and continuous measures lead to similar conclusions. Table \ref{tab:application} highlights notable discrepancies from two trials: Becoming One (B1) and MAISHA CRT01. In the case of the former, at the 6-month follow-up, a non-significant 2.6 percentage point reduction in the binary measure of violence was observed ($p = 0.265$). However, the reduction in the continuous sum measure of violence was significant ($p = 0.014$). The standard errors for each additionally suggested the greater precision of the continuous measure. In exploratory analyses, comparisons of the underlying distributions (Figure \ref{fig:b1_dist}) suggested that reductions, but not cessations, in the tail of the distribution drove the differences between the two measures. At endline, 12 months after the start of the program, both measures showed significant reductions. 
This was consistent with the hypothesis that changes in relationship dynamics took time to occur as couples engaged with the program. In the MAISHA trial, we see the opposite: a non-significant\footnote{At the conventional pre-specified $\alpha$ level of 0.05.} but precisely estimated reduction in the binary measure ($p = 0.051$) but a less precise, non-significant decline in the continuous sum measure. This discrepancy could be consistent with either: (a) cessation being the primary mechanism of changes in violence due to the program or (b) a small subpopulation of already violent couples for whom the program may have caused increases in violence, which is picked up by the continuous but not the binary measure. Of course, given the sample size, it could also simply be noise. Additional exploratory analyses can help reveal which is more likely. However, comparing measures can have important implications for how trial results are interpreted. \begin{table}[t] \centering \caption{Example discrepancies in results by outcome coding choice in recent trials \label{tab:application}} \begin{threeparttable} \begin{tabular}{lccccccccc} \toprule & & \multicolumn{4}{c}{$Y_{binary}$} & \multicolumn{4}{c}{$Y_{sum}$} \\ \cmidrule(l{3pt}r{3pt}){3-6} \cmidrule(l{3pt}r{3pt}){7-10} Trial & N & T & C & Diff & $p$-value & T & C & Diff & $p$-value \\ \midrule B1\tnote{a} & 1,680 & 35.1\% & 37.7\% & -2.6\% & 0.265 & 1.23 & 1.61 & -0.38 & 0.014 \\ MAISHA 1\tnote{b} & 919 & 23.1\% & 27.4\% & -4.3\% & 0.051 & 1.39 & 1.49 & -0.10 & 0.266 \\ \bottomrule \end{tabular} \begin{tablenotes}[para] \item[a] Midline (6 month) results from Becoming One study in Uganda. \item[b] Endline (24 month) results from MAISHA trial I in Tanzania. 
\\ \end{tablenotes} \end{threeparttable} \end{table} \section{Discussion} \label{sec:discussion} In this study, we developed a generative model for violence and violence reduction in trials and used it to better understand how outcome coding choice affects statistical efficiency. We compare two simple measures: a binary indicator for whether any act of violence was reported and a continuous sum of act frequency categories. We find that neither measure strictly dominates: there are settings where the binary measure may be preferred, but the continuous measure was higher powered in the majority of scenarios considered. We therefore recommend that trialists report both measures when possible, in order to facilitate interpretation, particularly in circumstances such as those highlighted in section \ref{sec:application} where there are discrepancies. We also encourage trialists to consider more detailed power simulations, such as those conducted here, when planning future trials. To make them as relevant as possible, these can be based on empirical distributions from previously collected data or from baseline data. To assist in this, we have made our code freely available\footnote{\href{https://github.com/boyercb/ipv-measurement}{https://github.com/boyercb/ipv-measurement}} to practitioners. Our main finding contrasts somewhat with the literature on ``dichotomania'' in the medical sciences, which holds that dichotomization of trial outcomes is often a scourge to be avoided, principally because dichotomous outcomes are less efficient. This view is often based on theory which shows that the dichotomization of a single normally distributed outcome leads to a loss of efficiency. However, the violence setting is unique. As developed in section \ref{sec:theory}, there are clear theoretical reasons why the effect of the program itself may be dichotomous for a subset or even a majority of people. 
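The classical efficiency-loss result is easy to reproduce by simulation: dichotomizing a single normally distributed outcome before comparing arms lowers the signal-to-noise ratio of the estimate. A hedged Python sketch (the sample size, the 0.2 effect size, and the cut point are arbitrary choices for illustration, not values from this paper):

```python
import random
from statistics import mean, pvariance

random.seed(7)

def one_trial(n=500, effect=0.2, cut=0.0):
    """One simulated two-arm trial with a normal outcome.

    Returns difference-in-means estimates for the continuous outcome
    and for its dichotomized (above-threshold) version.
    """
    y0 = [random.gauss(0.0, 1.0) for _ in range(n)]
    y1 = [random.gauss(-effect, 1.0) for _ in range(n)]  # program lowers outcome
    diff_cont = mean(y1) - mean(y0)
    diff_bin = mean(y > cut for y in y1) - mean(y > cut for y in y0)
    return diff_cont, diff_bin

draws = [one_trial() for _ in range(400)]
# Compare signal-to-noise (|mean estimate| / sd of estimates) across codings,
# since the two estimands live on different scales.
snr_cont = abs(mean(d for d, _ in draws)) / pvariance([d for d, _ in draws]) ** 0.5
snr_bin = abs(mean(b for _, b in draws)) / pvariance([b for _, b in draws]) ** 0.5
```

With these settings the continuous comparison shows a visibly higher signal-to-noise ratio, matching the textbook efficiency loss from cutting a normal outcome at its mean.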
Further, heterogeneity in effects may also lead to sub-populations for whom the program increases violence. This may be particularly true in settings where backlash is possible. Researchers may have reasons beyond efficiency for preferring one measure over another. In some settings, violence cessation may be the only substantively interesting or meaningful effect. In other settings, one measure may be perceived as more reliable than another. In this study, for clarity, we chose to focus on only a small subset of the trade-offs that trialists must make. However, our recommendation that, when possible, trialists report multiple measures can still aid in the interpretation of findings. Our study contributes to a larger literature about the measurement of violence both within the context of randomized trials as well as more broadly. Several authors have focused on the notable limitation that most violence is self-reported, which is a particular concern in non-blinded randomized trials where certain incentives may drive differential reporting. Some have proposed alternative strategies like self-administration, list-randomization, or randomized response that confer a greater degree of anonymity. The present paper sidesteps this concern by focusing instead on the statistical properties of outcome coding choices. While we use the standard self-reported modules as the basis for our model, our results are agnostic to how violence is measured and could equally be applied to violence as assessed via different means. However, as several previous commentators have noted, several of these anonymized alternatives rely on asymptotic comparisons or marginal differences in responses and thereby sacrifice a considerable amount of statistical efficiency. An additional value of our study is that our ``first-principles'' approach can be applied to assist in other pertinent measurement questions in violence research. 
For instance, for some interventions to reduce violence there is a theoretical question as to whether ``backlash'' might ensue for some couples. Our approach makes it easy to simulate different ``backlash'' structures in a variety of settings and can assist in figuring out the optimal statistical procedure for estimating ``backlash''-type effects when they do exist. \clearpage \singlespacing \printbibliography \clearpage \section*{Appendix A.} \label{sec:appendixa} \addcontentsline{toc}{section}{Appendix A} \setcounter{figure}{0} \renewcommand\thefigure{A\arabic{figure}} \setcounter{table}{0} \renewcommand\thetable{A\arabic{table}} \begin{figure}[p] \centering \includegraphics[width=\textwidth]{figures/dhs_pmfs.pdf} \caption{Empirical distributions of violent acts from select Demographic and Health Surveys.} \label{fig:dists} \end{figure} \begin{landscape} \input{tables/tablea1} \end{landscape} \begin{figure}[p] \centering \includegraphics[width=\textwidth]{figures/b1_dist.pdf} \caption{Differences in the distribution of the continuous sum violence measure at midline in the Becoming One trial.} \label{fig:b1_dist} \end{figure} \end{document} \section{Introduction} \label{sec:introduction} Globally, nearly a third of women report experiencing violence perpetrated by an intimate partner \cite{devries_global_2013}, with far-reaching consequences for their health, their families, their opportunities, and their livelihoods \cite{ellsberg_intimate_2008}. This has prompted interest among funders, practitioners, and policy-makers in developing programs to reduce or prevent violence and in a research agenda to better understand empirically what types of programs work in reducing violence. Consequently, over the last two decades, there has been a surge in randomized evaluations of violence reduction programs, particularly in low- and middle-income countries \cite{abramsky_findings_2014, hidrobo_effect_2016, jewkes_impact_2008, pronyk_effect_2006, wagman_effectiveness_2015}. 
Most of these evaluations measure violence using a standardized instrument, based on the Conflict Tactics Scale (CTS) \cite{straus_measuring_1979, straus_revised_1996}, as well as a standardized outcome coding originally developed for large cross-sectional prevalence assessments. While this standardization allows for some comparability across studies, to date, little work has been done to understand whether this coding choice is optimal in the context of a randomized evaluation. In this paper, we explore the consequences of this coding choice with respect to statistical bias and efficiency and we compare it to several alternatives, first via simulation, and then by re-analyzing data from several recent trials. To do so, we return to the original conception of the CTS in the sociology literature to develop a generative model of violence. We also use potential outcomes and the Neyman-Rubin causal model \cite{splawa-neyman_application_1990, rubin_estimating_1974} to formalize reductions in violence in the context of randomized evaluations and to think structurally about how intervention effects operate. We then use this theory to simulate data from hypothetical trials under different violence reduction regimes and compare outcome coding strategies. Finally, we re-analyze data from seven recent trials in Southern and Eastern Africa using alternative coding strategies and re-interpret them in light of the insights gleaned from simulations. \subsection{The Conflict Tactics Scale} Studies that attempt to measure intimate partner violence generally rely on some form of the Conflict Tactics Scale (CTS) to quantify a participant's experience of violence, with perhaps the most current implementation being the standardized questionnaire developed by the World Health Organization. The origins of the instrument are rooted in conflict theory, which posits that conflict is an inevitable part of human relationships, but that the tactics employed to deal with conflict vary. 
Among these tactics are those that involve physical force, coercion, or verbal aggression, which we may define as ``violent''. However, the instrument consciously disassociates these tactics from their personal or social meaning as ``violence'' by asking respondents to report the frequency with which they experienced specific acts rather than how often they experience ``violence''. This allows for a comparable, objective assessment of tactics used during conflict even when definitions of violence may vary from person to person. The original CTS instrument was designed to assess both perpetration as well as experiences of violence and to capture family violence more broadly, including violence directed against children. However, in evaluations of intimate partner violence reduction programs an abbreviated version is often used, as the focus is typically limited to violence experienced by women, although sometimes supplemented with male reports of perpetration. The violence items in the scale were chosen to capture different latent constructs. In the original scale, these items loaded on two latent factors representing psychological aggression and physical assault. A revised scale in 1996 added items representing sexual coercion and also introduced a shortened version which was the basis for the WHO questionnaire. In more recent literature, the three latent factors into which items are grouped are more commonly referred to as emotional violence, physical violence, and sexual violence. Table \ref{tab:cts} below gives examples of the items in the scale from the WHO questionnaire. Items can sometimes be added, deleted, or adapted to local context; however, the basic structure is largely the same. All items in the scale refer to the same defined recall period, usually 12 months, although this can vary in randomized evaluations where repeated assessments may be made and where interest is generally in violence experienced since the start of a violence reduction program. 
\begin{table}[t] \centering \footnotesize{ \caption{Example of CTS style questions for measuring violence adapted from the WHO Domestic Violence questionnaire.} \label{tab:cts} \begin{tabular}[t]{p{0.5cm}p{10cm}p{0.5cm}p{0.5cm}p{0.5cm}p{0.5cm}} \hline \hline \multicolumn{6}{p{14cm}}{\hspace{2em} No matter how well a couple gets along, there are times when they disagree on major decisions, get annoyed about something the other person does, or just have spats or fights because they're in a bad mood or tired or for some other reason. They also use many different ways of trying to settle their differences. I'm going to read a list of some things that you and your (husband/partner) might have done when you had a dispute, and would first like you to tell me how often your (husband/partner) has done them in the past.} \vspace{1em} \Tstrut\Bstrut \\ \multicolumn{2}{l}{In the past 12 months, how often has your partner...} & \rotatebox{90}{Never} & \rotatebox{90}{Once} & \rotatebox{90}{A few times (2-4)} & \rotatebox{90}{Many times (5+)} \Tstrut\Bstrut\\ \hline 1. & insulted you or made you feel bad about yourself & 0 & 1 & 2 & 3 \Tstrut\\ 2. & belittled or humiliated you in front of others? & 0 & 1 & 2 & 3\\ 3. & did things to scare or intimidate you on purpose? & 0 & 1 & 2 & 3\\ 4. & threatened to hurt you or someone you care about? & 0 & 1 & 2 & 3\\ 5. & slapped you or thrown something at you that could hurt you? & 0 & 1 & 2 & 3 \Tstrut\\ 6. & pushed you or shoved you or pulled your hair? & 0 & 1 & 2 & 3\\ 7. & hit you with his fist or with something else that could hurt you? & 0 & 1 & 2 & 3\\ 8. & kicked you, dragged you or beaten you up? & 0 & 1 & 2 & 3\\ 9. & choked or burnt you on purpose? & 0 & 1 & 2 & 3\\ 10. & threatened you with or actually used a gun, knife or other weapon against you? & 0 & 1 & 2 & 3\\ 11. & physically forced you to have sex with him when you didn’t want to? & 0 & 1 & 2 & 3\\ 12. 
& used threats or intimidation to make you have sex when you did not want to? & 0 & 1 & 2 & 3\\ 13. & used physical force or threats to make you do something else sexual that you did not want to do? & 0 & 1 & 2 & 3 \Bstrut\\ \hline \end{tabular} } \end{table} To analyze the data from the CTS, the analyst traditionally collapses an individual's responses to the violence items into a single binary measure representing whether the respondent reports any act of violence during the recall period. The resulting prevalence outcome is then used as the basis for statistical inference. In randomized evaluations, the prevalence outcomes in the treatment and control groups are compared to assess the effect of the program. However, this coding strategy discards information about frequency and severity, information which could be useful in providing a more nuanced understanding of program impacts\footnote{Interestingly, in the revised scale Straus et al. also suggested a summary measure they called \textit{chronicity}, which was the sum of item scores among those reporting any violence.}. We believe this coding strategy has become the norm in the field for several reasons. First, it was among the violence coding strategies suggested in \textcite{straus_measuring_1979} due to concerns about ``skewness'' arising from a minority of highly violent relationships. Second, it is a historic legacy of the national and global prevalence surveys in the 1990s and 2000s, where a prevalence measure was a natural summary statistic. Third, once adopted in seminal randomized evaluations, it became important for subsequent studies to code their outcomes similarly for the purposes of comparability. In this paper, we evaluate this coding strategy in terms of its efficiency (i.e., the consequences for statistical power and precision in answering substantive research questions) and compare it to alternative strategies. 
\section{Theory} \label{sec:theory} To better understand the implications of outcome coding choice in randomized evaluations, we first need a theoretical framework for describing how violence is distributed and how the effects of interventions operate. In this section, we develop a generative model for violence by returning to the original latent variable formulation of the CTS. We start in the simple case of defining a latent representation for a single act of violence and then move on to a model for multiple correlated acts, which is more typical of how CTS-based instruments work. We then define causal effects on violence in the context of randomized evaluations where effects can vary across items and individuals. We conclude with the definition of several alternative coding strategies and a discussion of their theoretical advantages and disadvantages. \subsection{Single act model} \begin{figure}[t] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \caption{Poisson} \includegraphics[width = \linewidth]{figures/single_act_zip.pdf} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \caption{Negative Binomial} \includegraphics[width = \linewidth]{figures/single_act_zinb.pdf} \end{subfigure} \caption{Distribution of observed acts vs. those simulated from (a) a zero-inflated Poisson with $\widehat{\lambda} = 2.36$ and $\widehat{\theta} = 0.84$ and (b) a zero-inflated Negative Binomial with $\widehat{\lambda} = 2.36$ and $\widehat{\theta} = 0.84$ based on maximum likelihood using data from Uganda.} \label{fig:single_act} \end{figure} Let $Y$ be the number of episodes of a specific violent act occurring over the follow-up period, for instance, the number of times the respondent is slapped. We assume that $Y$ is an i.i.d. 
realization from a zero-inflated Poisson process of the form \begin{equation*} \begin{aligned} Y \sim \begin{cases} 0 & \text{with probability } \theta \\ \text{Poisson}(\lambda) &\text{with probability } 1 - \theta \end{cases} \end{aligned} \end{equation*} where the source of the excess zeros is a latent subpopulation of ``nonviolent'' couples. Here $\theta$ represents the probability that a woman is in a ``nonviolent'' relationship and $\lambda$ represents the average rate of violence among ``violent'' relationships. In addition to excess zeros, it is possible that further heterogeneity exists between ``violent'' couples. This may be due to a long tail of couples in which acts occur more frequently. In this case, another possible generative model is a zero-inflated Negative Binomial, i.e. \begin{equation*} \begin{aligned} Y \sim \begin{cases} 0 & \text{with probability } \theta \\ \text{NegBin}(\lambda, \phi) &\text{with probability } 1 - \theta \end{cases} \end{aligned} \end{equation*} where the parameter $\phi$ captures the additional dispersion in violent acts. As is common in the CTS, we assume that $Y$, the true number of acts, is further categorized at the time of survey measurement \[ Y^* = \begin{cases} 0 & \text{if } Y = 0 \\ 1 & \text{if } Y = 1 \\ 2 & \text{if } 2 \leq Y \leq 4 \\ 3 & \text{if } Y \geq 5\end{cases} \] where 0 = ``Never'', 1 = ``Once'', 2 = ``A few times'', 3 = ``Many times''. In most surveys, we observe $Y^*$ while the true value $Y$ remains unknown. Figure \ref{fig:single_act} compares the distribution of observed and simulated acts using data on acts of slapping from a recent study in Uganda. We fit both Poisson and Negative Binomial models, with parameter values estimated via maximum likelihood. We find that both models match the empirical data distribution quite well, but the more flexible Negative Binomial model better captures the distribution among violent couples. 
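The single-act model can be simulated directly; a Python sketch using Knuth's inversion sampler for the Poisson and a gamma-Poisson mixture for the Negative Binomial (parameter defaults are taken from the figure estimates, and the mean-dispersion parameterization of $\phi$ is an assumption for illustration):

```python
import math
import random

random.seed(1)

def rpois(lam):
    """Sample one Poisson(lam) variate via Knuth's inversion algorithm."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def single_act(theta=0.84, lam=2.36, phi=None):
    """One draw of Y from the zero-inflated single-act model.

    theta: P(couple is 'nonviolent'); lam: mean rate among 'violent'
    couples. If phi is given, draw from a gamma-Poisson mixture, i.e.
    a negative binomial with mean lam and dispersion phi.
    """
    if random.random() < theta:
        return 0
    if phi is None:
        return rpois(lam)
    return rpois(random.gammavariate(phi, lam / phi))

def categorize(y):
    """CTS coding: 0=Never, 1=Once, 2=A few times (2-4), 3=Many times (5+)."""
    if y <= 1:
        return y
    return 2 if y <= 4 else 3

sample = [categorize(single_act()) for _ in range(2000)]
```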
\subsection{Multiple act model} \label{sec:multi_act} In the CTS, violence is measured by multiple, potentially correlated, acts. We can generalize the single act model above by defining an i.i.d. vector $(Y_1, Y_2, \ldots, Y_{10})'$ where each element is now the number of reported acts of each of the ten types of acts given in Table \ref{tab:cts}. These are jointly distributed zero-inflated Poisson or zero-inflated Negative Binomial random variables \[(Y_1, Y_2, \ldots, Y_{10})' \sim \text{ZIP}(\mathbf{\lambda}, \mathbf{\theta}, \mathbf{\Sigma}) \] \[(Y_1, Y_2, \ldots, Y_{10})' \sim \text{ZINB}(\mathbf{\lambda}, \mathbf{\phi}, \mathbf{\theta}, \mathbf{\Sigma}) \] where $\mathbf{\lambda}$, $\mathbf{\phi}$, and $\mathbf{\theta}$ are the vector analogs of $\lambda$, $\phi$, and $\theta$ defined previously, and $\mathbf{\Sigma}$ is now a $10 \times 10$ variance-covariance matrix specifying the correlation structure between the types of violence. Figure \ref{fig:multi_act} below shows an example of the data generated in this case. In panel (c) the correlation matrix is typical of the relationship between acts in many contexts. The highest correlations are between acts of sexual violence and acts of less extreme physical violence. In general, physical acts are more highly correlated with each other than with sexual acts and vice versa. 
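A full sampler for the correlated model requires specifying how $\mathbf{\Sigma}$ enters; as a simplified illustration (not the authors' exact construction), positive correlation across acts can be induced by a couple-level zero-inflation indicator plus a shared gamma frailty that scales every act's rate:

```python
import math
import random

random.seed(2)

def rpois(lam):
    """Knuth inversion sampler for one Poisson(lam) variate."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def multi_act(lams, theta=0.6, frailty_shape=2.0):
    """Draw a vector of correlated act counts (Y_1, ..., Y_K).

    One zero-inflation draw per couple (violent vs. nonviolent) plus a
    shared mean-one gamma frailty multiplying every act's rate. This is
    a simplification of the paper's ZIP(lambda, theta, Sigma) model.
    """
    if random.random() < theta:
        return [0] * len(lams)
    g = random.gammavariate(frailty_shape, 1.0 / frailty_shape)  # mean 1
    return [rpois(g * lam) for lam in lams]

# Hypothetical per-act rates for a 10-act instrument
rates = [1.8, 1.2, 0.9, 0.7, 1.5, 1.1, 0.6, 0.4, 0.3, 0.5]
draws = [multi_act(rates) for _ in range(3000)]
```

Richer constructions (e.g. a Gaussian copula over act-specific rates) could match a target $\mathbf{\Sigma}$ more faithfully; the shared-frailty version is simply the shortest route to correlated zero-inflated counts.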
\begin{figure}[bp] \centering \begin{subfigure}[b]{0.9\textwidth} \centering \caption{Distribution of sum of $Y^*$} \includegraphics[width = \linewidth]{figures/multiple_act_pmf.pdf} \end{subfigure} \begin{subfigure}[b]{0.9\textwidth} \centering \caption{Distribution of each act} \includegraphics[width = \linewidth]{figures/multiple_act_pmf_act.pdf} \end{subfigure} \begin{subfigure}[b]{0.9\textwidth} \centering \caption{Correlation} \includegraphics[width = \linewidth]{figures/multiple_act_corr.pdf} \end{subfigure} \caption{Distribution of multi-act Zero-inflated Poisson.} \label{fig:multi_act} \end{figure} \subsection{Potential outcome model} \label{sec:po_model} Now that we have a model for the distribution of violence, we develop a framework for the causal effects of an anti-violence program in a randomized experiment using the Neyman-Rubin causal model \cite{splawa-neyman_application_1990, rubin_estimating_1974}. Consider a collection of units $i = 1,\ldots,n$ assigned either to a hypothetical violence reduction program ($z = 1$) or control ($z = 0$). The objective is to measure latent violence $Y$ at a specific time after assignment for each unit. Let $Y_i(z)$ be a potential outcome representing the value of $Y$ if unit $i$ is assigned treatment $z$ for $z = 0, 1$, i.e. $Y_i(1)$ is the latent violence for unit $i$ when assigned to the violence reduction program and $Y_i(0)$ is the latent violence for unit $i$ when assigned to the control. The causal effect of the program for unit $i$ is then defined as a comparison between the potential outcomes, e.g. \[\tau_i = Y_i(1) - Y_i(0)\] representing the counterfactual contrast within the same individual if given the program versus not. We can extend this individual effect to sample or population summaries such as the mean difference or ratio, e.g. 
\[\mathbb{E}[\tau_i] = \mathbb{E}[Y_i(1)] - \mathbb{E}[Y_i(0)] \quad \text{or} \quad \mathbb{E}[\tau_i] = \frac{\mathbb{E}[Y_i(1)]}{\mathbb{E}[Y_i(0)]}\] In practice, we never observe both potential outcomes for any individual. Rather, we assume we observe \[Y_i = Z_i Y_i(1) + (1 - Z_i) Y_i(0)\] where we observe $Y_i(1)$ for those assigned to the treatment group and $Y_i(0)$ for those assigned to control. However, because randomization guarantees that the distribution of potential outcomes will be independent of assignment $Z_i$, the average effect may be identified by a simple comparison of the observed outcomes in the treatment and control groups, e.g. \[\mathbb{E}[\tau_i] = \mathbb{E}[Y_i \mid Z_i = 1] - \mathbb{E}[Y_i \mid Z_i = 0]\] Based on our generative model above, we assume that violence under no intervention, $Y_i(0)$, follows a zero-inflated Poisson or Negative Binomial. To simulate effects in a trial, we also need to define possible changes in violence due to participation in an anti-violence program. Programs can affect violence in a variety of ways: they could have broad and consistent effects for all participants or only benefit a handful of the most engaged; they may influence certain types of violence or violent ``profiles'' more than others; they could have more mixed effects, producing small improvements but also backlash; or they could prevent new cases of violence but leave those already experiencing violence without much benefit. Each of these in turn may have different implications for outcome coding choice. While it is possible that a program can lead to the initiation of violence, we believe this is rare and therefore assume that anyone who would not have experienced violence in the absence of the program remains violence free under the program (i.e. $Y_i(1) = 0$ if $Y_i(0) = 0$). For those who do experience violence in the absence of the program (i.e. 
$Y_i(0) > 0$), we assume that program effects fit into one of four possible response types ($S$): \begin{enumerate} \item \textit{No effect} - the individual experiences the same violence regardless of whether they receive the program, i.e. $Y_i(1) = Y_i(0)$. \item \textit{Cessation} - when exposed to the program, all violence stops regardless of frequency, i.e. $Y_i(1) = 0$ for all $Y_i(0)$. \item \textit{Reduction} - when exposed to the program, violence is reduced by a fixed amount, i.e. $Y_i(1) < Y_i(0)$. \item \textit{Increase} - when exposed to the program, violence increases by a fixed amount, i.e. $Y_i(1) > Y_i(0)$. \end{enumerate} \begin{table}[t] \centering \caption{Example potential outcomes for possible response types} \label{tab:po_example} \begin{threeparttable} \renewcommand{\TPTminimum}{\linewidth} \makebox[\linewidth]{ \begin{tabular}{cccccccc} \toprule ID & type ($S$) & Z & $Y_i(1)$ & $Y_i(0)$ & $Y_i^*(1)$ & $Y_i^*(0)$ & $Y^*$ \\ \midrule 1 & & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 1 & 1 & 3 & 3 & 2 & 2 & 2 \\ 3 & 2 & 1 & 0 & 5 & 0 & 3 & 0 \\ 4 & 3 & 0 & 2 & 4 & 2 & 2 & 2 \\ 5 & 4 & 1 & 3 & 1 & 2 & 1 & 2 \\ \bottomrule \\ \end{tabular}} \begin{tablenotes}[flushleft] \item \scriptsize{\textit{Notes:} Example potential outcome values for the response types defined in section \ref{sec:po_model}. $Y$ is the true number of acts and $Y^*$ is the categorization using the WHO coding (i.e. 0 = None, 1 = Once, 2 = A few times, 3 = Many times). The first line shows an individual who experiences no violence in the absence of the program and therefore is assumed to also experience no violence when assigned to the program, $Y_i(1) = Y_i(0) = 0$. The second line is a type 1 individual for whom the program has no effect, i.e. the level of violence they experience when given the program is the same as when not, $Y_i(1) = Y_i(0) = 3$. The third line is a type 2 individual for whom violence ceases when given the program, $Y_i(1) = 0$. 
The fourth line is a type 3 individual for whom violence is reduced when given the program, $Y_i(1) = 2 < Y_i(0) = 4$. Finally, the last line is a type 4 individual for whom violence increases as a result of the program, $Y_i(1) = 3 > Y_i(0) = 1$.} \end{tablenotes} \end{threeparttable} \end{table} Table \ref{tab:po_example} shows example values of $Y_i(1)$ and $Y_i(0)$ as well as their survey-measured equivalents for each response type. The response types allow us to specify a variety of program effects in terms of their relative frequencies. For example, in a population where background violence prevalence is 40\%, among those with violence a particular program may have no effect for 50\%, lead to complete cessation for 20\%, reduction, but not cessation, for 20\%, and an increase for 10\%. In practice, we draw response types for violent couples from a multinomial distribution \[S \sim \operatorname{Multinomial}(p_s)\] where $p_s$ is a length-4 vector of relative proportions of each response type. Then, given $S$, the $Y_i(1)$ for each individual can be determined from \[Y_i(1) = \begin{cases} 0 & \text{if } Y_i(0) = 0 \\ Y_i(0) & \text{if } S = 1 \text{ and } Y_i(0) > 0 \\ 0 & \text{if } S = 2 \text{ and } Y_i(0) > 0 \\ Y_i(0) - x & \text{if } S = 3 \text{ and } Y_i(0) > 0 \\ Y_i(0) + x & \text{if } S = 4 \text{ and } Y_i(0) > 0 \end{cases} \] where $x > 0$ is the fixed amount by which violence is reduced or increased. Finally, we also allow for the possibility that programs affect different acts of violence differently. For instance, a program may lead to reductions or cessations of moderate acts only, or a consent-based program may affect sexual acts while leaving physical acts largely unchanged. \subsection{Outcome coding strategies} Analysts typically collapse responses to multiple acts into a single summary measure of violence. However, there are many possible strategies for doing so. Here, we consider two common outcome coding strategies. 
The first is based on the strategy discussed in section 1.1, which collapses the items in the CTS scale into a single binary measure representing whether the woman reported any acts of violence during the recall period. \[Y^*_{binary} = \begin{cases} 0 & \text{if all } Y^*_1 = 0, Y^*_2 = 0, \ldots, Y^*_{K} = 0 \\ 1 & \text{if any } Y^*_1 > 0 \text{ or } Y^*_2 > 0 \text{ or } \ldots \text{ or } Y^*_{K} > 0 \end{cases}\] Treatment effects based on this outcome represent the difference in the probability of reporting any violence. Depending on prior history, observed changes in this measure may consist of cessation of ongoing violence, prevention of new cases of violence, or a combination of the two. The second outcome coding strategy is a continuous measure that is still straightforward to construct, but less common: a simple sum of the $K$ items. \[Y^*_{sum} = Y^*_1 + Y^*_2 + \ldots + Y^*_{K}\] Treatment effects based on this outcome represent differences in the number of acts of violence if the true number of acts is recorded, but are a little more difficult to interpret if the CTS categories are used, i.e. 0 = ``Never'', 1 = ``Once'', 2 = ``A few times'', 3 = ``Many times''. This coding treats all acts as essentially the same (regardless of severity), but can reflect greater gradation in the amount of violence reported. In simulations, we divide $Y^*_{sum}$ by the total possible score to normalize values between 0 and 1 and to make variance comparisons easier, as both $Y^*_{binary}$ and $Y^*_{sum}$ are then on the same scale.
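To make the two codings concrete, here is a minimal illustrative sketch in Python (the analyses in this paper are written in R); the function names and the 7-item example response are hypothetical, and act-level responses are assumed to be CTS category codes in $\{0,1,2,3\}$.

```python
# Illustrative sketch (Python; the paper's analyses use R).
# Act-level responses are assumed coded 0-3 as in the CTS items.

def code_binary(acts):
    """1 if any act of violence is reported, 0 otherwise."""
    return int(any(a > 0 for a in acts))

def code_sum(acts, max_category=3):
    """Sum of the K category codes, normalized by the maximum
    possible score so that, like the binary coding, it lies in [0, 1]."""
    return sum(acts) / (max_category * len(acts))

acts = [0, 2, 0, 3, 0, 0, 1]  # hypothetical 7-item response
y_binary = code_binary(acts)  # 1: some violence reported
y_sum = code_sum(acts)        # 6/21: the normalized sum of categories
```

The normalization by the maximum score is what puts the two measures on a common $[0,1]$ scale for the variance comparisons below.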
\subsection{Estimation} For both continuous and binary outcomes, we use a least squares regression\footnote{We could use a nonlinear model such as logit or probit for the binary outcome or a Poisson or negative binomial for the continuous one; however, we do not because (1) we are principally concerned with coding rather than estimation, (2) least squares with robust SEs is still unbiased, and (3) the difference in average partial effects is often quite small.} estimator to estimate the average treatment effect of the form \[Y_i = \alpha + \tau Z_i + \varepsilon_i\] where $Z_i$ is the random assignment indicator and $\tau$ is the average treatment effect. We calculate robust standard errors using the ``HC2'' formulation from the \texttt{estimatr} package in R. \section{Simulation} \label{sec:data} To explore the effects of outcome coding choice on estimation, we conduct a series of finite sample Monte Carlo simulations using the \href{https://declaredesign.org}{DeclareDesign} package in R. We generate potential outcome data according to the model defined above and set values of $\mathbf{\lambda}$, $\mathbf{\phi}$, and $\mathbf{\theta}$ based on empirical estimates or draw them directly from the empirical distributions. In all simulations we generate a full set of potential outcomes for a sample of size $N$, randomize half to a hypothetical program and half to control, apply outcome coding choices, and estimate program effects. We repeat this 1000 times and calculate the following performance statistics: \begin{itemize} \item \textit{Bias}: average difference between the estimate $\hat{\tau}_m$ in each simulation and the true value $\tau$. \[\frac{1}{M} \sum_{m=1}^M(\hat{\tau}_m - \tau)\] \item \textit{Root-mean-square error (RMSE)}: square root of the average squared distance between the estimate $\hat{\tau}_m$ in each simulation and the true value $\tau$.
\[\sqrt{\frac{1}{M}\sum_{m=1}^M(\hat{\tau}_m - \tau)^2}\] \item \textit{Power}: in a null hypothesis significance testing framework, the probability of correctly rejecting the null when the null is false. \item \textit{Coverage}: the proportion of simulations in which the confidence interval for $\hat{\tau}_m$ contains the true value $\tau$. \end{itemize} We conduct a series of experiments in which we vary the treatment effect structures and then compare different outcome coding choices under consistent estimation strategies. Building on our potential outcomes model in section \ref{sec:po_model}, we consider four possible violence reduction scenarios consisting of different assumed proportions of response types: (1) \textit{cessation only} - violence ceases for 30\% of individuals and there is no effect for the remaining 70\%, (2) \textit{cessation + reduction} - violence ceases for 10\% of individuals, is reduced but not ceased for 20\%, and 70\% are unaffected, (3) \textit{reduction only} - violence reduces, but does not cease, for 30\% of individuals and 70\% are unaffected, (4) \textit{cessation + reduction + increase} - violence ceases for 10\%, reduces, but does not cease, for 15\%, increases for 5\%, and 70\% are unaffected. In all scenarios we assume 70\% of individuals are unaffected, which may seem high, but consider that most trials are powered to detect smaller effects than this\footnote{For example, in the cessation-only scenario, the 30\% affected translates to a risk ratio of 0.7 for the binary measure.}. Finally, we also vary whether reductions affect all acts equally or only a subset of acts. \input{tables/table1} Table \ref{tab:b1_sims} shows the results for simulations on the multiple act model in section \ref{sec:multi_act} based on the empirical distribution of the Becoming One trial in Uganda.
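The simulation loop and the four performance statistics can be sketched compactly. The following illustrative Python program is a deliberately stripped-down stand-in for our DeclareDesign/R implementation: it simulates a cessation-only effect on the binary ``any violence'' outcome and computes bias, RMSE, power, and coverage; all parameter values (prevalence 40\%, cessation share 30\%, $N = 1000$, 500 replicates) are hypothetical.

```python
import random
import statistics

def simulate_once(rng, n=1000, prev=0.4, p_cess=0.3):
    """One hypothetical trial: binary 'any violence' outcome with a
    cessation-only effect (a share p_cess of violent couples stop entirely)."""
    treat, ctrl = [], []
    for _ in range(n):
        y0 = 1 if rng.random() < prev else 0                   # untreated outcome
        y1 = 0 if (y0 == 1 and rng.random() < p_cess) else y0  # cessation for some
        if rng.random() < 0.5:                                 # randomize 1:1
            treat.append(y1)
        else:
            ctrl.append(y0)
    diff = statistics.mean(treat) - statistics.mean(ctrl)
    se = (statistics.pvariance(treat) / len(treat)
          + statistics.pvariance(ctrl) / len(ctrl)) ** 0.5
    return diff, se

rng = random.Random(1)
tau = -0.4 * 0.3  # true ATE on the binary scale: prevalence x cessation share
sims = [simulate_once(rng) for _ in range(500)]

bias = statistics.mean(d - tau for d, s in sims)
rmse = statistics.mean((d - tau) ** 2 for d, s in sims) ** 0.5
power = statistics.mean(abs(d / s) > 1.96 for d, s in sims)
coverage = statistics.mean(d - 1.96 * s <= tau <= d + 1.96 * s for d, s in sims)
```

The difference in means between randomized arms is unbiased for the ATE, so the bias should be near zero, coverage near the nominal 95\%, and power high for this large a cessation share.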
Both the sum and binary measures are unbiased and demonstrate good coverage for their respective estimands across all scenarios when we use the CTS coding as ``truth'' and ignore the latent true number of acts. If we do consider the latent number of acts, then bias and poor coverage are possible. The RMSE is also always lower for the continuous measure as compared to the binary (after dividing by 30 to convert to a similar range), reflecting less variability in estimates from simulation to simulation. For power, the results are more mixed: the binary measure is higher powered when cessation (or no effect) is the only possible effect of the program. This makes intuitive sense, as the extra information provided by the continuous measure is irrelevant if effects are all or nothing. When there is some portion of the sample for whom violence is reduced, but does not cease, and no one's violence is increased by the program, the continuous sum is better powered. When all response types are possible, either measure can be higher powered. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/sims.pdf} \caption{Simulated power differences between binary and sum outcome codings across contexts.} \label{fig:dhs_results} \end{figure} To determine whether these findings are affected by different untreated distributions of violence, we examined additional $Y(0)$ outcome distributions based on empirical estimates in a variety of settings from the Demographic and Health Surveys. We chose nine representative countries, with 12-month prevalences ranging from 5.4\% in the Philippines to 47.6\% in Papua New Guinea. Figure \ref{fig:dists} in the appendix plots the distributions for each act in each country. In addition to variability in overall prevalence, these settings reflect heterogeneity in the types of violence which predominate, with some countries having significantly more sexual violence than others or differences in the distribution of act frequencies.
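The power reversal under reduction-only effects is easy to reproduce in a toy version of the simulation: if violence is reduced but never ceases, the binary indicator is unchanged by treatment (its tests have power equal to their size), while the sum still detects the shift. A hypothetical Python sketch follows; the act-count distribution and parameters are illustrative, not the empirical Becoming One distribution.

```python
import random
import statistics

def one_trial(rng, n=2000, prev=0.4, p_reduce=0.3):
    """Hypothetical reduction-only trial: a share p_reduce of violent
    couples report two fewer acts under treatment, but none cease."""
    t_bin, c_bin, t_sum, c_sum = [], [], [], []
    for _ in range(n):
        y0 = rng.randint(2, 6) if rng.random() < prev else 0  # untreated acts
        y1 = max(1, y0 - 2) if (y0 > 0 and rng.random() < p_reduce) else y0
        if rng.random() < 0.5:
            t_bin.append(int(y1 > 0)); t_sum.append(y1)
        else:
            c_bin.append(int(y0 > 0)); c_sum.append(y0)

    def z_stat(t, c):
        d = statistics.mean(t) - statistics.mean(c)
        se = (statistics.pvariance(t) / len(t)
              + statistics.pvariance(c) / len(c)) ** 0.5
        return d / se

    return z_stat(t_bin, c_bin), z_stat(t_sum, c_sum)

rng = random.Random(7)
results = [one_trial(rng) for _ in range(300)]
power_binary = statistics.mean(abs(z_bin) > 1.96 for z_bin, z_sum in results)
power_sum = statistics.mean(abs(z_sum) > 1.96 for z_bin, z_sum in results)
```

Because `max(1, y0 - 2)` never reaches zero, the binary coding faces an exact null here, while the sum coding sees a genuine mean shift.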
Figure \ref{fig:dhs_results} plots the differences in power between the binary and continuous sum measures. Despite the heterogeneity in prevalence and distribution of untreated violence, the findings are broadly similar. When the primary action is cessation, the binary measure is generally more powerful. When there is reduction but no increase, the continuous sum is higher powered. When there is a subset for whom violence increases, results are mixed, although in these nine examples the binary is often higher powered. In appendix Table \ref{tab:dhs_sims_power}, we include additional simulation results in which effects are concentrated on only physical, only sexual, or only moderate acts. When only physical acts are affected by the program, results are largely unchanged from above. When only sexual acts are affected by the program, the continuous sum measure now dominates the binary even when cessation is the only action. This is at least partly because there are only 3 acts of sexual violence (compared to 7 for physical violence) in the CTS-based scale, and these acts are correlated with physical acts. When only moderate acts are affected by the program, the continuous measure is often, but not always, higher powered. Again, this depends on whether there is a subset of couples who only experience more moderate acts, how large this subset is relative to the rest, and how often those who experience normatively more severe acts also experience moderate ones. \section{Application} \label{sec:application} In this section, we assemble data from recent trials of anti-violence programs and examine whether outcome coding choice materially affected the interpretation of program impacts.
Seven trials contributed individual data which were re-analyzed: Bandebereho \cite{doyle_gender-transformative_2018}, Becoming One (Uganda), Indashyikirwa \cite{dunkle_effective_2020} (Rwanda), MAISHA CRT01 \cite{kapiga_social_2019} (Tanzania), MAISHA CRT02 (Tanzania), Stepping Stones \cite{jewkes_impact_2008} (South Africa), and Unite for Better Life \cite{sharma_effectiveness_2020} (Ethiopia). In most trials the binary and continuous measures lead to similar conclusions. Table \ref{tab:application} highlights notable discrepancies from two trials: Becoming One (B1) and MAISHA CRT01. In the case of the former, at the 6-month follow-up, a non-significant 2.6 percentage point reduction in the binary measure of violence was observed ($p = 0.265$). However, the reduction in the continuous sum measure of violence was significant ($p = 0.014$). Standard errors for each additionally suggested the greater precision of the continuous measure. In exploratory analyses, comparisons of the underlying distributions (Figure \ref{fig:b1_dist}) suggested that reductions, but not cessations, in the tail of the distribution drove the differences between the two measures. At endline, 12 months after the start of the program, both measures showed significant reductions. This was consistent with the hypothesis that changes in relationship dynamics took time to occur as couples engaged with the program. In the MAISHA trial, we see the opposite: a non-significant\footnote{At the conventional pre-specified $\alpha$ level of 0.05.} but precisely estimated reduction in the binary measure ($p = 0.051$) and a more imprecise, non-significant decline in the continuous sum measure. This discrepancy could be consistent with either: (a) cessation being the primary mechanism of changes in violence due to the program or (b) a small subpopulation of already violent couples for whom the program may have caused increases in violence, which is picked up by the continuous but not the binary measure.
Given the sample size, it could, of course, also simply be noise. Additional exploratory analyses can help reveal which is more likely. However, comparing measures can have important implications for how trial results are interpreted. \begin{table}[t] \centering \caption{Example discrepancies in results by outcome coding choice in recent trials \label{tab:application}} \begin{threeparttable} \begin{tabular}{lccccccccc} \toprule & & \multicolumn{4}{c}{$Y_{binary}$} & \multicolumn{4}{c}{$Y_{sum}$} \\ \cmidrule(l{3pt}r{3pt}){3-6} \cmidrule(l{3pt}r{3pt}){7-10} Trial & N & T & C & Diff & $p$-value & T & C & Diff & $p$-value \\ \midrule B1\tnote{a} & 1,680 & 35.1\% & 37.7\% & -2.6\% & 0.265 & 1.23 & 1.61 & -0.38 & 0.014 \\ MAISHA 1\tnote{b} & 919 & 23.1\% & 27.4\% & -4.3\% & 0.051 & 1.39 & 1.49 & -0.10 & 0.266 \\ \bottomrule \end{tabular} \begin{tablenotes}[para] \item[a] Midline (6 month) results from Becoming One study in Uganda. \item[b] Endline (24 month) results from MAISHA trial I in Tanzania. \\ \end{tablenotes} \end{threeparttable} \end{table} \section{Discussion} \label{sec:discussion} In this study, we developed a generative model for violence and violence reduction in trials and used it to better understand how outcome coding choice affects statistical efficiency. We compared two simple measures: a binary indicator for whether any act of violence was reported and a continuous sum of act frequency categories. We find that neither measure strictly dominates: there are settings where the binary measure may be preferred, but the continuous measure was higher powered in the majority of scenarios considered. We therefore recommend that trialists report both measures when possible in order to better facilitate interpretation, particularly in circumstances such as those highlighted in section \ref{sec:application} where there are discrepancies.
We also encourage trialists to consider more detailed power simulations, such as those conducted here, when planning future trials. To make them as relevant as possible, these can be based on empirical distributions from previously collected data or from baseline. To assist in this, we have made our code freely available\footnote{\href{https://github.com/boyercb/ipv-measurement}{https://github.com/boyercb/ipv-measurement}} to practitioners. Our main finding contrasts somewhat with the literature on ``dichotomania'' in the medical sciences \cite{senn_dichotomania_2005, senn_measurement_2009}, which holds that dichotomization of trial outcomes is often a scourge to be avoided, principally because dichotomous outcomes are less efficient. This is often based on theory which shows that the dichotomization of a single normally distributed outcome leads to a loss of efficiency. However, the violence setting is unique. As developed in section \ref{sec:theory}, there are clear theoretical reasons why the effect of the program itself may be dichotomous for a subset or even a majority of people. Further, heterogeneity in effects may also lead to sub-populations for whom the program increases violence. This may be particularly true in settings where backlash is possible. Researchers may have reasons beyond efficiency for preferring one measure over another. In some settings, violence cessation may be the only substantively interesting or meaningful effect. In other settings, one measure may be perceived as more reliable than another. In this study, for clarity, we chose to focus on only a small subset of the trade-offs that trialists must make. However, our recommendation is that, when possible, trialists should report multiple measures, as these can still aid in the interpretation of findings. Our study contributes to a larger literature about the measurement of violence both within the context of randomized trials as well as more broadly.
Several authors have focused on the notable limitation that most violence is self-reported \cite{cullen_method_2020, park_private_2021, peterman_list_2018,stark_disclosure_2017,gibson_measuring_2022}, which is a particular concern in non-blinded randomized trials where certain incentives may drive differential reporting. Some have proposed alternative strategies like self-administration, list-randomization, or randomized response that confer a greater degree of anonymity. The present paper sidesteps this concern by focusing instead on the statistical properties of outcome coding choices. While we use the standard self-reported modules as the basis for our model, our results are agnostic to how violence is measured and could equally be applied to violence as assessed via different means. However, as previous commentators have noted, several of these anonymized alternatives rely on asymptotic comparisons or marginal differences in responses and thereby sacrifice a considerable amount of statistical efficiency. An additional value of our study is that our ``first-principles'' approach can be applied to assist in other pertinent measurement questions in violence research. For instance, for some interventions to reduce violence there is a theoretical question as to whether ``backlash'' \cite{chin_male_2012} might ensue for some couples. Our approach makes it easy to simulate different ``backlash'' structures in a variety of settings and can assist in figuring out the optimal statistical procedure for estimating ``backlash''-type effects when they do exist.
\clearpage \singlespacing \printbibliography \clearpage \section*{Appendix A.} \label{sec:appendixa} \addcontentsline{toc}{section}{Appendix A} \setcounter{figure}{0} \renewcommand\thefigure{A\arabic{figure}} \setcounter{table}{0} \renewcommand\thetable{A\arabic{table}} \begin{figure}[p] \centering \includegraphics[width=\textwidth]{figures/dhs_pmfs.pdf} \caption{Empirical distributions of violent acts from select Demographic and Health Surveys.} \label{fig:dists} \end{figure} \begin{landscape} \input{tables/tablea1} \end{landscape} \begin{figure}[p] \centering \includegraphics[width=\textwidth]{figures/b1_dist.pdf} \caption{Differences in distribution of continuous sum violence measure at midline in the Becoming One trial. } \label{fig:b1_dist} \end{figure} \end{document}
\section{Introduction}\label{Introduction} \subsection{Coboundaries} Let $T$ be a bounded linear operator acting on a complex Banach space $\mathcal X$. An element $x$ of $\mathcal X$ is called a \emph{coboundary for} $T$ if there is $y\in \mathcal X$ such that $x = y - Ty$. Coboundaries are related to the behavior of the \emph{ergodic sums} $$ S_n(T) x := x + Tx + \dots + T^{n-1}x, \quad n \ge 1.$$ A variant of the mean ergodic theorem for power bounded operators on reflexive Banach spaces has been proved by von Neumann for Hilbert spaces and by Lorch in the general case~; see for instance \cite{Krengel}. Recall that $T$ is said to be \emph{power bounded} if $\sup_{n\ge 1} \|T^n\| < \infty$. We have $$ \mathcal X = \left\{ x \in\mathcal X : \lim_{n\to\infty} \frac{1}{n} S_n(T) x \text{ exists} \right\} = \{ y \in \mathcal X : Ty = y \} \oplus \overline{(I-T)\mathcal X}.$$ In particular, as a consequence of this ergodic decomposition, we have $$ x\in \overline{(I-T)\mathcal X} \quad \Leftrightarrow \quad \lim_{n\to\infty} \frac{1}{n} S_n(T) x = 0.$$ One can say more about the rate of convergence of $(1/n)S_n(T) x$ to zero when $x$ is a coboundary. Indeed, when there exists a solution $y$ of the equation $y - Ty = x$, the ergodic sums satisfy $S_n(T) x = y - T^ny$. It follows that $(S_n(T)x)_{n\in\mathbb{N}}$ is bounded. Therefore \begin{equation} \label{eq:11} \left\|\frac{1}{n} S_n(T) x\right\| = O \left(\frac{1}{n}\right). \end{equation} This rate of convergence to zero, namely $O(1/n)$, characterizes coboundaries of power bounded operators on reflexive spaces. Indeed, the converse result (whenever $T$ is power bounded and $\mathcal X$ is reflexive, an element $x$ satisfying \eqref{eq:11} is a coboundary for $T$) has been proved by Browder \cite{Browder} and rediscovered by Butzer and Westphal \cite{BW}. 
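The telescoping identity $S_n(T)x = y - T^ny$ behind \eqref{eq:11} is easy to check numerically. As an illustrative sketch (not part of the paper's arguments), take for $T$ a plane rotation, a unitary and hence power bounded operator, and for $x$ the coboundary $(I-T)y$:

```python
import math

# Illustrative check: T = rotation by an angle on R^2 (a unitary operator),
# and x = (I - T)y a coboundary; then S_n(T)x = y - T^n y stays bounded.
theta = 0.7

def T(v):
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1])

def norm(v):
    return math.hypot(v[0], v[1])

y = (1.0, 2.0)
x = sub(y, T(y))                    # x = (I - T) y

# S_n(T)x = x + Tx + ... + T^{n-1}x telescopes to y - T^n y
n = 1000
s, Tkx, Tny = (0.0, 0.0), x, y
for _ in range(n):
    s = (s[0] + Tkx[0], s[1] + Tkx[1])
    Tkx = T(Tkx)
    Tny = T(Tny)

telescope_gap = norm(sub(s, sub(y, Tny)))  # should vanish up to rounding
cesaro = norm(s) / n                       # O(1/n), since ||S_n x|| <= 2||y||
```

The ergodic sums stay bounded by $2\|y\|$, so the Cesàro averages decay at the rate $O(1/n)$ of \eqref{eq:11}.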
We also note (see for instance \cite{Gomilko}, \cite{BadeaMuller}, \cite{BadeaGrivauxMuller} and the references therein) that if $(I-T) \mathcal X$ is not closed, then for every sequence $(a_n)_{n\ge 1}$ of positive real numbers converging to zero, there exists $x \in \overline{(I-T)\mathcal X} \backslash (I-T)\mathcal X$ such that $$ \left\|\frac{1}{n} S_n(T) x\right\| \ge a_n, \quad \forall n \ge 1.$$ In particular, there is no general rate of convergence in the mean ergodic theorem outside coboundaries. \subsection{Rochberg's theorem.} Browder's theorem has been extended to the case where $T$ is a dual operator on a dual Banach space by Lin \cite{Lin}~; see also Lin and Sine \cite{LinSine}. We refer the reader to the introduction of \cite{CohenLin}, and the references cited therein, for the history of Browder's theorem and for other extensions and generalizations. We mention here only two references, namely \cite{Robinson} and \cite{Kozma}, dealing with the Hilbert space situation. None of these Hilbert or Banach space abstract characterizations is strong enough to yield as consequences the classical results of Fortet and Kac \cite{Fortet, Kac}, who dealt with the case $\mathcal X = L^2(0,1)$ and $Sf(x) = f(2x)$. This operator $S$ is the Koopman operator associated with the doubling map on the torus~; see the last section of this manuscript for more information about coboundaries of $S$. This situation has been remedied by Rochberg \cite{Rochberg}, who showed that a condition of $o(\sqrt{n})$ growth of the ergodic sums at $x$ is sufficient to ensure that $x$ is a coboundary for a unilateral shift on Hilbert space. Notice that the Koopman operator $S$ acts as a unilateral shift on the subspace of $L^2 (0,1)$ of functions whose zeroth Fourier coefficient vanishes. \smallskip We need the following classical definition in order to state Rochberg's abstract coboundary theorem. \begin{definition} Let $T$ be an isometry acting on a Hilbert space $\mathcal H$.
A closed subspace $\mathcal K$ of $\mathcal H$ is called \emph{wandering} for $T$ whenever $$ T^p\mathcal K \perp T^q \mathcal K \quad \text{for} \quad p,q \in \mathbb{N}, p\ne q.$$ The isometry $T$ is called a \emph{(unilateral) shift} if $\mathcal H$ possesses a closed subspace $\mathcal K$, wandering for $T$ and such that $$ \bigoplus_{n=0}^\infty T^n \mathcal K = \mathcal H.$$ \end{definition} \begin{thm}[\cite{Rochberg}] Let $S$ be a shift and let $f$ be an element of $\mathcal H$. Using the notation of the preceding definition, we denote by $f_j$ the projection of $f$ onto the closed subspace $S^j \mathcal K$. Suppose that there exists $\beta > 0$ such that $$ \| f_j\| = O (2^{-\beta j}).$$ Then there exists $g$ in $\mathcal H$ such that $(I-S)g = f$ if and only if $$ \lim_{n\to \infty} \frac{1}{n} \left\|\sum_{k= 0}^n S^k f \right\|^2 = 0.$$ \end{thm} \begin{remarque} The condition $$ \| f_j\| = O (2^{-\beta j})$$ is of course dependent on the decomposition of $\mathcal H$ associated with the unilateral shift $S$. It implies $\| S^{* j} f \| = O (2^{-\beta j})$. \end{remarque} \subsection{Statement of the main results.} In the next theorem the unilateral shift $S$ is replaced by an arbitrary isometry $T$ and the growth of the norm of the projection $f_j$ by the convergence of the series $\sum_{j=0}^\infty j\| T^{* j} f \|$. The statement of the result does not depend on the Wold decomposition, at least not in an explicit way. For the convenience of the reader, the Wold decomposition theorem is recalled below. Theorem~\ref{t4.6} implies Rochberg's theorem and allows us to recover Kac's results about the coboundaries of the Koopman operator of the doubling map. \begin{thm}\label{t4.6} Let $T$ be an isometry acting on a Hilbert space $\mathcal H$ and let $x \in \mathcal H$. Suppose that \begin{equation} \label{eq:summab} \sum_{k=0}^\infty k \| T^{*k} x \| < \infty.
\end{equation} Then there exists $y \in \mathcal H$ such that $x = (I-T)y$ if and only if $$ \lim_{n\to \infty} \frac{1}{n} \left\|\sum_{k= 0}^n T^k x \right\|^2 = 0.$$ \end{thm} Note however that the condition \eqref{eq:summab} implies that $x$ is necessarily an element of the shift part of the isometry $T$. Considering coboundaries of adjoints of isometries, we notice that the identity $I-T = (T^*-I)T$ shows that every coboundary of the isometry $T$ is also a coboundary for its adjoint $T^*$. It follows from \cite[Proposition 4.3]{DerLin} that when the isometry $T$ is not invertible (\emph{i.e.}, not a unitary operator), there are coboundaries for $T^*$ which are not coboundaries for $T$. The following result, more general than Theorem~\ref{t4.6}, is about coboundaries of contractions (operators of norm no greater than one). \begin{thm}\label{t4.7} Let $T$ be a linear operator acting on a Hilbert space $\mathcal H$ with $\|T\| \le 1$. Let $x \in \mathcal H$ and denote $S_n(T) x := x + Tx + \dots + T^{n-1}x$. Suppose that \eqref{eq:summab} holds, as well as \begin{equation} \label{eq:osqrt} \|S_n(T)x\| = o(\sqrt{n}), \quad n\to \infty \end{equation} and \begin{equation} \label{eq:withD} \sum_{k=1}^{n} \left(\|S_k(T)x\|^2 - \|TS_k(T)x\|^2 \right) = o(n), \quad n\to \infty . \end{equation} Then there exists $y \in \mathcal H$ such that $x = (I-T)y$. In addition, $y$ can be chosen such that $\|Ty\| = \|y\|$. \end{thm} We obtain the following consequence. \begin{cor}\label{t4.8} Let $T$ be a linear operator acting on a Hilbert space $\mathcal H$ with $\|T\| \le 1$. Let $x \in \mathcal H$ and denote $S_n(T) x := x + Tx + \dots + T^{n-1}x$. Suppose that \eqref{eq:summab} and \eqref{eq:osqrt} hold, as well as \begin{equation} \label{eq:kron} \sum_{k=1}^{\infty} \frac{\left(\|S_k(T)x\|^2 - \|TS_k(T)x\|^2 \right)}{k} < \infty. \end{equation} Then there exists $y \in \mathcal H$ such that $x = (I-T)y$ and $\|Ty\| = \|y\|$. \end{cor} Some remarks are in order. 
Theorem~\ref{t4.7} and its consequence Corollary~\ref{t4.8} show that the coboundary equation can be solved within the \emph{maximal isometric subspace} $$M = \{x\in \mathcal H : \|T^nx\| = \|x\| \text{ for every }n\ge0\}.$$ We refer to \cite{Nagy} and \cite{levan} for the canonical decomposition of a contraction into the maximal isometric subspace and its orthogonal. Conditions \eqref{eq:withD} and \eqref{eq:kron} are easily verified when $T$ is an isometry. The conditions \eqref{eq:summab} and \eqref{eq:osqrt} are always satisfied when $\|T\| < 1$~; however \eqref{eq:kron} is not, unless $x=0$. In fact, $\|Ty\| = \|y\|$ and $\|T\| < 1$ imply that $y=0$ and thus $x=0$. Of course, as $(I-T)$ is invertible when $\|T\| < 1$ by Carl Neumann's lemma, the coboundary equation $x = (I-T)y$ is always solvable in this case. \subsection{Outline of the paper.} A proof of Theorem~\ref{t4.6} is given in the next section. The more general Theorem~\ref{t4.7} and its consequence Corollary~\ref{t4.8} are proved in Section 3. Some applications to the functional equation $g(x) - g(2x) = f(x)$ are presented in Section 4. The last section collects the acknowledgments, a dedication statement and (imposed) conflict of interest and data availability statements. \section{Proof of Theorem \ref{t4.6}} We first recall the Wold decomposition theorem (see \cite[Chapter 1]{Nagy}). \begin{thm}[Wold decomposition] Let $T$ be an isometry on a Hilbert space $\mathcal H$. Then $\mathcal H$ decomposes as an orthogonal sum $\mathcal H = \mathcal H_0 \oplus \mathcal H_1$ such that $\mathcal H_0$ and $\mathcal H_1$ are reducing for $T$, the restriction of $T$ to $\mathcal H_0$ is a unitary operator and the restriction of $T$ to $\mathcal H_1$ is a unilateral shift (one of the subspaces may reduce to $\{0\}$).
This decomposition is unique~; in particular, we have $$ \mathcal H_0 = \bigcap_{n=0}^\infty T^n \mathcal H \quad \text{ and }\quad \mathcal H_1 = \bigoplus_{n=0}^\infty T^n \mathcal K \quad \text{, where } \quad \mathcal K = \mathcal H \ominus T\mathcal H.$$ \end{thm} \begin{proof}[Proof of Theorem \ref{t4.6}] If $x = (I-T) y$, then $\sum_{k= 0}^n T^k x = y - T^{n+1}y$. Therefore $\sum_{k= 0}^n T^k x$ is bounded since the isometry $T$ is power bounded. In particular, $$ \lim_{n\to \infty} \frac{1}{n} \left\|\sum_{k= 0}^n T^k x \right\|^2 = 0.$$ Suppose now that $$ \lim_{n\to \infty} \frac{1}{n} \left\|\sum_{k= 0}^n T^k x \right\|^2 = 0.$$ We want to show the existence of a solution $y$ of the equation $(I-T)y = x$. Let $\mathcal H = \mathcal H_0 \oplus \mathcal H_1$ be the Wold decomposition associated with $T$. We notice that $x \in \mathcal H_1$. Indeed, if $x = x_0 + x_1$ according to the Wold decomposition of $\mathcal H$, then $$ \lim_{k\to \infty} \| T^{*k} x_1 \| = 0 \quad \text{and}\quad \| T^{*n} x_0 \| = \|x_0\|, \quad \forall n \in \mathbb{N}.$$ Therefore $$\lim_{n\to \infty} \| T^{*n} x \| = \|x_0\|.$$ On the other hand, it follows from \eqref{eq:summab} that $$\lim_{n\to \infty} \| T^{*n} x \| = 0.$$ We obtain that $x \in \mathcal H_1$. In particular, if $\mathcal H_1$ is reduced to $\{0\}$, then $x = 0 = (I-T) 0$. Therefore, without loss of generality, we can assume that $T$ is a shift. For each $n \in \mathbb{N}$, we denote by $P_n$ the projection onto the subspace $T^n \mathcal K$. For $u \in \mathcal H$, we set $u_n:=P_n(u)$, $u^n: = \sum_{j=0}^n u_j$ and $R_n := u-u^n$. Suppose that $y$ is a solution of the equation $(I-T)y = x$.
We first obtain, by projecting to $T^k \mathcal K$ for each $k \in \mathbb{N}$, the following system of equations : $$\begin{cases} x_0 = y_0 \\ x_1 = y_1- T y_0 \\ \vdots \\ x_k = y_k - T y_{k-1} \\ \vdots \end{cases}$$ We then obtain $$\begin{cases} y_0 = x_0 \\ y_1 = x_1 + T y_0 = x_1 + T x_0\\ \vdots \\ y_k = x_k + T y_{k-1} = x_k + T x_{k-1} + \dots + T^{k-1}x_1 + T^k x_0 \\ \vdots \end{cases}$$ Consider now, for each $r \in \mathbb{N}$, the element $$ y_r = \sum_{k=0}^r T^k x_{r-k} \in T^r \mathcal K.$$ We will prove that $\sum_{r=0}^\infty \|y_r\|^2$ is convergent, thus showing that $y = \sum_{r=0}^\infty y_r$ is well defined in $\mathcal H$. In that case, for every $r\in \mathbb{N}$, we have \begin{align*} P_r \big( (I-T) y \big) & = y_r - T y_{r-1} \\ & = \sum_{j=0}^r T^j x_{r-j} - \sum_{j=0}^{r-1} T^{j+1} x_{r-1-j} \\ & = x_r. \end{align*} This shows that $(I-T)y = x$. To prove that $\sum_{r=0}^\infty \|y_r\|^2$ is finite, we need two more results. \begin{lem}\label{l4.4} Let $u\in \mathcal H$ be such that $\sum_{j\ge 0} \|T^{*j}u \| < +\infty$. Then $$\lim_{n\to \infty} \frac{1}{n} \norm{\sum_{k=0}^n T^k u }^2 = \|u\|^2 + 2 Re \sum_{k=1}^\infty \langle u ; T^k u \rangle.$$ \end{lem} \begin{proof} We first notice that the sum $\sum_{k=1}^\infty \langle u ; T^k u \rangle$ is absolutely convergent since $( \|T^{*j}u \|)_{j\ge 0}$ is summable. For each $n \in \mathbb{N}^*$, we have \begin{align*} \frac{1}{n} \norm{ \sum_{k=0}^n T^k u }^2 & = \frac{1}{n} \left( \sum_{i=0}^n \|T^i u\|^2 + 2 Re \left( \sum_{0 \le i <j \le n } \langle T^i u ; T^j u \rangle \right) \right) \\ & = \frac{1}{n} \left( \sum_{i=0}^n \|u\|^2 + 2 Re \left( \sum_{0 \le i <j \le n } \langle u ; T^{j-i} u \rangle \right) \right) \\ & = \frac{n+1}{n} \|u\|^2 + \frac{2}{n} Re \left( \sum_{r=1}^n (n-r+1)\langle u ; T^r u \rangle \right) \\ & = \frac{n+1}{n} \|u\|^2 + 2 Re \left( \sum_{r=1}^n \langle u ; T^r u \rangle - \frac{1}{n} \sum_{r=1}^n (r-1)\langle u ; T^r u \rangle \right). 
\end{align*} On the other hand, we have $$ \left| \frac{1}{n} \sum_{r=1}^n (r-1)\langle u ; T^r u \rangle\right| \le \frac{1}{n} \|u\| \sum_{r=1}^n (r-1) \|T^{*r} u \|.$$ Using again the summability of the sequence $(\|T^{*j}u \|)_{j\ge 0}$ and Kronecker's lemma (see for instance \cite[Lemma IV.3.2]{Shi}), we get $$ \frac{1}{n} \sum_{r=1}^n (r-1)\langle u ; T^r u \rangle \tend{n\to \infty} 0.$$ As the series $\sum_{k\ge1} \langle u ; T^k u \rangle$ is convergent, we obtain $$\lim_{n\to \infty} \frac{1}{n} \norm{\sum_{k=0}^n T^k u }^2 = \|u\|^2 + 2 Re \sum_{k=1}^\infty \langle u ; T^k u \rangle.$$ \end{proof} \begin{lem}\label{l4.5} Let $u \in \mathcal H$. For every $r \in \mathbb{N}$ we have $$ \norm{ \sum_{j=0}^r T^j u_{r-j} }^2 = \lim_{n\to \infty} \frac{1}{n} \norm{ \sum_{j=0}^n T^j u^r}^2.$$ \end{lem} \begin{proof} Let $n \ge r$. For $k \in \mathbb{N}$ we have $$ P_k \left( \sum_{j=0}^n T^j u^r \right) = \begin{cases} \sum_{j=0}^k T^j u_{k-j} \quad & \text{if}\quad 0 \le k < r, \\ \sum_{j=0}^r T^j u_{k-j} \quad & \text{if}\quad r \le k \le n, \\ \sum_{j=k-n}^r T^j u_{k-j} \quad & \text{if}\quad n < k \le n+r, \\ 0 & \text{if} \quad k > n+r.
\end{cases}$$ Using the decomposition of $\mathcal H$ as $\mathcal H = \bigoplus_{n=0}^\infty T^n \mathcal K$, we obtain $$\frac{1}{n} \norm{ \sum_{j=0}^n T^j u^r}^2 = \frac{1}{n} \sum_{k = 0}^{r-1} \left\| \sum_{j=0}^k T^j u_{k-j} \right\|^2 + \frac{1}{n} \sum_{k = r}^{n} \norm{ \sum_{j=0}^r T^j u_{k-j} }^2 + \frac{1}{n} \sum_{k = n+1}^{n+r} \norm{\sum_{j=k-n}^r T^j u_{k-j}}^2.$$ We have $$ \frac{1}{n} \left( \sum_{k = 0}^{r-1} \left\| \sum_{j=0}^k T^j u_{k-j} \right\|^2 \right) \tend{n\to \infty} 0$$ and $$\frac{1}{n} \sum_{k = n+1}^{n+r} \norm{\sum_{j=k-n}^r T^j u_{k-j}}^2 = \frac{1}{n} \left( \sum_{k = 1}^{r} \norm{\sum_{j=k}^r T^j u_{k-j}}^2 \right) \tend{n\to \infty} 0,$$ as well as $$\frac{1}{n} \sum_{k = r}^{n} \norm{ \sum_{j=0}^r T^j u_{k-j} }^2 = \frac{n-r+1}{n} \norm{ \sum_{j=0}^r T^j u_{r-j} }^2 \tend{n\to\infty} \norm{ \sum_{j=0}^r T^j u_{r-j} }^2.$$ We thus obtain $$ \lim_{n\to \infty} \frac{1}{n} \norm{ \sum_{j=0}^n T^j u^r}^2 = \norm{ \sum_{j=0}^r T^j u_{r-j} }^2 .$$ \end{proof} We finally show that $\sum_{r \ge 0} \|y_r\|^2 < \infty$. Using Lemma \ref{l4.5}, we have for each $r \in \mathbb{N}$, $$ \|y_r\|^2 = \norm{ \sum_{i=0}^r T^i x_{r-i} }^2 = \lim_{n\to \infty} \frac{1}{n} \norm{ \sum_{i=0}^n T^i x^r}^2.$$ Using the parallelogram identity for the vectors $x^r + R_r = x$, we get $$ \frac{2}{n} \norm{ \sum_{i=0}^n T^i x^r}^2 = \frac{1}{n} \norm{ \sum_{i=0}^n T^i x}^2 + \frac{1}{n} \norm{ \sum_{i=0}^n T^i (x^r - R_r)}^2 \\ - \frac{2}{n} \norm{ \sum_{i=0}^n T^i R_r}^2. $$ Now let $n$ tend to infinity.
Using Lemma \ref{l4.4} for $R_r$ and $x^r - R_r$, and the hypothesis $\frac{1}{n} \norm{\sum_{k=0}^n T^k x }^2 \tend{n\to\infty} 0$, we obtain \begin{align*} 2 \|y_r\|^2 & = \lim_{n\to \infty} \frac{2}{n} \norm{ \sum_{i=0}^n T^i x^r}^2 \\ & = \lim_{n\to \infty} \frac{1}{n} \norm{ \sum_{i=0}^n T^i (x^r - R_r)}^2 - 2 \lim_{n \to \infty} \frac{1}{n} \norm{ \sum_{i=0}^n T^i R_r}^2\\ & = \| x^r - R_r \|^2 - 2 \|R_r\|^2 \\ & \quad + 2 Re \sum_{k=1}^\infty \Big( \langle x^r - R_r ; T^k(x^r - R_r) \rangle - 2 \langle R_r ; T^k R_r \rangle \Big) \\ & = \|x^r\|^2 - \|R_r\|^2 + 2 Re \sum_{k=1}^\infty \Big( \langle x^r ; T^k x^r \rangle - \langle x^r ; T^k R_r \rangle \\ & \quad - \langle R_r ; T^k x^r \rangle - \langle R_r ; T^k R_r \rangle \Big) \\ & = \|x^r\|^2 - \|R_r\|^2 + 2 Re \sum_{k=1}^\infty \langle x^r ; T^k x^r \rangle - 2 Re \sum_{k=1}^\infty \langle R_r ; T^k x \rangle. \end{align*} Using now Lemma~\ref{l4.4} applied to $x^r$ and Lemma~\ref{l4.5}, we get \begin{align*} \|x^r\|^2 + 2 Re \sum_{k=1}^\infty \langle x^r ; T^k x^r\rangle & = \lim_{n\to \infty} \frac{1}{n} \norm{\sum_{i=0}^n T^i x^r}^2 \\ & = \|y_r\|^2. \end{align*} We can infer that $$ 2 \|y_r\|^2 = \|y_r\|^2 - \|R_r\|^2 - 2 Re \sum_{k=1}^\infty \langle R_r ; T^k x\rangle,$$ so $$ \|y_r\|^2 = - \|R_r\|^2 - 2 Re \sum_{k=1}^\infty \langle R_r ; T^k x\rangle .$$ For each fixed $r$ we have $R_r = T^{r+1} T^{*(r+1)}x$. Thus $ \|R_r\| = \| T^{*(r+1)}x\|$. As $$\sum_{j=1}^{+\infty} j \|T^{*j}x\|< +\infty ,$$ we obtain that $(\|R_r\|^2)_r$ is summable.
It suffices to show that $$\sum_{r=0}^\infty \abs{\sum_{k=1}^\infty \langle R_r ; T^k x \rangle } < \infty.$$ We have \begin{align*} \abs{ \sum_{k=1}^\infty \langle R_r ; T^k x \rangle } & = \abs{ \sum_{k=1}^r \langle R_r ; T^k x \rangle + \sum_{k = r+1}^\infty \langle R_r ; T^k x \rangle } \\ & = \abs{ \sum_{k=1}^r \langle R_r ; T^k x \rangle + \sum_{k=r+1}^\infty \langle R_k ; T^k x \rangle } \\ & \le \sum_{k= 1}^r \|R_r\| \|T^k x\| + \sum_{k=r+1}^\infty \|R_k\| \| T^k x\| \\ & \le \|x\| \left( r \|R_r\| + \sum_{k=r+1}^\infty \|R_k\| \right) \\ & \le \|x\| \left( r \|T^{*(r+1)}x\| + \sum_{k=r+1}^\infty \|T^{*(k+1)}x\| \right). \end{align*} Using again the summability of $(r \|T^{*r}x\|)_r$, we get $$ \sum_{r=0}^\infty \sum_{k=r+1}^\infty \|T^{*k}x\| = \sum_{k=0}^\infty k \|T^{*k}x\| < \infty.$$ Therefore $\sum_{r=0}^\infty \|y_r\|^2 <\infty.$ \end{proof} \section{The case of contractions} We now prove Theorem~\ref{t4.7} and its consequence Corollary~\ref{t4.8}. \begin{proof}[Proof of Theorem \ref{t4.7}] Let $D$ denote the defect operator $D = (I-T^*T)^{1/2}$, which is well defined since $T$ is a contraction. As $$ \|Tx\|^2 + \|Dx\|^2 = \scal{T^*Tx}{x} + \scal{(I-T^*T)x}{x} = \|x\|^2,$$ the operator $R : \ell^2(\mathcal H) \longrightarrow \ell^2(\mathcal H)$ given by $$ R(x_0, x_1, x_2, \cdots) = (Tx_0, Dx_0, x_1, x_2, \cdots)$$ and with matrix representation \begin{equation} R = \begin{bmatrix} T & & \\ D & & \\ & I & \\ & & I \\ & & & \ddots \end{bmatrix}, \end{equation} is an isometry. We can thus apply Theorem~\ref{t4.6} to $R$. The iterates of $R$ are given by $$ R^k(x_0, x_1, x_2, \cdots) = (T^kx_0, DT^{k-1}x_0, DT^{k-2}x_0,\cdots ,DTx_0, Dx_0, x_1, x_2, \cdots)$$ while their adjoints are given by $$ R^{*k}(x_0, x_1, x_2, \cdots) = (T^{*k}x_0 + T^{*(k-1)}Dx_1 + \cdots + T^*Dx_{k-1} + Dx_k, x_{k+1}, x_{k+2}, \cdots).$$ Denote $\tilde{x} = (x,0,0, \cdots) \in \ell^2(\mathcal H)$ and $\tilde{y} = (y,y_1,y_2, \cdots) \in \ell^2(\mathcal H)$.
The equation $$\tilde{x} = (I-R)\tilde{y}$$ reduces to the system of equations $x = (I-T)y$, $y_1 = Dy$, $y_2 = y_1$, $y_3 = y_2$, etc. As $\tilde{y} \in \ell^2(\mathcal H)$, we obtain $y_1 = y_2 = \cdots = 0$. Therefore the equation $\tilde{x} = (I-R)\tilde{y}$ in $\ell^2(\mathcal H)$ is equivalent to $$ x = (I-T)y \quad \text{and} \quad Dy = 0 .$$ Every positive (i.e. positive semi-definite) operator has the same kernel as its positive square root; thus $(I-T^*T)y = 0$. Therefore $\|Ty\| = \|y\|$. An easy computation shows that the summability condition $\sum_{k=0}^\infty k \| R^{*k} \tilde{x} \| < \infty$ is equivalent to $\sum_{k=0}^\infty k \| T^{*k}x \| < \infty$. Notice now that $$ R^k \tilde{x} = R^k(x,0,0, \cdots) = (T^kx, DT^{k-1}x, \cdots, Dx, 0,0, \cdots).$$ Therefore $$ \sum_{k=0}^n R^k \tilde{x} = (\sum_{k=0}^n T^kx, D(\sum_{k=0}^{n-1} T^kx), D(\sum_{k=0}^{n-2} T^kx), \cdots, Dx, 0,0, \cdots).$$ Hence, using the notation $S_n(T) x = x + Tx + \dots + T^{n-1}x$, the $o(\sqrt{n})$ condition $$\|\sum_{k=0}^{n}R^k\tilde{x}\| = o(\sqrt{n})$$ is equivalent to $$ \|\sum_{k=0}^{n}T^k x\| = o(\sqrt{n}) \quad \text{and} \quad \sum_{k=0}^{n} \|D(S_k(T)x)\|^2 = o(n).$$ The proof is now complete using the identity $\|Du\|^2 = \|u\|^2 - \|Tu\|^2$. \end{proof} Corollary~\ref{t4.8} follows from Theorem~\ref{t4.7} and Kronecker's lemma, already used in the proof of Theorem~\ref{t4.6}. \section{Coboundaries of the doubling map} Let $\val_2(n)$ be the $2$-valuation of $n$, that is $$\val_2(n) = k \quad \text{if} \quad n = m 2^k\quad \text{with}\quad m \notin 2 \mathbb{Z}.$$ For $n\in \mathbb{Z}$, we denote by $ \hat{f}(n) = \int_0^1 f(t)e^{-2i\pi nt} \, dt$ the $n$-th Fourier coefficient of $f \in L^2 (0,1)$.
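To fix ideas, recall that every non-zero integer factors uniquely as $n = (2k+1)2^i$ with $i = \val_2(n)$. The following Python sketch (an illustration of ours, not part of the text) checks this regrouping numerically: summing a finitely supported sequence over $n$, weighted by $\val_2(n)^5$ (the exponent $4+\varepsilon$ with $\varepsilon = 1$), agrees with summing over the dyadic levels $i$, which is exactly the interchange of summation used in the proofs below.

```python
def val2(n):
    """2-valuation of a non-zero integer: the k with n = m * 2**k, m odd."""
    assert n != 0
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

assert val2(12) == 2 and val2(7) == 0 and val2(64) == 6

# Regroup a finitely supported sum over n by the dyadic level i = val2(n):
# sum_n val2(n)**5 |a_n|^2  ==  sum_i sum_k i**5 |a_{(2k+1) 2**i}|^2.
a = {n: 1.0 / n ** 2 for n in range(1, 1024)}   # a toy square-summable sequence
lhs = sum(val2(n) ** 5 * an for n, an in a.items())
rhs = sum(i ** 5 * a[(2 * k + 1) * 2 ** i]
          for i in range(1, 11) for k in range(1024)
          if (2 * k + 1) * 2 ** i in a)
assert abs(lhs - rhs) < 1e-9
```

Since the pairs $(i,k)$ with $(2k+1)2^i \le 1023$ enumerate each $n$ exactly once, the two sums coincide.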
\begin{cor}\label{valuation} Suppose $f$ is a periodic function of period $1$ such that $f \in L^2 (0,1)$, \begin{equation}\label{eq:41} \int_0^1 f(t) \, dt = 0 \end{equation} and there exists $\varepsilon >0$ such that \begin{equation}\label{eq:42} \sum_{n=-\infty}^\infty \val_2 (n)^{4+\varepsilon } \left|\hat{f}(n)\right|^2 < \infty . \end{equation} Then there is a function $g$ in $L^2(0,1)$ of period one such that $$ f(t) = g(t) - g(2t) \quad a.e. $$ if and only if $$ \lim_{n\to \infty} \frac{1}{n} \int_0^1 \left| \sum_{i=0}^n f(2^i t)\right|^2 dt = 0.$$ \end{cor} \begin{proof} We use Theorem \ref{t4.6} applied to the isometry $T : L^2(0,1) \longrightarrow L^2(0,1)$ defined by $$ Tf (t) = f(2t), \quad t \in (0,1)\ \text{mod} \ 1.$$ We first remark that condition \eqref{eq:41} is justified by the fact that $T$ acts as a shift operator on the subspace of $L^2 (0,1)$ of functions whose zeroth Fourier coefficient vanishes. The condition $$ \lim_{n\to \infty} \frac{1}{n} \int_0^1 \left| \sum_{i=0}^n f(2^i t)\right|^2 dt = 0$$ is exactly the condition $$ \lim_{n\to \infty} \frac{1}{n} \left\|\sum_{k= 0}^n T^k f \right\|^2 = 0,$$ which appears in Theorem \ref{t4.6}. We want to show that $$ \sum_{k=0}^\infty k \| T^{*k} f \| < \infty.$$ As noted above, $T$ acts as a shift operator on this subspace. Let $(a_n) = (\hat{f}(n))$ be the sequence of Fourier coefficients of $f$. We have $a_0 = 0$.
The iterates of the adjoint of $T$ at $f$ can be computed as $$ T^{*k}f (t) = \sum_{j=-\infty}^\infty a_{j 2^k} e^{2i\pi j t}, \quad k \in \mathbb{N}.$$ For $\varepsilon >0$ as in \eqref{eq:42}, using the change of variables $n = j2^k$ and interchanging the order of summation, we get (the $k=0$ term vanishes) \begin{align*} \sum_{k=0}^\infty k \| T^{*k} f \| & = \sum_{k=1}^\infty k \left( \sum_{j=-\infty}^\infty | a_{j 2^k} |^2 \right)^{1/2} \\ & = \sum_{k=1}^\infty k^{-(1+\varepsilon )/2} \left( k^{3+\varepsilon } \sum_{j=-\infty}^\infty | a_{j 2^k} |^2 \right)^{1/2} \\ & \le \left( \sum_{k=1}^\infty k^{-(1+\varepsilon )} \right)^{1/2} \left( \sum_{k=1}^\infty k^{3+\varepsilon } \sum_{j=-\infty}^\infty |a_{j 2^k}|^2 \right)^{1/2} \\ & = \left( \sum_{k=1}^\infty k^{-(1+\varepsilon )} \right)^{1/2} \left( \sum_{n =-\infty}^\infty |a_n|^2 \sum_{k=1}^{\val_2(n)} k^{3+\varepsilon }\right)^{1/2} \\ & \le 2 \left( \sum_{k=1}^\infty k^{-(1+\varepsilon )} \right)^{1/2} \left( \sum_{n =-\infty}^\infty \val_2(n)^{4+\varepsilon } |a_n|^2 \right)^{1/2} . \end{align*} Thus, under our hypothesis about the Fourier coefficients, we have $$ \sum_{k=0}^\infty k \| T^{*k} f \| < \infty.$$ \end{proof} \begin{cor}\cite{Rochberg} Let $f$ be a periodic function of period $1$ such that $f \in L^2 (0,1)$, $$\int_0^1 f(t) \, dt = 0$$ and there exists $\alpha > 0$ such that \begin{equation}\label{e4.12} \sum_{k =-\infty}^\infty |\hat{f}((2k+1)2^i)|^2 = O (2^{-\alpha i}). \end{equation} Then there is a function $g$ in $L^2(0,1)$ of period one such that $$ f(t) = g(t) - g(2t) \quad a.e. $$ if and only if $$ \lim_{n\to \infty} \frac{1}{n} \int_0^1 \left| \sum_{i=0}^n f(2^i t)\right|^2 dt = 0.$$ \end{cor} \begin{proof} The result follows from Corollary \ref{valuation} with $\varepsilon = 1$, say.
Indeed, using the condition \eqref{e4.12}, one can estimate \begin{align*} \sum_{n=-\infty}^\infty \val_2 (n)^{5} \left|\hat{f}(n)\right|^2 & = \sum_{i=1}^{\infty}\sum_{k=-\infty}^\infty i^5|\hat{f}((2k+1)2^i)|^2\\ & \lesssim \sum_{i=1}^{\infty} \frac{i^5}{2^{\alpha i}} < \infty . \end{align*} \end{proof} \begin{remarque} Condition \eqref{e4.12} is condition (a) from Theorem~4 in \cite{Rochberg}. It has been proved in \cite{Rochberg} that each of the other three conditions of Hölder type, there called (b), (c) and (d), implies condition \eqref{e4.12}. Mark Kac already considered in \cite{Kac} the case when $f$ is in the Hölder class $C^{0,\alpha}$ for some $\alpha > 1/2$. We refer to \cite{Fortet, Cie, Fuku} for other contributions concerning the functional equation $f(t) = g(t) - g(2t)$. \end{remarque} \begin{remarque} All the remarks at the end of the paper \cite{Rochberg} also apply in our situation. In particular, the generalization to the functional equation $f(t) = g(t) - g(nt)$ (for a fixed integer $n$) is immediate. \end{remarque}
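For a concrete instance of the results above, take $g(t)=\cos 2\pi t$ and $f(t)=g(t)-g(2t)$, an explicit coboundary. The partial sums $\sum_{i=0}^n f(2^i t)$ telescope to $g(t)-g(2^{n+1}t)$, whose $L^2$-norm stays bounded, so the Ces\`aro condition $\frac1n\int_0^1\big|\sum_{i=0}^n f(2^it)\big|^2\,dt \to 0$ holds. The following Python sketch (a numerical check of ours, with arbitrary sampling parameters; it is not part of the proofs) confirms the $1/n$ decay.

```python
import math

def g(t): return math.cos(2 * math.pi * t)
def f(t): return g(t) - g(2 * t)        # an explicit coboundary f(t) = g(t) - g(2t)

N = 2 ** 14                             # uniform sample points on [0, 1)

def cesaro(n):
    """(1/n) * int_0^1 |sum_{i=0}^n f(2^i t)|^2 dt, computed on a uniform grid."""
    total = 0.0
    for j in range(N):
        t = j / N
        s = sum(f((2 ** i * t) % 1.0) for i in range(n + 1))
        total += s * s
    return total / N / n

# The partial sums telescope to g(t) - g(2**(n+1) t), whose squared L^2 norm is
# 1/2 + 1/2 = 1; hence the Cesaro averages equal 1/n at this resolution.
for n in (4, 8):
    assert abs(cesaro(n) - 1.0 / n) < 1e-9
```

Here the grid is fine enough ($N = 2^{14}$ points against a top frequency of $2^{9}$) for the trigonometric means to be exact up to rounding.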
\subsection{Dedication.} We dedicate the article to the memory of J\"org Eschmeier, a nice person and a master of both abstract and concrete Operator Theory. \subsection{Conflict of interest statement.} On behalf of all authors, the corresponding author states that there is no conflict of interest. \subsection{Data availability statement.} No datasets were generated or analysed during the current study.
\section*{Introduction} The aim of this paper is to prove several new results on elliptic curves with complex multiplication, most of which are generalisations of previous ones. Let $K$ be an imaginary quadratic field, $L$ an extension of $K$ and $E/L$ an elliptic curve with complex multiplication. In \S\S 1--2 we analyse isogenies between such curves and the associated adelic representations. We then give a simple classification of all such curves over $L$ in terms of these representations. The content of these two sections is most likely well-known (by those who well-know it), but it is difficult to find precise references and so we include it for convenience. In \S 3 we give a version of the criterion of N\'eron-Ogg-Shafarevich adapted to elliptic curves with complex multiplication. An interesting corollary of this is the following criterion for good reduction, a special case of which is Theorem 2 of \cite{CoatesWiles1977} where it is shown for $L=K$ with class number one and $\ideal{f}$ a split prime of $K$: \begin{theo*} If there exists an ideal $\ideal{f}\subset O_K$ such that the $\ideal{f}$-torsion of $E$ is rational over $L$ and the map $O_K^\times\longrightarrow (O_K/\ideal{f})^\times$ is injective, then $E$ has good reduction everywhere. \end{theo*} In \S 4 we come to the main objects of our study, which are elliptic curves of `Shimura type'. An elliptic curve with complex multiplication $E/L$ is said to be of Shimura type if $L$ is an abelian extension of $K$ and if the torsion of $E$ is rational over the maximal abelian extension of $K$ (it is always so over the maximal abelian extension of $L$). We recall Shimura's Theorem on the existence of such elliptic curves with certain good reduction properties and then show that Shimura's result is sharp using the good reduction criterion of \S 3.
The purpose of \S 5 is to give the following new characterisation of such elliptic curves in terms of commuting families of Frobenius lifts: \begin{theo*} If $L/K$ is an abelian extension and $E/L$ is an elliptic curve with complex multiplication then $E/L$ is of Shimura type if and only if: \begin{enumerate}[label=\textup{(\roman*)}] \item for all primes $\ideal{p}$ of $K$ at which $E/L$ has good reduction and $L/K$ is unramified, there exists an isogeny \[\psi^\ideal{p}: E\longrightarrow \sigma_\ideal{p}^*(E),\] whose extension to the N\'eron model of $E/L$ reduces modulo $\ideal{p}$ to the $N\ideal{p}$-power relative Frobenius (here $\sigma_{\ideal{p}}\in G(L/K)$ is the Frobenius element at $\ideal{p}$), and \item for two prime ideals $\ideal{p}$ and $\ideal{l}$ as in \textup{(i)} the isogenies $\psi^\ideal{p}$ and $\psi^\ideal{l}$ commute in the sense that: \[\sigma_\ideal{l}^*(\psi^\ideal{p})\circ \psi^\ideal{l}=\sigma_\ideal{p}^*(\psi^\ideal{l})\circ \psi^\ideal{p}.\]\end{enumerate} \end{theo*} In \S6 we consider the existence of minimal models of elliptic curves of Shimura type. This question was (in a sense) already considered by Gross in \cite{Gross82}, where it is shown that if $K$ has prime discriminant and $E/H$ is an elliptic curve of Shimura type then $E$ admits a global minimal model. We give the following generalisation of this: \begin{theo*} Let $L/K$ be a ray class field with conductor $\ideal{f}$ and let $E/L$ be an elliptic curve of Shimura type. If the $\ideal{f}$-torsion of $E$ is rational over $L$ then $E$ admits a global minimal model away from $\ideal{f}$. \end{theo*} Note that if $\ideal{f}=(1)$ then the $\ideal{f}$-torsion is always rational over the Hilbert class field $H=K((1))$ and so we find that every elliptic curve of Shimura type over $H$ admits a global minimal model (moreover, such curves always exist). The main result of \S 6 relies fundamentally on a certain principal ideal theorem which we prove in the appendix.
Let $K$ be a number field and let $L/K$ be a wide ray class field of conductor $\ideal{f}$. Write $\mathrm{Id}_{L/K}$ for the group of fractional ideals of $O_K$ generated by the primes which are unramified in the extension $L/K$. \begin{theo*} There exist elements $l(\ideal{a})\in L^\times$, indexed by the ideals $\ideal{a}$ prime to $\ideal{f}$, such that \begin{enumerate}[label=\textup{(\roman*)}] \item $l(\ideal{a})\cdot O_L=\ideal{a}\cdot O_L$ and \item $l(\ideal{a}\ideal{b})=l(\ideal{a})\sigma_{\ideal{a}}(l(\ideal{b}))$ \end{enumerate} for $\ideal{a}, \ideal{b}$ prime to $\ideal{f}$ where $\sigma_{\ideal{a}}\in G(L/K)$ denotes the `Frobenius element' at $\ideal{a}$. \end{theo*} A version of this result was proven by Tannaka in \cite{Tannaka58} (and indeed our proof is heavily based on his and several other classical results from class field theory). \section*{Acknowledgements} The author would like to thank James Borger for originally suggesting the topic of this article and for many fruitful discussions. \section*{Notation} Unless otherwise noted $K$ will denote an imaginary quadratic field, with fixed maximal abelian extension $K^\mathrm{ab}$ and algebraic closure $\overline{K}$. For a number field $L$, $O_L$ denotes its ring of integers, $\mathrm{Id}_{L}$ the group of fractional ideals, $\widehat{O}_L=O_L\otimes_{\mathbf{Z}}\widehat{\mathbf{Z}}$ the group of finite, integral adeles, and $I_L$ the group of (all) ideles. If $\ideal{P}$ is a prime of $L$ then $L_{\ideal{P}}$ and $O_{L_\ideal{P}}$ denote the $\ideal{P}$-adic completion of $L$ and $O_L$ respectively. Finally, if $\ideal{F}$ is an integral ideal of $L$ and $P\subset \mathrm{Id}_{L}$ is any subset, then $P^{\ideal{F}}$ denotes the subset of ideals of $P$ which are relatively prime to $\ideal{F}$. If $L/K$ is an abelian extension then we denote by $\mathrm{Id}_{L/K}$ the group of fractional ideals of $K$ generated by the primes which are unramified in $L/K$.
We denote by \[\mathrm{Id}_{L/K}\longrightarrow G(L/K): \ideal{a}\mapsto \sigma_{\ideal{a}}\] the unique surjective homomorphism which sends a prime ideal $\ideal{p}\in \mathrm{Id}_{L/K}$ to the unique automorphism lifting $N\ideal{p}$-power Frobenius modulo $\ideal{p}$, we write $P_{L/K}\subset \mathrm{Id}_{L/K}$ for its kernel and for $\ideal{a}\in \mathrm{Id}_{L/K}$ we call $\sigma_{\ideal{a}}$ the Frobenius element at $\ideal{a}$. The ray class field of conductor $\ideal{f}$ is denoted $K(\ideal{f})$ and when $\ideal{f}=(1)$ we write $H=K((1))$ for the Hilbert class field of $K$. Finally, we denote by \[\theta_K: (\widehat{O}_K\otimes_{O_K}K)^\times \longrightarrow G(K^\mathrm{ab}/K)\] the reciprocity map of class field theory. Note that as $K$ is imaginary quadratic, this homomorphism is surjective with kernel $K^\times$ and its restriction to $\widehat{O}_K^\times\subset (\widehat{O}_K\otimes_{O_K}K)^\times$ induces a surjective map \[\theta_K: \widehat{O}_K^\times\longrightarrow G(K^\mathrm{ab}/H)\] with kernel $O_K^\times.$ We will also use the symbol $\theta_K$ to denote the induced isomorphisms between the corresponding quotient group and the relevant Galois group. \section{Isogenies between elliptic curves with complex multiplication} \subsection{} Let $S$ be an $O_K$-scheme. An elliptic curve $E$ over $S$ is a smooth, proper, geometrically connected $S$-group scheme of relative dimension one. The tangent space at the identity is denoted $\mrm{Lie}_{E/S}$ (and is a locally free $\mathscr{O}_S$-module of rank one). An elliptic curve with complex multiplication by $O_K$ over $S$ is an elliptic curve $E/S$ equipped with a homomorphism \[O_K\longrightarrow \mrm{End}_S(E): a\mapsto [a]_E\] such that the induced action of $[a]_E$ on $\mrm{Lie}_{E/S}$ coincides with the action of $a$ coming from the structure map $S\longrightarrow \mrm{Spec}(O_K)$. We also call these curves simply `CM elliptic curves'. 
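When $S = \mrm{Spec}(\mathbf{C})$, the definition above amounts to a complex torus $\mathbf{C}/\Lambda$ with $O_K\Lambda\subset \Lambda$, and the kernel of $[a]_E$ for a non-zero $a \in O_K$ is then the group $a^{-1}\Lambda/\Lambda$ with $N(a)=|a|^2$ points. As a toy numerical illustration (our own, for $K = \mathbf{Q}(i)$ and $\Lambda = \mathbf{Z}[i]$; it is not used in the sequel), the following Python sketch counts these torsion points for a ramified, a split and an inert prime of $\mathbf{Z}[i]$.

```python
def torsion_count(x, y):
    """Order of the a-torsion a^{-1}Z[i]/Z[i] of C/Z[i], for a = x + y*i.
    Since (m + n*i)/a = ((m*x + n*y) + (n*x - m*y)*i)/N(a) with N(a) = x*x + y*y,
    each torsion point is determined by a pair of residues modulo N(a)."""
    N = x * x + y * y
    points = {((m * x + n * y) % N, (n * x - m * y) % N)
              for m in range(N) for n in range(N)}
    return len(points)

# The kernel of [a] has N(a) = |a|^2 points: a ramified prime (1 + i), a split
# prime (1 + 2i) and an inert prime 3 of Z[i].
assert torsion_count(1, 1) == 2
assert torsion_count(1, 2) == 5
assert torsion_count(3, 0) == 9
```

This is the analytic picture behind the degree computations carried out algebraically in this section.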
\subsection{} We now recall a construction of Serre (Chapter XIII of \cite{CasselsFrohlich67}, also \cite{ASENS_1969_4_2_4_521_0}). Let $E/S$ be a CM elliptic curve. For each $S$-scheme $S'$ the group $\mrm{Hom}_S(S', E)=E(S')$ is an $O_K$-module and so given a rank one projective $O_K$-module $M$, we may define a functor on the category of $S$-schemes via \[M\otimes_{O_K}E: S'/S\mapsto M\otimes_{O_K} E(S').\] \begin{prop}[Serre]\label{prop:serre-tensor} The functor $M\otimes_{O_K} E$ is representable by a \textup{CM} elliptic curve over $S$. \end{prop} \begin{proof} Every rank one projective $O_K$-module $M$ can be generated by a pair of elements and so there exists a surjective homomorphism $O_K^2\longrightarrow M$. Since $M$ is projective, we can split this homomorphism and realise $M$ as the kernel of an idempotent endomorphism $f_M: O_K^2\longrightarrow O_K^2$. The endomorphism $f_M$ induces an idempotent endomorphism of group schemes $f_M: E^2\longrightarrow E^2$ and an isomorphism (of functors on $S$-schemes): \[M\otimes_{O_K} E\stackrel{\sim}{\longrightarrow} \ker(f_M: E^2\longrightarrow E^2).\] It follows that $M\otimes_{O_K} E$ is representable by a unique group scheme over $S$ with which we now identify it. As $f_M: E^2\longrightarrow E^2$ is idempotent, $M\otimes_{O_K} E$ is a direct factor of the smooth, proper and geometrically connected group scheme $E^2$. Hence, $M\otimes_{O_K} E$ is itself smooth, proper and geometrically connected. Finally, the additivity of the functor $\mrm{Lie}$ applied to the left split exact sequence \[0\longrightarrow M\otimes_{O_K} E\longrightarrow E^2\longrightarrow E^2\] induces a canonical isomorphism \[\mrm{Lie}_{M\otimes_{O_K} E/S}\stackrel{\sim}{\longrightarrow} M\otimes_{O_K}\mrm{Lie}_{E/S}\] from which it follows that $M\otimes_{O_K} E$ is both of relative dimension one and a CM elliptic curve, i.e.
the induced action of $O_K$ on $\mrm{Lie}_{M\otimes_{O_K} E/S}$ is via the structure map $S\longrightarrow \mrm{Spec}(O_K)$. \end{proof} We identify the functor $M\otimes_{O_K}E$ with the representing CM elliptic curve. \begin{rema} We also make a few remarks about this construction. \begin{enumerate}[label=\textup{(\roman*)}] \item In the proof of (\ref{prop:serre-tensor}) it is shown more generally that if $G/S$ is any group scheme equipped with an action of $O_K$ and $M$ is a rank one projective $O_K$-module then the functor $M\otimes_{O_K} G$ is again representable by a group scheme over $S$ and inherits any properties possessed by direct factors of the group scheme $G^2$, e.g. affine, flat, finite locally free, \'etale and so on. We will use this from time to time when $G$ is a torsion sub-group of $E$ or when $G$ is the N\'eron model of an elliptic curve. \item The additivity of the Hom functor also shows that, for a pair of CM elliptic curves $E$ and $E'$ over $S$ and a rank one projective $O_K$-module $M$, the natural map \[M\otimes_{O_K}\mrm{Hom}^{O_K}_S(E, E')\stackrel{\sim}{\longrightarrow} \mrm{Hom}_S^{O_K}(E, M\otimes_{O_K} E')\] is bijective. \end{enumerate} \end{rema} \subsection{} We now apply Serre's construction in the special case where $M=\ideal{a}^{-1}$ for a non-zero integral ideal $\ideal{a}\subset O_K$ to obtain a CM elliptic curve $\ideal{a}^{-1}\otimes_{O_K} E$. There is a natural homomorphism \[i_\ideal{a}: E\longrightarrow \ideal{a}^{-1}\otimes_{O_K} E\] induced by the inclusion $O_K\longrightarrow \ideal{a}^{-1}$ whose kernel we denote by $E[\ideal{a}]$. 
The sub-group scheme $E[\ideal{a}]$ is the $\ideal{a}$-torsion of $E$: the $S'$-valued points of $E[\ideal{a}]$ are \[E[\ideal{a}](S')=\{x\in E(S'): [a]_E(x)=0 \text{ for all } a\in \ideal{a}\}.\] If $\ideal{a}=(a)$ is principal then $E[\ideal{a}]=\ker([a]_E).$ \begin{prop} The homomorphism $i_\ideal{a}: E\longrightarrow \ideal{a}^{-1}\otimes_{O_K}E$ is finite locally free of degree $N\ideal{a}$ and is \'etale if and only if $\ideal{a}$ is invertible on $S$. \end{prop} \begin{proof} The homomorphism $i_\ideal{a}$ is \'etale if and only if the induced map \[\mrm{Lie}_{E/S}\longrightarrow \mrm{Lie}_{\ideal{a}^{-1}\otimes_{O_K} E}=\ideal{a}^{-1}\otimes_{O_K}\mrm{Lie}_{E/S}\] is an isomorphism. This map is induced by the inclusion $O_K\longrightarrow \ideal{a}^{-1}$ and is therefore an isomorphism if and only if $\ideal{a}$ is invertible on $S$. Regarding the degree of $i_\ideal{a}$, by rigidity we may decompose $S$ into a disjoint union of schemes over which $i_\ideal{a}$ is either the zero morphism or finite locally free of constant degree. As $\ideal{a}$ is non-zero $i_\ideal{a}$ cannot be the zero morphism and is therefore finite locally free of constant degree. This degree is one if and only if $\ideal{a}=O_K$ in which case the claim is clear. Therefore, we may assume that $i_\ideal{a}$ is finite locally free of constant degree greater than one. Given another (non-zero) ideal $\ideal{b}$ of $O_K$, the exactness of the functor $\ideal{b}^{-1}\otimes_{O_K}-$ implies that the kernel of \[\ideal{b}^{-1}\otimes_{O_K} i_\ideal{a}: \ideal{b}^{-1}\otimes_{O_K} E\longrightarrow \ideal{b}^{-1}\otimes_{O_K} \ideal{a}^{-1}\otimes_{O_K} E=(\ideal{a}\ideal{b})^{-1}\otimes_{O_K} E\] is equal to $\ideal{b}^{-1}\otimes_{O_K} E[\ideal{a}]$.
Choosing an isomorphism $\ideal{b}^{-1}\otimes_{O_K}(O_K/\ideal{a})\stackrel{\sim}{\longrightarrow} O_K/\ideal{a}$ we also obtain an isomorphism \[\ideal{b}^{-1}\otimes_{O_K}E[\ideal{a}]\stackrel{\sim}{\longrightarrow} E[\ideal{a}]\] and it follows that $\deg(i_\ideal{a}\otimes_{O_K}\ideal{b}^{-1})=\deg(i_\ideal{a})$ and so \[\deg(i_{\ideal{a}\ideal{b}})=\deg((i_\ideal{a}\otimes_{O_K}\ideal{b}^{-1})\circ i_\ideal{b})=\deg(i_\ideal{a})\deg(i_\ideal{b}).\] As $N(\ideal{a}\ideal{b})=N\ideal{a}\, N\ideal{b}$ and $\deg(i_{\ideal{a}\ideal{b}})=\deg(i_\ideal{a})\deg(i_\ideal{b})$ it is enough to show that $i_\ideal{p}$ has degree $N\ideal{p}$ whenever $\ideal{p}$ is a non-zero prime ideal. If $\overline{\ideal{p}}$ denotes the complex conjugate of $\ideal{p}$ then \[\deg(i_\ideal{p})\deg(i_{\overline{\ideal{p}}})=\deg(i_{\ideal{p}\overline{\ideal{p}}})=\deg([N\ideal{p}]_E)=N\ideal{p}^2.\] Therefore, if $\ideal{p}=\overline{\ideal{p}}$ then $\deg(i_\ideal{p})^2=N\ideal{p}^2$ and we must have $\deg(i_\ideal{p})=N\ideal{p}$. On the other hand if $\ideal{p}\neq \overline{\ideal{p}}$ then $N\ideal{p}$ is prime and as both $\deg(i_\ideal{p})$ and $\deg(i_{\overline{\ideal{p}}})$ are greater than one we must also have $\deg(i_\ideal{p})=N\ideal{p}$. \end{proof} \begin{prop}\label{prop:cm-subgroups} Let $L/K$ be a finite extension, let $S$ be either $\mrm{Spec}(L)$ or an open subset of $\mrm{Spec}(O_L)$ and let $E/S$ be a \textup{CM} elliptic curve. The only finite locally free sub-group schemes of $E$ which are stable under the action of $O_K$ are those of the form $E[\ideal{a}]$ for $\ideal{a}\subset O_K$.
\end{prop} \begin{proof} If $S=\mrm{Spec}(L)$ then this follows immediately after noting that $E_\mathrm{tors}$ is an ind-finite locally free scheme over $\mrm{Spec}(L)$ which, after base change to an algebraic closure of $L$, is isomorphic as an $O_K$-module group scheme to the constant $O_K$-module group scheme associated to $K/O_K$, whose only finite $O_K$-sub-modules are given by the $\ideal{a}$-torsion $\ideal{a}^{-1}/O_K\subset K/O_K$ for $\ideal{a}\subset O_K.$ If $S\subset \mrm{Spec}(O_L)$ is an open sub-scheme and $C\subset E$ a finite locally free subgroup scheme stable under $O_K$ then $C\times_{S}\mrm{Spec}(L)=(E\times_{S}\mrm{Spec}(L))[\ideal{a}]$ for some ideal $\ideal{a}$ by the above. Therefore, the degree of $C$ is equal to the degree of $E[\ideal{a}]$ and by Corollary 1.3.5 of \cite{KatzMazur85} there is a unique maximal closed sub-scheme $Z\subset S$ over which $C$ and $E[\ideal{a}]$ are equal. Since $C$ and $E[\ideal{a}]$ are equal over the generic fibre $\mrm{Spec}(L)\longrightarrow S$ and $Z\subset S$ is closed it follows that $Z=S$ and hence that $C=E[\ideal{a}].$ \end{proof} \section{Classification of elliptic curves with complex multiplication} For details regarding the content of this section see Chapter 1 of \cite{Serre} and Chapter 2 of \cite{Gross1980}. \subsection{} Let $K\subset L\subset \overline{K}$ be a finite extension with maximal abelian extension $L^\mathrm{ab}\subset \overline{K}$ and let $E/L$ be a CM elliptic curve.
The $O_K$-module $E(\overline{K})_\mathrm{tors}$ is isomorphic to $K/O_K$ and $G(\overline{K}/L)$ acts on this module via a character \[\rho_{E/L}: G(L^\mrm{ab}/L)\longrightarrow \widehat{O}_K^\times=\mathrm{Aut}_{O_K}(K/O_K).\] It is customary to classify CM elliptic curves via their Hecke characters; however, it is conceptually simpler to instead use their ad\`elic representations $\rho_{E/L}$ directly, keeping in mind that those which appear as such satisfy a certain special property (see (\ref{eqn:commutative}) below). Indeed, this property is exactly what allows one to convert $\rho_{E/L}$ into the associated algebraic Hecke character $\psi_{E/L}$ as we now explain. Write \[\widetilde{N}_{L/K}: I_L\longrightarrow (\widehat{O}_K\otimes_{O_K} K)^\times\] for the composition of the norm $N_{L/K}: I_L\longrightarrow I_K$ with the projection $I_K\longrightarrow (\widehat{O}_K\otimes_{O_K} K)^\times$ which forgets the archimedean factor. The algebraic Hecke character (see Theorem 10 of \cite{SerreTate68} for an alternative definition) associated to $E/L$ is \[\psi_{E/L}:=\rho_{E/L}^{-1}\cdot \widetilde{N}_{L/K}: I_L\longrightarrow K^\times\subset (\widehat{O}_K\otimes_{O_K}K)^\times.\] A priori $\psi_{E/L}$ takes values in $(\widehat{O}_K\otimes_{O_K}K)^\times$, however it can be shown to take values in $K^\times$ (as follows from Theorem 11 of \cite{SerreTate68}, for example) which is equivalent to the ad\`elic representation $\rho_{E/L}$ having the property that the following diagram commutes: \begin{equation}\begin{gathered}\xymatrix{G(L^\mrm{ab}/L)\ar[r]^-{\rho_{E/L}} \ar[dr]_{\mathrm{res}}&\widehat{O}_K^\times\ar[d]^{\theta_K}\\ & G(K^\mrm{ab}/K) }\label{eqn:commutative}\end{gathered}\end{equation} where the diagonal arrow is the restriction map. This implies that the extension $K\subset L$ contains the Hilbert class field $K\subset H\subset \overline{K}$. Moreover, there always exist CM elliptic curves defined over $H$ (see Chapter I of \cite{Serre}).
\subsection{} Now fix an embedding $K\subset \overline{K}\longrightarrow \mathbf{C}$ and let $E/\mathbf{C}$ be a CM elliptic curve. By GAGA the functor $E\mapsto E^\mrm{an}$, sending $E$ to its analytification, is an equivalence of categories between CM elliptic curves over $\mathbf{C}$ and complex tori of dimension one together with an action of $O_K$, which acts through the inclusion $O_K\subset \mathbf{C}$ on the tangent space at the identity. The exponential map \[\mrm{Lie}_{E^\mrm{an}}\longrightarrow E^\mrm{an}\] is holomorphic, surjective and $O_K$-linear, its kernel $T_{O_K}(E)$ is a rank one projective $O_K$-module and the functor $E\mapsto T_{O_K}(E)$ from the category of CM elliptic curves over $\mrm{Spec}(\mathbf{C})$ to the category of rank one projective $O_K$-modules is an equivalence. We denote by $CL_K$ the class group of $K$ and we denote by $[M]\in CL_K$ the class of a rank one projective $O_K$-module $M$. If $L\subset \overline{K}$ is a finite extension of $K$ and $E/L$ is a CM elliptic curve we write \[c_{E/L}:=[T_{O_K}(E_{\mathbf{C}}^\mrm{an})]\in \mrm{CL}_{K}\] where $E_\mathbf{C}=E\times_{\mrm{Spec}(L)}\mrm{Spec}(\mathbf{C})$. \subsection{} We record the following properties of $\rho_{E/L}$ and $c_{E/L}$: \begin{enumerate}[label=\textup{(\roman*)}] \item If $\chi: G(L^\mrm{ab}/L)\longrightarrow \mrm{Aut}_L^{O_K}(E)=O_K^\times$ is a character and $E^\chi$ is the twist of $E$ by $\chi$ then \[\rho_{E^\chi/L}=\chi\cdot \rho_{E/L} \quad \text{ and } \quad c_{E^\chi/L}=c_{E/L}.\] \item If $\sigma\in G(L/K)$ then \[\rho_{\sigma^*(E)/L}=\rho_{E/L}^\sigma \quad \text{ and } \quad c_{\sigma^*(E)/L}=[\ideal{a}]^{-1}c_{E/L}\] where \[\rho_{E/L}^\sigma(-)=\rho_{E/L}(\widetilde{\sigma}\circ - \circ \widetilde{\sigma}^{-1})\] for any extension of $\sigma\in G(L/K)$ to $\widetilde{\sigma}\in G(L^\mrm{ab}/K)$ and $\sigma|_H=\sigma_{\ideal{a}}$. 
\item If $\ideal{a}$ is a fractional ideal of $K$ then \[\rho_{\ideal{a}\otimes_{O_K} E/L}=\rho_{E/L} \quad \text{ and } \quad c_{\ideal{a}\otimes_{O_K} E/L}=[\ideal{a}]c_{E/L}.\] \end{enumerate} \begin{prop}\label{theo:classification} The assignment \[E/L\mapsto (\rho_{E/L}, c_{E/L})\] from isomorphism classes of \textup{CM} elliptic curves over $L$ to the set of pairs $(\rho, c)$ where \begin{enumerate}[label=\textup{(\roman*)}] \item $\rho: G(L^\mrm{ab}/L)\longrightarrow \widehat{O}_K^\times$ is a continuous character such that \[\xymatrix{G(L^\mrm{ab}/L)\ar[r]^-{\rho}\ar[dr]_{\mathrm{res}}&\widehat{O}_K^\times\ar[d]^{\theta_K}\\ & G(K^\mrm{ab}/K) }\] commutes, and \item $c\in CL_K$ \end{enumerate} is bijective. Moreover, if $E$ and $E'$ are a pair of \textup{CM} elliptic curves over $L$ then $E$ and $E'$ are isogenous if and only if $\rho_{E/L}=\rho_{E'/L}$. \end{prop} \begin{proof} Let $(\rho, c)$ be a pair as above. The fact that (\ref{eqn:commutative}) commutes implies that the image of $G(L^\mrm{ab}/L)$ in $G(K^\mrm{ab}/K)$ (via restriction) lands in the subgroup $G(K^\mrm{ab}/H)$ so that $H\subset L$. As there exists a CM elliptic curve over $H$, base changing we obtain a CM elliptic curve $E/L$. If $(\rho_{E/L}, c_E)$ is the pair corresponding to $E/L$ then there exists a character $\chi: G(L^\mrm{ab}/L)\longrightarrow O_K^\times$ and a fractional ideal $\ideal{a}$ of $K$ such that \[(\rho, c)=(\chi\rho_{E/L}, [\ideal{a}]c_E).\] But \[(\rho_{\ideal{a}\otimes_{O_K}E^\chi/L}, c_{\ideal{a}\otimes_{O_K}E^\chi/L})=(\chi\rho_{E/L}, [\ideal{a}]c_E)\] and so the map in question is surjective.
On the other hand, let $E$ and $E'$ be a pair of CM elliptic curves over $L$ such that \[(\rho_{E/L}, c_{E/L})=(\rho_{E'/L}, c_{E'/L}).\] The equality $c_{E/L}=c_{E'/L}$ implies that $E_{\mathbf{C}}$ and $E'_{\mathbf{C}}$ are isomorphic, which in turn implies that $E_{\overline{K}}$ and $E'_{\overline{K}}$ are isomorphic, and thus there exists a character $\chi: G(L^\mrm{ab}/L)\longrightarrow O_K^\times$ such that $E'$ and $E^\chi$ are isomorphic. We then find \[\rho_{E/L}=\rho_{E'/L}=\chi\cdot \rho_{E/L}\] which is possible if and only if $\chi$ is trivial. Therefore $E'\simeq E^\chi=E$, so the map in question is injective and hence bijective. For the last statement let $E$ and $E'$ be a pair of CM elliptic curves over $L$. Then $E$ and $E'$ are isogenous (over $L$) if and only if $E'\simeq \ideal{a}^{-1}\otimes_{O_K} E$ for some integral ideal $\ideal{a}$ of $O_K$, as any isogeny $f$ has $\ker(f)=E[\ideal{a}]$ for some $\ideal{a} \subset O_K$. It is clear that $\rho_{E/L}=\rho_{\ideal{a}\otimes_{O_K}E/L}$. Conversely, if $\rho_{E/L}=\rho_{E'/L}$ it follows that $E'$ is isomorphic to $M\otimes_{O_K}E$ for some rank one projective $O_K$-module $M$ (by the bijectivity already shown) and choosing any non-zero element $m\in M$ we obtain an isogeny \[E\longrightarrow E'=M\otimes_{O_K}E: x\mapsto m\otimes x.\] \end{proof} \section{Good reduction} Since it does not seem to appear elsewhere, let us give the following version of the criterion of N\'eron-Ogg-Shafarevich adapted to CM elliptic curves (see \cite{SerreTate68} for the original). \begin{theo}\label{prop:interia-groups} Let $L/K$ be a finite extension, $E/L$ be a \textup{CM} elliptic curve, $\ideal{P}\subset O_L$ a prime ideal lying over the prime $\ideal{p}\subset O_K$ and let $I_\ideal{P}\subset G(L^\mrm{ab}/L)$ be the inertia subgroup.
Then \[\rho_{E/L}(I_\ideal{P})\subset O_K^\times\cdot O_{K_\ideal{p}}^\times\subset \widehat{O}_K^\times\] and $E/L$ has good reduction at $\ideal{P}$ if and only if $\rho_{E/L}(I_\ideal{P})\subset O_{K_\ideal{p}}^\times$. \end{theo} \begin{proof} The image of $I_\ideal{P}$ under the restriction map $G(L^\mrm{ab}/L)\longrightarrow G(K^\mrm{ab}/K)$ is contained in the inertia group $I_\ideal{p}\subset G(K^\mrm{ab}/K)$ which in turn is equal to the image of the sub-group $O_{K_\ideal{p}}^\times\subset \widehat{O}_K^\times/O_K^\times$ under the map $\theta_K:\widehat{O}_K^\times/O_K^\times\longrightarrow G(K^\mrm{ab}/K)$. This observation combined with the fact that the diagram \[\xymatrix{G(L^\mrm{ab}/L)\ar[r]^-{\rho_{E/L}}\ar[dr]_{\mathrm{res}}&\widehat{O}_K^\times\ar[d]^{\theta_K}\\ & G(K^\mrm{ab}/K)}\] commutes shows that $\rho_{E/L}(I_\ideal{P})\subset O_K^\times \cdot O_{K_\ideal{p}}^\times\subset \widehat{O}_K^\times$. For the claim regarding good reduction, let $\ell$ be a rational prime such that $\ell\cdot O_K$ is prime to $\ideal{p}$. By the usual criterion of N\'eron-Ogg-Shafarevich, $E/L$ has good reduction at $\ideal{P}$ if and only if the action of $I_\ideal{P}$ on $E[\ell^\infty](L^\mrm{ab})$ is trivial and this action is trivial if and only if the image of $\rho_{E/L}(I_\ideal{P})\subset \widehat{O}_K^\times$ along the projection $\widehat{O}_K^\times\longrightarrow (O_K\otimes_{\mathbf{Z}}\mathbf{Z}_\ell)^\times$ is trivial. 
Now, as the intersection of $O_K^\times$ and $O_{K_\ideal{p}}^\times$ inside $\widehat{O}_K^\times$ is trivial, we have $O_K^\times\cdot O_{K_\ideal{p}}^\times=O_K^\times\times O_{K_\ideal{p}}^\times$ and so the restriction of $\rho_{E/L}$ to $I_\ideal{P}$ is a product of two characters \[\rho_{E/L}|_{I_\ideal{P}}=\alpha\cdot\lambda: I_\ideal{P}\to O_K^\times\times O_{K_\ideal{p}}^\times = O_K^\times\cdot O_{K_\ideal{p}}^\times\subset \widehat{O}_K^\times.\] As $\ideal{p}$ and $\ell\cdot O_K$ are coprime, the image of $\rho_{E/L}(I_\ideal{P})$ in $(O_K\otimes_{\mathbf{Z}}\mathbf{Z}_\ell)^\times$ coincides with the image of $\alpha(I_\ideal{P})$, and as the map $O_K^\times \longrightarrow (O_K\otimes_{\mathbf{Z}}\mathbf{Z}_\ell)^\times$ is injective, it follows that the image of $\alpha(I_\ideal{P})$ is trivial if and only if $\alpha(I_{\ideal{P}})$ is itself trivial. This in turn is true if and only if $\rho_{E/L}(I_\ideal{P})\subset O_{K_\ideal{p}}^\times.$ \end{proof} We also obtain the following useful corollary, a special case of which is Theorem 2 of \cite{CoatesWiles1977}: \begin{coro}\label{coro:good-red-const} Let $L/K$ be a finite extension, let $E/L$ be a \textup{CM} elliptic curve and let $\ideal{f}\subset O_K$ be an ideal such that the map $O_K^\times\longrightarrow (O_K/\ideal{f})^\times$ is injective. If $E[\ideal{f}]$ is rational over $L$ then $E$ has good reduction everywhere.
\end{coro} \begin{proof} If $\ideal{P}$ is a prime ideal of $O_L$ lying over the prime ideal $\ideal{p}$ of $O_K$, and $I_\ideal{P}\subset G(L^\mrm{ab}/L)$ denotes the inertia group at $\ideal{P}$, then by (\ref{prop:interia-groups}) $E/L$ has good reduction at $\ideal{P}$ if and only if the sub-group $\rho_{E/L}(I_\ideal{P})\subset O_K^\times\cdot O_{K_\ideal{p}}^\times\subset \widehat{O}_K^\times$ is contained in $O_{K_\ideal{p}}^\times\subset \widehat{O}_K^\times.$ Since $O_K^\times$ and $O_{K_\ideal{p}}^\times$ have trivial intersection (as subgroups of $\widehat{O}_K^\times$) and the map $O_K^\times\longrightarrow (O_K/\ideal{f})^\times$ is injective, this is equivalent to the image of $\rho_{E/L}(I_{\ideal{P}})$ along $\widehat{O}_K^\times\longrightarrow (O_K/\ideal{f})^\times$ being trivial. However, $E[\ideal{f}]$ is constant so that $G(L^\mrm{ab}/L)$ acts trivially on $E[\ideal{f}](\overline{K})$ and so the image of $\rho_{E/L}(G(L^\mrm{ab}/L))\subset \widehat{O}_K^\times$ (and a fortiori that of $\rho_{E/L}(I_\ideal{P})$) is trivial in $(O_K/\ideal{f})^\times$. \end{proof} \section{Elliptic curves of Shimura type} \subsection{} Let $L$ be an abelian extension of $K$. A CM elliptic curve $E/L$ is said to be of Shimura type if the action of $G(\overline{K}/L)$ on $E(\overline{K})_\mathrm{tors}$ factors through $G(K^\mrm{ab}/L)$. Note that if $K$ has class number one then every CM elliptic curve over $K$ is of Shimura type and such curves always exist. More generally, we have the following result of Shimura: \begin{theo}[Shimura]\label{prop:shimura-lambda-curves-over-hilbert} There exist infinitely many prime ideals $\ideal{p}$ of $K$ with the property that there exists a \textup{CM} elliptic curve $E/H$ of Shimura type with good reduction at all prime ideals of $H$ prime to $\ideal{p}$.
\end{theo} \begin{proof} By Proposition 7, \S 5 of \cite{Shimura71} there exists infinitely many primes $\ideal{p}$ of $K$ with $N\ideal{p}=p$ a rational prime, $N\ideal{p}=1\bmod w$ and $(N\ideal{p}-1)/w$ prime to $w$, where $w=\# O_K^\times$. Given such a prime $\ideal{p}$ it follows that the reduction map \[O_K^\times\longrightarrow (O_K/\ideal{p})^\times\] is the inclusion of a direct factor. Therefore, we may define a retraction $\alpha: \widehat{O}_K^\times\longrightarrow O_K^\times$ of the inclusion $O_K^\times\to \widehat{O}_K^\times$ by \[\widehat{O}_K^\times\longrightarrow (O_K/\ideal{p})^\times\longrightarrow O_K^\times\] where the first map is the quotient map and the second is a retraction of $O_K^\times \subset (O_K/\ideal{p})^\times$. We now define a character $\rho: G(\overline{K}/H)\longrightarrow \widehat{O}_K^\times$ by \[G(\overline{K}/H)\longrightarrow G(K^\mrm{ab}/H) \stackrel{\sim}{\longrightarrow} \widehat{O}_K^\times/O_K^\times\longrightarrow \widehat{O}_K^\times\] where the last map sends the class of $s\in \widehat{O}_K^\times$ to $s^{-1}\alpha(s)$. The character $\rho$ satisfies the conditions of (\ref{theo:classification}) so that there exists a (not necessarily unique) CM elliptic curve $E/H$ with $\rho_{E/H}=\rho$. Moreover, the action of $G(\overline{K}/H)$ on $E(\overline{K})$ factors through $G(K^\mrm{ab}/H)$ so that $E/H$ is a CM elliptic curve of Shimura type and $\rho=\rho_{E/H}$ is a character $G(K^\mrm{ab}/H)\longrightarrow \widehat{O}_K^\times.$ Finally, for any prime $\ideal{L}$ of $O_H$ lying over a prime $\ideal{l}\neq \ideal{p}$ of $O_K$, the image of $I_{\ideal{L}}\subset G(H^\mrm{ab}/H)$ along \[G(H^\mrm{ab}/H)\longrightarrow G(K^\mrm{ab}/H)\longrightarrow \widehat{O}_K^\times\] is equal to the image of $I_{\ideal{l}}\subset G(K^\mrm{ab}/H)$ under $\rho$. 
By (\ref{prop:interia-groups}) $E/H$ has good reduction at the prime $\ideal{L}$ if and only if the composition \[O_{K_\ideal{l}}^\times\longrightarrow \widehat{O}_K^\times/O_K^\times\stackrel{\sim}{\longleftarrow} G(K^\mrm{ab}/H) \stackrel{\rho}{\longrightarrow}\widehat{O}_K^\times \] has image in $O_{K_\ideal{l}}^\times$. As $\ideal{l}\neq \ideal{p}$, this composition is just the inclusion $O_{K_\ideal{l}}^\times\subset \widehat{O}_K^\times$ by the definition of $\rho$ and hence $E/H$ has good reduction at $\ideal{L}$. \end{proof} It is easy to show that the set of primes of $L$ where a CM elliptic curve $E/L$ of Shimura type has good reduction is stable under $G(L/K)$ and is therefore equal to the set of primes which are prime to some ideal $\ideal{a}$ of $K$. Thus the following shows that (\ref{prop:shimura-lambda-curves-over-hilbert}) is sharp: \begin{prop}\label{prop:no-good-reduction-shimura} There does not exist a \textup{CM} elliptic curve $E/H$ of Shimura type with good reduction everywhere. \end{prop} \begin{proof} Let $E/H$ be a CM elliptic curve of Shimura type. The defining property of such curves implies that the character $\rho_{E/H}: G(H^\mrm{ab}/H)\longrightarrow \widehat{O}_K^\times$ factors through the restriction map $G(H^\mrm{ab}/H)\longrightarrow G(K^\mrm{ab}/H)$ and we shall use the same symbol for the induced map. Composing the reciprocal of $\rho_{E/H}$ with the isomorphism $\theta_K:\widehat{O}_K^\times/O_K^\times\stackrel{\sim}{\longrightarrow} G(K^\mrm{ab}/H)$ we obtain a homomorphism \[\eta:\widehat{O}_K^\times/O_K^\times\longrightarrow \widehat{O}_K^\times\] which is a section of the quotient map $\widehat{O}_K^\times\longrightarrow \widehat{O}_K^\times/O_K^\times$.
The elliptic curve $E/H$ has good reduction at all places of $H$ lying above a prime $\ideal{p}$ of $O_K$ if and only if the composition \[O_{K_\ideal{p}}^\times\subset \widehat{O}_K^\times/O_K^\times\stackrel{\eta}{\longrightarrow} \widehat{O}_K^\times\] coincides with the inclusion $O_{K_\ideal{p}}^\times\subset \widehat{O}_K^\times$. Therefore, $E/H$ has good reduction everywhere if and only if the composition \begin{equation}\label{eqn:f-comp-quot}\widehat{O}_K^\times\longrightarrow \widehat{O}_K^\times/O_K^\times \stackrel{\eta}{\longrightarrow} \widehat{O}_K^\times\end{equation} is equal to the identity on the sub-group of $\widehat{O}_K^\times$ generated by the sub-groups $O_{K_\ideal{p}}^\times\subset \widehat{O}_K^\times$ for all primes $\ideal{p}$ of $O_K$. However, this sub-group is dense and $\eta$ is continuous so that (\ref{eqn:f-comp-quot}) itself must be the identity, which is clearly impossible. \end{proof} \begin{rema} In contrast to (\ref{prop:no-good-reduction-shimura}) above, there may exist CM elliptic curves over $H$ with good reduction everywhere. Indeed, Rohrlich has shown \cite{Rohrlich1982} that this is the case precisely when the discriminant of $K$ is divisible by at least two primes congruent to $3\bmod 4.$ \end{rema} \section{Lifts of the Frobenius} \subsection{}\label{subsec:cm-over-a-field-shimura-integral} We now fix some notation. Let $L/K$ be an abelian extension and let $E/L$ be a CM elliptic curve (not necessarily of Shimura type). Let $\ideal{g}\subset O_K$ be an ideal with the property that $S=\mrm{Spec}(O_L[\ideal{g}^{-1}])$ is unramified over $\mrm{Spec}(O_K)$ and $E$ has good reduction over $S$, so that the N\'eron model $\mathscr{E}/S$ of $E/L$ is a CM elliptic curve over $S$. 
We write $\mrm{Id}_K^\ideal{g}$ for the set of ideals of $O_K$ prime to $\ideal{g}$ and for a prime $\ideal{p}\in \mrm{Id}_K^\ideal{g}$ we write $S_\ideal{p}=S\times_{\mrm{Spec}(O_K)}\mrm{Spec}(O_K/\ideal{p})$, $\mathscr{E}_\ideal{p}=\mathscr{E}\times_S S_\ideal{p}$ and $\sigma_\ideal{p}: S\longrightarrow S$ for the Frobenius element at $\ideal{p}$. \begin{lemm} For each $\ideal{p}\in \mrm{Id}_K^\ideal{g}$, there is at most one homomorphism \[\psi^{\ideal{p}}:\mathscr{E}\longrightarrow \sigma_\ideal{p}^*(\mathscr{E})\] lifting the $N\ideal{p}$-power relative Frobenius map of $\mathscr{E}_\ideal{p}$ and if such a map exists its kernel is equal to $\mathscr{E}[\ideal{p}].$ \end{lemm} \begin{proof} By rigidity the difference of two such homomorphisms is equal to the zero map on some open and closed sub-scheme of $S$, the only choices of which are $S$ and $\emptyset$. Therefore, as any two such homomorphisms must agree on the non-empty sub-scheme $S_\ideal{p}\subset S$, they must agree everywhere. By (\ref{prop:cm-subgroups}) we have $\ker(\psi^\ideal{p})=\mathscr{E}[\ideal{a}]$ for some integral ideal $\ideal{a}$ of $O_K$. Since $S$ is connected and $\psi^\ideal{p}$ lifts the $N\ideal{p}$-power relative Frobenius it must have degree $N\ideal{p}$. If $\ideal{p}=\overline{\ideal{p}}$ then $\mathscr{E}[\ideal{p}]$ is the unique sub-group scheme of $\mathscr{E}$ stable under $O_K$ of degree $N\ideal{p}$ so that $\ker(\psi^\ideal{p})=\mathscr{E}[\ideal{p}]$. If $\ideal{p}\neq \overline{\ideal{p}}$ then $\mathscr{E}[\ideal{p}]$ and $\mathscr{E}[\overline{\ideal{p}}]$ are the only sub-group schemes of $\mathscr{E}$ of degree $N\ideal{p}$ stable under $O_K$ so that $\ker(\psi^\ideal{p})=\mathscr{E}[\ideal{p}]$ or $\ker(\psi^\ideal{p})=\mathscr{E}[\overline{\ideal{p}}]$. In the latter case we see that $\psi^\ideal{p}$ is \'etale when restricted to $S_{\overline{\ideal{p}}}\subset S$, which is absurd, hence $\ker(\psi^\ideal{p})=\mathscr{E}[\ideal{p}]$.
\end{proof} \begin{theo}\label{theo:shimura-is-lambda} In the notation of \textup{(\ref{subsec:cm-over-a-field-shimura-integral})}, the following are equivalent: \begin{enumerate}[label=\textup{(\roman*)}] \item For each $\ideal{p}\in \mrm{Id}_K^\ideal{g}$ there is a unique homomorphism \[\psi^\ideal{p}: \mathscr{E}\longrightarrow \sigma_\ideal{p}^*(\mathscr{E})\] lifting the $N\ideal{p}$-power relative Frobenius of $\mathscr{E}_\ideal{p}/S_\ideal{p}$ and for each pair of primes $\ideal{p}, \ideal{l}\in \mrm{Id}_K^\ideal{g}$ the diagram \[\xymatrix{\mathscr{E}\ar[r]^{\psi^\ideal{l}}\ar[d]_{\psi^\ideal{p}} & \sigma_\ideal{l}^*(\mathscr{E})\ar[d]^{\sigma_\ideal{l}^*(\psi^\ideal{p})}\\ \sigma_\ideal{p}^*(\mathscr{E})\ar[r]^{\sigma_\ideal{p}^*(\psi^\ideal{l})} & \sigma_{\ideal{p}\ideal{l}}^*(\mathscr{E})}\] commutes. \item $E/L$ is a \textup{CM} elliptic curve of Shimura type. \end{enumerate} \end{theo} \begin{proof} (i) implies (ii): It is enough to show that the action of $G(L^\mrm{ab}/L)$ on $E[\ideal{a}](L^\mrm{ab})$ factors through $G(K^\mrm{ab}/L)$ for all $\ideal{a}$ divisible by $\ideal{g}$. Since the claim takes place only on the generic fibre, we may replace $S=\mrm{Spec}(O_L[\ideal{g}^{-1}])$ by $\mrm{Spec}(O_L[\ideal{a}^{-1}])$ and assume that $\ideal{a}=\ideal{g}$. Then $\mathscr{E}[\ideal{a}]$ is a finite \'etale $S$-scheme and therefore a finite \'etale $\mrm{Spec}(O_K[\ideal{a}^{-1}])$-scheme. It follows that $\mathscr{E}[\ideal{a}]=\amalg_i \mrm{Spec}(O_{L_i}[\ideal{a}^{-1}])$ where each $L_i/K$ is a finite extension unramified away from $\ideal{a}$. 
For each prime ideal $\ideal{p}$ of $O_K$ prime to $\ideal{a}$, the $\mrm{Spec}(O_K)$-linear morphism $\varphi_\ideal{p}: \mathscr{E}[\ideal{a}]\longrightarrow \mathscr{E}[\ideal{a}]$ induced by the restriction of $\psi^\ideal{p}$ to $\mathscr{E}[\ideal{a}]$ lifts the absolute $N\ideal{p}$-power Frobenius map of $\mathscr{E}[\ideal{a}]\times_S S_\ideal{p}$ and therefore its restriction to each $\mrm{Spec}(O_{L_i}[\ideal{a}^{-1}])$ is the Frobenius element corresponding to $\ideal{p}$. As this is true for all primes prime to $\ideal{a}$ it follows that each of the extensions $L_i/K$ is abelian and that $E/L$ is of Shimura type. (ii) implies (i): We first note that if $E/L$ is a CM elliptic curve of Shimura type then the fixed ideal $\ideal{g}$ cannot be equal to $O_K$. If this were the case then we would have $L=H$ and the elliptic curve $E/H$ would have good reduction everywhere, which is impossible by (\ref{prop:no-good-reduction-shimura}). Moreover, the assumptions and the claims of the theorem are unchanged after replacing $\ideal{g}$ by some power of itself so that as $\ideal{g}\neq O_K$ we may do so in such a way that the reduction map \[O_K^\times\longrightarrow (O_K/\ideal{g})^\times\] is injective. Now as $E/L$ is of Shimura type it follows that for each $\sigma\in G(L/K)$ we have $\rho_{\sigma^*(E)/L}=\rho_{E/L}$. Therefore, for each prime $\ideal{p}\in \mrm{Id}_K^\ideal{g}$ we have \[(\rho_{\sigma^*_\ideal{p}(E)/L}, c_{\sigma_\ideal{p}^*(E)/L})=(\rho_{E/L}, c_{\ideal{p}^{-1}\otimes_{O_K}E/L})=(\rho_{\ideal{p}^{-1}\otimes_{O_K}E/L}, c_{\ideal{p}^{-1}\otimes_{O_K}E/L}).\] In particular, there exists an isomorphism \[f: \ideal{p}^{-1}\otimes_{O_K} E\stackrel{\sim}{\longrightarrow} \sigma_\ideal{p}^*(E)\] whose extension to the N\'eron models (relative to $S$) we again denote by $f$.
For each prime $\ideal{P}$ of $O_L$ lying over $\ideal{p}$ there exists a unique element $\epsilon_\ideal{P}\in O_K^\times$ such that \[f_\ideal{P}:=\epsilon_\ideal{P} f: \ideal{p}^{-1}\otimes_{O_K} \mathscr{E}\stackrel{\sim}{\longrightarrow} \sigma_\ideal{p}^*(\mathscr{E})\] reduces modulo $\ideal{P}$ to the unique isomorphism \[\ideal{p}^{-1}\otimes_{O_K} \mathscr{E}_\ideal{P}\stackrel{\sim}{\longrightarrow} \mrm{Fr}_{S_\ideal{P}}^{N\ideal{p}*}(\mathscr{E}_\ideal{P})\] whose composition with $i_\ideal{p}: \mathscr{E}\longrightarrow \ideal{p}^{-1}\otimes_{O_K}\mathscr{E}$ is the $N\ideal{p}$-power relative Frobenius. We write $\psi^\ideal{P}=f_\ideal{P}\circ i_\ideal{p}: \mathscr{E}\to \sigma_\ideal{p}^*(\mathscr{E})$ for this composition. Since $E/L$ is an elliptic curve of Shimura type \[\mathscr{E}[\ideal{g}]=\amalg_{i} \mrm{Spec}(O_{L_i}[\ideal{g}^{-1}])\] where each $L_i/K$ is an abelian extension, unramified away from $\ideal{g}$. Let $\varphi^{\ideal{p}}: \mathscr{E}[\ideal{g}]\to\sigma_\ideal{p}^*(\mathscr{E}[\ideal{g}])$ be the sum of the Frobenius elements at $\ideal{p}$ of the extensions $L_i/K$. As $\mrm{Spec}(O_L[\ideal{g}^{-1}])$ is connected, the reduction map \[\mrm{Hom}_{O_L[\ideal{g}^{-1}]}(\mathscr{E}[\ideal{g}], \sigma_\ideal{p}^*(\mathscr{E})[\ideal{g}])\to \mrm{Hom}_{O_L/\ideal{P}}(\mathscr{E}_\ideal{P}[\ideal{g}], \sigma_\ideal{p}^*(\mathscr{E})_\ideal{P}[\ideal{g}])\] is injective and by definition the images of $\psi^\ideal{P}|_{\mathscr{E}[\ideal{g}]}$ and $\varphi^\ideal{p}$ coincide. Hence, $\psi^\ideal{P}|_{\mathscr{E}[\ideal{g}]}$ depends only on $\ideal{p}$.
However, for $\ideal{P}, \ideal{P}'$ each dividing $\ideal{p}$, we have $\psi^{\ideal{P}'}=\epsilon \psi^{\ideal{P}}$ for some $\epsilon \in O_K^\times.$ As \[\psi^\ideal{P}|_{\mathscr{E}[\ideal{g}]}=\psi^{\ideal{P}'}|_{\mathscr{E}[\ideal{g}]}=\epsilon\psi^{\ideal{P}}|_{\mathscr{E}[\ideal{g}]}\] and $O_K^\times \to (O_K/\ideal{g})^\times$ is injective, it follows that $\epsilon=1$ and that $\psi^{\ideal{P}}$ depends only on $\ideal{p}.$ We write \[\psi^{\ideal{p}} : \mathscr{E}\longrightarrow \sigma_\ideal{p}^*(\mathscr{E})\] for this common value, which lifts the $N\ideal{p}$-power Frobenius modulo $\ideal{p}$ by construction. For a pair of prime ideals $\ideal{p}$, $\ideal{l}\in \mrm{Id}_K^\ideal{g}$ consider the diagram \[\xymatrix{\mathscr{E}\ar[r]^{\psi^\ideal{l}}\ar[d]_{\psi^\ideal{p}} & \sigma_\ideal{l}^*(\mathscr{E})\ar[d]^{\sigma_\ideal{l}^*(\psi^\ideal{p})}\\ \sigma_\ideal{p}^*(\mathscr{E})\ar[r]^{\sigma_\ideal{p}^*(\psi^\ideal{l})} & \sigma_{\ideal{p}\ideal{l}}^*(\mathscr{E})}\] and let $h: \mathscr{E}\longrightarrow \sigma_{\ideal{p}\ideal{l}}^*(\mathscr{E})$ be the difference of the two compositions from top left to bottom right. The diagram commutes when restricted to the $\ideal{a}$-torsion for any ideal $\ideal{a}$ prime to $\ideal{g}$, as both compositions induce the `Frobenius element' corresponding to $\ideal{p}\ideal{l}$ of the finite \'etale $S$-schemes $\mathscr{E}[\ideal{a}]$. Therefore $\mathscr{E}[\ideal{a}]\subset \ker(h)$ for all $\ideal{a}$ prime to $\ideal{g}$ and this is only possible if $\ker(h)=\mathscr{E}$ so that $h=0$ and the diagram commutes. \end{proof} \section{Minimal models} \subsection{} \label{subsec:lambda-module-for-cm-curves} We now consider the existence of certain global minimal models of elliptic curves of Shimura type. First, we consider some consequences of the existence of commuting families of Frobenius lifts. 
We continue with the notation of (\ref{subsec:cm-over-a-field-shimura-integral}) but will also assume that $E/L$ is of Shimura type. Thus for each $\ideal{p}\nmid \ideal{g}$ there is a (unique) isomorphism \[\nu_\ideal{p}: \ideal{p}^{-1}\otimes_{O_K} \mathscr{E}\longrightarrow \sigma_\ideal{p}^*(\mathscr{E})\] with the property that $\nu_\ideal{p}\circ i_\ideal{p}=\psi^\ideal{p}: \mathscr{E}\longrightarrow \sigma_\ideal{p}^*(\mathscr{E})$ lifts the $N\ideal{p}$-power relative Frobenius. For a pair of primes $\ideal{p}$, $\ideal{l}$ prime to $\ideal{g}$, the semi-commutativity of the isogenies $\psi^{\ideal{p}}$ and $\psi^{\ideal{l}}$ (\ref{theo:shimura-is-lambda}) expressed in terms of the isomorphisms $\nu_{\ideal{l}}$ and $\nu_\ideal{p}$ becomes: \begin{equation}\sigma_{\ideal{l}}^*(\nu_\ideal{p})\circ (\ideal{p}^{-1}\otimes_{O_K}\nu_{\ideal{l}})=\sigma_{\ideal{p}}^*(\nu_{\ideal{l}})\circ (\ideal{l}^{-1}\otimes_{O_K}\nu_{\ideal{p}}).\label{eqn:frob-commute-prime}\end{equation} For any ideal $\ideal{a}\in \mrm{Id}_{K}^\ideal{g}$, choosing a prime factorisation of $\ideal{a}$, we may define isomorphisms \[\nu_\ideal{a}: \ideal{a}^{-1}\otimes_{O_K}\mathscr{E}\stackrel{\sim}{\longrightarrow} \sigma_\ideal{a}^*(\mathscr{E})\] by composing the $\nu_\ideal{p}$ the appropriate number of times for $\ideal{p}|\ideal{a}$.
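Concretely (this just unwinds the definition for a product of two distinct primes), for $\ideal{a}=\ideal{p}\ideal{l}$ the construction reads

```latex
\[\nu_{\ideal{p}\ideal{l}}
   =\sigma_{\ideal{l}}^*(\nu_\ideal{p})\circ(\ideal{p}^{-1}\otimes_{O_K}\nu_{\ideal{l}})
   =\sigma_{\ideal{p}}^*(\nu_{\ideal{l}})\circ(\ideal{l}^{-1}\otimes_{O_K}\nu_{\ideal{p}})
   \colon \ideal{p}^{-1}\ideal{l}^{-1}\otimes_{O_K}\mathscr{E}
   \stackrel{\sim}{\longrightarrow}\sigma_{\ideal{p}\ideal{l}}^*(\mathscr{E}),
\]
```

the equality of the two compositions being precisely (\ref{eqn:frob-commute-prime}).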
The resulting isomorphism $\nu_\ideal{a}$ is independent of the order of the composition by virtue of (\ref{eqn:frob-commute-prime}) and for any pair of ideals $\ideal{a}$ and $\ideal{b}$ prime to $\ideal{g}$ they satisfy: \begin{equation}\sigma_{\ideal{a}}^*(\nu_\ideal{b})\circ (\ideal{b}^{-1}\otimes_{O_K}\nu_{\ideal{a}})=\sigma_{\ideal{b}}^*(\nu_{\ideal{a}})\circ (\ideal{a}^{-1}\otimes_{O_K}\nu_{\ideal{b}}).\label{eqn:frob-commute-all}\end{equation} \subsection{} If $\ideal{a}$ is an ideal prime to $\ideal{g}$ such that $\sigma_{\ideal{a}}=\mathrm{id}_L$ then $\nu_{\ideal{a}}$ is an isomorphism \[\nu_\ideal{a}: \ideal{a}^{-1}\otimes_{O_K}\mathscr{E}\longrightarrow \mathscr{E}\] and so must be of the form $l(\ideal{a})\otimes \mrm{id}_{\mathscr{E}}$ where $l(\ideal{a})\in O_K$ is a generator of $\ideal{a}$. Thus if $P_{L/K}^{\ideal{g}}$ denotes the monoid of ideals $\ideal{a}$ of $O_K$ which are prime to $\ideal{g}$ and which satisfy $\sigma_{\ideal{a}}=\mathrm{id}_L\in G(L/K)$ we obtain a multiplicative map \[l:P_{L/K}^{\ideal{g}}\longrightarrow O_K: \ideal{a}\mapsto l(\ideal{a})\] satisfying $l(\ideal{a})\cdot O_K=\ideal{a}$. If $\ideal{f}$ is an ideal of $O_K$ with the property that the group scheme $E[\ideal{f}]$ is constant then $\mathscr{E}[\ideal{f}]$ is also constant and the composition \[\mathscr{E}[\ideal{f}]\longrightarrow \ideal{a}^{-1}\otimes_{O_K}\mathscr{E}[\ideal{f}]\stackrel{\nu_\ideal{a}}{\longrightarrow} \mathscr{E}[\ideal{f}]\] is multiplication by $l(\ideal{a})$. However, it is also equal to the sum of the Frobenius elements of the connected components of $\mathscr{E}[\ideal{f}]$ which, as $\mathscr{E}[\ideal{f}]$ is constant, must be the identity. Therefore if $E[\ideal{f}]$ is constant, then $l$ satisfies $l(\ideal{a})=1\bmod \ideal{f}$. 
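To spell out the multiplicativity of $l$ (a one-line check using (\ref{eqn:frob-commute-all})): for $\ideal{a}, \ideal{b}\in P_{L/K}^{\ideal{g}}$ we have $\sigma_\ideal{a}=\sigma_\ideal{b}=\mathrm{id}_L$, so that

```latex
\[l(\ideal{a}\ideal{b})\otimes\mathrm{id}_{\mathscr{E}}
   =\nu_{\ideal{a}\ideal{b}}
   =\nu_{\ideal{b}}\circ(\ideal{b}^{-1}\otimes_{O_K}\nu_{\ideal{a}})
   =\bigl(l(\ideal{b})l(\ideal{a})\bigr)\otimes\mathrm{id}_{\mathscr{E}},
\]
```

whence $l(\ideal{a}\ideal{b})=l(\ideal{a})l(\ideal{b})$.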
\subsection{} By the N\'eron mapping property the isomorphisms $\nu_\ideal{a}$ extend to isomorphisms on the full N\'eron model over $\mrm{Spec}(O_L)$ (which is no longer an elliptic curve, only a smooth one-dimensional group scheme) \[\nu_\ideal{a}: \ideal{a}^{-1}\otimes_{O_K}\mrm{Ner}_{O_L}(E)=\mrm{Ner}_{O_L}( \ideal{a}^{-1}\otimes_{O_K}E)\stackrel{\sim}{\longrightarrow} \sigma_\ideal{a}^*(\mrm{Ner}_{O_L}(E))\] satisfying the same commutativity condition. Writing \[T=\underline{\mrm{Lie}}_{\mrm{Ner}_{O_L}(E)/O_L}\] for the Lie algebra of the N\'eron model, which is a projective rank one $O_L$-module, the isomorphisms $\nu_\ideal{a}$ induce $O_L$-isomorphisms (which we denote by the same letter) \[\nu_\ideal{a}: \ideal{a}^{-1}\otimes_{O_K}T\stackrel{\sim}{\longrightarrow} \sigma_\ideal{a}^*(T)\] for each $\ideal{a}\in \mrm{Id}_{K}^\ideal{g}$. These satisfy the same commutativity condition (\ref{eqn:frob-commute-all}) as the (original) $\nu_\ideal{a}$ and moreover if $\ideal{a}\in P_{L/K}^\ideal{g}$ then $\nu_\ideal{a}=l(\ideal{a})\otimes_{O_K}\mathrm{id}_T.$ \begin{theo}\label{theo:shimura-curves-minimal-model} In the notation of \textup{(\ref{subsec:lambda-module-for-cm-curves})}, if $L=K(\ideal{f})$ is a ray class field and $E[\ideal{f}]$ is constant then $T\otimes_{O_K}O_K[\ideal{f}^{-1}]$ is free. In other words, $E/\mrm{Spec}(K(\ideal{f}))$ admits a global minimal model away from $\ideal{f}$. \end{theo} \begin{proof} We apply (\ref{prop:tannaka-result-general}) to extend the map $l: P^\ideal{g}_{L/K}\longrightarrow O_K$ to a map \[l:\mrm{Id}_{K}^{\ideal{g}}\longrightarrow O_{K(\ideal{f})}\] satisfying \begin{equation} l(\ideal{a})\cdot O_{K(\ideal{f})}=\ideal{a}\cdot O_{K(\ideal{f})} \quad \text{ and } \quad l(\ideal{a}\ideal{b})=l(\ideal{a})\sigma_\ideal{a}(l(\ideal{b}))\label{eqn:l-com-condition}\end{equation} for all $\ideal{a}, \ideal{b}\in \mrm{Id}_K^{\ideal{g}}$.
We then define, for each $\ideal{a}\in \mathrm{Id}_{K}^\ideal{g}$, an isomorphism $t_\ideal{a}: T\longrightarrow \sigma_\ideal{a}^*(T)$ by \[T\stackrel{l(\ideal{a})^{-1}\otimes \mrm{id}}{\longrightarrow} \ideal{a}^{-1}\otimes_{O_K}T\stackrel{\nu_\ideal{a}}{\longrightarrow} \sigma^*_\ideal{a}(T).\] If $\ideal{a}\in \mrm{P}_{L/K}^{\ideal{g}}$ then by (\ref{eqn:l-com-condition}) we have \[t_\ideal{a}=l(\ideal{a})^{-1}\otimes l(\ideal{a})=\mrm{id}_T.\] This, combined with the commutativity conditions (\ref{eqn:frob-commute-all}) on the $\nu_\ideal{a}$ and (\ref{eqn:l-com-condition}) on the $l(\ideal{a})$, shows that $t_\ideal{a}$ depends only on the class $\sigma_\ideal{a}\in G(K(\ideal{f})/K)$, so that we may instead write $t_{\ideal{a}}=t_{\sigma_\ideal{a}}$. We now have a collection of isomorphisms $t_\sigma: T\stackrel{\sim}{\longrightarrow} \sigma^*(T)$ indexed by $\sigma\in G(K(\ideal{f})/K)$ which satisfy: \[t_{\mrm{id}_{K(\ideal{f})}}=\mrm{id}_T \quad \text{ and } \quad t_{\sigma\tau}=t_\sigma\circ \sigma^*(t_\tau)\] for $\sigma, \tau\in G(K(\ideal{f})/K)$. In other words, the isomorphisms $t_\sigma$ define Galois descent data on $T$ relative to $O_{K}\longrightarrow O_{K(\ideal{f})}$. The homomorphism $O_K\longrightarrow O_{K(\ideal{f})}$ is finite and \'etale after inverting $\ideal{f}$ and after doing so the isomorphisms $t_\sigma$ define actual descent data relative to $O_{K}[\ideal{f}^{-1}]\longrightarrow O_{K(\ideal{f})}[\ideal{f}^{-1}]$.
Therefore there exists an $O_K[\ideal{f}^{-1}]$-module $T_0$ such that \[T_0\otimes_{O_K[\ideal{f}^{-1}]} O_{K(\ideal{f})}[\ideal{f}^{-1}]\stackrel{\sim}{\longrightarrow} T\otimes_{O_{K(\ideal{f})}}O_{K(\ideal{f})}[\ideal{f}^{-1}].\] However, as $K(\ideal{f})$ contains the Hilbert class field $H=K(1)$, every rank one projective $O_{K}[\ideal{f}^{-1}]$-module becomes free after base change to $O_{K(\ideal{f})}[\ideal{f}^{-1}]$, and it follows that \[T\otimes_{O_{K(\ideal{f})}}O_{K(\ideal{f})}[\ideal{f}^{-1}]\stackrel{\sim}{\longleftarrow} T_0\otimes_{O_K[\ideal{f}^{-1}]}O_{K(\ideal{f})}[\ideal{f}^{-1}]\] is free. \end{proof} The above result is really only interesting when $\ideal{f}$ is small. Indeed, if $\ideal{f}$ has the property that $O_K^\times\longrightarrow (O_K/\ideal{f})^\times$ is injective then there is (up to automorphisms of $K(\ideal{f})$) only one CM elliptic curve over $K(\ideal{f})$ and by (\ref{coro:good-red-const}) it has good reduction everywhere. Thus we get a CM elliptic curve $\mathscr{E}/\mrm{Spec}(O_{K(\ideal{f})})$ which is nothing more than the universal CM elliptic curve with level-$\ideal{f}$ structure.\footnote{$\mrm{Spec}(O_{K(\ideal{f})})$ being the moduli stack of such CM elliptic curves.} However, if $\ideal{f}=(1)$ we obtain the following corollary, which is a strengthening of a result of Gross (Corollary 4.4 of \cite{Gross82}) who proved it for elliptic curves of Shimura type\footnote{Technically, Gross' result is for CM elliptic curves defined over the Hilbert class field of an imaginary quadratic field with prime discriminant whose Hecke character is $G(H/K)$-invariant. However, these conditions actually imply that $E/H$ is a CM elliptic curve of Shimura type. Indeed, the primality of the discriminant implies that the class number of $K$ is prime to the order of $O_K^\times$, which combined with the $G(H/K)$-invariance of the Hecke character implies that $E/H$ is of Shimura type.
For a result along these lines see Proposition 2 of \cite{Gilles85}.} over $H$ in the case where $K$ has prime discriminant: \begin{coro} If $E/H$ is an elliptic curve of Shimura type then $E$ admits a global minimal model. \end{coro} \begin{proof} This is just (\ref{theo:shimura-curves-minimal-model}) with $\ideal{f}=(1)$, noting that $E[1]=\mrm{Spec}(H)$ is always constant. \end{proof}
\section{Introduction} The Garden of Eden theorem, originally established by Moore~\cite{moore} and Myhill~\cite{myhill} in the early 1960s, is an important result in symbolic dynamics and coding theory. It provides a necessary and sufficient condition for a cellular automaton to be surjective. More specifically, consider a finite set $A$ and the set $A^\Z$ consisting of all bi-infinite sequences $x = (x_i)$ with $x_i \in A$ for all $i \in \Z$. We equip $A^\Z$ with its \emph{prodiscrete topology}, that is, with the topology of pointwise convergence (this is also the product topology obtained by taking the discrete topology on each factor $A$ of $A^\Z$). A \emph{cellular automaton} is a continuous map $\tau \colon A^\Z \to A^\Z$ that commutes with the shift homeomorphism $\sigma \colon A^\Z \to A^\Z$ given by $\sigma(x) = (x_{i - 1})$ for all $x = (x_i) \in A^\Z$. Two sequences $x = (x_i),y = (y_i) \in A^\Z$ are said to be \emph{almost equal} if one has $x_i = y_i$ for all but finitely many $i \in \Z$. A cellular automaton $\tau \colon A^\Z \to A^\Z$ is called \emph{pre-injective} if there exist no distinct sequences $x, y \in A^\Z$ that are almost equal and satisfy $\tau(x) = \tau(y)$. The Moore-Myhill Garden of Eden theorem states that a cellular automaton $\tau \colon A^\Z \to A^\Z$ is surjective if and only if it is pre-injective. The implication surjective $\Rightarrow$ pre-injective was first established by Moore~\cite{moore}, and Myhill~\cite{myhill} proved the converse implication shortly after. \par The Moore-Myhill Garden of Eden theorem has been extended in several directions. 
There are now versions of it for cellular automata over amenable groups \cite{machi-mignosi}, \cite{ceccherini}, cellular automata over subshifts \cite{gromov-esav}, \cite{fiorenzi-sofic}, \cite{Fiorenzi-strongly}, and linear cellular automata over linear shifts and subshifts \cite{csc-linear-goe}, \cite{cc-goe-lin-sub} (the reader is referred to the monograph \cite{book} for a detailed exposition of some of these extensions, as well as historical comments and additional references). \par In this note, we present an analogue of the Garden of Eden theorem for Anosov diffeomorphisms on tori. This reveals one more connection between symbolic dynamics and the theory of smooth dynamical systems. Actually, our motivation came from a phrase of Gromov \cite[p.~195]{gromov-esav} which mentioned the possibility of extending the Garden of Eden theorem to a suitable class of hyperbolic dynamical systems. \par Let $(X,f)$ be a dynamical system consisting of a compact metrizable space $X$ equipped with a homeomorphism $f \colon X \to X$. Two points in $X$ are called $f$-\emph{homoclinic} if their $f$-orbits are asymptotic both in the past and the future (see Section~\ref{sec:background} for a precise definition). Homoclinicity defines an equivalence relation on $X$. An \emph{endomorphism} of the dynamical system $(X,f)$ is a continuous map $\tau \colon X \to X$ commuting with $f$. We say that an endomorphism $\tau$ of $(X,f)$ is \emph{pre-injective} (with respect to $f$) if the restriction of $\tau$ to each $f$-homoclinicity class is injective (i.e., there is no pair of distinct $f$-homoclinic points in $X$ having the same image under $\tau$) (in the particular case when $X = A^\Z$ and $f = \sigma$ is the shift homeomorphism, the endomorphisms of $(X,f)$ are precisely the cellular automata and this definition of pre-injectivity is equivalent to the one given above, see e.g. \cite[Proposition~2.5]{csc-myhyp}).
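To make the cellular-automaton case of pre-injectivity concrete, here is a small computational sanity check (our illustration, not taken from the sources above): over $A = \Z/2\Z$ the map $\tau(x)_i = x_i + x_{i+1} \bmod 2$ is a surjective cellular automaton, and since $\tau$ is linear its pre-injectivity amounts to injectivity on finitely supported configurations, which can be verified exhaustively on a window.

```python
# Sanity check (illustrative, not from the paper): the XOR cellular automaton
# tau(x)_i = x_i + x_{i+1} (mod 2) over A = Z/2Z is pre-injective.  For a
# linear cellular automaton, pre-injectivity is equivalent to injectivity on
# configurations with finite support, so we enumerate every configuration
# supported in a fixed window and check that their images are pairwise distinct.
from itertools import product

N = 8  # window length: configurations vanish outside positions 0..N-1

def tau(x):
    # x is a length-N tuple; positions outside the window are 0.
    # The image can only be nonzero on positions -1..N-1.
    def get(i):
        return x[i] if 0 <= i < N else 0
    return tuple(get(i) ^ get(i + 1) for i in range(-1, N))

images = {tau(x) for x in product((0, 1), repeat=N)}
assert len(images) == 2 ** N  # all images distinct: no finite-support collisions
```

By contrast, $\tau$ is not injective on all of $A^\Z$: the all-zero and all-one sequences have the same image, but these two sequences are not almost equal, so pre-injectivity is not violated.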
We say that the dynamical system $(X,f)$ has the \emph{Moore property} if every surjective endomorphism of $(X,f)$ is pre-injective and that $(X,f)$ has the \emph{Myhill property} if every pre-injective endomorphism of $(X,f)$ is surjective. We say that the dynamical system $(X,f)$ has the \emph{Moore-Myhill property}, or that it satisfies the \emph{Garden of Eden theorem}, if $(X,f)$ has both the Moore and the Myhill properties. \par A $C^1$-diffeomorphism $f$ of a compact $C^r$-differentiable ($r \geq 1$) manifold $M$ is called an \emph{Anosov diffeomorphism} if the tangent bundle of $M$ splits as a direct sum $TM = E_s \oplus E_u$ of two invariant subbundles $E_s$ and $E_u$ such that, with respect to some (or equivalently any) Riemannian metric on $M$, the differential $df$ is uniformly contracting on $E_s$ and uniformly expanding on $E_u$ (see~\cite{smale}, \cite{brin-stuck}, \cite{dgs_ergodic-theory}, \cite{kh-modern-theory-ds}, \cite{shub-global-stability}). \par Our main result is the following. \begin{theorem}[Garden of Eden theorem for toral Anosov diffeomorphisms] \label{t:anosov-torus} Let $f$ be an Anosov diffeomorphism of the $n$-dimensional torus $\T^n$. Then the dynamical system $(\T^n,f)$ has the Moore-Myhill property. In other words, if $\tau \colon \T^n \to \T^n$ is a continuous map commuting with $f$, then $\tau$ is surjective if and only if the restriction of $\tau$ to each homoclinicity class of $f$ is injective. \end{theorem} The paper is organized as follows. In Section 2, we fix notation and present some background material on dynamical systems. In Section 3, we establish Theorem~\ref{t:anosov-torus}. The proof uses two classical results in the theory of hyperbolic dynamical systems. The first one is the Franks-Manning theorem~\cite{franks}, \cite{manning}, which states that any Anosov diffeomorphism on $\T^n$ is topologically conjugate to a hyperbolic toral automorphism. 
The second is a theorem due to Walters~\cite{walters} which asserts that all endomorphisms of a hyperbolic toral automorphism are affine. This allows us to reduce the proof to an elementary question in linear algebra. In the final section, we discuss some examples and give an extension of the Myhill implication of the Garden of Eden theorem to topologically mixing basic sets of Axiom A diffeomorphisms. \section{Background} \label{sec:background} In this section, we review some basic facts about dynamical systems. For more details, the reader is referred to the monographs \cite{brin-stuck}, \cite{dgs_ergodic-theory}, \cite{kh-modern-theory-ds}, \cite{lind-marcus}, and~\cite{shub-global-stability}. \subsection{Dynamical systems} Throughout this paper, by a \emph{dynamical system}, we mean a pair $(X,f)$, where $X$ is a compact metrizable space and $f \colon X \to X$ is a homeomorphism. Sometimes, we shall simply write $f$ or $X$ instead of $(X,f)$ if there is no risk of confusion. We denote by $d$ a metric on $X$ that is compatible with the topology. \par The \emph{orbit} of a point $x \in X$ is the set $\{f^n(x) : n \in \Z\} \subset X$. The point $x$ is called \emph{periodic} if its orbit is finite. A subset $Y \subset X$ is said to be \emph{invariant} if $f(Y) = Y$. If $Y \subset X$ is an invariant subset, we denote by $f\vert_Y$ the restriction of $f$ to $Y$, i.e., the map $f\vert_Y \colon Y \to Y$ given by $f\vert_Y(y) := f(y)$ for all $y \in Y$. \par One says that the dynamical systems $(X,f)$ and $(Y,g)$ are \emph{topologically conjugate} if there exists a homeomorphism $\varphi \colon X \to Y$ such that $\varphi \circ f = g \circ \varphi$. \subsection{Homoclinicity} Two points $x, y \in X$ are called \emph{homoclinic} with respect to $f$ (or $f$-\emph{homoclinic}) if one has $d(f^n(x),f^n(y)) \to 0$ as $|n| \to \infty$. Homoclinicity is an equivalence relation on $X$. By compactness, this equivalence relation is independent of the choice of the metric $d$.
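In the special case $X = A^\Z$ and $f = \sigma$ recalled in the introduction, a compatible metric is $d(x,y) = 2^{-\min\{|i| \,:\, x_i \neq y_i\}}$, and two almost-equal sequences are $\sigma$-homoclinic because shifting pushes their finite disagreement set away from the origin. The following sketch is an illustration only (it approximates the metric on a finite window; the function names are ours):

```python
def d(x, y, window=200):
    # d(x, y) = 2^{-min{|i| : x_i != y_i}}, approximated on a finite window.
    # Sequences are modelled as functions Z -> A.
    diffs = [abs(i) for i in range(-window, window + 1) if x(i) != y(i)]
    return 2.0 ** (-min(diffs)) if diffs else 0.0

def shift(x, n):
    # n-th power of the shift: (sigma^n x)_i = x_{i-n}
    return lambda i: x(i - n)

# Two almost-equal sequences: they differ only at coordinate 0.
x = lambda i: 0
y = lambda i: 1 if i == 0 else 0

# d(sigma^n x, sigma^n y) = 2^{-|n|} -> 0 as |n| -> infinity.
distances = [d(shift(x, n), shift(y, n)) for n in (0, 5, -5, 20)]
```

Since $x$ and $y$ differ only at coordinate $0$, the disagreement of $\sigma^n x$ and $\sigma^n y$ sits at coordinate $n$, so the successive distances are exactly $2^{-|n|}$.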
\begin{proposition} \label{p:periodic-homoclinic} Let $(X,f)$ be a dynamical system. Suppose that $x$ and $y$ are periodic points of $f$. If $x$ and $y$ are $f$-homoclinic, then $x = y$. \end{proposition} \begin{proof} Since $x$ and $y$ are periodic, there are integers $m, n \geq 1$ such that $f^n(x) = x$ and $f^m(y) = y$. If $x$ and $y$ are $f$-homoclinic, then, given any $\varepsilon > 0$, we have $d(x,y) = d(f^{kmn}(x),f^{kmn}(y)) < \varepsilon$ for $k$ large enough. This implies $x = y$. \end{proof} \begin{proposition} \label{p:inamge-homoclinic-equiv} Let $(X,f)$ and $(Y,g)$ be two dynamical systems. Suppose that $\psi \colon X \to Y$ is a continuous map such that $\psi \circ f = g \circ \psi$. If two points in $X$ are $f$-homoclinic, then their images under $\psi$ are $g$-homoclinic. \end{proposition} \begin{proof} Suppose that the points $x,y \in X$ are $f$-homoclinic. Let $d$ (resp.~$d'$) be a metric on $X$ (resp.~$Y$) that is compatible with the topology. Then, we have that $$ d'(g^n(\psi(x)),g^n(\psi(y))) = d'(\psi(f^n(x)),\psi(f^n(y))) \to 0 $$ as $|n| \to \infty$ since $d(f^n(x),f^n(y)) \to 0$ as $|n| \to \infty$ and $\psi$ is uniformly continuous. This shows that the points $\psi(x)$ and $\psi(y)$ are $g$-homoclinic. \end{proof} \subsection{Hyperbolic toral automorphisms} Consider the $n$-dimensional torus $\T^n := \R^n/\Z^n$. For $x \in \R^n$, we write $\overline{x} := x + \Z^n \in \T^n$. \par Let $\M_n(\Z)$ denote the ring of $n \times n$ matrices with integral entries. Every matrix $A \in \M_n(\Z)$ induces a differentiable group endomorphism $f_A \colon \T^n \to \T^n$ given by $f_A(\overline{x}) = \overline{Ax}$ for all $x \in \R^n$. One says that $f_A$ is the \emph{toral endomorphism} associated with $A$. \par The group of invertible elements of $\M_n(\Z)$ is the group $\GL_n(\Z)$ of $n \times n$ matrices with integral entries and determinant $\pm 1$.
If $A \in \GL_n(\Z)$, then $f_A$ is a differentiable automorphism of $\T^n$ and one says that $f_A$ is the \emph{toral automorphism} associated with $A$. If $f$ is a toral automorphism of $\T^n$, the homoclinicity class of $\overline{0}$ is a subgroup of $\T^n$, called the \emph{homoclinicity group} of $f$, and denoted by $\Delta(f)$ (cf.~\cite{lind-schmidt}). Note that two points $p,q \in \T^n$ are $f$-homoclinic if and only if $p - q \in \Delta(f)$. Observe also that every point in $\Q^n/\Z^n$ is $f$-periodic so that $\Delta(f) \cap \Q^n/\Z^n = \{\overline{0}\}$ by Proposition~\ref{p:periodic-homoclinic}. \par A matrix $A \in \GL_n(\Z)$ is called \emph{hyperbolic} if its complex spectrum does not meet the unit circle. A diffeomorphism $f$ of $\T^n$ is called a \emph{hyperbolic automorphism} if there is a hyperbolic matrix $A \in \GL_n(\Z)$ such that $f = f_A$. \par There are hyperbolic automorphisms on any torus of dimension $n \geq 2$ and every hyperbolic automorphism is Anosov. On the other hand, by the Franks-Manning theorem (cf.~\cite{franks} and \cite{manning}), every Anosov diffeomorphism of $\T^n$ is topologically conjugate to a hyperbolic automorphism. \section{Proof of the main result} \label{sec:proof} \begin{proof}[Proof of Theorem~\ref{t:anosov-torus}] By the Franks-Manning theorem mentioned above, we can assume that $f$ is a hyperbolic automorphism of $\T^n$. Let $A \in \GL_n(\Z)$ be a hyperbolic matrix such that $f = f_A$. Let $\Delta(f) \subset \T^n$ denote the homoclinicity group of $f$. \par Let $\tau \colon \T^n \to \T^n$ be a continuous map commuting with $f$. Since $f$ is a hyperbolic automorphism of $\T^n$ and $\tau$ commutes with $f$, it follows from \cite[Theorem~2]{walters} that $\tau$ is an affine toral endomorphism, that is, there is a matrix $B \in \M_n(\Z)$ and $c \in \R^n$ such that $\tau(\overline{x}) = \overline{Bx +c}$ for all $x \in \R^n$. 
Note that the map $\overline{x} \mapsto \tau(\overline{x}) - \overline{c}$ is a group endomorphism of $\T^n$. \par Suppose first that $\tau$ is surjective. We claim that $\det(B) \not= 0$. Indeed, otherwise, the image of $\R^n$ under the affine map $x \mapsto Bx+c$ would be an affine subspace $L \subset \R^n$ with empty interior and we would deduce from the Baire category theorem that $L + \Z^n \subsetneqq \R^n$, which would contradict the surjectivity of $\tau$. Thus, we have $\det(B) \not= 0$ and hence $B \in \GL_n(\Q)$. \par Let $x, y \in \R^n$ be such that the points $\overline{x}$ and $\overline{y}$ are $f$-homoclinic and satisfy $\tau(\overline{x}) = \tau(\overline{y})$. We then have $B(x - y) \in \Z^n$ and hence $x - y \in \Q^n$. This implies that the point $\overline{x - y}$ is $f$-periodic. On the other hand, since the points $\overline{x}$ and $\overline{y}$ are $f$-homoclinic, we have that $\overline{x - y} = \overline{x} - \overline{y} \in \Delta(f)$. By applying Proposition~\ref{p:periodic-homoclinic}, we deduce that $\overline{x} - \overline{y} = \overline{0}$ and therefore $\overline{x} = \overline{y}$. This shows that $\tau$ is pre-injective and hence that $(\T^n,f)$ has the Moore property. \par It remains to show that $(\T^n,f)$ has the Myhill property. So, let us assume now that $\tau$ is pre-injective. Since $f$ is a hyperbolic automorphism, it is known that the group $\Delta(f)$ is isomorphic to $\Z^n$ (see \cite[Example~3.3]{lind-schmidt}). On the other hand, since $\tau(\overline{0}) = \overline{c}$, we have that $\tau(\Delta(f)) - \overline{c} \subset \Delta(f)$ by Proposition~\ref{p:inamge-homoclinic-equiv}. As the restriction of $\tau$ to $\Delta(f)$ is injective by our pre-injectivity hypothesis, we deduce that $\tau(\Delta(f)) - \overline{c}$ is a finite-index subgroup of $\Delta(f)$. It is also known that $\Delta(f)$ is dense in $\T^n$ (see again \cite[Example~3.3]{lind-schmidt}).
Consider now the closure $C \subset \T^n$ of $\tau(\Delta(f)) - \overline{c}$. As $C$ is a closed subgroup of $\T^n$ and hence a torus, we must have $C = \T^n$ since otherwise the group $\Delta(f)$ would be contained in the union of a finite number of translates of a torus of dimension less than $n$ and then could not be dense in $\T^n$. It follows that the closure of $\tau(\Delta(f))$ is also equal to $\T^n$. By continuity, this shows that $\tau$ is surjective. Consequently, $(\T^n,f)$ has the Myhill property. \end{proof} \newpage \section{Concluding remarks} \subsection{Examples of non-injective pre-injective endomorphisms} \label{ss:injective-not-pre} Injectivity trivially implies pre-injectivity for endomorphisms of dynamical systems $(X,f)$. However, the converse is false if $f$ is an Anosov diffeomorphism of $\T^n$. Indeed, if $f$ is a hyperbolic automorphism of $\T^n$ and $m \in \Z$ satisfies $|m| \geq 2$, then multiplication by $m$ on $\T^n$ is an endomorphism of $(\T^n,f)$ that is surjective and hence pre-injective but not injective (its kernel has cardinality $|m|^n$). \par This can be generalized in the following way. Let $f_i$ be a hyperbolic automorphism of the $n_i$-dimensional torus $\T^{n_i}$, where $n_i \geq 2$ and $1 \leq i \leq k$. Then $f := f_1 \times \dots \times f_k$ is a hyperbolic toral automorphism of the $N$-torus $$ \T^N = \T^{n_1} \times \dots \times \T^{n_k}, $$ where $N := n_1 + \dots + n_k$. Let us fix some non-zero integers $m_i \in \Z$, $1\leq i \leq k$, with $|m_i| \geq 2$ for at least one $i$, and consider the endomorphism $\tau$ of $(\T^N,f)$ defined by $$ \tau(x) := (m_1 x_1,\dots,m_k x_k) $$ for all $x = (x_1,\dots,x_k) \in \T^N$. Clearly $\tau$ is surjective and hence pre-injective. On the other hand, the kernel of $\tau$ has cardinality $|m_1|^{n_1} \cdots |m_k|^{n_k}$ and therefore $\tau$ is not injective. 
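The cardinality claim is elementary to check by brute force: the kernel of multiplication by $m$ on $\T^n$ is $\big(\tfrac{1}{|m|}\Z/\Z\big)^n$. A small sketch (illustration only; the function name is ours):

```python
from fractions import Fraction
from itertools import product

def kernel_of_mult(m, n):
    # Points of T^n killed by multiplication by m: every coordinate is
    # of the form j/|m| mod 1, so the kernel has exactly |m|^n elements.
    coords = [Fraction(j, abs(m)) for j in range(abs(m))]
    return set(product(coords, repeat=n))
```

For instance, multiplication by $3$ on $\T^2$ has a kernel of size $9$, and in the product situation above the kernel sizes multiply, giving $|m_1|^{n_1} \cdots |m_k|^{n_k}$.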
\subsection{Ergodic toral automorphisms} Let $A \in \GL_n(\Z)$ and let $f_A \colon \T^n \to \T^n$ be the associated toral automorphism. It is well known (see e.g.\ \cite[Proposition 24.1]{dgs_ergodic-theory}) that $f_A$ is ergodic (with respect to the Lebesgue measure on $\T^n$) if and only if $A$ has no eigenvalues which are roots of unity. This implies in particular that every hyperbolic toral automorphism of $\T^n$ is ergodic. Observe that the proof of the Moore property for hyperbolic toral automorphisms given in Section~\ref{sec:proof} applies verbatim to ergodic toral automorphisms. Indeed, a continuous map $\tau \colon \T^n \to \T^n$ commuting with an ergodic toral automorphism $f_A$ is necessarily affine (cf. \cite[Theorem~2]{walters}). We thus have the following: \begin{proposition} Let $f \colon \T^n \to \T^n$ be an ergodic toral automorphism. Then the dynamical system $(\T^n,f)$ has the Moore property. In other words, if $\tau \colon \T^n \to \T^n$ is a surjective continuous map commuting with $f$, then the restriction of $\tau$ to each $f$-homoclinicity class is injective. \qed \end{proposition} We know that hyperbolic toral automorphisms also satisfy the Myhill property by Theorem~\ref{t:anosov-torus}. For ergodic toral automorphisms, however, the Myhill property fails to hold in general. Consider for instance the matrix \[ A = \begin{pmatrix} 0 & 0 & 0 & 1\\ -1 & 0 & 0 & 2\\ 0 & -1 & 0 & 1\\ 0 & 0 & -1 & 2 \end{pmatrix} \in \GL_4(\Z). \] Its eigenvalues are \[ \begin{split} \lambda_1 & = \frac{1}{2} - \frac{1}{\sqrt{2}} + i\frac{\sqrt{\sqrt{8}+1}}{2}\\ \lambda_2 & = \frac{1}{2} - \frac{1}{\sqrt{2}} - i\frac{\sqrt{\sqrt{8}+1}}{2}\\ \lambda_3 & = \frac{1}{2} + \frac{1}{\sqrt{2}} - \frac{\sqrt{\sqrt{8}-1}}{2}\\ \lambda_4 & = \frac{1}{2} + \frac{1}{\sqrt{2}} + \frac{\sqrt{\sqrt{8}-1}}{2} \end{split} \] and satisfy $|\lambda_1| = |\lambda_2| = 1$ and $0 < \lambda_3 < 1 < \lambda_4$.
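These spectral claims are easy to verify numerically (a sketch using NumPy, for illustration only). The characteristic polynomial is palindromic, which forces the eigenvalues to come in reciprocal pairs $(\lambda, 1/\lambda)$:

```python
import numpy as np

A = np.array([[ 0,  0,  0, 1],
              [-1,  0,  0, 2],
              [ 0, -1,  0, 1],
              [ 0,  0, -1, 2]])

# Characteristic polynomial: x^4 - 2x^3 + x^2 - 2x + 1 (palindromic, so the
# spectrum is invariant under lambda -> 1/lambda). It is irreducible over Q
# and none of its roots is a root of unity, whence f_A is ergodic.
char_poly = np.poly(A)

# Eigenvalues sorted by modulus: 0 < lambda_3 < |lambda_1| = |lambda_2| = 1 < lambda_4.
eigs = sorted(np.linalg.eigvals(A), key=abs)
```

Numerically, the moduli are approximately $0.531$, $1$, $1$, $1.883$, and the two real eigenvalues multiply to $1$.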
Since none of these eigenvalues is a root of unity, the associated toral automorphism $f_A \colon \T^4 \to \T^4$ is ergodic. On the other hand, the characteristic polynomial $\chi_A(x) = x^4 - 2x^3 + x^2 - 2x +1$ is irreducible over $\Q$. It follows that  the homoclinicity group $\Delta(f_A)$ is reduced to $0$ (cf. \cite[Example 3.4]{lind-schmidt}). Consequently, every endomorphism of the dynamical system $(\T^4,f_A)$ is pre-injective with respect to $f_A$. Since the zero endomorphism is not surjective, we conclude that $(\T^4,f_A)$ does not have the Myhill property. \subsection{The Myhill property for elementary basic sets} Let us first recall some definitions. \par Let $f$ be a homeomorphism of a compact metrizable space $X$. One says that the dynamical system $(X,f)$ is \emph{expansive} if there exists a constant $\delta > 0$ such that, for every pair of distinct points $x,y \in X$, there exists $n = n(x,y) \in \Z$ such that $d(f^n(x),f^n(y)) \geq \delta$ (here $d$ denotes any metric on $X$ that is compatible with the topology). One says that the dynamical system $(X,f)$ is \emph{topologically mixing} if for any pair of non-empty open subsets $U,V \subset X$, there is an integer $N = N(U,V) \in \Z$ such that $f^n(U)$ meets $V$ for all $n \geq N$. \par One says that the dynamical system $(X,f)$ is a \emph{factor} of the dynamical system $(Y,g)$ if there exists a continuous surjective map $\pi \colon Y \to X$ such that $\pi \circ g = f \circ \pi$. Such a map $\pi$ is then called a \emph{factor map}. A factor map $\pi \colon Y \to X$ is said to be \emph{uniformly bounded-to-one} if there is an integer $K \geq 1$ such that each $x \in X$ has at most $K$ pre-images in $Y$. \par Let $A$ be a finite set and let $\sigma$ denote the shift homeomorphism on $A^\Z$. A $\sigma$-invariant closed subset $\Sigma \subset A^\Z$ is called a \emph{subshift}. 
A subshift $\Sigma \subset A^\Z$ is said to be \emph{of finite type} if there is an integer $n \geq 1$ and a subset $P \subset A^n$ such that $\Sigma$ consists of the sequences $x = (x_i) \in A^\Z$ that satisfy $$ (x_i, x_{i + 1}, \dots , x_{i + n - 1}) \in P $$ for all $i \in \Z$. \par We have the following result. \begin{theorem} \label{t:myhill-property} Let $f \colon X \to X$ be a homeomorphism of a compact metrizable space $X$. Suppose that the dynamical system $(X,f)$ is expansive and that there exist a finite set $A$, a topologically mixing subshift of finite type $\Sigma \subset A^\Z$, and a uniformly bounded-to-one factor map $\pi \colon \Sigma \to X$. Then the dynamical system $(X,f)$ has the Myhill property. \end{theorem} \begin{proof} This is a special case of \cite[Theorem~1.1]{csc-myhyp} since the group $\Z$ is amenable and every topologically mixing subshift of finite type over $\Z$ is strongly irreducible. \end{proof} Note that there exist dynamical systems $(X,f)$ satisfying all the hypotheses of Theorem~\ref{t:myhill-property} that do not have the Moore property. An example of such a dynamical system is provided by the \emph{even subshift}, that is, the subshift $X \subset \{0,1\}^\Z$ consisting of all bi-infinite sequences of $0$s and $1$s with an even number of $0$s between any two $1$s. Indeed, if $\Sigma \subset \{0,1\}^\Z$ denotes the \emph{golden subshift}, that is, the subshift consisting of all bi-infinite sequences of $0$s and $1$s with no consecutive $1$s, it is known that $\Sigma$ is a topologically mixing subshift of finite type and that there is a factor map $\pi \colon \Sigma \to X$ such that each configuration in $X$ has at most $2$ pre-images in $\Sigma$ (see e.g. \cite[Example~4.1.6]{lind-marcus}). Thus, the even subshift satisfies the hypotheses of Theorem~\ref{t:myhill-property}. On the other hand, Fiorenzi \cite[Section~3]{fiorenzi-sofic} proved that the even subshift does not have the Moore property.
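One concrete factor map realizing this (our choice for illustration; the cited example may use a different but equivalent coding) is the $2$-block code $y_i = 1$ iff $x_i = x_{i+1} = 0$: each isolated $1$ of a golden configuration contributes a disjoint pair of adjacent $0$s to the image, so every maximal run of $0$s in the image has even length. A finite-window sanity check:

```python
from itertools import product

def golden_words(n):
    # Length-n words of the golden subshift: binary, no two consecutive 1s.
    return [w for w in product((0, 1), repeat=n)
            if all(p != (1, 1) for p in zip(w, w[1:]))]

def phi(w):
    # 2-block code: y_i = 1 iff w_i = w_{i+1} = 0 (output is one symbol shorter).
    return tuple(1 if (a, b) == (0, 0) else 0 for a, b in zip(w, w[1:]))

def gaps(y):
    # Numbers of 0s between consecutive 1s of y.
    ones = [i for i, s in enumerate(y) if s == 1]
    return [b - a - 1 for a, b in zip(ones, ones[1:])]
```

On every golden word, the gaps between consecutive $1$s of the image are even, as required for membership in the even subshift.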
\par Let $M$ be a compact $C^r$-differentiable ($r \geq 1$) manifold and $f$ a $C^1$-diffeomorphism of $M$. One says that a closed $f$-invariant subset $\Lambda \subset M$ is a \emph{hyperbolic set} if the restriction to $\Lambda$ of the tangent bundle of $M$ continuously splits as a direct sum of two invariant subbundles $E_s$ and $E_u$ such that, with respect to some (or equivalently any) Riemannian metric on $M$, the differential $df$ of $f$ is uniformly contracting on $E_s$ and uniformly expanding on $E_u$, i.e., there are constants $C > 0$ and $0 < \lambda < 1$ such that $\Vert df^n(v) \Vert \leq C\lambda^n \Vert v \Vert$ and $\Vert df^{-n}(w) \Vert \leq C \lambda^n \Vert w \Vert$ for all $x \in \Lambda$, $v \in E_s(x)$, $w \in E_u(x)$, and $n \geq 0$. Thus, $f$ is an Anosov diffeomorphism if and only if the whole manifold $M$ is a hyperbolic set for $f$. A point $x \in M$ is called \emph{non-wandering} if for every neighborhood $U$ of $x$, there is an integer $n \geq 1$ such that $f^n(U)$ meets $U$. The set $\Omega(f)$ consisting of all non-wandering points of $f$ is a closed invariant subset of $M$. One says that $f$ satisfies Smale's \emph{Axiom A} if the set $\Omega(f)$ is hyperbolic and the periodic points of $f$ are dense in $\Omega(f)$ (cf.~\cite{smale}). If $f$ is Axiom A, then $\Omega(f)$ can be uniquely written as a disjoint union of closed invariant subsets $\Omega(f) = X_1 \cup \dots \cup X_k$, such that the restriction of $f$ to each $X_i$ is topologically transitive for $1 \leq i \leq k$ (spectral decomposition theorem). These subsets $X_i$ are called the \emph{basic sets} of $(M,f)$. A basic set $X_i$ is called \emph{elementary} if the restriction of $f$ to $X_i$ is topologically mixing. \begin{corollary} \label{cor:goe-hyperbolic-set} Let $f$ be a $C^1$-diffeomorphism of a compact $C^r$-differentiable ($r \geq 1$) manifold $M$ that satisfies Axiom A. 
Suppose that $X$ is an elementary basic set of $(M,f)$ and let $f\vert_X \colon X \to X$ denote the restriction of $f$ to $X$. Then the dynamical system $(X,f\vert_X)$ has the Myhill property. \end{corollary} \begin{proof} The fact that the dynamical system $(X,f\vert_X)$ satisfies the hypothesis of Theorem~\ref{t:myhill-property} follows from the classical work of Rufus Bowen on Axiom A diffeomorphisms. The expansivity of $f\vert_X$ is shown in \cite[Lemma~3]{bowen-markov-1970}. On the other hand, Bowen used a Markov partition to show that one can find a finite set $A$ (the set of rectangles of the Markov partition) and a topologically mixing subshift of finite type $\Sigma \subset A^\Z$ such that there exists a uniformly bounded-to-one factor map $\pi \colon \Sigma \to X$ (cf. \cite[Theorem~28 and Proposition~30]{bowen-markov-1970} and \cite[Proposition~10]{bowen-markov-minimal_AJM-1970}). \end{proof} It is an open question whether every Anosov diffeomorphism is topologically mixing. However, for an Anosov diffeomorphism $f \colon M \to M$, the following conditions are known to be equivalent (see e.g.~\cite[Theorem 5.10.3]{brin-stuck}): (a) $f$ is topologically mixing; (b) $f$ is topologically transitive; (c) every point in $M$ is non-wandering; (d) the periodic points of $f$ are dense in $M$. This implies in particular that every topologically mixing Anosov diffeomorphism $f$ of a compact manifold $M$ is Axiom A with $\Omega(f) = M$. Thus, as a particular case of Corollary~\ref{cor:goe-hyperbolic-set}, we get the following result, which extends the Myhill part of Theorem~\ref{t:anosov-torus} to all topologically mixing Anosov diffeomorphisms. \begin{corollary} \label{cor:goe-anosov} If $f$ is a topologically mixing Anosov diffeomorphism of a compact $C^r$-differentiable ($r \geq 1$) manifold $M$, then the dynamical system $(M,f)$ has the Myhill property. 
\qed \end{corollary} \subsection{Zero-dimensional basic sets} If $X$ is a zero-dimensional basic set of an Axiom A diffeomorphism $f$, it follows from \cite[Theorem~6.6]{bowen-top-entropy-axiomA} that $(X,f\vert_X)$ is topologically conjugate to an irreducible subshift of finite type. As every irreducible subshift of finite type has the Moore-Myhill property by \cite[Corollary~2.19]{fiorenzi-sofic}, we deduce that $(X,f\vert_X)$ has the Moore-Myhill property. \par In view of this observation and of Theorem~\ref{t:anosov-torus}, it is very tempting to conjecture that the restriction of an Axiom A diffeomorphism to a (possibly non-elementary) basic set always has the Moore-Myhill property. \bibliographystyle{siam}
\section{Introduction} Semantic representation is an essential part of NLP. For this reason, several semantic representation paradigms have been proposed. Among them we find PropBank \cite{palmer2005proposition} and FrameNet Semantics \cite{baker1998berkeley}, Abstract Meaning Representation (AMR) \cite{banarescu2013abstract}, Universal Decompositional Semantics \cite{white2016universal} and Universal Conceptual Cognitive Annotation (UCCA) \cite{abend2013universal}. These constantly improving representations, along with the advances in semantic parsing, have proven to be beneficial in many NLU tasks such as Question Answering \cite{inproceedings_QA}, text summarization \cite{abstractive_summary}, dialog systems \cite{inproceedings_slu_srl}, information extraction \cite{W13-3820} and machine translation \cite{Liu:2010:SRF:1873781.1873862}. UCCA is a cross-lingual semantic representation scheme that has demonstrated applicability in English, French and German (with pilot annotation projects on Czech, Russian and Hebrew). Despite the newness of UCCA, it has proven useful for defining semantic evaluation measures in text-to-text generation and machine translation \cite{birch2016hume}. UCCA represents the semantics of a sentence using directed acyclic graphs (DAGs), where terminal nodes correspond to text tokens, and non-terminal nodes to higher-level semantic units. Edges are labelled, indicating the role of a child in relation to its parent. UCCA parsing is a recent task, and since UCCA has several unique properties, adapting syntactic parsers or parsers from other semantic representations is not straightforward. The current state-of-the-art parser, TUPA \cite{hershcovich2017transition}, uses transition-based parsing to build UCCA representations. Building on previous work on FrameNet semantic parsing \cite{marzinotto:calor,marzinotto:hal-01731385}, we chose to perform UCCA parsing using sequence tagging methods along with a graph decoding policy.
To do this, we propose a recursive strategy in which we perform a first inference pass on the sentence to extract the main scenes and links, and then recursively apply our model with a masking mechanism at the input in order to feed information about the previous parsing decisions. \section{Model} \label{sec:model} Our system consists of a sequence tagger that is first applied on the sentence to extract the main scenes and links, and is then recursively applied on the extracted elements to build the semantic graph. At each step of the recursion we use a masking mechanism to feed information about the previous stages into the model. In order to convert the sequence labels into nodes of the UCCA graph, we also apply a decoding policy at each stage. Our tagger is implemented using a deep bi-directional GRU (\emph{biGRU}). This simple architecture is frequently used in semantic parsers across different representation paradigms. Besides its flexibility, it is a powerful model, with close to state-of-the-art performance on both PropBank \cite{he2017deep} and FrameNet semantic parsing \cite{SoinFrameParsing, marzinotto:hal-01731385}. More precisely, the model consists of a 4-layer bi-directional Gated Recurrent Unit (GRU) with highway connections \cite{SrivastavaGS15}. Our model uses a rich set of features, including syntactic, morphological, lexical and surface features, which have been shown to be useful in language-abstracted representations. The list is given below: \begin{itemize} \setlength\itemsep{-4pt} \item Word embeddings of 300 dimensions \footnote{Obtained from https://github.com/facebookresearch/MUSE}. \item Syntactic dependencies of each token\footnote{\label{note1} Using Universal Dependencies categories. }. \item Part-of-speech and morphological features such as gender, number, voice and degree\footnotemark[2]. \item Capitalization and word length encoding. \item Prefixes and suffixes of 2 and 3 characters. \item A language indicator feature.
\item Boolean indicators of idioms and multi-word expressions, detailed in section \ref{sec:expressions}. \item Masking mechanism, which indicates, for a given node in the graph, the tokens within the span as well as the arc label between the node and its parent. See details in section \ref{sec:masking}. \end{itemize} Except for words, where we use pre-trained embeddings, we use randomly initialized embedding layers for categorical features. \begin{figure*}[htbp] \centering \includegraphics[width=0.95\linewidth]{images/recursive_parsing.pdf} \caption{ Masking mechanism through recursive calls. \texttt{Step 1} parses the sentence to extract parallel scenes (H) and links (L). Then \texttt{Steps 2.A} and \texttt{2.B} use different masks to parse these scenes and extract arguments (A) and processes (P), which are recursively parsed until terminal nodes are reached.} \label{fig:hwlstm} \vspace{-4mm} \end{figure*} \subsection{Masking Mechanism} \label{sec:masking} We introduce an original masking mechanism in order to feed information about the previous parsing stages into the model. During parsing, we first perform an initial inference step to extract the main scenes and links. Then, for each resulting node, we build a new input which is essentially the same, but with a categorical sequence masking feature. For the input tokens in the node span, this feature is equal to the label of the arc between the node and its parent. Outside of the node span, this mask is equal to \texttt{O}. A diagram of this masking process is shown in figure \ref{fig:hwlstm}. The process continues and the model recursively extracts the inner semantic structures (the node's children) in the graph, until the terminal nodes are reached. To train such a model, we build a new training corpus in which the sentences are repeated several times. More precisely, a sentence appears $N$ times ($N$ being the number of non-terminal nodes in the UCCA graph), each time with a different mask.
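The construction of this masked training corpus can be sketched as follows (an illustrative sketch with hypothetical \texttt{Node} fields and function names, not the authors' actual code):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    arc_label: str                  # label of the edge to the parent, e.g. "H", "A", "P"
    span: Tuple[int, int]           # [start, end) token indices covered by the node
    children: List["Node"] = field(default_factory=list)

def mask_for(node, n_tokens):
    # Inside the node span the mask equals the node's arc label; outside it is "O".
    start, end = node.span
    return ["O"] * start + [node.arc_label] * (end - start) + ["O"] * (n_tokens - end)

def training_examples(root, tokens):
    # One example per non-terminal node: (tokens, mask, children to predict).
    stack, examples = [root], []
    while stack:
        node = stack.pop()
        if node.children:
            examples.append((tokens, mask_for(node, len(tokens)), node.children))
            stack.extend(node.children)
    return examples
```

A sentence whose gold graph has $N$ non-terminal nodes thus yields $N$ masked copies, matching the corpus construction described above (the root label \texttt{"TOP"} used below is a hypothetical placeholder).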
\subsection{Multi-Task UCCA Objective} Along with the UCCA-XML graph representations, a simplified tree representation in CoNLL format was also provided. Our model combines both representations using a multitask objective with two tasks. \texttt{TASK1} consists in predicting, for a given node and its corresponding mask, the children and their arc labels. \texttt{TASK1} encodes the children spans using a BIO scheme. \texttt{TASK2} consists in predicting the CoNLL simplified UCCA structure of the sentence. More precisely, \texttt{TASK2} is a sequence tagger that predicts the UCCA-CoNLL function of each token. \texttt{TASK2} is not used for inference purposes. It only serves as a support that helps the model extract relevant features, allowing it to model the whole sentence even when parsing small pre-terminal nodes. \subsection{Label Encoding} We have previously stated that \texttt{TASK1} uses BIO-encoded labels to model the structure of the children of each node in the semantic graph. In some rare cases, the BIO encoding scheme is not sufficient to model the interaction between parallel scenes. For example, when one of two parallel scenes appears as a clause inside the other, BIO encoding does not allow us to determine whether the last part of the sentence belongs to the first scene or to the clause. Despite this issue, prior experiments testing more complete label encoding schemes (BIEO, BIEOW) showed that BIO outperforms the other schemes on the validation sets. \subsection{Graph Decoding} During the decoding phase, we convert the BIO labels into graph nodes. To do so, we add a few constraints to ensure that the outputs are feasible UCCA graphs that respect the sentence's structure: \begin{itemize} \setlength\itemsep{-4pt} \item We merge parallel scenes (H) that have neither a verb nor an action noun into the nearest previous scene that has one.
\item Within each parallel scene, we force the existence of one and only one \texttt{State} (S) or \texttt{Process} (P) by selecting the token with the highest probability of \texttt{State} or \texttt{Process}. \item For scenes (H) and arguments (A), we do not allow multi-word expressions (MWEs) and chunks to be split into different graph nodes. If the boundary between two segments lies inside a chunk or MWE, the segments are merged. \end{itemize} \subsection{Remote Edges} Our approach easily handles remote edges. We consider remote arguments as those detected outside the parent's node span (see \texttt{REM} in Fig.\ref{fig:hwlstm}). Our earlier models showed low recall on remotes. To fix this, we introduced a detection threshold on the output probabilities. This increased the recall at the cost of some precision. The detection threshold was tuned on the validation set. \section{Data} \subsection{UCCA Task Data} In table \ref{tab:data} we show the number of annotations for each language and domain. Our objective is to build a model that generalizes to the French language despite having only 15 training samples. \begin{table}[t!]
\centering \begin{tabular}{lrrr} Corpus & Train & Dev & Test \\ \hline English Wiki & 4113 & 514 & 515 \\ English 20K & - & - & 492 \\ German 20K & 5211 & 651 & 652 \\ French 20K & 15 & 238 & 239 \\ \end{tabular} \caption{number of UCCA annotated sentences in the partitions for each language and domain} \label{tab:data} \vspace{-4mm} \end{table} \begin{table*}[!htbp] \centering \begin{tabular}{l|ccc|ccc|ccc|ccc|} & \multicolumn{3}{c}{Ours Labeled} & \multicolumn{3}{|c|}{Ours Unlabeled} & \multicolumn{3}{|c|}{TUPA Labeled} & \multicolumn{3}{c|}{TUPA Unlabeled}\\ \cline{2-13} Open Tracks & \multicolumn{1}{|p{0.6cm}|}{\centering Avg \\ F1} & \multicolumn{1}{|p{0.6cm}|}{\centering Prim \\ F1} & \multicolumn{1}{|p{0.6cm}|}{\centering Rem \\ F1} & \multicolumn{1}{|p{0.6cm}|}{\centering Avg \\ F1} & \multicolumn{1}{|p{0.6cm}|}{\centering Prim \\ F1} & \multicolumn{1}{|p{0.6cm}|}{\centering Rem \\ F1} & \multicolumn{1}{|p{0.6cm}|}{\centering Avg \\ F1} & \multicolumn{1}{|p{0.6cm}|}{\centering Prim \\ F1} & \multicolumn{1}{|p{0.6cm}|}{\centering Rem \\ F1} & \multicolumn{1}{|p{0.6cm}|}{\centering Avg \\ F1} & \multicolumn{1}{|p{0.6cm}|}{\centering Prim \\ F1} & \multicolumn{1}{|p{0.6cm}|}{\centering Rem \\ F1} \\ \hline Dev English Wiki & \textbf{70.8} & 71.3 & 58.7 & 82.5 & 83.8 & 37.5 & \textbf{74.8} & 75.3 & 51.4 & 86.3 & 87.0 & 51.4 \\ Dev German 20K & \textbf{74.7} & 75.4 & 40.5 & 87.4 & 88.6 & 40.9 & \textbf{79.2} & 79.7 & 58.7 & 90.7 & 91.5 & 59.0 \\ Dev French 20K & \textbf{\underline{63.6}} & 64.4 & 19.0 & 78.9 & 79.6 & 20.5 & \textbf{\underline{51.4}} & 52.3 & 1.6 & 74.9 & 76.2 & 1.6 \\ \hline Test English Wiki & \textbf{68.9} & 69.4 & 42.5 & 82.3 & 83.1 & 42.8 & \textbf{73.5} & 73.9 & 53.5 & 85.1 & 85.7 & 54.3 \\ Test English 20K & \textbf{66.6} & 67.7 & 24.6 & 82.0 & 83.4 & 24.9 & \textbf{68.4} & 69.4 & 25.9 & 82.5 & 83.9 & 26.2 \\ Test German 20K & \textbf{74.2} & 74.8 & 47.3 & 87.1 & 88.0 & 47.6 & \textbf{79.1} & 79.6 & 59.9 & 90.3 & 91.0 & 60.5\\ Test French 
20K & \textbf{\underline{65.4}} & 66.6 & 24.3 & 80.9 & 82.5 & 25.8 & \textbf{\underline{48.7}} & 49.6 & 2.4 & 74.0 & 75.3 & 3.2 \\ \hline \end{tabular} \caption{Our model vs.\ the TUPA baseline on each open track} \label{tab:results_pt1} \vspace{-2mm} \end{table*} \begin{table*}[!htbp] \centering \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|} Tracks & D & C & N & E & F & G & L & H & A & P & U & R & S \\ \hline EN Wiki & 64.3 & 71.4 & 68.5 & 69.6 & 76.7 & 0.0 & 71.4 & 61.3 & 60.0 & 64.0 & 99.7 & 89.2 & 25.1 \\ EN 20K & 47.2 & 75.2 & 62.5 & 72.3 & 71.5 & 0.2 & 57.9 & 49.5 & 55.7 & 69.8 & 99.7 & 83.2 & 19.5 \\ DE 20K & 69.4 & 83.8 & 57.7 & 80.5 & 83.8 & 59.2 & 68.4 & 62.2 & 67.5 & 68.9 & 97.1 & 86.9 & 25.9 \\ FR 20K & 46.1 & 76.0 & 58.9 & 71.2 & 53.3 & 4.8 & 59.4 & 50.4 & 52.8 & 67.6 & 99.6 & 83.5 & 16.9 \\ \hline \end{tabular} \caption{Our model's fine-grained F1 by label on the test open tracks} \label{tab:results_pt2} \vspace{-4mm} \end{table*} When we analyse the data in detail, we observe several tokenization errors, especially in the French corpus. These errors propagate to the POS tagging and dependency parsing as well. For this reason, we retokenized and parsed all the corpora using an enriched version of UDPipe that we trained ourselves \cite{udpipe:2017} on treebanks from Universal Dependencies\footnote{\url{https://universaldependencies.org/}}. For French, we enriched the treebank with XPOS tags from our lexicon. Finally, since tokenization is pre-established in the UCCA corpus, we projected the improved POS tags and dependency parses onto the original tokenization of the task. \subsection{Supplementary lexicon} \label{sec:expressions} We observed that a major difficulty in UCCA parsing is the analysis of idioms and phrases. The model's unawareness of these expressions, which are mostly used as links between scenes, misleads it during the early stages of inference, and the errors propagate through the graph.
To boost the performance of our model when detecting links and parallel scenes, we developed an internal list of about 500 expressions for each language. These lists include prepositional, adverbial and conjunctive expressions and are used to compute Boolean features indicating which words in the sentence are part of an expression. \subsection{Multilingual Training} This model uses multilingual word embeddings trained with fastText \cite{bojanowski2017enriching} and aligned with MUSE \cite{conneau2017word} in order to ease cross-lingual training. In prior experiments we introduced an adversarial objective similar to \cite{D17-1302, naacl-advlearning} to build a language-independent representation. However, the language imbalance in the training data did not allow us to take advantage of this technique. Hence, we simply merged the training data from the different languages. \section{Experiments} We focus on obtaining the model that best generalizes to French. We trained our model for 50 epochs and selected the best epoch on the validation set. In our experiments we did not use any product-of-experts or bagging technique, and we did not run any hyperparameter optimization. We trained several models on training corpora composed of different language combinations. We obtained our best model using the training data for all the languages: this model, \texttt{FR+DE+EN}, achieved 63.6\% avg.\ F1 on the French validation set, compared to 63.1\% for \texttt{FR+DE}, 62.9\% for \texttt{FR+EN} and 50.8\% for \texttt{FR} alone. \subsection{Main Results} In Table \ref{tab:results_pt1} we report the performance of our model on all the open tracks, alongside the TUPA baseline results for comparison. Our model finishes 4th in the French open track with an average F1 score of 65.4\%, very close to the 3rd-place score of 65.6\%.
For languages with larger training corpora, our model did not outperform the monolingual TUPA. \subsection{Error Analysis} In Table \ref{tab:results_pt2} we give the performance by arc type. We observe that the main performance bottleneck is parallel scene segmentation (H). Due to our recursive parsing approach, this kind of error is particularly harmful, because scene segmentation errors at the early steps of the parsing may induce errors in the rest of the graph. To verify this, we used the validation set to compare performance on mono-scene sentences (with no potential scene segmentation problems) with performance on multi-scene sentences. For the French track we obtained 67.2\% avg.\ F1 on the 114 mono-scene sentences compared to 61.9\% avg.\ F1 on the 124 multi-scene sentences. \section{Conclusions} We described an original approach that recursively builds the UCCA semantic graph using a sequence tagger along with a masking mechanism and a decoding policy. Even though this approach did not yield the best results in the UCCA task, we believe that our recursive, mask-based parsing can be helpful for low-resource languages. Moreover, we believe that this model could be further improved by introducing a global criterion and by performing further hyperparameter tuning.
\section{Introduction} \label{sec:intro} In the last twenty years, large and systematic digital imaging surveys of the sky have revolutionized astronomical exploration. Beginning with the Sloan Digital Sky Survey \citep[SDSS;][]{York2000}, ground-based surveys like Pan-STARRS1 \citep[PS1;][]{Chambers2016}, the Dark Energy Survey \citep[DES;][]{Abbott2017}, the Legacy Surveys \citep[LS;][]{Dey2019}, the DECam Plane Survey \citep[DECaPS;][]{Schlafly2018}, the Zwicky Transient Facility \citep[ZTF;][]{Bellm2015}, and others have mapped the sky at multiple bands, epochs and cadences. The wealth of data from these large surveys has enabled a wide variety of discoveries and expanded our ability to explore the universe with large statistical samples. The surveys have yielded, for example, dozens of new Milky Way satellite dwarf galaxies \citep[by DES; e.g.,][]{Bechtol2015,Drlica-Wagner2015}, systematic searches for variable stars in the Galactic halo \citep[by PS1;][]{Sesar2017b}, troves of supernovae in distant galaxies \citep[e.g.,][]{Perley2020}, and much more. In the near future, the Legacy Survey of Space and Time \citep[LSST;][]{Ivezic2008} with the Rubin Observatory will further revolutionize astronomy by mapping the southern skies every three nights for ten years. \begin{figure*}[!ht] \begin{center} \includegraphics[width=1.0\hsize,angle=0]{f1.jpg} \end{center} \caption{Density of the 3.9 billion NSC objects on the sky in Galactic coordinates. The higher densities toward the Galactic midplane and bulge as well as the LMC and SMC are readily apparent. The density is a combination of the true density of objects as well as the particular exposure times of the various observing programs.} \label{fig_bigmap} \end{figure*} A great resource that is often overlooked is the large wealth of public imaging data that exists in national observatory data archives. These data are inhomogeneous, including both large systematic surveys and smaller PI-driven programs.
Hence, significant effort is required in order to make the entire dataset useful to the community as a combined ``survey'', with uniform reductions and calibrations suitable for astronomical exploration. Similar efforts have been undertaken with other facilities, resulting in, e.g., the Chandra Source Catalog \citep{Evans2010} and Hubble Source Catalog \citep{Whitmore2016}. However, the variable observing conditions and large variety of telescope and instrument combinations have made this effort more formidable for ground-based optical and NIR imaging archival data. The NOIRLab Source Catalog (NSC)\footnote{Formerly known as the NOAO Source Catalog.} is an effort to create such a uniformly processed dataset using the images in the NOIRLab Astro Data Archive\footnote{\url{https://astroarchive.noao.edu/}}. The first data release \citep[NSC DR1;][]{Nidever2018} consisted of 2.9 billion objects with 34 billion individual measurements from over 195,000 images. Here we present the second public data release of the NSC (NSC DR2). It catalogs 3.9 billion unique sources, representing the largest single astronomical source catalog to date. The 68 billion individual measurements from 412,116 images more than double the total data volume from NSC DR1. Besides more data, NSC DR2 includes some important processing updates. We use recently released wide-area catalogs (ATLAS-Refcat2, \citealt{Tonry2018}; and Skymapper DR1, \citealt{Wolf2018}) to improve our photometric calibration in the south, and more accurate extinction estimates \citep[e.g., the RJCE method;][]{Majewski2011} for the smaller number of model magnitudes that we still employ for zero point estimates. The Gaia DR2 \citep{Gaia2016,GaiaDR2} astrometry and proper motion corrections are used to obtain improved astrometric calibration of the images which, in turn, produces more accurate NSC proper motion measurements. A more sophisticated algorithm is used to group individual measurements into ``objects'' using DBSCAN clustering.
In addition, eight photometric variability metrics are computed for each object and 10$\sigma$ outliers are automatically flagged. These enhancements improve the precision of the data, reduce systematics, and add more valuable information that will make it easier for users to exploit the data for a variety of scientific goals. The paper is laid out as follows. The imaging dataset is described in Section \ref{sec:data} while a description of the data reduction and processing steps is given in Section \ref{sec:phot}. A brief discussion of caveats is presented in Section \ref{sec:caveats}. The overall catalog and the achieved performance and reliability are discussed in Section \ref{sec:performance}. A number of science use cases of NSC DR2 are presented in Section \ref{sec:science}. Finally, Section \ref{sec:summary} gives a brief summary. \begin{center} \begin{figure}[t] \includegraphics[width=1.0\hsize,angle=0]{f2.pdf} \caption{Histogram of rms scatter around the astrometric fit per CCD (averaged across the exposure) for DR1 and DR2. The use of Gaia DR2 astrometry, including proper motion corrections, reduced the median scatter from 21.3 mas in the NSC DR1 to 16.7 mas in the NSC DR2. The astrometric scatter is now more tightly peaked around 14 mas.} \label{fig_astrms} \end{figure} \end{center} \section{Dataset} \label{sec:data} All sources in NSC DR2 are measured from public images drawn from the NOIRLab Astro Data Archive\footnote{\url{https://astroarchive.noao.edu/}}. The majority of the images used in NSC DR2 are from the CTIO-4m Blanco + DECam (340,952 exposures). In addition, there are 41,561 exposures from KPNO-4m Mayall + Mosaic3 (the majority from the Mayall $z$-band Legacy Survey; MzLS; \citealt{Dey2016}) and 29,603 exposures from the Steward Observatory Bok-2.3m + 90Prime (from the Beijing-Arizona Sky Survey; BASS; \citealt{Zou2017,Zou2018,Zou2019}).
A large fraction of the images were obtained by the Dark Energy Survey \citep{Abbott2017} and the Legacy Surveys \citep{Dey2019} imaging projects. \section{Reduction and Photometry} \label{sec:phot} The reduction and analysis tools used are essentially the same as those used for NSC DR1 \citep[see][for details]{Nidever2018}. We provide a brief summary here and describe the few changes. The NSC uses images processed by the NOAO Community Pipelines for instrumental calibration (\citealt{Valdes2014}; Valdes et al., in preparation)\footnote{\url{https://www.noao.edu/noao/staff/fvaldes/CPDocPrelim}}. Source Extractor\footnote{\url{https://www.astromatic.net/software/sextractor}} \citep{Bertin1996} is used to perform source detection, aperture photometry, and morphological parameter estimation from the images. Finally, custom software\footnote{\url{https://github.com/noaodatalab/noaosourcecatalog}} (written in Python and IDL) is used to perform photometric and astrometric calibration, to spatially cluster sources measured on different images into unique objects, and to measure their mean object properties. The NSC processing is split into three main steps: (1) measurement, (2) calibration, and (3) combination. These steps are described in more detail below. \subsection{Measurement} \label{subsec:measure} The measurement step includes detection of objects in the images, the measurement of position and aperture photometry, and the measurement of morphological parameters. We use Source Extractor (SExtractor) with the same setup as described in \citet{Nidever2018}. For exposures taken (and publicly available) prior to UT 2017 October 11 (the NSC DR1 cutoff date), we reused the SExtractor catalogs produced for NSC DR1. We ran SExtractor anew on exposures taken after that date and publicly available by UT 2019 October 17 (the NSC DR2 cutoff date). SExtractor measurement catalogs for 482,630 exposures were considered for inclusion in NSC DR2.
This was later trimmed down to 412,116 after the application of quality cuts (see Section \ref{subsec:combine}). \subsection{Calibration} \label{subsec:calibrate} The second major NSC processing step is the astrometric and photometric calibration. The methods are nearly identical to those used in NSC DR1 \citep[see][for details]{Nidever2018}, with two major improvements: (1) we use Gaia DR2 proper motions in the astrometric calibration, and (2) we use Skymapper DR1 \citep{Wolf2018} and ATLAS-Refcat2 \citep{Tonry2018} to derive photometric zeropoints for southern data (i.e., where PS1 data are not available). \subsubsection{Astrometry} \label{subsubsec:astrometry} The astrometric calibration, as in NSC DR1, is performed on each exposure catalog, with linear correction terms derived using a reference catalog. In NSC DR2, Gaia DR2 was used as the reference, and the coordinates of the reference stars were precessed to the epoch of the observation using the Gaia DR2 coordinates (J2015.5) and proper motions. Robust standard deviations of the residuals of the astrometric fit are calculated for each exposure. The median rms of the astrometric residuals decreased from 21 mas in NSC DR1 to 17 mas in NSC DR2, and the distribution has become much more sharply peaked (see Fig.\ \ref{fig_astrms}). The biggest improvement is in the proper motions, which are explained further in Section \ref{sec:performance}. The rms of the {\em average} coordinates for bright stars when compared to Gaia DR2 is 7--8 mas.
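The epoch propagation of the reference-star coordinates can be sketched as follows. This snippet is illustrative only (the function name and interface are ours, not the actual NSC pipeline code), and it assumes the proper-motion offsets are small enough for a flat-sky approximation:

```python
import numpy as np

def propagate_to_epoch(ra, dec, pmra, pmdec, epoch_obs, epoch_ref=2015.5):
    """Shift reference-star coordinates from the Gaia DR2 reference epoch
    (J2015.5) to the epoch of an exposure using proper motions.

    ra, dec     : coordinates at the reference epoch [deg]
    pmra, pmdec : proper motions, pmra including the cos(dec) factor [mas/yr]
    epoch_obs   : decimal year of the observation
    """
    dt = epoch_obs - epoch_ref            # elapsed time [yr]
    mas2deg = 1.0 / 3.6e6                 # milliarcsec -> degrees
    dec_new = dec + pmdec * dt * mas2deg
    # divide by cos(dec) to convert the pmra*cos(dec) offset to a RA offset
    ra_new = ra + pmra * dt * mas2deg / np.cos(np.deg2rad(dec))
    return ra_new, dec_new
```

In practice, a library routine such as astropy's \texttt{SkyCoord.apply\_space\_motion} performs the same propagation more rigorously.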
\begin{center} \begin{figure*}[!ht] $\begin{array}{cc} \includegraphics[trim={0cm 4.9cm 2cm 1cm},clip,width=0.50\hsize,angle=0]{f3a.pdf} \includegraphics[trim={0cm 4.9cm 2cm 1cm},clip,width=0.50\hsize,angle=0]{f3b.pdf} \\ \includegraphics[trim={0cm 4.9cm 2cm 1cm},clip,width=0.50\hsize,angle=0]{f3c.pdf} \includegraphics[trim={0cm 4.9cm 2cm 1cm},clip,width=0.50\hsize,angle=0]{f3d.pdf} \\ \includegraphics[trim={0cm 4.9cm 2cm 1cm},clip,width=0.50\hsize,angle=0]{f3e.pdf} \includegraphics[trim={0cm 4.9cm 2cm 1cm},clip,width=0.50\hsize,angle=0]{f3f.pdf} \\ \includegraphics[trim={0cm 4.9cm 2cm 1cm},clip,width=0.50\hsize,angle=0]{f3g.pdf} \end{array}$ \caption{Maps of the NSC DR2 photometric rms of bright stars (with more than two measurements) for the seven $u, g, r, i, z, Y$ and {\em VR} bands on a logarithmic scale in equatorial Aitoff projection.} \label{fig_photscatter_maps} \end{figure*} \end{center} \begin{center} \begin{figure*}[!ht] $\begin{array}{cc} \includegraphics[trim={0cm 4.9cm 2cm 1cm},clip,width=0.48\hsize,angle=0]{f4a.pdf} \includegraphics[trim={0cm 4.9cm 2cm 1cm},clip,width=0.48\hsize,angle=0]{f4b.pdf} \\ \includegraphics[trim={0cm 4.9cm 2cm 1cm},clip,width=0.48\hsize,angle=0]{f4c.pdf} \includegraphics[trim={0cm 4.9cm 2cm 1cm},clip,width=0.48\hsize,angle=0]{f4d.pdf} \\ \includegraphics[trim={0cm 4.9cm 2cm 1cm},clip,width=0.48\hsize,angle=0]{f4e.pdf} \includegraphics[trim={0cm 4.9cm 2cm 1cm},clip,width=0.48\hsize,angle=0]{f4f.pdf} \\ \includegraphics[trim={0cm 4.9cm 2cm 1cm},clip,width=0.48\hsize,angle=0]{f4g.pdf} \end{array}$ \caption{Maps of the mean photometric zero point in each HEALPix relative to the mean across all exposures in a given band (in equatorial Aitoff projection). 
The airmass-dependent extinction effects per exposure and long-term temporal variations in the zero points have been removed.} \label{fig_zeropoint_maps} \end{figure*} \end{center} \begin{center} \begin{figure*}[!ht] $\begin{array}{cc} \includegraphics[trim={1.7cm 4.9cm 2cm 1cm},clip,width=0.49\hsize,angle=0]{f5a.pdf} \includegraphics[trim={1.7cm 4.9cm 2cm 1cm},clip,width=0.49\hsize,angle=0]{f5b.pdf} \\ \includegraphics[trim={1.7cm 4.9cm 2cm 1cm},clip,width=0.49\hsize,angle=0]{f5c.pdf} \includegraphics[trim={1.7cm 4.9cm 2cm 1cm},clip,width=0.49\hsize,angle=0]{f5d.pdf} \\ \includegraphics[trim={1.7cm 4.9cm 2cm 1cm},clip,width=0.49\hsize,angle=0]{f5e.pdf} \includegraphics[trim={1.7cm 4.9cm 2cm 1cm},clip,width=0.49\hsize,angle=0]{f5f.pdf} \\ \includegraphics[trim={1.7cm 4.9cm 2cm 1cm},clip,width=0.49\hsize,angle=0]{f5g.pdf} \end{array}$ \caption{Depth maps (95th percentile) for all seven $u, g, r, i, z, Y$ and {\em VR} bands in equatorial Aitoff projection.} \label{fig_depths} \end{figure*} \end{center} \subsubsection{Photometry} \label{subsubsec:photometry} While the PS1 catalog made it fairly straightforward to calibrate the majority ($grizY$ band) of the northern photometry in NSC DR2, the lack of large-scale photometric surveys and publicly available catalogs made it somewhat challenging to photometrically calibrate the southern data. We, therefore, relied on ``model magnitudes'', which are linear combinations of photometric measurements from catalogs such as 2MASS \citep{Skrutskie2006} and APASS \citep{Henden2015} that best approximated PS1 $grizY$ and SMASH $u$-band photometry. Fortunately, the release of Skymapper DR1 and the ATLAS-Refcat2 (which combine data from PS1, Skymapper DR1, ATLAS and other catalogs) made it easier to calibrate southern data in NSC DR2 and decreased our reliance on the 2MASS-APASS-derived model magnitudes. 
For exposures in $grizY$ bands with $\delta$$>$$-$29\degr, zeropoints were derived with PS1 and stars with 0.0$\leq$$(g_{\rm PS1}-i_{\rm PS1})$$\leq$3.0. For the southern ($\delta$$<$$-$29\degr) exposures in the $griz$ bands, zeropoints were derived using ATLAS-Refcat2 stars with 0.20$\lesssim$$(g_{\rm ATL}-i_{\rm ATL})$$\lesssim$0.80. For $u$-band exposures with $-$90\degr$\leq$$\delta$$<$0\degr, zeropoints were derived using Skymapper DR1 and stars with 0.80$\leq$$(G_{\rm GAIA}-J)_0$$\leq$1.1. For $VR$-band exposures, we used the average of the $r$-band (PS1 in the north and ATLAS-Refcat2 in the south) and Gaia DR2 $G$ magnitudes and stars with 0.0$\leq$$(g-i)$$\leq$3.0 to derive zeropoints. Finally, model magnitudes were used for $u$-band exposures with $\delta$$>$0$^{\circ}~$ and $Y$-band exposures with $\delta$$<$$-$29$^{\circ}~$ (see Table \ref{table_modelmags}). We improved our extinction measurements in high extinction regions by using the Rayleigh-Jeans Color Excess method \citep[RJCE;][]{Majewski2011}, which uses near- and mid-infrared photometry to derive accurate extinction values star-by-star. In low extinction regions ($|b|$$>$$16$$^{\circ}~$ and $R_{\rm LMC}$$>$$5$$^{\circ}~$ and $R_{\rm SMC}$$>$$4$$^{\circ}~$ and maximum $E(B-V)<0.2$) the SFD \citep{Schlegel1998} reddening value is used (converted to $E(J-K_{\rm s})$ with a factor of 0.453). In high extinction regions, RJCE reddening values are used with 2MASS near-infrared photometry \citep{Skrutskie2006} and mid-infrared photometry from $Spitzer$, where possible (from GLIMPSE \citealt{Benjamin2003} in the Galactic midplane and SAGE \citealt{Meixner2006} in the Magellanic Clouds), or AllWISE \citep{Cutri2013}.
The equation used with $Spitzer$ data is: \begin{equation} E(J-K_{\rm s}) = 1.377 (H-[4.5\mu]-0.08); \end{equation} and with AllWISE data is: \begin{equation} E(J-K_{\rm s}) = 1.377 (H-W2-0.05) \end{equation} Figure \ref{fig_photscatter_maps} shows the rms of photometric measurements of bright stars across the sky in each of the seven bands. The photometric precision is $\lesssim$10 mmag in all bands (except for $u$-band) across most of the sky. As in NSC DR1, the photometric scatter is higher in crowded regions like the Galactic midplane and the centers of the LMC and SMC, reaching values of $\sim$50 mmag. The precision should improve in these regions once PSF photometry is used for measurement. Figure \ref{fig_zeropoint_maps} shows the maps of mean NSC DR2 zero points with airmass-dependent extinction effects and long-term temporal variations removed. Table 2 gives statistics on the zero point rms for each band. Overall, the zeropoints are quite spatially smooth except for crowded regions. Since we ``absorb'' any aperture correction term into the zero point value, it is not unexpected for this correction to change in crowded versus uncrowded regions and show up in these mean zero point maps. In addition, the jump in the mean zero point of $Y$ in the Galactic midplane across the $\delta$=$-$29 boundary, going from PS1 as the reference in the north to model magnitudes with 2MASS photometry in the south, suggests a systematic issue related to extinction, crowding or aperture corrections in one or both surveys (e.g., PS1 and 2MASS). 
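For reference, the reddening relations above can be collected into a small helper. This is a sketch (the function names are ours) implementing Equations (1) and (2) and the SFD conversion factor of 0.453 quoted earlier:

```python
def ejk_rjce_spitzer(h, m45):
    """RJCE reddening E(J-Ks) from 2MASS H and Spitzer [4.5] (Eq. 1)."""
    return 1.377 * (h - m45 - 0.08)

def ejk_rjce_wise(h, w2):
    """RJCE reddening E(J-Ks) from 2MASS H and AllWISE W2 (Eq. 2)."""
    return 1.377 * (h - w2 - 0.05)

def ejk_sfd(ebv_sfd):
    """Convert an SFD E(B-V) to E(J-Ks), used in low-extinction regions."""
    return 0.453 * ebv_sfd
```

The choice between the three follows the region and photometry availability rules described above (SFD in low-extinction regions, RJCE with $Spitzer$ or AllWISE elsewhere).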
\begin{center} \begin{figure*}[ht] \includegraphics[trim={1cm 5cm 1cm 1cm},clip,width=1.0\hsize]{f6.pdf} \caption{Number of NSC exposures on a logarithmic scale in equatorial coordinates.} \label{fig_nexp} \end{figure*} \end{center} \begin{center} \begin{deluxetable*}{lc} \tablecaption{NSC DR2 Model Magnitude Equations} \tablecolumns{2} \tablehead{ \colhead{Model Magnitude} & \colhead{Color Range} } \startdata \vspace{0.1cm} $u$ = 0.2301$\times$$NUV_{\rm GALEX}$ + 0.7616$\times$$G_{\rm Gaia}$ + 0.4937$\times$$(G-J)_0$ + 0.8327$\times$$E(J-K_{\rm s})$ + 0.1344 & 0.8$\le$$(G-J)_0$$\le$1.1 \\ \vspace{0.1cm} $Y$ = $J$ + 0.54482$\times$$(J-K_{\rm s})_0$ + 0.422$\times$$E(J-K_{\rm s})$ + 0.66338 & 0.4$\le$$(J-K_{\rm s})_0$$\le$0.7 \\ \hline \\ \vspace{0.1cm} $(G-J)_0$ = $G_{\rm Gaia}$ $-$ $J$ $-$ 3.27$\times$$E(J-K_{\rm s})$ & \\ $(J-K_{\rm s})_0$ = $J$ $-$ $K_{\rm s}$ $-$ $E(J-K_{\rm s})$ & \enddata \label{table_modelmags} \end{deluxetable*} \end{center} \begin{center} \begin{deluxetable}{lcccc} \tablecaption{Zero Point Statistics} \tablecolumns{5} \tablehead{ \colhead{Filter} & \colhead{$\delta$ Range} & \colhead{Median} & \colhead{Median} & \colhead{Median} \\ & & \colhead{ZP RMS} & \colhead{ZP Error} & \colhead{N$_{\rm reference}$} } \startdata $u$ & all & 0.070 & 0.0070 & 552 \\ $g$ & $>-29$ & 0.039 & 0.0005 & 5717 \\ $g$ & $<-29$ & 0.038 & 0.0007 & 3411 \\ $r$ & $>-29$ & 0.036 & 0.0008 & 7290 \\ $r$ & $<-29$ & 0.040 & 0.0010 & 4746 \\ $i$ & $>-29$ & 0.038 & 0.0007 & 9712 \\ $i$ & $<-29$ & 0.056 & 0.0012 & 3473 \\ $z$ & $>-29$ & 0.045 & 0.0018 & 1683 \\ $z$ & $<-29$ & 0.064 & 0.0010 & 4786 \\ $Y$ & $>-29$ & 0.059 & 0.0008 & 7348 \\ $Y$ & $<-29$ & 0.030 & 0.0016 & 1267 \\ {\em VR} & $>-29$ & 0.030 & 0.0002 & 15024 \\ {\em VR} & $<-29$ & 0.021 & 0.0003 & 6575 \enddata \label{table_zpstats} \end{deluxetable} \end{center} \begin{center} \begin{figure*}[ht] \includegraphics[width=1.0\hsize,angle=0]{f7.pdf} \caption{Cumulative histogram of ({\em Left}) area and ({\em 
Right}) number of objects with numbers of exposures greater than some value.} \label{fig_nexp_cumhist} \end{figure*} \end{center} \subsection{Combination} \label{subsec:combine} The final step in the NSC processing is ``combination'' in which the measurements from multiple exposures are spatially cross-matched and average properties are calculated for each unique object. \subsubsection{Quality Cuts} \label{subsubsec:combineqacuts} Before the combination process, we first apply quality cuts to the exposures, selecting only data satisfying the following: \begin{enumerate} \item public, as of 2019-10-17; \item all chips astrometrically calibrated (using Gaia DR2) in NSC calibration step; \item median $\alpha$/$\delta$ RMS across all chips $\leq$0.15\arcsec; \item seeing FWHM $\leq$2\arcsec; \item zero point (corrected for airmass extinction) within 0.5 mag of the temporally-smoothed\footnote{The zero points were B-spline smoothed over $\approx$200 nights to track system throughput variations.} zero point for that band; \item zero point uncertainty $\leq$0.05 mag; \item number of photometric reference stars $\geq$5 (per CCD); \item spatial variation (RMS across chips) of zero point $\leq$0.15 mag ($|b|$$>$10\degr) or $\leq$0.55 mag ($|b|$$\leq$10\degr) (only for DECam with number of chips with well-measured chip-level zero points $>$5); \item not in a survey's bad exposure list (currently only for the Legacy Surveys and SMASH data). \end{enumerate} The same quality cuts used in NSC DR1 are applied to the individual measurements. We only use measurements: \begin{enumerate} \item with no CP mask flags set; \item with no SExtractor object or aperture truncation flags; \item not detected on the bad amplifier of DECam CCDNUM 31 (if MJD$>$56,600 or big background jump between amplifiers); \item with S/N$\geq$5. 
\end{enumerate} \subsubsection{Grouping Measurements} \label{subsubsec:groupmeas} In NSC DR1, we used a ``sequential clustering'' algorithm to cluster source measurements into objects. Sources were successively crossmatched (with a 0.5\arcsec~matching radius) to existing ``objects'' or were added as new objects if no match was found. Average properties were calculated in a cumulative fashion as measurements were ``added'' to an object. While this algorithm was efficient, it did not allow the use of robust statistics (e.g., outlier rejection), the calculation of photometric variability indices, or the ability to detect fast-moving objects. In NSC DR2, we employed a hybrid spatial clustering algorithm to group measurements into objects. As in NSC DR1, the HEALPix scheme \citep{Gorski2005} with NSIDE=128 is used to tile the sky into smaller regions to efficiently parallelize the computation during this combination step. For a given HEALPix, all measurements passing the above-mentioned quality cuts, from chip images overlapping the HEALPix and its neighboring HEALPix, are loaded. For HEALPix with many measurements (over 1 million), the combination algorithm is performed on smaller HEALPix subregions (up to 64 NSIDE=1024 subregions) and the results later merged together. The two steps of the hybrid spatial clustering algorithm are (1) clustering with DBSCAN \citep[Density-Based Spatial Clustering of Applications with Noise;][]{Ester96} using a small clustering distance to generate object centers, followed by (2) sequential clustering of the leftover measurements using the object centers. The first step allows the definition of objects themselves (i.e., their central positions) using their spatial coherence, which should be roughly on the scale of the median astrometric uncertainty.
Therefore, the eps parameter, the maximum distance by which two points within a given cluster can be separated, is set to three times the median astrometric uncertainty or a minimum of 0.3\arcsec; on average eps$\approx$0.4\arcsec. The minimum number of points to define a cluster is either three or the total number of exposures (if this is $<3$). The second step is needed because the DBSCAN clustering does not take into account the astrometric uncertainty of individual measurements. The measurements not clustered in the DBSCAN step are crossmatched to the existing object centers using a crossmatch radius of three times their astrometric uncertainty or a minimum of the DBSCAN eps value. The crossmatching is done successively, with the leftover measurements from one exposure at a time. Any measurements not matched to existing objects are added as new objects to the object list. We then calculate average properties for each object from the calibrated and grouped measurements. These include flux-weighted mean coordinates, robust proper motions, mean magnitude, uncertainties, RMS, some morphology parameters per band, and mean morphology parameters across all measurements. \subsubsection{Photometric Variability Metrics} \label{subsubsec:photvar} The new clustering method allows for the calculation of photometric variability indices. We calculate eight variability metrics: RMS, MAD, IQR, the von Neumann ratio $\eta$, Stetson's J and K indices, $\chi$, and RoMS. \citet{Sokolovsky2017} give detailed descriptions and comparisons of these and other metrics and helped guide our work in this area. The metrics we used can be separated into two groups: (1) metrics using only the magnitude residuals (relative to the flux-weighted mean magnitude in each band; i.e., MAD, RMS, IQR, $\eta$), and (2) metrics using both the magnitude residuals and their uncertainties (J, K, $\chi$, RoMS). Examples of the eight photometric variability indices for one HEALPix are shown in Figure \ref{fig_photvar}.
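As an illustration, the first group of metrics can be computed from the magnitude residuals alone. The sketch below uses common definitions of these statistics (see Sokolovsky et al. 2017); the exact normalizations adopted in the NSC pipeline may differ:

```python
import numpy as np

def variability_metrics(mag):
    """Group-1 variability metrics computed from a light curve's magnitudes."""
    mag = np.asarray(mag, dtype=float)
    rms = np.std(mag, ddof=1)                        # sample standard deviation
    mad = np.median(np.abs(mag - np.median(mag)))    # median absolute deviation
    iqr = np.percentile(mag, 75) - np.percentile(mag, 25)
    # von Neumann ratio: mean squared successive difference over the variance;
    # low values indicate smooth, correlated variability
    eta = np.mean(np.diff(mag) ** 2) / np.var(mag, ddof=1)
    return {"rms": rms, "mad": mad, "iqr": iqr, "eta": eta}
```

The second group of metrics additionally weights each residual by its photometric uncertainty.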
The photometric variability indices alone are not enough to select photometrically variable objects, as the average value of the metric will change with magnitude. An additional analysis is performed on each HEALPix to calculate the median value of the metric and the robust scatter as a function of magnitude. We cannot use a single band for this magnitude because not all objects will have data in that band. Therefore, we construct a ``fiducial magnitude'', which is the magnitude in the first band in the prioritized list [$r$, $g$, $i$, $z$, $Y$, $VR$, $u$] that has been observed for a given object. \autoref{fig_photvar} shows objects within 3$\sigma$ of the median metric value as a function of fiducial magnitude (black dashed line) as filled red circles. Objects with variability 10$\sigma$ or more above the median are indicated by blue $\times$ symbols (the 10$\sigma$ cutoff is denoted by the green solid line). We decided to use the MAD variability index for identifying variable sources. All objects that are 10$\sigma$ or more above the median are flagged \texttt{VARIABLE10SIG} (23,270,027 objects). The offset of each object in units of $\sigma$ from the median is reported in the catalog as \texttt{NSIGVAR}, to aid users who desire a different $\sigma$ cutoff. The use of the NSC DR2 variability information to study variable stars and quasi-stellar objects (QSOs) is discussed in Sections \ref{subsec:variables} and \ref{subsec:qsos}, respectively.
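The per-HEALPix flagging analysis can be sketched as follows: compute the running median of the metric and a robust scatter in bins of fiducial magnitude, then express each object's offset in units of that scatter. The function below is illustrative only (names such as \texttt{flag\_variables}, the bin count, and other details are ours, not the pipeline's):

```python
import numpy as np

def flag_variables(fidmag, metric, nbins=20, nsig_flag=10.0):
    """Offset of each object's variability metric from the running median,
    in units of the robust scatter, versus fiducial magnitude."""
    edges = np.linspace(fidmag.min(), fidmag.max(), nbins + 1)
    nsigvar = np.zeros_like(metric)
    for i in range(nbins):
        sel = (fidmag >= edges[i]) & (fidmag <= edges[i + 1])
        if sel.sum() < 5:            # too few objects to estimate the scatter
            continue
        med = np.median(metric[sel])
        # robust scatter: scaled MAD of the metric values in this bin
        sig = 1.4826 * np.median(np.abs(metric[sel] - med))
        if sig > 0:
            nsigvar[sel] = (metric[sel] - med) / sig
    return nsigvar, nsigvar >= nsig_flag
```

Objects whose returned offset exceeds the threshold correspond to the \texttt{VARIABLE10SIG} flag; the offset itself plays the role of \texttt{NSIGVAR}.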
\begin{center} \begin{figure*}[ht] $\begin{array}{cc} \includegraphics[width=0.33\hsize,angle=0]{f8a.png} \includegraphics[width=0.33\hsize,angle=0]{f8b.png} \includegraphics[width=0.33\hsize,angle=0]{f8c.png} \\ \includegraphics[width=0.33\hsize,angle=0]{f8d.png} \includegraphics[width=0.33\hsize,angle=0]{f8e.png} \includegraphics[width=0.33\hsize,angle=0]{f8f.png} \\ \includegraphics[width=0.33\hsize,angle=0]{f8g.png} \includegraphics[width=0.33\hsize,angle=0]{f8h.png} \end{array}$ \caption{The eight photometric variability indices computed in NSC DR2: RMS, MAD, IQR, von Neumann ratio $\eta$, Stetson's J and K indices, $\chi$, and RoMS. Each index is shown versus a ``fiducial magnitude'', which is the first band in a prioritized list ($r$, $g$, $i$, $z$, $Y$, $VR$, $u$) that has been observed for a given object. The filled red circles are objects within 3$\sigma$ of the median as a function of magnitude (black dashed line). The blue $\times$ symbols are objects 10$\sigma$ above the median; this threshold is indicated by the green line.} \label{fig_photvar} \end{figure*} \end{center} \section{Caveats} \label{sec:caveats} Users of the NSC DR2 should be aware of the following caveats. As the observations are taken over a range of observing conditions and instruments, two distinct neighboring objects may be spatially resolved in some exposures but not others. This causes an inherent problem when combining measurements at the catalog level. Figure \ref{fig_deblending} shows one example, where the measured object centers cluster into three groups: the individual centers of the two stars from good-seeing exposures, and a position between the two resulting from the poor-seeing exposures where the sources remain confused. There is no clear-cut ``correct'' way to handle this situation, without a more sophisticated source modeling approach \citep[e.g., {\it Tractor};][]{Tractor}.
For now, we have chosen the simple approach: we have left the three clusters as three separate objects, but flagged the object of the unresolved pair of stars as a \texttt{PARENT}. This flag is set for any object that contains other objects inside its ellipse footprint defined by its central coordinates and the \texttt{ASEMI}, \texttt{BSEMI}, and \texttt{THETA} shape parameters. \begin{center} \begin{figure}[ht] \includegraphics[width=1.0\hsize,angle=0]{f9.png} \caption{Combining measurements of objects taken under different seeing conditions results in source confusion. The background image is a good-seeing exposure showing two resolved stars. The dark filled circles are the centers of individual measurements color-coded by their spatial FWHM (darker colors for smaller seeing FWHM). The red ellipses are the measured shapes of those measurements. The better-seeing data result in two sources associated with the two stars, whereas the poor-seeing data result in a common source with a larger ellipticity.} \label{fig_deblending} \end{figure} \end{center} As mentioned in Section \ref{subsec:combine}, the DBSCAN \texttt{eps} parameter was determined independently in each HEALPix based on the measurements and the median astrometric uncertainty within that HEALPix. While this was meant to allow the clustering of measurements into objects to be determined by the data itself, it had unforeseen consequences at the boundaries of HEALPix regions. Each HEALPix region includes measurements in a 10\arcsec\ boundary around it. Objects, and their constituent measurements, are included in a HEALPix catalog only if the final central position is inside the HEALPix boundary. If a neighboring HEALPix clusters the measurements at the boundary in the same way, as was done in NSC DR1, then the measurements and objects are appropriately parceled out to their correct HEALPix. 
In NSC DR2 the clustering parameter changes slightly from one HEALPix to the next, resulting in rare instances when measurements are either not grouped into an object or grouped to multiple objects. This mostly happens in very crowded regions such as in the Galactic bulge or the centers of the LMC and SMC. The NSC DR2 contains 77,273 missing and 9,345 duplicate measurements. While this is a non-negligible number, it is nonetheless a small fraction of the 68 billion total measurements. In a future data release, the DBSCAN clustering parameter will be fixed for all HEALPix. \section{Description and Achieved Performance of Final Catalog} \label{sec:performance} The NSC DR2 covers more than 35,000 square degrees of the sky and catalogs over 3.9 billion unique objects (Fig.\ \ref{fig_bigmap}). It includes more than 68 billion individual measurements --- twice the number in NSC DR1 --- from 412,116 exposures spanning over 7 years. Most of the sky is covered in multiple bands, with 33,028 square degrees having two bands and 30,860 square degrees having three bands. Almost 1.9 billion objects have data in three or more bands and can be used to construct color-color diagrams. Maps of the 95th percentile depths are shown in Figure \ref{fig_depths}. The median depths are 22.6, 23.6, 23.2, 22.8, 22.3, 21.0, 23.3 mag in the $u, g, r, i, z, Y,$ and {\em VR} bands. The photometric precision (Fig.\ \ref{fig_photscatter_maps}) is $\lesssim$10~mmag in all bands with the exception of the $u$-band, and is fairly uniform across the sky. Although an effort has been made to improve the photometric calibration in crowded and dusty regions by using more accurate extinction corrections (e.g., the RJCE method) and newer reference catalogs in the southern sky (e.g., SkyMapper DR1 and ATLAS-Refcat2), some issues remain. We advise caution when using the photometry in the very crowded and high extinction regions. 
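The boundary effect described above can be illustrated with a toy grouping routine. This is not the NSC code (which runs DBSCAN on sky coordinates); it is a friends-of-friends sketch, equivalent to DBSCAN with \texttt{min\_samples=1}, showing how the choice of \texttt{eps} changes the objects that come out.

```python
import math

def cluster_measurements(points, eps):
    """Friends-of-friends grouping: measurements closer than `eps` end up in
    the same object (union-find over all close pairs). Illustrative sketch
    only; different eps values on either side of a HEALPix boundary can
    split or merge the same set of measurements."""
    parent = list(range(len(points)))

    def find(i):
        # path-halving union-find lookup
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) < eps:
                parent[find(i)] = find(j)

    labels = [find(i) for i in range(len(points))]
    remap = {}  # relabel clusters to 0..k-1 in order of appearance
    return [remap.setdefault(lab, len(remap)) for lab in labels]
```

With the same three measurements, a small \texttt{eps} yields two objects while a large one merges all three, which is precisely why a per-HEALPix \texttt{eps} can group boundary measurements differently on either side.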
Most of the sky is covered by multiple exposures giving rise to a valuable time-series dataset (Fig.\ \ref{fig_nexp}). Cumulative histograms of the area and number of objects with a certain number of exposures are shown in Figure \ref{fig_nexp_cumhist}. Roughly 500 million objects have 30 or more exposures, which should be enough to reliably detect and classify many classes of variable stars (e.g., see Section \ref{subsec:variables}). The large number of repeat observations of individual sources also permits reliable estimates of their proper motion. Figure \ref{fig_pmcomparison} shows a comparison of well-measured NSC DR2 proper motions (S/N$>$3 or a proper motion error $<$3 mas yr$^{-1}$ in both $\mu_{\alpha}$ and $\mu_{\delta}$) to those in Gaia DR2 in a 700 square degree region around ($\alpha$,$\delta$)=(45\degr,$-$30\degr). The two datasets agree very well with the median offset in $\mu_\alpha$/$\mu_\delta$ being $-$0.248/$-$0.065 mas yr$^{-1}$ with a scatter of 2.45/2.36 mas yr$^{-1}$. \begin{center} \begin{figure}[t] \includegraphics[width=1.0\hsize,angle=0]{f10.pdf} \caption{Comparison of NSC DR2 proper motion measurements with those from Gaia DR2 for 1,365,136 stars in a 700 square degree region centered on ($\alpha$,$\delta$)=(45\degr,$-$30\degr). Only stars with proper motion S/N$>$3 or proper motion error $<$3 mas yr$^{-1}$ (in both $\mu_{\alpha}$ and $\mu_{\delta}$) in both catalogs, with at least three detections in the NSC and a temporal baseline of at least 200 days, were selected. The one-to-one line is shown in red. 
The median offset in $\mu_\alpha$/$\mu_\delta$ is 0.248/0.065 mas yr$^{-1}$ (NSC$-$Gaia) with a robust scatter of 2.45/2.36 mas yr$^{-1}$.} \label{fig_pmcomparison} \end{figure} \end{center} \begin{center} \begin{figure*}[ht] \includegraphics[width=0.47\hsize,angle=0]{f11a.png} \includegraphics[trim={-2cm -0.5cm 0cm 0cm},clip,width=0.5\hsize,angle=0]{f11b.png} \caption{\textit{(left)} ``Tracklets'' of solar system objects detected from the NSC in an area near the ecliptic plane, in equatorial coordinates [$^{\circ}$]. Individual measurements are color-coded by their observation time. \textit{(right)} Proper motion [\arcsec/hr] of tracklets from the left panel, in ecliptic coordinates. Three groups of objects are shown: (a) Main Belt objects, (b) Hilda asteroids, and (c) Jupiter Trojans. } \label{fig_tracklets} \end{figure*} \end{center} NSC DR2 is being released through NOIRLab's Astro Data Lab\footnote{\url{https://datalab.noirlab.edu}} \citep{Fitzpatrick2016,Nikutta2020}. The database tables can be accessed via direct SQL queries using the Data Lab client software (Python) or via a TAP service\footnote{\url{http://datalab.noao.edu/tap}}. The column descriptions can be viewed using the Data Lab query interface page\footnote{\url{https://datalab.noirlab.edu/query.php}}. Data analysis and exploration can be performed using the Astro Data Lab's Jupyter Notebook server, which runs next to the data and provides fast access. \section{Example Science Use Cases} \label{sec:science} There are many science use cases for a large catalog like the NSC DR2. Below we describe a handful of them: Solar System objects (\S \ref{subsec:solarsystem}), stellar streams (\S \ref{subsec:stellarstreams}), variable stars (\S \ref{subsec:variables}), proper motion searches (\S \ref{subsec:propermotion}), and QSO variability (\S \ref{subsec:qsos}). 
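As a sketch of the SQL access route, the snippet below composes a query string for strongly variable objects using the \texttt{NSIGVAR} column described earlier. The table name \texttt{nsc\_dr2.object} and the lowercase column names are assumptions made for illustration; the actual table and column names should be checked in the Data Lab schema browser before running such a query.

```python
def variable_object_query(min_nsigvar=10.0):
    """Build an SQL query string for strongly variable NSC objects.

    Assumed (illustrative) schema: a table `nsc_dr2.object` with columns
    `id`, `ra`, `dec`, and `nsigvar` (the offset from the median variability
    metric, in sigma). Verify names against the Data Lab schema before use.
    """
    return (
        "SELECT id, ra, dec, nsigvar "
        "FROM nsc_dr2.object "
        f"WHERE nsigvar > {min_nsigvar}"
    )
```

The resulting string could then be passed to the Data Lab Python query client or a TAP service.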
\subsection{Solar System Objects} \label{subsec:solarsystem} The large temporal baseline and multiple repeat observations available in the NSC make it ideal for exploring Solar System objects (SSOs). Figure \ref{fig_tracklets} shows 3,313 tracklet detections in the area of one DECam field near the ecliptic plane. It is immediately obvious that a large fraction of the tracklets are oriented in the same direction, reflecting the tendency of SSOs to have predominantly prograde orbits. Of all SSOs, identifying Near Earth Objects (NEOs) is of particular interest because of the danger they pose to the Earth. Catastrophic effects can result from both large and small bodies; an asteroid with a diameter of 15 km likely caused the mass extinction event 65 million years ago that is widely believed to have killed a significant fraction of the non-avian dinosaurs, whereas the object that flattened 2,000 km$^2$ of forest in Tunguska in 1908 was ``only'' 200 m in diameter. Concern over past and future impacts led the U.S. Congress to introduce the Spaceguard directive in the 1990's, directing NASA to find 90\% of NEOs with diameters $\geq$1 km \citep{Morrison}. In 2011 NEOWISE \citep{Mainzer} reported the completion of the Spaceguard goal, and is now working towards the new goal of detecting 90\% of NEOs greater than 140 m in diameter\footnote{https://www.nasa.gov/planetarydefense/neoo} along with the Catalina Sky Survey (CSS), ATLAS \citep{atlas2018}, and LINEAR. The NSC expands the search carried out by projects such as the Palomar Transient Factory \citep{Law2009}, ZTF, CSS, and PS1, and the searches that will soon be possible with the Rubin Observatory's LSST. In addition to being deeper than many existing surveys (and therefore able to detect smaller NEOs), the NSC adds data coverage in sparsely observed regions of the sky (e.g., the southern sky that PS1 does not reach and the Galactic plane that is unobserved by CSS). 
The NSC's depth suggests that it probably contains many detections of objects further from the sun. Studying properties of the distant Kuiper belt objects (KBOs) will provide stronger constraints on planet formation theories, as KBOs are likely remnants of the primordial solar system. Further detections of both KBOs and the even more distant Inner Oort Cloud objects can reveal the effects of external forces such as the Galactic tide, passing stars, or distant unknown planets \citep[such as the proposed Planet 9;][]{Sheppard2014} on our solar system and its formation history. The Planet 9 hypothesis stems from an observed clustering in the orientation and phase of the orbits of distant solar system objects \citep{Sheppard2014}. Although to date it remains undetected, the NSC data could contain detections of the elusive Planet 9---or additional distant solar system objects that could shed further light on Planet 9's existence. Objects found in the NSC may also clarify the size distribution of solar system bodies. The number of asteroids detected in groups of similar radii does not match the predictions \citep{Shep2010}, but the results of a search through the NSC data could alter the situation. By investigating the range of asteroid sizes, new parameters will be established regarding the accretion and formation history of the solar system. Therefore, with the right analysis techniques in hand, the deep, multi-band, time-series NSC information of 3.9 billion objects will allow us to markedly improve the census of solar system bodies and help our understanding of planet formation. 
\begin{center} \begin{figure}[t] \includegraphics[width=1.0\hsize,angle=0]{f12.png} \caption{The Palomar 5 stellar stream as seen in NSC DR2, which includes data used by \citet{Bonaca2020}.} \label{fig_stream} \end{figure} \end{center} \subsection{Stellar Streams} \label{subsec:stellarstreams} Stellar streams are the remnants of old globular clusters or dwarf galaxies that have been tidally disrupted and stretched apart by interactions with the Milky Way \citep[e.g., the Sagittarius stream;][]{Majewski2003,Koposov2012}. These linear over-densities of stars are very valuable for constraining the Galactic gravitational potential and can potentially reveal dark matter sub-halos that disturb the otherwise uniform stream shape. With the knowledge that streams form from old star clusters and dwarf galaxies (which typically have a small range in stellar age), search algorithms can be tuned for these characteristics. Using isochrones, we can search the sky for populations that fall within a small tolerance of these curves in color-magnitude space \citep[i.e., matched filters;][]{Grillmair2006b}. Stellar density maps at a large range of distance moduli can then be created and searched for linear overdensities, the tell-tale sign of a stream. The broad spatial coverage and depth in multiple bands make the NSC DR2 very useful for detecting new stellar streams, especially in the southern hemisphere which has not yet been systematically searched the way the northern hemisphere has with SDSS and PS1. Figure \ref{fig_stream} shows an example of the application of this technique to a region of sky near the well-known Palomar 5 stellar stream \citep[e.g.,][]{Odenkirchen2001,Grillmair2006a, Bonaca2020}. Searching the NSC DR2 with an isochrone with metallicity [Fe/H]=$-$0.5, an age of 11 Gyr, and a distance modulus of 17.57 mag (33 kpc), the resulting density map clearly reveals the stream-like tidal tails of Pal 5. 
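The isochrone-filter selection described above reduces to a proximity test in color-magnitude space. Below is a deliberately simplified Python sketch with uniform tolerances (a real matched filter weights by the photometric errors); all names and tolerance values are illustrative.

```python
def isochrone_mask(stars, isochrone, dist_mod, color_tol=0.05, mag_tol=0.5):
    """Flag stars lying close to an isochrone in color-magnitude space.

    `stars`: list of (color, apparent_mag) pairs.
    `isochrone`: list of (color, absolute_mag) points along the curve.
    `dist_mod`: trial distance modulus in mag (shifts the isochrone).
    A star passes if some shifted isochrone point is within `color_tol`
    in color and `mag_tol` in magnitude. Simplified sketch only.
    """
    # shift the isochrone to the trial distance
    shifted = [(c, m + dist_mod) for c, m in isochrone]
    mask = []
    for color, mag in stars:
        ok = any(abs(color - c) < color_tol and abs(mag - m) < mag_tol
                 for c, m in shifted)
        mask.append(ok)
    return mask
```

Binning the positions of the surviving stars over a grid of trial distance moduli then produces the density maps in which linear overdensities are sought.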
The NSC DR2 catalog covers new areas that have not been searched extensively before and could reveal new stream candidates. \subsection{Variable Stars} \label{subsec:variables} The NSC DR2's temporal baseline and depth are also very useful for detecting and studying variable stars, especially since the DR2 reports photometric variability metrics and an automatic selection of over 23 million variable objects. Figure \ref{fig_rrlyrae} shows an example RR Lyrae lightcurve using data from NSC DR2. Since many variable stars are ``standard candles'', we can determine their distances accurately and use them as probes to study the structure of our Milky Way galaxy. RR Lyrae variables, in particular, are plentiful and luminous and have been used for decades to explore the structure of the Milky Way's stellar halo. \citet{Sesar2017b} used $\sim$40,000 RR Lyrae stars from PS1 to detect a new feature of the Outer Virgo Overdensity in the outer MW, while \citet{Hernitschek2017} used the same dataset to create an accurate 3D map of the Sgr stellar stream. The deep NSC data can be used to detect RR Lyrae (and other variables) over nearly the entire sky and to larger distances than previously possible, extending these types of studies throughout the Milky Way and its satellites. \begin{center} \begin{figure}[t] \includegraphics[width=1.0\hsize,angle=0]{f13.png} \caption{Example lightcurve of an RR Lyrae star showing three bands. Rejected outlier points are marked with $\times$s.} \label{fig_rrlyrae} \end{figure} \end{center} \subsection{Proper Motion Searches} \label{subsec:propermotion} \textit{Gaia} DR2 has revolutionized astrometry, but the NSC nevertheless provides a valuable complement by providing proper motion measurements that push much fainter at optical wavelengths. At $g$ band, NSC is $\sim$2.5 magnitudes deeper than \textit{Gaia}. 
NSC DR2 will thus enable proper motion searches for distant stars with high tangential velocities over a volume $\sim$25 times larger than \textit{Gaia}, extending the many prior \textit{Gaia}-based studies of hypervelocity and runaway stars \citep[e.g.,][]{shen_hypervelocity_wd, kenyon14, brown_hypervelocity}. NSC can also measure motions for white dwarfs much fainter than those accessible to \textit{Gaia}, expanding the census of white dwarfs in the solar neighborhood. Accurate NSC proper motion measurements for faint white dwarfs will also help purify selections of faint quasars, and provide more opportunities to uncover valuable ultra-cool white dwarf binaries where metallicity and radial velocity can be obtained from a main sequence companion \citep[e.g.,][]{ucwd_benchmarking}. Figure \ref{fig_hpm} shows two examples of high proper motion stars well detected in the NSC DR2 data. By virtue of its excellent red-optical sensitivity and sky coverage, NSC DR2 will also provide many exciting opportunities to search for very late type stars and brown dwarfs in the solar neighborhood. CatWISE 2020 \citep{catwise_catalog} currently represents the best available infrared proper motion catalog, but NSC DR2 will offer capabilities not possible with CatWISE. At its faint end, CatWISE motions are only significant above $\sim$150--200 mas/yr. On the other hand, NSC measures motions many times smaller than this at high significance. Reliably identifying late type objects with low proper motions and accurately measuring those small motions are critical steps toward pinpointing young planetary mass brown dwarfs, such as those in nearby moving groups \citep[e.g.,][]{schneider_l_dwarfs}. NSC $Y$ band is also typically deeper than WISE for brown dwarfs in the late M to early T regime, whereas \textit{Gaia} is shallower than WISE for all brown dwarf types. 
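Underlying all such searches, a proper motion component is essentially the least-squares slope of position offset versus epoch. A minimal sketch, assuming the positions have already been converted to offsets in mas (e.g., RA offsets scaled by $\cos\delta$) and times to years:

```python
def proper_motion(times, offsets):
    """Closed-form least-squares slope of position offset vs. time.

    `times` in years, `offsets` in mas; returns the motion in mas/yr.
    Illustrative only -- a real pipeline would weight by the per-epoch
    astrometric uncertainties and also fit the motion error.
    """
    n = len(times)
    tbar = sum(times) / n
    pbar = sum(offsets) / n
    num = sum((t - tbar) * (p - pbar) for t, p in zip(times, offsets))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den
```

Applying this independently to the RA and Dec offset series yields the two proper motion components, whose uncertainties then set the S/N cuts used in the selections above.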
The $\sim$1$''$ angular resolution of NSC can also enable motion searches that are not feasible with WISE (which has FWHM $\sim$ 6$''$ from 3-5$\mu$m). For instance, NSC can be queried for pairs of faint/red objects with similar proper motions, to find closely spaced (few arcsecond separation) brown dwarf visual binaries. Similarly, NSC can be used to find close late-type co-moving companions to white dwarfs, providing valuable benchmark systems for the typically difficult task of estimating brown dwarf ages. In both of these examples, CatWISE would merely show one blended moving source rather than the resolved pair provided by NSC. \begin{center} \begin{figure}[t] \includegraphics[width=1.0\hsize,angle=0]{f14a.jpg} \includegraphics[width=1.0\hsize,angle=0]{f14b.jpg} \caption{Examples of high proper motion stars in NSC DR2. (Top) A co-moving pair of objects (21st and 22nd magnitude) with a proper motion of 270 mas/yr. (Bottom) A 21st magnitude star with a total proper motion of 205 mas/yr. The Legacy Survey Viewer (\url{https://www.legacysurvey.org/viewer}) was used to generate the background RGB images.} \label{fig_hpm} \end{figure} \end{center} \subsection{QSO Variability} \label{subsec:qsos} Variations in the brightness of QSOs can be due to changes in the accretion disks and/or in the obscuration as dense absorbers might occult the central point source along our line of sight. Depending on its physical origin, QSO variability can occur over a range of timescales, with month-to-year long variations of $>$$1$~mag and shorter timescale (days-to-weeks) variability as large as $>$$0.1$~mag. Variability measurements of QSOs are used to (1) identify them; and (2) infer physical properties (e.g., black hole masses from reverberation mapping, changes in accretion rates and/or obscuration). \begin{center} \begin{figure}[t] \includegraphics[width=1.0\hsize,angle=0]{f15.png} \caption{(Top) Image cutouts (30 arcsec wide) of a $z\approx0.7$ variable QSO. 
(Middle) Lightcurve showing three bands as a function of the Modified Julian Date (MJD) of the observations. Vertical tick marks indicate when the spectra from the bottom panel were taken. (Bottom) SDSS spectra from three different MJDs, where the most striking differences are found in the spectral region around the Mg II line.} \label{fig_qso1} \end{figure} \end{center} Optical identification of QSOs typically relies on a point-source morphology (which may not strictly hold at low redshifts when the QSO host galaxy is resolved) and/or on color cuts to differentiate them from stars and galaxies. However, optical colors sometimes overlap between these various classes. Thus, using the unique signatures of QSO variability (which can be distinguished from stellar variability) can enable us to select samples of quasars across a range of redshifts. For instance, \citet{Palanque2011} found that QSO variability selection is more complete at $2.7<z<3.5$ compared to traditional optical color selections which suffer from overlap with stellar-like colors in this redshift range. Recently, researchers have used, e.g., SDSS Stripe 82 multi-epoch data \citep{Palanque2016}, Palomar Transient Factory \citep{Myers2015}, or the Catalina Real-time Transient Survey \citep{Graham2020} to search for QSOs based on variability. The NSC DR2 tends to reach fainter magnitudes than these datasets, but does not uniformly include as many epochs. Therefore, one could build from these previous efforts by comparing quasars that overlap, and devising a selection function tailored to the NSC measurements and pre-computed variability metrics. \begin{center} \begin{figure}[t] \includegraphics[width=1.0\hsize,angle=0]{f16.png} \caption{(Top) As in Figure~\ref{fig_qso1}, but for a $z\approx1.55$ variable QSO. (Middle) Lightcurve showing three bands as a function of the MJD of the observations. Vertical tick marks indicate when the spectra from the bottom panel were taken. 
The overall trend indicates fading by $\sim$0.8~mag. (Bottom) SDSS spectra from three different MJDs. The last spectrum (light blue) displays fainter emission blueward of $\sim2000$~\AA.} \label{fig_qso2} \end{figure} \end{center} In addition to enabling population studies, the wide footprint of the NSC allows the search for rare sources like the {\it Changing-Look Quasars} (CLQ) or other sub-classes of AGN with the most extreme variations \citep[e.g.,][]{LaMassa2015,MacLeod2019}. As a proof of concept, we used the Astro Data Lab platform to cross-match the SDSS DR14Q quasar catalog \citep{Paris2018} with NSC DR2, finding a match within $1\arcsec$ for 527,552 quasars. We required \texttt{NDET}$>15$ to ensure a minimum sampling of the NSC light curves, yielding 133,013 quasars. We then examined cases with the most extreme variations (\texttt{NSIGVAR}$>10$), which also have multiple spectra from SDSS ($\geq$3~spectra). We show two different examples with obvious variations in both their SDSS spectra and their NSC light curves in Figures~\ref{fig_qso1} and \ref{fig_qso2}. The NSC is ideally suited to reveal many more interesting cases, and can further be extended beyond the SDSS footprint. It is sensitive enough to include QSOs out to higher redshifts ($z>2$), which is relevant given recent findings reporting the first cases of CLQs at $z>2$ \citep{Ross2020}. NSC photometric light curves will also complement future spectroscopic surveys such as the upcoming Dark Energy Spectroscopic Instrument (DESI) survey. \section{Summary} \label{sec:summary} We present the second public data release of the NOIRLab Source Catalog (NSC DR2) based on over 412,000 public images from the NOIRLab Astro Data Archive from both the northern and southern hemispheres. The catalog contains 68 billion individual measurements to depths of $\approx$23rd magnitude of 3.9 billion unique objects across 86\% of the sky and over baselines of $\approx$7 years. 
Due to the wealth of temporal information --- half a billion objects have 30 measurements or more --- the NSC DR2 delivers reliable proper motions (for many stars fainter than the \textit{Gaia} faint limit) as well as multiple photometric variability metrics. The catalog enables a number of exciting science topics including (1) a census of Solar System bodies to faint depths, (2) searches for stellar streams and dwarf satellite galaxies in areas not previously probed, (3) cataloging many types of variable stars, and (4) using QSO variability to identify and/or study these objects. \acknowledgments This project used data obtained with the Dark Energy Camera (DECam) at the Blanco 4m telescope at Cerro Tololo Inter-American Observatory. DECam was constructed by the Dark Energy Survey (DES) collaborating institutions: Argonne National Lab, University of California Santa Cruz, University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid, University of Chicago, University College London, DES-Brazil consortium, University of Edinburgh, ETH-Zurich, University of Illinois at Urbana-Champaign, Institut de Ciencies de l'Espai, Institut de Fisica d'Altes Energies, Lawrence Berkeley National Lab, Ludwig-Maximilians Universit\"at, University of Michigan, National Optical Astronomy Observatory, University of Nottingham, Ohio State University, University of Pennsylvania, University of Portsmouth, SLAC National Lab, Stanford University, University of Sussex, and Texas A\&M University. Funding for DES, including DECam, has been provided by the U.S. 
Department of Energy, National Science Foundation, Ministry of Education and Science (Spain), Science and Technology Facilities Council (UK), Higher Education Funding Council (England), National Center for Supercomputing Applications, Kavli Institute for Cosmological Physics, Financiadora de Estudos e Projetos, Funda\c{c}\~ao Carlos Chagas Filho de Amparo \`a Pesquisa, Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico and the Minist\'erio da Ci\^encia e Tecnologia (Brazil), the German Research Foundation-sponsored cluster of excellence ``Origin and Structure of the Universe'' and the DES collaborating institutions. The Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. This project also incorporates observations obtained at Kitt Peak National Observatory, National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation. The Kitt Peak data are largely drawn from the Mayall $z$-band Legacy Survey (MzLS), which was part of the Legacy Surveys project which imaged the footprint of the planned DESI survey. The Legacy Surveys imaging (which also included data taken using DECam) is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract. The paper also contains data from the Steward Observatory Bok 90" telescope, which is located on Kitt Peak and operated by the University of Arizona. 
The Bok telescope data were obtained by the Beijing-Arizona Sky Survey, a key project of the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences (the Strategic Priority Research Program ``The Emergence of Cosmological Structures'' Grant \# XDB09000000), and the Special Fund for Astronomy from the Ministry of Finance. The BASS is also supported by the External Cooperation Program of Chinese Academy of Sciences (Grant \# 114A11KYSB20160057), and Chinese National Natural Science Foundation (Grant \# 11433005). The authors are honored to be permitted to conduct astronomical research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham. This research uses services or data provided by the Astro Data Lab at NSF's National Optical-Infrared Astronomy Research Laboratory. NSF's NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under a cooperative agreement with the National Science Foundation. This publication makes use of data from the Pan-STARRS1 Surveys (PS1) and the PS1 public science archive, which have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. 
NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, E\"otv\"os Lor\'and University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. Some of the results in this paper have been derived using the healpy and HEALPix packages. \software{ \package{Astropy} \citep{astropy}, \package{IPython} \citep{ipython}, \package{matplotlib} \citep{mpl}, \package{numpy} \citep{numpy}, \package{scipy} \citep{scipy}, \package{healpy} \citep{Zonca2019}, \package{SExtractor} \citep{Bertin1996}, \package{scikit-learn} \citep{scikit-learn} } \facilities{CTIO:Blanco (DECam), KPNO:Mayall (Mosaic-3), Steward:Bok (90Prime), Gaia, PS1, CTIO:2MASS, FLWO:2MASS, Sloan, Skymapper, WISE, Spitzer, GALEX, Astro Data Lab} \bibliographystyle{aasjournals}
\section{Introduction} Social behaviors often provide useful insights in determining the way problems are solved in computer science. For instance, matters of security in decision making or in collaborative content filtering/production are enforced by implementing reputation and trust strategies \cite{JosangQ09,schillo99,Sabater2007,Konig2009}. This relation between science and the inspiration basin of social behaviors is particularly evident in the field of distributed algorithms, where the mutual dependence between local actions and global effects is fundamental, such as in the consensus or election problems \cite{Kossmann}, \cite{Peleg96b}, \cite{Thomas79}, \cite{Burman09}, \cite{Shao09} and \cite{Garcia85}. {\em Information diffusion} has been modeled as the spread of information within a group through a process of social influence, where the diffusion is driven by the so-called {\em influential network} \cite{Kats2005}. Such a process, which has been intensively studied under the name of {\em viral marketing} (see for instance \cite{Domingos2001}), has the goal of selecting a good initial set of individuals that will promote a new idea (or message) by spreading the ``rumor'' within the entire social network through word-of-mouth. The first computational study of this process \cite{Granovetter85} used the {\em linear threshold model}, where the group is represented by a graph and the threshold triggering the adoption (activation) of a new idea by a node is given by the number of its active neighbors. The impossibility (or possibility) for a node to return to its initial state determines the monotone (or non-monotone) behavior of the activation process. Detecting the minimal set of nodes that will activate the whole network, namely the target set selection (TSS) problem, has been proved to be NP-hard through a reduction from vertex cover \cite{Kempe03}. 
In \cite{Chang09a,Ching09b} the maximum size of a {\em minimum perfect target set} has been studied under simple and strong majority thresholds -- e.g., $\lceil d(v)/2 \rceil$ and $\lceil (d(v)+1)/2 \rceil$, with $d(v)$ denoting the degree of a node $v$. Other works have investigated the dynamics of majority based systems in the context of fault tolerance on different network topologies \cite{Lodi98}, \cite{Santoro03}, \cite{Carvaja07}, \cite{Ching09b}, \cite{ChoudharyR09}, \cite{Mustafa04} and \cite{MustafaP01}. In these works the major effort was in determining the distributions of initial faults leading the entire system to a faulty behavior. Such a pattern, also known as a dynamic monopoly (or shortly {\em dynamo}), was introduced by Peleg \cite{Peleg96b} and intensively studied addressing the bounds on the size of monopolies, the time to converge to a fixed point, and the topologies of the systems (see \cite{BermondBPP96}, \cite{BermondBPP03}, \cite{Bermond98}, \cite{NayakPS92}, \cite{PelegSurvey}). In dynamic monopolies, the propagation of a faulty behavior starts from a well placed set of faulty elements and can be described as a vertex-coloring game on graphs where vertices are colored black (faulty) or white (non-faulty) and change their color at each round according to the colors of their neighbors. Starting from the work of {\em Flocchini et al.} \cite{Lodi98}, we introduce an additional element to the original problem's setting: the set of the nodes' states is not limited to white or black; rather, vertices can assume a value from a finite and ordered set. Such a protocol, when applied on a toroidal mesh, can be described as follows: a node $x$ increments its value $v(x)$ by one step toward the value of its neighbors if at least a pair of them has the same color greater than $v(x)$ and either a) the two remaining vertices have different colors, or b) the two remaining vertices have the same color greater than $v(x)$. 
As nodes are hard to persuade, converging only gradually toward their neighbors' colors, we refer to them as {\it stubborn}. This protocol is a clean combinatorial formulation for new contexts arising in economics, sociology and cognitive science, where collective decisions can be influenced by local behaviors, and where a slow convergence process (due to an implicit trust strategy implemented in the protocol) is desirable (\cite{quattrociocchi2010d,amblard01,Castellano2007}). Our studies focus on the initial distributions of colors leading the system to a monochromatic configuration in a finite time. In this paper we provide a) upper and lower bounds on the size of a dynamo, and b) some special classes of dynamos, by means of a new approach based on recoloring patterns. Due to their regularity and the constant degree of their nodes, toroidal meshes are an efficient framework for these investigations. However, we note that the results of Proposition \ref{propnew} can be easily generalized to non-constant degree graphs. In the current paper we first analyze the coloring properties induced by our multi-colored protocol. Then bounds on the size of monotone dynamos are shown. We conclude the paper by characterizing special classes of dynamos and outlining the next envisioned steps of our work. \section{Notation and Definitions} In this paper we study the global effects caused by the interaction among {\it stubborn} entities disposed on a toroidal mesh. \begin{definition} A toroidal mesh $T : (V,E)$ of $m \times n$ vertices is a mesh where each vertex $v_{i,j}$ ($0\leq i \leq m-1$ and $0\leq j \leq n-1$) is connected to the four vertices $v_{(i-1) \mod m,j}$, $v_{(i+1) \mod m,j}$, $v_{i,(j-1) \mod n}$ and $v_{i,(j+1) \mod n}$. \end{definition} Let $\mathcal{C}=\{1,\ldots, k\}$ be a finite ordered set of colors. A {\em coloring} of a torus $T$ is a function $r:\; V\rightarrow \mathcal{C}$.
If $r$ is a coloring of $T$ defined on two colors we refer to $T$ as a \textbf{bi-colored torus}, while if $r$ is a coloring of $T$ based on more than two colors we call $T$ a \textbf{multi-colored torus}. $N(x)$ denotes the neighborhood of any vertex $x$ in $V$; since we are studying toroidal meshes we have $|N(x)|=4$. Given a coloring $r$ of $V$, we can define the following irreversible simple majority rule \textbf{(StubSM-Protocol)}: \begin{center} \begin{algorithmic}[h!] \FOR {all $x$ $\in$ $V$} \STATE let $N(x)=\{a,b,c,d\}$ \IF { $(r(a)=r(b)> r(x))$ $\wedge$ $((r(c) \neq r(d)) \vee (r(c) = r(d) > r(x)))$} \STATE $r(x) \gets r(x)+1$ \ENDIF \ENDFOR \end{algorithmic} \end{center} For instance, let $\mathcal{C}=\{1,\ldots, 6\}$ be the finite ordered set of colors, and $T$ be the multi-colored torus shown in Figure \ref{figmulticol}. We represent $T$ as a matrix where the entry at the $i$th row and $j$th column is the color of $v_{i,j}$. \begin{figure}[h!] \begin{center} 6 4 2 4 \\ 4 3 5 1 \\ 6 5 2 6 \\ 1 4 4 3 \\ \caption{A multi-colored torus.} \label{figmulticol} \end{center} \end{figure} Figure \ref{figexample} illustrates the recoloring process of the multi-colored torus shown in Figure \ref{figmulticol} under the \textbf{StubSM-Protocol}. \begin{figure}[h!] 
\begin{center} 6 4 2 4 $\dashrightarrow$ 6 4 \textbf{3} 4 $\dashrightarrow$ 6 4 \textbf{4} 4 $\dashrightarrow$ 6 4 4 4 $\dashrightarrow$ 6 4 4 4 $\dashrightarrow$ 6 4 4 4 \\ 4 3 5 1 $\dashrightarrow$ \textbf{5} \textbf{4} 5 \textbf{2} $\dashrightarrow$ \textbf{6} \textbf{5} 5 \textbf{3} $\dashrightarrow$ 6 5 5 \textbf{4} $\dashrightarrow$ 6 5 5 \textbf{5} $\dashrightarrow$ 6 5 5 \textbf{6} \\ 6 5 2 6 $\dashrightarrow$ 6 5 \textbf{3} 6 $\dashrightarrow$ 6 5 \textbf{4} 6 $\dashrightarrow$ 6 5 \textbf{5} 6 $\dashrightarrow$ 6 5 5 6 $\dashrightarrow$ 6 5 5 6 \\ 1 4 4 3 $\dashrightarrow$ \textbf{2} 4 4 \textbf{4} $\dashrightarrow$ \textbf{3} 4 4 4 $\dashrightarrow$ \textbf{4} 4 4 4 $\dashrightarrow$ 4 4 4 4 $\dashrightarrow$ 4 4 4 4 \\ \caption{The coloring process of the multi-colored torus under the \textbf{StubSM-Protocol}.} \label{figexample} \end{center} \end{figure} Let $r^i(x)$ be the color of $x$ after $i$ iterations of the protocol. We notice that if $r(a)=r(b)>r(x)$, then $x$ recolors itself only if $r(c)\neq r(d)$ or $r(c)=r(d)>r(x)$. In the first case $r^i(x)=r(a)$ with $i=r(a)-r(x)$, unless a recoloring of its neighbors occurs. Similarly, if $r(x)<r(c)=r(d)\leq r(a)=r(b)$, then $r^i(x)=r(c)$ with $i=r(c)-r(x)$, unless a recoloring of its neighbors occurs. The concept can be expressed more formally as follows: \begin{lemma} Let $x$ be in $V$, and $N(x)=\{a,b,c,d\}$ such that $k\geq r(a)=r(b)> r(c) \neq r(d)$, and let $i=r(a)-r(x)$. Let $0\leq t^c_1\leq \ldots \leq t^c_i \leq i$ and $0\leq t^d_1\leq \ldots\leq t^d_i\leq i$ be the recoloring times of $c$ and $d$, respectively, during the first $i$ time steps under the \textbf{StubSM-Protocol}. If nodes $a$ and $b$ do not change color and $r^{t^c_1}(c)\neq r^{t^d_1}(d), \ldots, r^{t^c_i}(c)\neq r^{t^d_i}(d)$, then $r^i(x)=r(a)$. \label{lem1} \end{lemma} \begin{proof} At each time step $x$ recolors itself, except when $c$ and $d$ assume the same color. 
\end{proof} \begin{lemma} Let $x$ be in $V$, and $N(x)$ = $\{a, b, c, d\}$ such that $k \geq r(a) = r(b) > r(c) = r(d)>r(x)$. Let $0 \leq t_1^c \leq\ldots\leq t_i^c\leq i$ and $0 \leq t_1^d \leq \dots \leq t_i^d\leq i$ be the recoloring times of $c$ and $d$, respectively, during the first $i$ time steps under the \textbf{StubSM-Protocol}. If nodes $a$ and $b$ do not change color and $r^{t_1^c}(c),r^{t_1^d}(d)<r^{t_1^x}(x),\dots,r^{t_i^c}(c),r^{t_i^d}(d)<r^{t_i^x}(x)$, then $r^i(x) = r(a)$. \label{lem1bis} \end{lemma} Figure \ref{fig:lem1bis} shows an example of a configuration as expressed in Lemma \ref{lem1bis}. \begin{figure}[h!] \begin{center} 3 2 1 3 2 1 \\ 2 3 2 2 3 2 \\ 1 2 3 1 2 3 \\ 3 2 1 3 2 1 \\ 2 3 2 2 3 2 \\ 1 2 3 1 2 3 \\ \caption{A multi-colored torus as expressed in Lemma \ref{lem1bis}} \label{fig:lem1bis} \end{center} \end{figure} \begin{cor} Let $x$ be in $V$, and $N(x)=\{a,b,c,d\}$ such that $r(a)=r(b)=k$ and $r(c) > r(d)$. Then $r^i(x)=k$ with $i=k-r(x)$, if one of the two following conditions holds: \begin{itemize} \item[1.] $r(c)-r(d)\geq k-r(x)$; \item[2.] $c$ and $d$ do not recolor themselves. \end{itemize} \label{cor1} \end{cor} \begin{proof} Let $t^c_1,\ldots,t^c_i$ and $t^d_1,\ldots,t^d_i$ be the recoloring times of $c$ and $d$ under the \textbf{StubSM-Protocol}, respectively, at time $1,\ldots,i$. If $r(c)-r(d)\geq k-r(x)$, then $r^{t^c_1}(c)\neq r^{t^d_1}(d), \ldots, r^{t^c_i}(c)\neq r^{t^d_i}(d)$, and so $r^i(x)=k$. If $c$ and $d$ do not change their color, $t^c_1=\ldots=t^c_i=0$ and $t^d_1=\ldots=t^d_i=0$, and since $r(c) > r(d)$, the thesis follows by Lemma \ref{lem1}. \end{proof} From the condition $r(c)-r(d)\geq k-r(x)$ it follows that if $r(c)\geq r(x)$, then $r(d)\leq r(x)$, where the two equalities cannot hold simultaneously. \begin{lemma} Let $x$ be in $V$, and $N(x)=\{a,b,c,d\}$ such that $k\geq r(a)>r(b)> r(c) > r(d)$ or $k\geq r(a)>r(b)=r(x)=r(c) > r(d)$. 
If $a$ and $b$ recolor simultaneously at each time step and $c$ and $d$ do not recolor themselves, then $x$ does not recolor before $k-r(b)\;(>k-r(a)\geq 0)$ steps. \label{lem2} \end{lemma} \begin{proof} Vertex $x$ changes its color only when at least two of its neighbors have the same color greater than its own. This condition is achieved only when $a$ and $b$ assume color $k$, since the recoloring increases the color of the node $x$ and at each step the colors of $a$ and $b$ are different. \end{proof} We denote by $V^h$ the subset of $V$ containing all the $h$-colored vertices and by $S^h$ the subset of $T$ of all $h$-colored vertices ($h\in \mathcal{C}$). Furthermore, we denote the size of the smallest rectangle containing any $F\subseteq T$ by $m_F\times n_F$. The recoloring process represents the dynamics of the system. Depending on the initial coloring of $T$, we get different dynamics. Among the possible initial configurations (that is, assignments of colors) we are interested in those leading the system to a monochromatic configuration, namely dynamos. Formally, \begin{definition} Given an initial coloring of $T$ using colors $\mathcal{C}=\{1,\ldots, k\}$, the set $S^k$ is a \textbf{dynamo} if an all-$k$-color configuration is reached from $S^k$ in a finite number of steps under the \textbf{StubSM-Protocol}. \end{definition} In addition, the following definitions are needed. \begin{definition} Given an initial coloring of $T$ using colors in $\mathcal{C}=\{1,\ldots, k\}$, an $\textbf{h-block}$ $B^h$ is a connected subset of $T$ composed of vertices having the same color $h$, each having at least \textbf{two} neighbors in $B^h$, where $h\in \mathcal{C}$. \end{definition} Note that vertices in $B^h$ will never change their color. 
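Both the protocol dynamics and the stability of such blocks can be checked by direct simulation. The following Python sketch (function and variable names are ours; the rule is taken to hold when the condition is satisfied for \emph{some} labeling of the four neighbors as $\{a,b,c,d\}$) applies the \textbf{StubSM-Protocol} synchronously. Run on the torus of Figure \ref{figmulticol} it reproduces the five-step recoloring of Figure \ref{figexample}; run on the even-$m$ coloring of Figure \ref{figF} it reaches the monochromatic all-$k$ configuration, i.e., that coloring is a dynamo.

```python
# Synchronous simulation of the StubSM-Protocol on an m x n toroidal mesh.
# A vertex x advances one color step when, for some labeling {a,b,c,d} of its
# four neighbors, r(a) = r(b) > r(x) and either r(c) != r(d) or r(c) = r(d) > r(x).
from itertools import combinations

def step(torus):
    """One synchronous round of the StubSM-Protocol; returns the new torus."""
    m, n = len(torus), len(torus[0])
    new = [row[:] for row in torus]
    for i in range(m):
        for j in range(n):
            x = torus[i][j]
            nbrs = [torus[(i - 1) % m][j], torus[(i + 1) % m][j],
                    torus[i][(j - 1) % n], torus[i][(j + 1) % n]]
            for p, q in combinations(range(4), 2):
                c, d = (nbrs[t] for t in range(4) if t not in (p, q))
                if nbrs[p] == nbrs[q] > x and (c != d or c > x):
                    new[i][j] = x + 1   # one step toward the neighbors' color
                    break
    return new

def run_to_fixed_point(torus, max_steps=10_000):
    """Iterate until no vertex recolors; returns (fixed point, number of rounds)."""
    for s in range(max_steps):
        nxt = step(torus)
        if nxt == torus:
            return torus, s
        torus = nxt
    raise RuntimeError("no fixed point reached")

# The multi-colored 4 x 4 torus of the example reaches its fixed point
# after five rounds (not monochromatic: it is not a dynamo).
T = [[6, 4, 2, 4], [4, 3, 5, 1], [6, 5, 2, 6], [1, 4, 4, 3]]
final, rounds = run_to_fixed_point(T)

# The even-m example dynamo: first row and first column colored k = 6,
# remaining rows constant with colors 5, 4, 1, 3, 5.
D = [[6, 6, 6, 6, 6],
     [6, 5, 5, 5, 5],
     [6, 4, 4, 4, 4],
     [6, 1, 1, 1, 1],
     [6, 3, 3, 3, 3],
     [6, 5, 5, 5, 5]]
mono, _ = run_to_fixed_point(D)   # recolors the whole torus to all-6
```

Since a vertex of an $h$-block always keeps at least two neighbors of its own color $h$, no labeling of its neighborhood satisfies the rule, which is why such vertices never recolor in the simulation.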
For example, $B^h$ can be an $h$-colored column (or row), any submatrix of adjacent rows and columns, which we call a \textbf{window}, or any $h$-colored cycle $v_{i,j}, v_{i,j+1}, \ldots, v_{i,j'},$ $v_{i-1,j'}, \ldots, v_{i',j'},$ $v_{i',j'-1}, \ldots v_{i',j},$ $v_{i'-1,j},\ldots v_{i,j}$, which we call a \textbf{frame}. \begin{definition} A $\textbf{non-k-block}$ $NB^k$ is a connected subset of $T$ made up of vertices of colors in $\mathcal{C}\setminus \{k\}$, each having at least \textbf{three} neighbors in $NB^k$. \end{definition} This definition implies that every vertex in $NB^k$ has at most one $k$-colored neighbor, that is, vertices in $NB^k$ will never assume color $k$. For example, two adjacent rows or columns of vertices not colored $k$ constitute a non-$k$-block in a toroidal mesh. \section{Bounds on the size of a dynamo} By means of Corollary \ref{cor1}.1 we can derive the following proposition. \begin{proposition} Given a coloring of the torus $T$ of size $m\times n$ such that for every vertex $x$ in $V$ with $N(x)=\{a,b,c,d\}$ we have $r(a)=r(b)=k$ and either $r(c) \neq r(d)$ or $r(c)= r(d)> r(x)$, then $S^k$ is a dynamo of size greater than or equal to $mn/3$. \label{propnew} \end{proposition} \begin{proof} By Lemmas \ref{lem1} and \ref{lem1bis} it immediately follows that $S^k$ is a dynamo. Each node with a color different from $k$ has at least two neighbors of color $k$, so $2$ nodes out of $5$ in its closed neighborhood are $k$-colored. No conditions are imposed on the coloring of the neighborhood of $k$-colored nodes, so $1$ node out of $5$ is $k$-colored. As a consequence, $|S^k|\geq 2(mn-|S^k|)/5+|S^k|/5$, and hence $|S^k|\geq mn/3$. \end{proof} The lower bound provided by this proposition can be improved. Indeed, we are interested in determining the minimum size of a dynamo under the \textbf{StubSM-Protocol} for a multi-colored toroidal mesh. This is obtained by first computing a lower bound on the size and then an upper bound close to the lower bound. 
These bounds can be derived by a reduction to the bi-colored case (where $1$ and $2$ correspond to colors white and black, respectively). For the sake of completeness we recall here some definitions from \cite{Lodi98}. Under the \textit{reversible simple majority rule} a white vertex turns black if at least two of its neighbors are black, otherwise the vertex does not change color, and a black vertex becomes white only if at least three of its neighbors are white; under the \textit{irreversible strong majority rule} a white vertex turns black if at least three of its neighbors are black, otherwise the vertex does not change color, and a black vertex never changes its color. A \textit{simple} (respectively, \textit{strong}) \textit{white block} is a subset of $T$ composed of all white vertices, each of which has at least three (respectively, two) neighbors in the block. A dynamo is \textit{monotone} if the set of black vertices at time $t$ is a subset of the one at time $t+1$. We define a polynomial-time transformation $\phi: \mathcal{C}\rightarrow \mathcal{C}$ such that $\phi(h)=1$, for $h=1,\ldots,k-1$, and $\phi(k)=2$. This transformation allows us to map a multi-colored torus into a bi-colored torus. Moreover, under the transformation $\phi$, a $non$-$k$-block corresponds to a simple white block and an $h$-block corresponds to a strong white block. \begin{proposition} A lower bound on the size of a dynamo in a bi-colored torus under the (reversible) simple majority rule is a lower bound on the size of a dynamo in a multi-colored torus under the \textbf{StubSM-Protocol}. \label{prop1} \end{proposition} Indeed, a lower bound consists of the smallest size of $S^k$ such that no $non$-$k$-blocks can arise in the multi-colored problem, and of the smallest size of $S^2$ (the initial set of black vertices) such that no simple white blocks can arise in the bi-colored setting. Because of the correspondence between a $non$-$k$-block and a simple white block, the claim follows. 
Therefore we derive (see Theorem 9 of \cite{Lodi98} for simple monotone dynamos): \begin{theorem} Let $S^k$ be a dynamo for a colored toroidal mesh of size $m \times n$ under the \textbf{StubSM-Protocol}. We have \begin{itemize} \item (i) $m_{S^k}\geq m-1,\; n_{S^k}\geq n-1$; \item (ii) $|S^k|\geq m+n-2$. \end{itemize} \label{t9} \end{theorem} \begin{figure}[h!] \begin{center} 2 2 1 1 1 1 1 1\\ 2 2 1 1 1 1 1 1\\ 1 1 2 2 1 1 1 1\\ 1 1 2 2 1 2 2 1\\ 1 1 1 1 1 2 2 1\\ 1 1 1 1 1 1 1 1 \caption{A monotone dynamo of size $m+n-2$.} \label{figB1} \end{center} \end{figure} Figure \ref{figB1} illustrates a monotone dynamo (under the reversible simple majority rule) in a bi-colored torus. We notice that, by mapping color $2$ to $k$, it is possible to find an assignment of colors of $\mathcal{C}\setminus\{k\}$ to the $1$-colored vertices such that the obtained multi-colored torus leads to a monochromatic configuration under the \textbf{StubSM-Protocol}. \begin{proposition} An upper bound on the size of a dynamo in a bi-colored torus under the (irreversible) strong majority rule is an upper bound on the size of a dynamo in a multi-colored torus under the \textbf{StubSM-Protocol}. \label{prop2} \end{proposition} Indeed, in order to establish an upper bound on the size of $S^k$, no $h$-blocks may appear with $h=1,\ldots, k-1$, and the successively derived $k$-colored sets of vertices have to contain the set $V$ of all vertices at the end of the process. We have that: a) strong white blocks correspond to $h$-blocks; b) the irreversible strong majority rule is more restrictive than the \textbf{StubSM-Protocol}: under the irreversible strong majority rule a vertex recolors itself only if three vertices in its neighborhood have the same (greater) color, whereas under the \textbf{StubSM-Protocol} two neighbors with the same greater color suffice (with the remaining ones having different colors). 
Hence, in order to obtain an upper bound on the size of $S^2$ in the bi-colored problem, no strong white blocks may arise and the successively derived black sets of vertices have to contain the set $V$ of all the vertices at the end of the process; because of a) and b) the claim follows. Therefore we get (see Theorem 8 of \cite{Lodi98}): \begin{theorem} For a colored toroidal mesh of size $m \times n$ there exists a dynamo $S^k$ under the \textbf{StubSM-Protocol} with $|S^k|\leq \lceil{m/3}\rceil (n+1)$. \label{t8} \end{theorem} \begin{figure}[h!] \begin{center} 2 1 1 1 1 1 1 1\\ 1 2 1 2 1 2 1 2\\ 2 1 2 1 2 1 2 1\\ 2 1 1 1 1 1 1 1\\ 1 2 1 2 1 2 1 2\\ 2 1 2 1 2 1 2 1\\ 2 1 1 1 1 1 1 1\\ 1 2 1 2 1 2 1 2\\ 2 1 2 1 2 1 2 1 \caption{A strong irreversible dynamo of size $\lceil m/3\rceil (n+1)$.} \label{figB2} \end{center} \end{figure} Figure \ref{figB2} illustrates a strong irreversible dynamo of size $\lceil m/3\rceil (n+1)$. Note that for every assignment of colors of $\mathcal{C}\setminus\{k\}$, the set $S^k$ illustrated in Figure \ref{figB2} is a dynamo. \section{A minimum size dynamo} Theorem \ref{t8} establishes an upper bound far from the lower bound determined in Theorem \ref{t9}. In this section a minimum size dynamo is derived. If we choose $S^k$ as made up of the first row and column of the torus, then $|S^k|=m+n-1$, which is close to the lower bound in Theorem \ref{t9}. \begin{lemma} Let $S^k$ be a dynamo. Then $T-S^k$ does not contain any $h$-block, with $h\in \mathcal{C}\setminus \{k\}$. \label{l2} \end{lemma} Our choice of $S^k$ implies that no $h$-colored column or $h$-colored row can arise, but an $h$-colored window or an $h$-colored frame can, with $h\in \mathcal{C}\setminus \{k\}$. 
As a consequence we require that:\\ \noindent for every $2\times 2$ window in $T$, $r(v_{i,j})\neq r(v_{i+1,j+1})$ and $r(v_{i,j+1})\neq r(v_{i+1,j})$, unless $r(v_{i,j})= r(v_{i+1,j+1})=k$ (respectively, $r(v_{i,j+1})=r(v_{i+1,j})=k$).\\ This requirement does not forbid the appearance of an $h$-colored window during the recoloring process, as shown in Figure \ref{fig1}. \begin{figure}[h!] \begin{center} 8 8 8 8 8 8 $\dashrightarrow$ 8 8 8 8 8 8\\ 8 7 7 3 4 6 $\dashrightarrow$ 8 8 8 6 8 8\\ 8 1 4 3 7 5 $\dashrightarrow$ 8 5 \textbf{4} \textbf{4} 7 8\\ 8 1 4 3 2 4 $\dashrightarrow$ 8 3 \textbf{4} \textbf{4} 2 8\\ 8 1 4 3 2 4 $\dashrightarrow$ 8 6 4 3 4 8 \caption{An example in which a $4$-block emerges after five steps.} \label{fig1} \end{center} \end{figure} Therefore we focus on the recoloring process producing such an $h$-block, in order to avoid it. \begin{remark} Let $a_W,b_W,c_W,d_W$ be the vertices of any $2\times 2$ window $W$ with $r(a_W)\leq r(b_W) \leq r(c_W)\leq r(d_W)$, and let $i$ and $j$ be the numbers of recolorings of $a_W$ and $d_W$, respectively, under the \textbf{StubSM-Protocol}. No $h$-block can appear in $W$ during the recoloring process if $i-j<r(d_W)-r(a_W)$, with $(k>)h\geq r(d_W)$. In the example illustrated in Figure \ref{fig1}, $r(a_W)=r(v_{3,3})=3=r(b_W)=r(v_{2,3})$ and $r(c_W)=r(v_{3,2})=4=r(d_W)=r(v_{2,2})$: as a consequence of the recoloring of $v_{1,3}$, node $v_{2,3}$ assumes color $4$, and hence at the next (fourth) step $v_{3,3}$ recolors to $4$, causing the formation of a $4$-block. Note that $i-j=1-0=4-3$. \end{remark} This remark suggests that a dynamo can be outlined by an assignment of the initial distribution of colors which takes into account the recoloring pattern due to the $S^k$ considered. Let us add a fictitious color $\infty$ to the finite set $\mathcal{C}$ of colors. This color is greater than every other color, but two colors $\infty$ are not comparable. 
\begin{definition} A \textbf{NordWest-window} $T^{NW}(i^*,j^*)$ of a $m\times n$ toroidal mesh $T$ in $(i^*,j^*)$ is the submesh of $T$ having vertices $v_{i,j}$, $0\leq i<i^*<m$ and $0\leq j<j^*<n$, augmented by an $i^*$-th row and a $j^*$-th column of vertices colored by $\infty$. \end{definition} \begin{definition} A \textbf{NordEast-window} $T^{NE}(i^*,j^*)$ of a $m\times n$ toroidal mesh $T$ in $(i^*,j^*)$ is the submesh of $T$ having vertices $v_{i,j}$, $0\leq i<i^*<m$ and $0<j^*< j< n$, augmented by an $i^*$-th row and a $j^*$-th column of vertices colored by $\infty$. \end{definition} \begin{definition} A \textbf{SouthWest-window} $T^{SW}(i^*,j^*)$ of a $m\times n$ toroidal mesh $T$ in $(i^*,j^*)$ is the submesh of $T$ having vertices $v_{i,j}$, $0<i^*<i < m$ and $0\leq j<j^*< n$, augmented by an $i^*$-th row and a $j^*$-th column of vertices colored by $\infty$. \end{definition} \begin{definition} A \textbf{SouthEast-window} $T^{SE}(i^*,j^*)$ of a $m\times n$ toroidal mesh $T$ in $(i^*,j^*)$ is the submesh of $T$ having vertices $v_{i,j}$, $0<i^*<i < m$ and $0<j^*< j < n$, augmented by an $i^*$-th row and a $j^*$-th column of vertices colored by $\infty$. \end{definition} \begin{lemma} Let $T^{NW}(i^*,j^*)$ be the NordWest-window of a $m\times n$ toroidal mesh $T$ such that \begin{itemize} \item $v_{i,0}=v_{0,j}=k$, for $i=0,\ldots,i^*-1$ and $j=1,\ldots, j^*-1$; \item $v_{i,1}\geq \ldots \geq v_{i,j^*-1}$, for $0<i<i^*$; \item $v_{1,j}\geq \ldots \geq v_{i^*-1,j}$, for $0<j<j^*$; \item $v_{i,j} >v_{i+1,j-1}$ for all $i,j$ such that $i+j-1=l$, for $3\leq l < i^*+j^*-3$. \end{itemize} Then, all the vertices of $T^{NW}(i^*,j^*)\cap T$ recolor to $k$ after $$ M(i^*-1,j^*-1)= \max \left\{ M(i^*-1,j^*-2),\, M(i^*-2,j^*-1)\right\} +k-r(v_{i^*-1,j^*-1})$$ steps, with $M(0,j)=M(i,0)=0$, for $i=0,\ldots,i^*-1$ and $j=1,\ldots, j^*-1$. 
\label{lemNW} \end{lemma} \begin{proof} Let $M(i,j)$ denote the number of steps needed for vertex $v_{i,j}$ to reach color $k$. By the first condition we get $M(0,j)=M(i,0)=0$, for $i=0,\ldots,i^*-1$ and $j=1,\ldots, j^*-1$. Given the initial configuration of $T^{NW}(i^*,j^*)$, in the first round only node $v_{1,1}$ recolors itself, since $r(v_{0,1})=r(v_{1,0})=k$ and $r(v_{1,2})>r(v_{2,1})$, whereas all the other vertices have neighbors with different colors or two neighbors of color $\infty$. In the second round, the recoloring of $v_{1,1}$ changes the chromatic configuration of the neighborhoods of $v_{1,2}$ and $v_{2,1}$. By Lemma \ref{lem2} with $x=v_{1,2}$ and $r(a)=r(v_{0,2})=k$ ($x=v_{2,1}$ and $r(a)=r(v_{2,0})=k$), $v_{1,2}$ does not change color until node $v_{1,1}$ reaches $k$. This happens after $k-r(v_{1,1})$ steps, hence $M(1,1)=k-r(v_{1,1})$, which verifies the relation. At step $k-r(v_{1,1})+1$, $v_{1,2}$ and $v_{2,1}$ recolor themselves while all the other vertices do not change. The recoloring of $v_{1,2}$ and $v_{2,1}$ changes the colors of the neighborhoods of $v_{1,3},\; v_{2,2}$ and $v_{3,1}$. By Lemma \ref{lem2}, $v_{2,2}$ does not advance, and nodes $v_{1,3}$ and $v_{3,1}$ behave likewise. Node $v_{1,2}$ reaches $k$ after an additional $k-r(v_{1,2})$ steps, that is, $M(1,2)=\max(M(1,1),0)+k-r(v_{1,2})= k-r(v_{1,1})+ k-r(v_{1,2})$, and $M(2,1)=\max(0,M(1,1))+k-r(v_{2,1})= k-r(v_{1,1})+ k-r(v_{2,1})$. Therefore, first node $v_{1,3}$ and then $v_{2,2}$ and $v_{3,1}$ start recoloring, since $r(v_{1,2})>r(v_{2,1})$. By the same considerations as before, we can conclude that $v_{1,3}$, and then $v_{2,2}$ and $v_{3,1}$, become $k$-colored after $M(1,3)=\max(M(1,2),0)+k-r(v_{1,3})$, $M(2,2)=\max(M(2,1),M(1,2))+k-r(v_{2,2})$ and $M(3,1)=\max(0, M(2,1))+k-r(v_{3,1})$ steps, respectively. At the end of the process all vertices in $T^{NW}(i^*,j^*)\cap T$ are $k$-colored. 
\end{proof} Analogous lemmas can be stated for the NordEast-, SouthWest- and SouthEast-windows of $T$. \begin{lemma} Let $T^{NE}(i^*,j^*)$ be the NordEast-window of a $m\times n$ toroidal mesh $T$ such that \begin{itemize} \item $v_{i,n-1}=v_{0,j}=k$, for $i=0,\ldots,i^*-1$ and $j=j^*+1,\ldots, n-1$; \item $v_{i,j^*+1}\leq \ldots \leq v_{i,n-1}$, for $0<i<i^*$; \item $v_{1,j}\geq \ldots \geq v_{i^*-1,j}$, for $j^*<j<n$; \item $v_{i,j} >v_{i+1,j+1}$ for all $i,j$ such that $n-j+i=l$, for $3\leq l < n-j^*+i^*-3$. \end{itemize} Then, all the vertices of $T^{NE}(i^*,j^*)\cap T$ recolor to $k$ after $$ M(i^*-1,j^*+1)= \max \left\{ M(i^*-1,j^*+2),\, M(i^*-2,j^*+1)\right\} +k-r(v_{i^*-1,j^*+1})$$ steps, with $M(0,j)=M(i,n-1)=0$, for $i=0,\ldots,i^*-1$ and $j=j^*+1,\ldots, n-1$. \label{lemNE} \end{lemma} \begin{lemma} Let $T^{SW}(i^*,j^*)$ be the SouthWest-window of a $m\times n$ toroidal mesh $T$ such that \begin{itemize} \item $v_{i,0}=v_{m-1,j}=k$, for $i=i^*+1,\ldots,m-1$ and $j=1,\ldots, j^*-1$; \item $v_{i,1}\geq \ldots \geq v_{i,j^*-1}$, for $i^*<i<m$; \item $v_{i^*+1,j}\leq \ldots \leq v_{m-1,j}$, for $0<j<j^*$; \item $v_{i,j} <v_{i+1,j-1}$ for all $i,j$ such that $i+j-1=l$, for $i^*+2\leq l < m+j^*-4$. \end{itemize} Then, all the vertices of $T^{SW}(i^*,j^*)\cap T$ recolor to $k$ after $$ M(i^*+1,j^*-1)= \max \left\{ M(i^*+1,j^*-2),\, M(i^*+2,j^*-1)\right\} +k-r(v_{i^*+1,j^*-1})$$ steps, with $M(m-1,j)=M(i,0)=0$, for $i=i^*+1,\ldots,m-1$ and $j=1,\ldots, j^*-1$. 
\label{lemSW} \end{lemma} \begin{lemma} Let $T^{SE}(i^*,j^*)$ be the SouthEast-window of a $m\times n$ toroidal mesh $T$ such that \begin{itemize} \item $v_{i,n-1}=v_{m-1,j}=k$, for $i=i^*+1,\ldots,m-1$ and $j=j^*+1,\ldots, n-1$; \item $v_{i,j^*+1}\leq \ldots \leq v_{i,n-1}$, for $i^*<i<m$; \item $v_{i^*+1,j}\leq \ldots \leq v_{m-1,j}$, for $j^*<j<n$; \item $v_{i,j} <v_{i+1,j+1}$ for all $i,j$ such that $n-j+i=l$, for $i^*+3\leq l < n-j^*+m-3$. \end{itemize} Then, all the vertices of $T^{SE}(i^*,j^*)\cap T$ recolor to $k$ after $$ M(i^*+1,j^*+1)= \max \left\{ M(i^*+1,j^*+2),\, M(i^*+2,j^*+1)\right\} +k-r(v_{i^*+1,j^*+1})$$ steps, with $M(m-1,j)=M(i,n-1)=0$, for $i=i^*+1,\ldots,m-1$ and $j=j^*+1,\ldots, n-1$. \label{lemSE} \end{lemma} Let $r_i=r(v_{i,1})=r(v_{i,2})=\ldots =r(v_{i,n-1})$, for $i=0,\ldots, m-1$, and $c_0=r(v_{0,0})=r(v_{1,0})=\ldots =r(v_{m-1,0})$. \begin{theorem} Given a coloring $r$ of the vertices of a $m\times n$ toroidal mesh $T$, let $S^k$ be constituted by the first row and column, i.e. $r_0=k$ and $c_0=k$; let $r_i=r_{m-i}$, $r_i>r_{i+1}$, and $r_{m-i}>r_{m-i-1}$ for $i=1,\ldots,\lceil m/2\rceil-2$;\begin{itemize} \item if $m$ is even: let 1) $r_{m/2-1}>r_{m/2}, r_{m/2+1}$, 2) $r_{m/2+1}> r_{m/2}$, 3) $r_{m/2-1}+r_{m/2}<2 r_{m/2+1}$; \item if $m$ is odd: let 1) $r_{\lceil m/2\rceil-1}>r_{\lceil m/2\rceil}$, 2) $k+r_{\lceil m/2\rceil}<2 r_{\lceil m/2\rceil-1}$; \end{itemize} where $k-r_{\lceil m/2 \rceil-1}\geq \lceil m/2 \rceil-1$. Then $S^k$ is a dynamo. \end{theorem} \begin{proof} By Lemmas \ref{lemNW}--\ref{lemSE}, all the vertices of $T^{NW}(\lceil m/2\rceil-1,\lceil n/2 \rceil)$, $T^{NE}(\lceil m/2\rceil-1,\lceil n/2 \rceil-1)$, $T^{SW}(\lceil m/2\rceil+1,\lceil n/2 \rceil)$ and $T^{SE}(\lceil m/2\rceil+1,\lceil n/2 \rceil-1)$ recolor to $k$ at the end of the recoloring process. Consider now rows $\lceil m/2\rceil-1$, $\lceil m/2\rceil$, $\lceil m/2\rceil+1$. 
\begin{itemize} \item Let $m$ be even: since $r_{m/2-1}>r_{m/2}, r_{m/2+1}$ (condition 1), $v_{m/2,1}$ ($v_{m/2,n-1}$) starts to recolor itself only after $v_{m/2-1,1}$ ($v_{m/2-1,n-1}$) has recolored to $k$. We show that the vertices of rows $m/2-1$ and $m/2+1$ recolor to $k$ by Corollary \ref{cor1}.2, because every vertex of the $m/2$th row starts to recolor only when its neighbors on rows $m/2-1$ and $m/2+1$ are $k$-colored. We prove that the color assumed by $v_{m/2,1}$ during the recoloring is always different from the color of $v_{m/2+1,2}$. The number of recolorings of $v_{m/2,1}$ needed to reach the color $r(v_{m/2+1,2})$ is $ r_{m/2+1}- r_{m/2}$. Since $r(v_{m/2+1,2})>r(v_{m/2,1})$ (condition 2) and $v_{m/2+1,2}$ starts to recolor only after $v_{m/2+1,1}$ has recolored to $k$, if this happens before $ r_{m/2+1}- r_{m/2}$ steps, the colors of $v_{m/2,1}$ and $v_{m/2+1,2}$ are different at each time step. We have that $k-(r_{m/2+1}+k-r_{m/2-1})=r_{m/2-1}-r_{m/2+1}$, which is less than $ r_{m/2+1}- r_{m/2}$ by condition 3. Therefore, in a first phase $v_{m/2+1,1}$ recolors to $k$; then $v_{m/2,2}$ starts to recolor after $v_{m/2+1,2}$ has recolored to $k$; and finally, by Lemma \ref{lem1}, $v_{m/2,2}$ becomes $k$-colored. The same can be proved for all the vertices of the $m/2$th row, so that no $h$-block with $h\neq k$ can arise, and at the end of the process all the vertices are $k$-colored. \item Let $m$ be odd: $v_{\lceil m/2\rceil-1,1}$ and $v_{\lceil m/2\rceil,1}$ start to recolor at the same time. Although $r_{\lceil m/2\rceil-1}>r_{\lceil m/2\rceil}$ (condition 1), we show that the color assumed by $v_{\lceil m/2\rceil,1}$ during the recolorings is always different from the color of $v_{\lceil m/2\rceil-1,2}$. Indeed $r_{\lceil m/2\rceil}+k-r_{\lceil m/2\rceil-1}<r_{\lceil m/2\rceil-1}$ by condition 2. 
Therefore every vertex of row ${\lceil m/2\rceil}$ starts to recolor when it has two neighbors of color $k$ and the remaining neighbors have different colors. By Lemma \ref{lem1} every vertex recolors to $k$, and no $h$-block with $h\neq k$ appears. \end{itemize} \end{proof} \begin{figure}[h!] \begin{center} 6 6 6 6 6 \hspace{.2in} 6 6 6 6 6\\ 6 5 5 5 5 \hspace{.2in} 6 5 5 5 5\\ 6 4 4 4 4 \hspace{.2in} 6 4 4 4 4\\ 6 1 1 1 1 \hspace{.2in} 6 1 1 1 1\\ 6 3 3 3 3 \hspace{.2in} 6 5 5 5 5\\ 6 5 5 5 5 \hspace{.82in} \caption{An example of a dynamo for $m$ even and for $m$ odd.} \label{figF} \end{center} \end{figure} \section{Conclusions} In this paper we introduced {\em multicolored dynamos}, a new problem that extends the original {\em target set selection} (TSS) problem. In our setting the set of the nodes' states is not limited to white or black, as in dynamic monopolies, but vertices can assume values from a finite and ordered set. This protocol finds application in contexts where collective decisions can be influenced by malicious behaviors, e.g. partial copies of corrupted data or faulty sensors, and where a slow convergence process (due to an implicit trust strategy implemented in the protocol) is desirable. In this work we characterized the nodes' coloring patterns in terms of the neighbors' influence, which is a function of the nodes' degrees. Several case studies and interesting questions remain open. For instance, other kinds of topologies, such as scale-free networks or random graphs, could be considered under the \textbf{StubSM-Protocol}, allowing a comparative analysis with respect to other algorithmic models of social influence and viral marketing on social networks, e.g. the bounded confidence model \cite{amblard01}. 
Furthermore, considering the growing attention to the dynamic aspects of social networks, such a protocol should also be studied on graphs where the availability of links and nodes is subject to change over time \cite{CFQS2010a}. This setting leads to a different definition of majority, and the resulting propagation patterns should be investigated and characterized. Moreover, instead of studying initial configurations leading to a monochromatic configuration, the problem of determining initial configurations that prevent the convergence of the whole system toward a monochromatic fixed point could be investigated. \bibliographystyle{plain}
\section{Introduction\label{ss:theory_r}} The two predominant astrophysical mechanisms for the production of heavy elements in the universe are the slow neutron-capture process (s-process) and the rapid neutron-capture process (r-process) \cite{RevModPhys.29.547}. The r-process occurs in high neutron-density environments in the universe, where the average time for neutron capture is smaller than the half-life of the radioactive isotope, and an equilibrium between neutron capture, (n,$\gamma$), and photodisintegration, ($\gamma$,n), has established itself. After this so-called steady phase of the r-process, the free neutrons disappear and the nuclei $\beta$ decay back to stability, a process called freeze-out. The most dominant features of the abundance distribution of the elements created during the steady phase and freeze-out are the large peaks at $A\approx80$, $A\approx130$ and $A\approx195$ that are due to the r-process flow through closed shells. The second most pronounced feature is the peak at $A\approx160$ in the rare-earth region. While the closed-shell peaks are well understood to be formed during the steady phase \cite{RevModPhys.29.547}, it has been argued that the $A\approx160$ peak is due to the deformation of the nuclei created after the steady phase, just before freeze-out \cite{surman1}. One proposed explanation is that, once the deformation maximum is reached, the nucleus cannot deform further, so the next heavier nucleus will be less stable, an effect that can mimic closed shells. \section{Deformation systematics} One of the standard references for nuclear masses, and deformations, is the calculations made by M\"{o}ller and Nix using the finite-range liquid-drop model \cite{mollernix}. This reference has, for example, been used in the calculations in \cite{surman1, surman2}, reproducing the $A=160$ r-process peak position but slightly underestimating the low-$A$ side and slightly overestimating the high-$A$ side. 
In \cite{mollernix}, the deformations behave smoothly with a maximum in the region around $N\approx102$--$104$. In figure~\ref{fig:mnsystematics}, the evaluated experimental deformations from \cite{nndc} are shown; the calculations from \cite{mollernix} do not appear to follow the same pattern as the experimental data. Besides the larger absolute variation in the deformations for different $Z$ values, the deformation maximum for each $Z$ does not appear to be as stable around $N\approx104$ as in the M\"{o}ller and Nix calculations. Rather, the deformations seem to peak at lower values of $N$ for lower values of $Z$, as further discussed in \cite{naturalVMI}. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{fig1.pdf} \caption{Experimentally measured $\beta_2$ values (left) and $J_0$ Harris parameters (right) for a selection of even-even nuclei \protect\cite{my_thesis}.} \label{fig:mnsystematics} \end{figure} As the experimental data on nuclear deformation are quite sparse in this neutron-rich region, other ways of understanding the evolution of nuclear deformations have to be investigated. For example, one can use the excitation energy spectrum together with the VMI model, which is based on the Harris parameters $[J_0,J_1]$. The parameter $J_0$ can be related to the deformation of the nucleus and $J_1$ to the amount of freezing of the internal structure, that is, the rigidity. In this way it is possible to obtain a much richer set of data than when using only experimental deformations directly. Recently, an experiment to study the yrast bands in the mid-shell nuclei \isotope{168,170}{Dy} ($N=102,104$) was carried out at the PRISMA and CLARA set-up at LNL, the results of which are shown in figure~\ref{fig:dy_syst}. The VMI fits have been made according to the procedure in \cite{naturalVMI}, with the inclusion of the data on \isotope{168}{Dy} \cite{PhysRevC.81.034310} and \isotope{170}{Dy} \cite{PhysRevC.81.034310,chinphyslett}. 
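As an illustration of how Harris parameters can be extracted from a level scheme, the following Python sketch (our construction, not the actual fitting procedure of \cite{naturalVMI}) performs a least-squares fit of the cranking relation $I_x(\omega)=J_0\,\omega+J_1\,\omega^3$ to a ground-state band, using the simple approximations $I_x=\sqrt{I(I+1)}$ and $\hbar\omega\approx[E(I)-E(I-2)]/[I_x(I)-I_x(I-2)]$; the input band is a synthetic rigid rotor, not measured data.

```python
# Hedged sketch: extract Harris parameters [J0, J1] from yrast-band energies.
# Synthetic rigid-rotor input E(I) = I(I+1)/(2*J0_true), so the fit should
# return J0 close to J0_true and J1 close to 0.  Units are arbitrary (hbar = 1).
import numpy as np

def harris_fit(spins, energies):
    """Least-squares fit of I_x(w) = J0*w + J1*w**3 to a ground-state band.

    spins    -- band spins I = 0, 2, 4, ... (even-even yrast band)
    energies -- level energies E(I), in the same order
    """
    I = np.asarray(spins, dtype=float)
    E = np.asarray(energies, dtype=float)
    Ix = np.sqrt(I * (I + 1.0))                  # aligned angular momentum
    w = np.diff(E) / np.diff(Ix)                 # rotational frequency per transition
    Ix_mid = 0.5 * (Ix[1:] + Ix[:-1])            # midpoint of each transition
    A = np.column_stack([w, w ** 3])             # design matrix for [J0, J1]
    (J0, J1), *_ = np.linalg.lstsq(A, Ix_mid, rcond=None)
    return J0, J1

# Rigid rotor with J0_true = 35 (arbitrary units): E(I) = I(I+1)/(2*35)
J0_true = 35.0
spins = np.arange(0, 12, 2)
energies = spins * (spins + 1) / (2.0 * J0_true)
J0, J1 = harris_fit(spins, energies)
```

A larger fitted $J_0$ then corresponds to a larger moment of inertia, i.e., stronger deformation, which is the quantity tracked in figure~\ref{fig:mnsystematics}.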
The experimental data from \cite{nndc} are shown together with the Harris parameters $J_0$ in figure~\ref{fig:mnsystematics}. \begin{figure} \centering \includegraphics[angle=-90,width=0.8\textwidth]{fig2.pdf} \caption[Ground state rotational bands for dysprosium isotopes with $N=94$--$104$]{Ground state rotational bands for dysprosium isotopes with $N=94$--$104$ from \protect\cite{nndc} and for $6^+$--$10^+$ in $^{168}$Dy and the $4^+\to2^+$ transition in \isotope{170}{Dy} \protect\cite{PhysRevC.81.034310}. The $2^+\to0^+$ transition in \isotope{170}{Dy} is from the calculations in \protect\cite{chinphyslett}.} \label{fig:dy_syst} \end{figure} These new data points show an increase in $J_0$, suggesting that the $N=104$ deformation maximum could be reasonably stable at least from Dy to Hf, $66\leq Z \leq 72$. However, further investigations at lower $Z$ and higher $N$, as well as direct measurements of the deformation parameters, are important for a full understanding of the evolution of nuclear deformations.

\section{Outlook}
By taking advantage of new detector technology, such as AGATA \cite{agata}, and of the possibility to use a heavier beam, such as \isotope{136}{Xe}, both the detection power and the production cross-section of the experiment described in \cite{PhysRevC.81.034310} could be improved considerably. Thus, it would be possible to extend the systematic studies of collectivity further into the neutron-rich region using multi-nucleon transfer reactions. The production cross-sections of dysprosium isotopes for three different types of ion beams, calculated using the \texttt{grazing} code \cite{grazing1,grazing2}, are shown in figure~\ref{fig:grazing-Dy}.
\begin{figure} \centering \includegraphics[width=0.4\textwidth]{fig3a.pdf} \includegraphics[width=0.4\textwidth]{fig3b.pdf} \caption[Grazing calculations of production cross-sections for dysprosium isotopes]{Grazing calculations of production cross-sections for dysprosium isotopes (left) using an \isotope{170}{Er} target and a \isotope{48}{Ca} beam with an energy of 230~MeV (dotted), a \isotope{82}{Se} beam with an energy of 460~MeV (dashed) \protect\cite{PhysRevC.81.034310} and a \isotope{136}{Xe} beam with an energy of 1000~MeV (solid). Calculated production cross-sections of the Dy isotopic chain in the FRS (right) for the primary beams \isotope{176}{Yb} (solid), \isotope{186}{W} (dashed), \isotope{197}{Au} (dotted) and \isotope{198}{Pt} (dash-dotted). See \protect\cite{my_thesis} for details.} \label{fig:grazing-Dy} \end{figure} A valuable complement to the multi-nucleon transfer reaction measurements is provided by fragmentation reactions. Using these it is possible to establish $B(\mathrm{E2})$ values to determine the electric quadrupole moments and the degree of triaxial deformation, and thus the evolution of quadrupole collectivity, for a range of neutron-rich rare-earth nuclei \cite{0954-3899-36-11-115104}. By relativistic Coulomb excitation it would be possible to determine the \betwo{0^{+}}{2^{+}_{1}} and \betwo{0^{+}}{2^{+}_{2}} for a large range of neutron-rich even-even nuclei in the rare-earth region \cite{pregan}; see for example the Dy isotopes in figure~\ref{fig:grazing-Dy}. In this figure the production cross-sections, calculated with the LISE++ code \cite{lise1,lise2}, are shown for a couple of primary beams from the fragment separator (FRS) at GSI with energies of 800~MeV per nucleon and projectile fragmentation on a 4~g/cm$^{2}$ beryllium target. A problem with this kind of measurement, however, is the large background from bremsstrahlung radiation at $\gamma$-ray energies $E_{\gamma}<300$~keV, which is why detailed simulations are required.
\section{Introduction} \setcounter{equation}{0} Consider the classical Lane-Emden system \begin{align}\label{1.1} -\Delta u = v^p, \quad-\Delta v= u^\theta,\quad u,v>0\quad\mbox{in }\; \mathbb{R}^N, \quad\mbox{where }\; p,\theta >0. \end{align} There is a famous conjecture which states that: {\sl Let $p, \theta > 0$. If the pair $(p, \theta)$ is subcritical, i.e.~if \begin{align}\label{LE} \frac{1}{p+1} + \frac{1}{\theta +1} > \frac{N-2}{N}, \end{align} then there is no smooth solution to \eqref{1.1}.} \medskip The critical curve given by the equality in \eqref{LE} is called the Sobolev hyperbola. It was introduced independently by Mitidieri \cite{em} and Van der Vorst \cite{van}, and it plays a crucial role in the analysis of \eqref{1.1}. It is well known that if $(p, \theta)$ lies on or above the Sobolev hyperbola, then \eqref{1.1} admits radial classical solutions (see \cite{em1, Zs}), and the Lane-Emden conjecture can be restated as follows: there is no smooth solution to \eqref{1.1} if the positive pair $(p, \theta)$ lies below the Sobolev critical hyperbola. \medskip The conjecture has been proved for radial functions by Mitidieri \cite{em1} and Serrin-Zou \cite{Zs1}. For the full conjecture, Souto \cite{s}, Mitidieri \cite{em1} and Serrin-Zou \cite{Zs} proved that there is no supersolution to \eqref{1.1} if $p\theta \leq 1$ or $\max(\alpha, \beta) \geq N-2$, where \begin{align}\label{ab} \alpha = \frac{2(p+1)}{p\theta - 1}, \quad \beta = \frac{2(\theta +1)}{p\theta - 1}, \quad p\theta > 1. \end{align} Moreover, one can readily check that if $p\theta > 1$, the condition \eqref{LE} is equivalent to \begin{align} \label{LEbis} N < 2 + \alpha + \beta. \end{align} Therefore, the Lane-Emden conjecture is true in dimensions $N = 1, 2$. More recently, the conjecture was proved in dimensions $N = 3, 4$ by Souplet and his collaborators, see \cite{pqs, sou}.
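For the reader's convenience, the equivalence between \eqref{LE} and \eqref{LEbis} for $p\theta > 1$ can be verified in a few lines:
\begin{align*}
\frac{1}{p+1} + \frac{1}{\theta+1} > \frac{N-2}{N}
&\;\Longleftrightarrow\; N(p+\theta+2) > (N-2)(p+1)(\theta+1)\\
&\;\Longleftrightarrow\; N(p\theta-1) < 2(p+1)(\theta+1)\\
&\;\Longleftrightarrow\; N < \frac{2(p+1)(\theta+1)}{p\theta-1}
= 2 + \frac{2(p+1)}{p\theta-1} + \frac{2(\theta+1)}{p\theta-1}
= 2 + \alpha + \beta,
\end{align*}
where the second equivalence uses $(p+1)(\theta+1) - (p+\theta+2) = p\theta - 1$, and the direction of the inequality is preserved precisely because $p\theta > 1$.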
For $N \geq 5$, the conjecture is known to be true for $(p, \theta)$ verifying \eqref{LE} and one of the following extra conditions: \begin{itemize} \item If $p, \theta < \frac{N+2}{N-2}$, see Felmer-de Figueiredo \cite{ff}. \item If $\max(p, \theta) \geq N-3$, see Souplet \cite{sou}. \item If $\min(\alpha, \beta) \geq \frac{N-2}{2}$, see Busca-Man\'asevich \cite{bm}\footnote{In \cite{bm}, there is another extra condition, which is no longer necessary after the work of Souplet \cite{sou}.}. \item If $p = 1$ or $\theta = 1$, see Lin \cite{lin}. \end{itemize} These partial results leave a more restricted open region for the exponents $(p, \theta)$, which is illustrated in the figure below. In other words, the Lane-Emden conjecture remains open for $N \geq 5$ and $p,\theta > 0$ such that \begin{align*} p, \theta \ne 1, \quad \min(\alpha, \beta) < \frac{N-2}{2} \quad\mbox{and}\quad \max(\alpha, \beta) < N-3. \end{align*} \begin{figure}[h] \begin{center} \includegraphics[width=8cm,height=6cm]{Lane-Emden.png} \caption{The remaining open region (shaded) for the Lane-Emden conjecture ($N \geq 5$)} \label{monlabel} \end{center} \end{figure} On the other hand, in the last decade, many efforts have been made to obtain Liouville type results for solutions with finite Morse index or, more generally, solutions which are stable at infinity. To define the notion of stability, we consider a general system \begin{align}\label{1.222} -\Delta u = f(x,v),\quad -\Delta v= g(x,u) \quad\mbox{in a bounded regular domain }\; K \subset \mathbb R^N, \end{align} where $f,g\in C^1(K \times \mathbb{R}).$ Following Montenegro \cite{MO}, a smooth solution $(u,v)$ of \eqref{1.222} is said to be stable in $K$ if the eigenvalue problem \begin{align*} -\Delta \xi = f_{v}(x,v)\zeta +\eta \xi, \quad -\Delta \zeta = g_{u}(x,u)\xi + \eta\zeta \quad \mbox{in }\, K \end{align*} admits a nonnegative eigenvalue $\eta$ with a pair of positive smooth eigenfunctions $(\xi, \zeta)$.
We say that a solution $(u, v)$ of \eqref{1.1} is stable outside a compact set, or stable at infinity, if there is a compact set $K\subset \mathbb R^N$ such that $(u, v)$ is stable in any bounded domain of $\mathbb R^N\backslash K$. \medskip For the corresponding second order equation \begin{align} \label{2nd} \Delta u+ |u|^{q-1}u =0 \quad \mbox{in } \mathbb R^N, \quad q > 1, \end{align} Farina \cite{Far} obtained the optimal Liouville type result for solutions stable at infinity. Indeed, he proved that a smooth nontrivial solution to \eqref{2nd} which is stable outside a compact set exists if and only if $q> p_{JL}$ and $N\geq 11,$ or $q =\frac{N+2}{N-2}$ and $N\geq 3.$ Here $p_{JL}$ denotes the so-called Joseph-Lundgren exponent (see \cite{CW, Far}). For the biharmonic equation $\Delta^2 u = |u|^{q-1}u$, $q > 1$, D\'{a}vila-Dupaigne-Wang-Wei \cite{ddww} derived a striking monotonicity formula, which led them to the optimal classification result for solutions stable at infinity, using blow-down analysis. \medskip Coming back to the Lane-Emden system \eqref{1.1}, Chen-Dupaigne-Ghergu \cite{WLD} studied the stability of radial solutions when $p, \theta \geq 1.$ They introduced a new critical hyperbola, called the Joseph-Lundgren curve. More precisely, they proved that if $p, \theta \geq 1,$ then a radial solution of \eqref{1.1} is unstable if and only if $N \leq 10$, or $N \geq 11$ and \begin{equation*} \left[\frac{(N-2)^2-(\alpha-\beta)^2}{4}\right]^2<p\theta\alpha\beta(N-2-\alpha)(N-2-\beta). \end{equation*} Moreover, Cowan proved in \cite{cow} that if $p, \theta\geq 2$ and $N \leq 10$, there does not exist any stable solution (radial or not) to \eqref{1.1}. Recently, Hajlaoui-Harrabi-Mtiri \cite{Hfh} established some Liouville theorems for smooth stable solutions of \eqref{1.1} with $p > 1$, see Theorem {\bf A} below.
We mention also the celebrated result of Ramos \cite{rm}, which states that if $p, \theta > 1$ satisfy \eqref{LE}, then the system $$-\Delta u = |v|^{p-1}v, \;\; -\Delta v = |u|^{\theta-1}u \quad \mbox{in } \mathbb R^N$$ does not admit any smooth solution having finite {\sl relative} Morse index in the sense of Abbondandolo. \medskip In this paper, our motivation is twofold. We want to obtain classification results for solutions (radial or not) of \eqref{1.1} which are merely stable at infinity, and we want to handle the case where $p, \theta$ are allowed to be less than 1. So a natural question is: can we prove the Lane-Emden conjecture under the extra assumption that $(u, v)$ is stable at infinity? The answer is affirmative. \begin{thm} \label{main4} For any $p, \theta > 0$ satisfying \eqref{LE}, the system \eqref{1.1} has no classical solution stable outside a compact set. \end{thm} If $\theta = p$, using Souplet's comparison result (Lemma 2.7 in \cite{sou}), we get $u \equiv v$, so the optimal classification result for solutions stable at infinity was already given by Farina. The classification is also known for $p\theta \leq 1$ as mentioned above. Without loss of generality, we consider only $\theta > p > 0$ and $p \theta > 1$. As we will see shortly, the case $\theta > p \geq 1$ can be handled by the results in \cite{Hfh}, so our main concern is the case $$\theta p > 1 > p > 0.$$ \medskip Let $(u, v)$ be a smooth solution to \eqref{1.1} with $\theta > p^{-1} > 1 > p > 0$. Our approach is based on the formal equivalence, noticed in \cite{DEN, dos}, between the Lane-Emden system \eqref{1.1} and a fourth order problem, called the $m$-biharmonic equation. More precisely, let $m:=\frac{1}{p}+1>2$; as $v =(-\Delta u)^{m-1}$, we derive that $u$ satisfies $\Delta^{2}_{m} u := \Delta (|\Delta u|^{m-2}\Delta u) = u^\theta$ in $\mathbb R^N$.
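The sign bookkeeping behind this reduction can be spelled out. Since $-\Delta u = v^p > 0$, we have $v = (-\Delta u)^{1/p} = (-\Delta u)^{m-1}$ and $|\Delta u|^{m-2}\Delta u = -(-\Delta u)^{m-1}$, so that
\begin{align*}
\Delta\big(|\Delta u|^{m-2}\Delta u\big)
= -\Delta\big((-\Delta u)^{m-1}\big)
= -\Delta v
= u^\theta.
\end{align*}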
So we are led to consider $\theta > m-1 > 1$ and \begin{equation}\label{1} \Delta^{2}_{m} u := \Delta (|\Delta u|^{m-2}\Delta u) =|u|^{\theta-1}u. \end{equation} \smallskip Let $\Omega \subset \mathbb R^N$. We say that $u \in W^{2,m}_{loc}(\Omega)\cap L^{\theta +1}_{loc}(\Omega)$ is a weak solution of \eqref{1} in $\Omega$ if, for any regular bounded domain $K \subset \Omega$, $u$ is a critical point of the functional $$I(v)=\frac{1}{m}\int_K |\Delta v|^m dx-\frac{1}{\theta+1}\int_K |v|^{\theta+1} dx, \quad \forall\; v \in W^{2,m}(K)\cap L^{\theta +1}(K).$$ Naturally, a weak solution to \eqref{1} is said to be stable in $\Omega \subset \mathbb R^N$ if \begin{equation}\label{fb1} \Lambda_u (h):= (m-1)\int_{\Omega} |\Delta u|^{m-2}|\Delta h|^2 dx-\theta \int_{\Omega}|u|^{\theta-1} h^2dx\geq0,\;\;\forall\; h\in C_c^2(\Omega). \end{equation} A key point of our approach is to observe a relationship between the stability for the system \eqref{1.1} and the stability for the equation \eqref{1} (see Lemma 2.1 below). This will permit us to handle the case $0 < p < 1$ in \eqref{1.1} by using the structure of the $m$-biharmonic equation. In fact, we can prove the following Liouville type result. \begin{thm}\label{main3} Let $\theta > m-1 > 1$ and let $u\in W^{2,m}_{loc}(\mathbb{R}^N)\cap L^{\theta + 1}_{loc}(\mathbb R^N)$ be a weak solution of \eqref{1} which is stable outside a compact set. Assume that \begin{align}\label{new8} N < \frac{2m(\theta + 1)}{\theta -(m-1)}, \end{align} then $u\equiv 0.$ \end{thm} A direct calculation yields that if $p\theta > 1$ (or equivalently $\theta > m-1$), \begin{align*} N < 2 + \alpha + \beta = \frac{2(p+1)(\theta+1)}{p\theta - 1} \;\; \Leftrightarrow \;\; \eqref{new8} \;\; \Leftrightarrow \;\; \theta < \frac{N(m-1)+2m}{(N-2m)_+}. \end{align*} It means that the range of pairs $(p, \theta)$ satisfying \eqref{LE} and $p\theta > 1$ corresponds exactly to the subcritical case of the $m$-biharmonic equation \eqref{1}.
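Indeed, substituting $m = \frac{1}{p}+1$, so that $m-1 = \frac{1}{p}$ and $\theta-(m-1) = \frac{p\theta-1}{p}$, gives
\begin{align*}
\frac{2m(\theta+1)}{\theta-(m-1)}
= \frac{2\,\frac{p+1}{p}\,(\theta+1)}{\frac{p\theta-1}{p}}
= \frac{2(p+1)(\theta+1)}{p\theta-1}
= 2+\alpha+\beta,
\end{align*}
so \eqref{new8} is exactly the condition $N < 2+\alpha+\beta$.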
\medskip Another crucial step in our approach is to classify first the stable solutions of \eqref{1.1}; see also Proposition \ref{p12bis} below for the $m$-biharmonic equation. \begin{prop}\label{p12} If $p, \theta > 0$ satisfy \eqref{LE}, then \eqref{1.1} has no smooth stable solution. \end{prop} Establishing a Liouville type result for stable solutions of \eqref{1.1} or \eqref{1} is delicate, even though we can borrow some ideas from \cite{wy, ddww}. We use as usual the stability to get some integral estimates, but the integration by parts argument yields here many terms which are difficult to control, for example the local $L^m$ norm of $\nabla u$, see Lemma \ref{l.2.7a} below. Furthermore, the classification of weak solutions stable at infinity to \eqref{1} is more involved than for \eqref{1.1}, since the weak solutions to \eqref{1} are not $C^2$ functions. We will derive a variant of the Pohozaev identity with cut-off functions, which allows us to avoid the spherical integral terms appearing in the standard Pohozaev identity. \medskip The paper is organized as follows. In section 2, we give the proof of Proposition \ref{p12}. The proofs of Theorem \ref{main4} and Theorem \ref{main3} are given respectively in sections 3 and 4. In the following, $C$ always denotes a generic positive constant, which may change from line to line. \section{Classification of stable solutions} \setcounter{equation}{0} We prove here Proposition \ref{p12}. As mentioned before, we need only consider the case $\theta > p$ and $p \theta > 1$. We split the proof into two cases: $\theta > p \geq 1$ and $\theta > p^{-1} > 1 > p > 0$. \subsection{ The case $\theta>p \geq 1$.} Let us recall a consequence of Theorem 1.1 (with $\alpha = 0$ there) in \cite{Hfh}. \begin{taggedtheorem}{A} Let $x_0$ be the largest root of the polynomial \begin{align}\label{newH} H(x)=x^4 -p\theta\alpha\beta \left[4x^{2}-2(\alpha+\beta)x+1\right].
\end{align} \begin{enumerate} \item[(i)] If $\frac{4}{3}< p \leq \theta$ then \eqref{1.1} has no stable classical solution if $N<2+2x_0.$ \item[(ii)] If $1 \leq p\leq \min(\frac{4}{3}, \theta)$ and $p\theta > 1$, then \eqref{1.1} has no stable classical solution, if $$N < 2 + 2x_0 \left[\frac{p}{2}+\frac{(2-p)(p \theta -1)}{(\theta+p-2)(\theta+1)}\right].$$ \end{enumerate} \end{taggedtheorem} Performing the change of variables $x=\frac{\beta}{2}s$ in \eqref{newH}, a direct computation shows that $H(x)=\left(\frac{\beta}{2}\right)^4L(s)$ where $$L(s):=s^4-\frac{16p\theta(p+1)}{\theta+1}s^2+\frac{16p\theta(p+1)(p+\theta+2)}{(\theta+1)^2}s-\frac{16p\theta(p+1)^2}{(\theta+1)^2}.$$ Denote by $s_0$ the largest root of $L,$ hence $x_0=\frac{\beta}{2}s_0$ and $ H(x)<0$ if and only if $L(s)<0$. For $\theta > p \geq 1$, there holds \begin{align*} L(p+1) & = (p+1)^4 -\frac{16p\theta(p+1)^3}{(\theta+1)} +\frac{16p\theta(p+1)^2(p+\theta+2)}{(\theta+1)^2} -\frac{16p\theta(p+1)^2}{(\theta+1)^2}\\ & = (p+1)^4 -\frac{16p\theta(p+1)^3}{(\theta+1)} + \frac{16p\theta(p+1)^2}{(\theta+1)} + \frac{16p\theta(p+1)^3}{(\theta+1)^2} -\frac{16p\theta(p+1)^2}{(\theta+1)^2}\\ & = (p+1)^2\left[(p+1)^2 -\frac{16p^{2}\theta}{(\theta+1)} + \frac{16p^{2}\theta}{(\theta+1)^2}\right]\\ &=\left(\frac{p+1}{\theta+1}\right)^{2}\left[(p+1)^2(\theta+1)^2 -16p^{2}\theta^{2}\right] < 0. \end{align*} The last inequality holds true since $$4p\theta- (p+1)(\theta + 1) > 4p^2 - (p+1)^2 \geq 0, \quad \forall\; \theta > p \geq 1.$$ As $\lim_{s\rightarrow \infty}L(s)= \infty,$ it follows that $s_0>p+1.$ We get then \begin{align*} 2x_0>(p+1)\beta=2+\alpha+\beta,\quad \forall\; \theta > p\geq 1. \end{align*} If $p>\frac{4}{3}$, by $(i)$ of Theorem \textbf{A}, the system \eqref{1.1} has no classical stable solution if $N<2+\alpha+\beta$. Suppose now $1\leq p \leq \min(\frac{4}{3}, \theta)$. 
Observe that for all $\theta \geq p \geq 1$, \begin{align*} \left[p+\frac{2(2-p)(p \theta -1)}{(\theta+p-2)(\theta+1)}\right]\beta \geq \alpha + \beta & \Leftrightarrow \left[p+\frac{2(2-p)(p \theta -1)}{(\theta+p-2)(\theta+1)}\right](\theta + 1) \geq p + \theta + 2\\ & \Leftrightarrow p\theta - 1 + \frac{2(2-p)(p \theta -1)}{\theta+p-2} \geq \theta + 1\\ & \Leftrightarrow (p\theta - 1)\left[ 1 + \frac{2(2-p)}{\theta+p-2}\right] \geq \theta + 1\\ & \Leftrightarrow (p\theta - 1)(\theta + 2 - p) \geq (\theta+p-2)(\theta + 1)\\ & \Leftrightarrow p\theta^2 - \theta + (2 - p)p\theta \geq \theta^2 + (p-1)\theta\\ & \Leftrightarrow (p-1)(\theta - p) \geq 0. \end{align*} As $s_0>p+1\geq 2,$ we have $x_0 = \frac{\beta s_0}{2} \geq \beta$ and $$2+\alpha+\beta \leq 2 + \beta \left[p+\frac{2(2-p)(p \theta -1)}{(\theta+p-2)(\theta+1)}\right] \leq 2 + x_0\left[p+\frac{2(2-p)(p \theta -1)}{(\theta+p-2)(\theta+1)}\right].$$ If $N < 2+\alpha+\beta,$ using $(ii)$ of Theorem \textbf{A}, we are done. \medskip To conclude: for all $\theta > p \geq 1$ and $N < 2+\alpha+\beta,$ \eqref{1.1} has no smooth stable solution. \qed \subsection{The case $\theta p > 1>p > 0$.} Here we handle the case $0<p<1.$ First of all, we need the following lemma, which plays an important role in the proof of Proposition \ref{p12}. \begin{lem} \label{l.1} Let $(u,v)$ be a solution of system \eqref{1.1} with $\theta > \frac{1}{p}=:m-1>1$. If $(u, v)$ is stable in a regular bounded domain $\Omega$, then $u$ is a stable solution of equation \eqref{1}.
\end{lem} \noindent{\bf Proof.} By the definition of stability, there exist smooth positive functions $\xi$, $\zeta$ and $\eta \geq 0$ such that $$-\Delta \xi = pv^{p-1}\zeta + \eta \xi, \; \; -\Delta\zeta = \theta u^{\theta -1}\xi + \eta\zeta \quad \mbox{in }\; \Omega.$$ Using $(\xi, \zeta)$ as a super-solution, $(\min_{\overline\Omega}\xi, \min_{\overline\Omega}\zeta)$ as a sub-solution, and the standard monotone iteration, it follows that there exist positive smooth functions $\varphi$, $\chi$ verifying \begin{align*} -\Delta \varphi = p v^{p-1}\chi, \quad -\Delta \chi = \theta u^{\theta-1}\varphi\quad \mbox{in }\, \Omega. \end{align*} Therefore, we have \begin{align*} \theta u^{\theta-1}\varphi=\Delta\left(\frac{1}{p} v^{1-p} \Delta \varphi\right) \quad \mbox{in}\;\;\Omega. \end{align*} Let $\gamma \in C_c^2(\Omega)$. Multiplying the above equation by $\gamma^{2}\varphi^{-1}$ and integrating by parts, there holds \begin{align}\label{0.255} \begin{split} \int_{\Omega} \theta u^{\theta-1}\gamma^{2}dx & = \frac{1}{p}\int_{\Omega} v^{1-p} \Delta \varphi\Delta(\gamma^{2}\varphi^{-1})dx\\ &= \frac{1}{p} \int_{\Omega}v^{1-p} \Delta \varphi\left(-4\gamma\frac{\nabla \varphi\cdot\nabla\gamma}{\varphi^{2}}+\frac{2|\nabla\gamma |^2}{\varphi}+\frac{2\gamma\Delta\gamma}{\varphi}+\frac{2\gamma^{2}|\nabla\varphi |^2}{\varphi^{3}} -\frac{\gamma^{2}\Delta\varphi}{\varphi^{2}}\right) dx. \end{split} \end{align} Using the Cauchy-Schwarz inequality and the fact that $-\Delta \varphi >0,$ we get \begin{align}\label{2.La} \begin{split} \left| -4\int_{\Omega}\frac{v^{1-p}}{p} \Delta \varphi\frac{\nabla \varphi\cdot\nabla\gamma}{\varphi^{2}}\gamma dx\right| \leq -2\int_{\Omega}\frac{v^{1-p}}{p} \Delta \varphi\frac{|\nabla\gamma |^2}{\varphi} dx -2\int_{\Omega}\frac{v^{1-p}}{p} \Delta \varphi\frac{\gamma^{2}|\nabla\varphi |^2}{\varphi^{3}} dx.
\end{split} \end{align} Combining \eqref{0.255} and \eqref{2.La}, one obtains, using again the Cauchy-Schwarz inequality, \begin{align*} \int_{\Omega} \theta u^{\theta-1}\gamma^{2}dx & \leq \frac{2}{p} \int_{\Omega}v^{1-p} \Delta \varphi\frac{\gamma\Delta\gamma}{\varphi} dx-\frac{1}{p} \int_{\Omega}v^{1-p}\frac{(\Delta \varphi)^{2}}{\varphi^{2}}\gamma^{2} dx\\ & \leq \frac{1}{p}\int_{\Omega}v^{1-p}\frac{(\Delta \varphi)^{2}}{\varphi^{2}}\gamma^{2} dx + \frac{1}{p}\int_{\Omega}v^{1-p}(\Delta\gamma)^{2} dx -\frac{1}{p} \int_{\Omega}v^{1-p}\frac{(\Delta \varphi)^{2}}{\varphi^{2}}\gamma^{2} dx\\ & = \frac{1}{p} \int_{\Omega}v^{1-p}(\Delta\gamma)^{2} dx. \end{align*} Recalling that $p = \frac{1}{m-1}$ and $v = (-\Delta u )^{\frac{1}{p}},$ we obtain the desired result \eqref{fb1}. \qed \medskip Therefore, to prove Proposition \ref{p12} in the case $p \in (0, 1)$ and $p\theta > 1$, we need only prove \begin{prop}\label{p12bis} Let $\theta > m - 1 > 1$. If $u$ is a stable weak solution to the equation \eqref{1} in $\mathbb R^N$ with $N$ verifying \eqref{new8}, then $u \equiv 0$. \end{prop} To prove Proposition \ref{p12bis}, we first use the stability condition \eqref{fb1} to get the following crucial lemma, which provides an important integral estimate for $u$ and $\Delta u$. \begin{lem} \label{lemnewBN} Let $u\in W^{2,m}_{loc}(\Omega)\cap L^{\theta+1}_{loc}(\Omega)$ be a stable weak solution of \eqref{1} in $\Omega$, with $\theta>m-1 > 1$. Then, for any integer $$k\geq \max \left( m, \frac{m(\theta+1)}{2(\theta+1-m)}\right) ,$$ there exists a positive constant $C=C(N, m, k)$ such that for any $\zeta\in C_c^2(\Omega)$ satisfying $0 \leq\zeta \leq 1$, \begin{align}\label{new7} \begin{split} \int_{\Omega}|\Delta u|^{m} \zeta^{4k} dx +\int_{\Omega} |u|^{\theta+1}\zeta^{4k} dx \leq C\left[\int_{\Omega}\left(|\Delta \zeta|^{m}+|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)^{\frac{\theta+1}{\theta-(m-1)}} dx\right].
\end{split} \end{align} \end{lem} \noindent{\bf Proof.} For any $\epsilon \in (0, 1)$ and $\eta \in C^2(\Omega)$, there holds \begin{align} \label{newesTt47} \begin{split} \int_{\Omega}|\Delta u|^{m-2} [\Delta (u\eta)]^2 dx = & \; \int_{\Omega}|\Delta u|^{m-2}\left(u \Delta \eta+2\nabla u\nabla \eta + \eta\Delta u\right)^{2} dx\\ \leq & \;\left(1+\epsilon\right)\int_{\Omega}|\Delta u|^m {\eta}^2 dx + \frac{C}{\epsilon}\int_{\Omega}|\Delta u|^{m-2} \Big(u^2|\Delta\eta|^2 + |\nabla u|^2|\nabla \eta|^2\Big)dx. \end{split} \end{align} Taking $\eta = \zeta^{2k}$ with $\zeta \in C_c^2(\Omega)$, $0 \leq\zeta \leq 1$ and $k \geq m > 2$, and applying Young's inequality, we get \begin{align*} \int_{\Omega}|u|^{2}|\Delta u|^{m-2}|\Delta (\zeta^{2k})|^2 dx & \leq C_{k}\int_{\Omega}|u|^{2}|\Delta u|^{m-2}\left(|\Delta \zeta|^{2}+|\nabla \zeta|^4\right)\zeta^{4k-4} dx\\ & \leq \epsilon^{2} \int_{\Omega}|\Delta u|^m \zeta^{4k} dx + C_{\epsilon, k, m}\int_{\Omega} |u|^m \left(|\Delta \zeta|^{2}+|\nabla \zeta|^4\right)^{\frac{m}{2}}\zeta^{4k-2m} dx \end{align*} and \begin{align*} \int_{\Omega}|\Delta u|^{m-2} |\nabla u|^2|\nabla (\zeta^{2k})|^2 dx & =4 k^{2}\int_{\Omega}|\Delta u|^{m-2} |\nabla u|^2|\nabla \zeta|^2\zeta^{4k-2} dx\\ & \leq \epsilon^{2} \int_{\Omega}|\Delta u|^m \zeta^{4k} dx + \frac{C_{m,k}}{\epsilon^{2}}\int_{\Omega} |\nabla u|^m |\nabla \zeta|^{m}\zeta^{4k-m} dx. \end{align*} Inserting the two above estimates into \eqref{newesTt47}, we arrive at \begin{align}\label{newest4} \begin{split} \int_{\Omega}|\Delta u|^{m-2} [\Delta (u\zeta^{2k})]^2 dx \leq & \; \left(1+C\epsilon\right) \int_{\Omega}|\Delta u|^m \zeta^{4k} dx+\frac{C_{m,k}}{\epsilon^{3}}\int_{\Omega} |\nabla u|^m |\nabla \zeta|^{m}\zeta^{4k-m} dx\\ &+ C_{\epsilon, m, k}\int_{\Omega} |u|^m \left(|\Delta \zeta|^{2}+|\nabla \zeta|^4\right)^{\frac{m}{2}}\zeta^{4k-2m} dx. \end{split} \end{align} \medskip We also need the following technical lemma, whose proof is given later.
\begin{lem}\label{l.2.7a} Let $k \geq m/2 > 1$ and $\epsilon > 0$. There exists $C_{N, \epsilon, m, k} >0$ such that for any $u\in W^{2,m}_{loc}(\Omega)$ verifying \eqref{fb1} and $\zeta \in C_c^{\infty}(\Omega)$ with $0 \leq\zeta \leq 1$, there holds \begin{align} \label{new13} \int_{\Omega} |\nabla u|^m |\nabla \zeta|^{m}\zeta^{4k-m} dx \leq \epsilon\int_{\Omega}|\Delta u|^{m}\zeta^{4k} dx + C_{N, \epsilon, m ,k} \int_{\Omega}|u|^{m}\left(|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)\zeta^{4k-2m}dx. \end{align} \end{lem} Using Lemma \ref{l.2.7a} with $\epsilon^4$ and \eqref{newest4}, we see that \begin{align}\label{nt4} \begin{split} \int_{\Omega}|\Delta u|^{m-2} [\Delta (u\zeta^{2k})]^2 dx \leq& \; C_{N, \epsilon, m,k}\int_{\Omega} |u|^{m} \left(|\Delta \zeta|^{m}+|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)\zeta^{4k-2m} dx\\ & \;+\left(1 + C_{m,k}\epsilon\right) \int_{\Omega}|\Delta u|^{m} \zeta^{4k} dx. \end{split} \end{align} By an approximation argument, the stability property \eqref{fb1} holds true with $h = u\zeta^{2k}$. We then deduce that, for any $\epsilon>0$, there exists $C_{N, \epsilon, m,k} > 0$ such that \begin{align}\label{f1} \begin{split} & \;\theta\int_{\Omega} |u|^{\theta+1}\zeta^{4k} dx -\left(m-1\right)\left(1+ C_{m,k}\epsilon\right) \int_{\Omega}|\Delta u|^{m} \zeta^{4k} dx\\ \leq & \;C_{N, \epsilon, m,k}\int_{\Omega} |u|^{m} \left(|\Delta \zeta|^{m}+|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)\zeta^{4k-2m} dx. \end{split} \end{align} Moreover, multiplying the equation \eqref{1} by $u \zeta^{4k}$ and integrating by parts, there holds \begin{align*} \int_{\Omega}|\Delta u|^{m} \zeta^{4k} dx-\int_{\Omega} |u|^{\theta+1}\zeta^{4k} dx \leq \int_{\Omega}|u||\Delta u|^{m-1} |\Delta(\zeta^{4k})| dx + C\int_{\Omega}|\Delta u|^{m-1} |\nabla u||\nabla (\zeta^{4k})| dx.
\end{align*} Using Young's inequality and applying again Lemma \ref{l.2.7a}, we can conclude that for any $\epsilon>0$, there exists $C_{N,\epsilon, m,k} > 0$ such that \begin{align}\label{f12} \begin{split} & \;\left(1- C_{m,k}\epsilon\right) \int_{\Omega}|\Delta u|^{m} \zeta^{4k} dx-\int_{\Omega} |u|^{\theta+1}\zeta^{4k} dx\\ \leq& \; C_{N,\epsilon, m,k}\int_{\Omega} |u|^{m} \left(|\Delta \zeta|^{m}+|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)\zeta^{4k-2m} dx. \end{split} \end{align} Taking $\epsilon > 0$ small enough, multiplying \eqref{f12} by $\frac{(m-1)(1+ 2C_{m,k}\epsilon)}{1- C_{m,k}\epsilon}$ and adding it to \eqref{f1}, we get \begin{align*} &(m-1) C_{m,k}\epsilon \int_{\Omega}|\Delta u|^{m} \zeta^{4k} dx + \left[\theta-\frac{(m-1)(1+ 2C_{m,k}\epsilon)}{1- C_{m, k}\epsilon}\right]\int_{\Omega} |u|^{\theta+1}\zeta^{4k} dx\\ \leq & \; C_{N,\epsilon, m,k}\int_{\Omega} |u|^{m} \left(|\Delta \zeta|^{m}+|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)\zeta^{4k-2m} dx. \end{align*} As $\theta > m - 1 > 1$, choosing $\epsilon > 0$ small enough, we have \begin{align}\label{f1xx2xx} \int_{\Omega}|\Delta u|^{m} \zeta^{4k} dx +\int_{\Omega} |u|^{\theta+1}\zeta^{4k} dx\leq C\int_{\Omega} |u|^{m} \left(|\Delta \zeta|^{m}+|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)\zeta^{4k-2m} dx.
\end{align} For $k\geq \frac{m(\theta+1)}{2(\theta+1-m)}$, so that $4km\leq(4k-2m)(\theta+1)$, applying the H\"older inequality we then conclude \begin{align*} & \;\int_{\Omega}|\Delta u|^{m} \zeta^{4k} dx +\int_{\Omega} |u|^{\theta+1}\zeta^{4k} dx\\ \leq & \; C\left[\int_{\Omega}\left(|\Delta \zeta|^{m}+|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)^{\frac{\theta+1}{\theta-(m-1)}} dx\right]^{\frac{\theta-(m-1)}{\theta+1}} \left(\int_{\Omega} |u|^{\theta+1}\zeta^{\frac{(4k-2m)(\theta+1)}{m}}dx\right)^{\frac{m}{\theta+1}}\\ \leq & \; C\left[\int_{\Omega}\left(|\Delta \zeta|^{m}+|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)^{\frac{\theta+1}{\theta-(m-1)}} dx\right]^{\frac{\theta-(m-1)}{\theta+1}} \left(\int_{\Omega} |u|^{\theta+1}\zeta^{4k}dx\right)^{\frac{m}{\theta+1}}. \end{align*} We readily get the estimate \eqref{new7}. \qed \medskip Now we choose a cut-off function $\phi_0 \in C_c^\infty(B_2)$ verifying $0 \leq \phi_0 \leq 1$ and $\phi_0=1$ in $B_1$. Applying \eqref{new7} with $\zeta = \phi_0(R^{-1}x)$ for $R > 0$, there holds \begin{align*} \int_{B_{R}} |u|^{\theta+1}dx\leq \int_{\mathbb R^N} |u|^{\theta+1}\zeta^{4k} dx \leq C R^{N - \frac{2m(\theta + 1)}{\theta-(m-1)}}. \end{align*} Under the assumption \eqref{new8}, letting $R \to \infty$ we obtain $u \equiv 0$. This proves Proposition \ref{p12bis}, hence the case $\theta p > 1 > p > 0$ of Proposition \ref{p12}. \medskip \noindent{\bf Proof of Lemma \ref{l.2.7a}.} A direct calculation gives \begin{align} \label{new1} \begin{split} \int_{\Omega} |\nabla u|^m |\nabla \zeta|^{m}\zeta^{4k-m} dx = &\;\int_{\Omega} \nabla u\cdot\nabla u|\nabla u|^{m-2}|\nabla \zeta|^{m}\zeta^{4k-m}dx\\ =& \; - \int_{\Omega} \mathrm{div} \left(\nabla u|\nabla u|^{m-2}\right) u|\nabla \zeta|^{m}\zeta^{4k-m}dx\\ &\; -\int_{\Omega} u|\nabla u|^{m-2}\nabla u \cdot\nabla\left(|\nabla \zeta|^{m}\zeta^{4k-m}\right)dx\\ := & \; I_1 + I_2.
\end{split} \end{align} The integral $I_1$ can be estimated as \begin{align*} I_1 & = -\left(m-2\right) \int_{\Omega} u |\nabla u|^{m-4}|\nabla \zeta|^{m}\nabla^{2}u (\nabla u, \nabla u)\zeta^{4k-m}dx -\int_{\Omega} u\Delta u|\nabla u|^{m-2}|\nabla \zeta|^{m}\zeta^{4k-m}dx &\;\\ & \leq C_m\int_{\Omega} |u|| \nabla^{2} u||\nabla u|^{m-2}|\nabla \zeta|^{m}\zeta^{4k-m}dx + \int_{\Omega} |u||\Delta u||\nabla u|^{m-2}|\nabla \zeta|^{m}\zeta^{4k-m}dx. \end{align*} Applying Young's inequality, there holds, for any $\epsilon > 0$, \begin{align}\label{new3} \begin{split} & \int_{\Omega} |u|| \Delta u||\nabla u|^{m-2}|\nabla \zeta|^{m}\zeta^{4k-m}dx\\ \leq & \;C_{ \epsilon, m}\int_{\Omega} |u|^{\frac{m}{2}}| \Delta u|^{\frac{m}{2}}|\nabla \zeta|^{m}\zeta^{4k-m}dx + \epsilon\int_{\Omega}|\nabla u|^{m}|\nabla \zeta|^{m}\zeta^{4k-m}dx\\ \leq &\;C_{\epsilon, m}\int_{\Omega} |u|^{m}|\nabla \zeta|^{2m}\zeta^{4k-2m}dx+ \epsilon\int_{\Omega}| \Delta u|^{m}\zeta^{4k}dx + \epsilon\int_{\Omega}|\nabla u|^{m}|\nabla \zeta|^{m}\zeta^{4k-m}dx. \end{split} \end{align} On the other hand, \begin{align}\label{new2} \begin{split} & \;\int_{\Omega} |u|| \nabla^{2} u||\nabla u|^{m-2}|\nabla \zeta|^{m-2+2}\zeta^{4k-m}dx\\ \leq &\; C_{\epsilon, m}\int_{\Omega} |u|^{\frac{m}{2}}| \nabla^{2} u|^{\frac{m}{2}}|\nabla \zeta|^{m}\zeta^{4k-m}dx + \epsilon\int_{\Omega}|\nabla u|^{m}|\nabla \zeta|^{m}\zeta^{4k-m}dx\\ \leq &\;C_{\epsilon, m}\int_{\Omega} |u|^{m}|\nabla \zeta|^{2m}\zeta^{4k-2m}dx+ \epsilon\int_{\Omega}| \nabla^{2} u|^{m}\zeta^{4k}dx+ \epsilon\int_{\Omega}|\nabla u|^{m}|\nabla \zeta|^{m}\zeta^{4k-m}dx. \end{split} \end{align} Now we shall estimate the integral $$\int_{\Omega}| \nabla^{2} u|^{m}\zeta^{4k}dx.$$ Remark that there exists $C_0(N, m) > 0$ such that \begin{align} \label{W2m} \int_{\mathbb R^N} |\nabla^2\varphi|^m dx \leq C_0(N, m)\int_{\mathbb R^N}|\Delta\varphi|^m dx, \quad \forall\; \varphi \in W^{2, m}(\mathbb R^N). 
\end{align} One can prove this first for $\varphi \in W^{2, m}_0(B_1)$ by elliptic theory, then for general $\varphi \in W^{2, m}(\mathbb R^N)$ by an approximation and scaling argument. As $u\zeta^{\frac{4k}{m}} \in W^{2, m}_0(\Omega) \subset W^{2, m}(\mathbb R^N)$, \eqref{W2m} implies that \begin{align*} \int_{\Omega} |\nabla^2(u\zeta^{\frac{4k}{m}})|^m dx \leq & \; C_0(N, m) \int_{\Omega}|\Delta (u \zeta^{\frac{4k}{m}})|^m dx\\ \leq & \; C_{N, m}\int_{\Omega}|\Delta u|^{m}\zeta^{4k} dx + C_{N, m,k}\int_{\Omega}|\nabla u|^{m}|\nabla \zeta|^{m}\zeta^{4k-m}dx\\ &\; + C_{N, m,k}\int_{\Omega}|u|^{m}\left(|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)\zeta^{4k-2m}dx. \end{align*} For $k > m$, we then get \begin{align}\label{new4} \begin{split} \int_{\Omega}| \nabla^{2} u|^{m}\zeta^{4k}dx \leq& \;C\int_{\Omega}|\nabla^2(u\zeta^{\frac{4k}{m}})|^m dx +C_{m,k}\int_{\Omega}|\nabla u|^{m}|\nabla \zeta|^{m}\zeta^{4k-m}dx\\ &\;+ C_{m,k}\int_{\Omega}|u|^{m}\left(|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)\zeta^{4k-2m}dx\\ \leq & \; C_{N,m}\int_{\Omega}|\Delta u|^{m}\zeta^{4k} dx + C_{N, m,k}\int_{\Omega}|\nabla u|^{m}|\nabla \zeta|^{m}\zeta^{4k-m}dx\\ &\; + C_{N, m,k}\int_{\Omega}|u|^{m}\left(|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)\zeta^{4k-2m}dx. \end{split} \end{align} Combining \eqref{new3}, \eqref{new2} and \eqref{new4}, we arrive at \begin{align}\label{SSS0.255} \begin{split} I_1 \leq &\;C_{N,m,k}\epsilon \int_{\Omega}|\Delta u|^{m}\zeta^{4k} dx +C_m\epsilon\int_{\Omega}|\nabla u|^{m}|\nabla \zeta|^{m}\zeta^{4k-m}dx \\ &+\; C_{N,\epsilon, m, k} \int_{\Omega}|u|^{m}\left(|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)\zeta^{4k-2m}dx.
\end{split} \end{align} Furthermore, by Young's inequality, \begin{align}\label{SS0.255} \begin{split} I_2 = & - m\int_{\Omega} u|\nabla u|^{m-2} |\nabla \zeta|^{m-2}\nabla^2\zeta(\nabla\zeta, \nabla u)\zeta^{4k-m} dx\\ & -(4k-m)\int_{\Omega} u|\nabla u|^{m-2}|\nabla \zeta|^{m}(\nabla u\cdot \nabla\zeta)\zeta^{4k-m-1}dx\\ \leq &\; C_{ m,k} \int_{\Omega}|u||\nabla u|^{m-1}|\nabla \zeta|^{m-1}\left(|\nabla \zeta|^{2} +|\nabla^{2} \zeta|\right)\zeta^{4k-m-1}dx\\ \leq &\; C_{\epsilon, m,k} \int_{\Omega}|u|^{m}\left(|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)\zeta^{4k-2m}dx + \epsilon\int_{\Omega}|\nabla u|^{m}|\nabla \zeta|^{m}\zeta^{4k-m}dx. \end{split} \end{align} Combining \eqref{SSS0.255}--\eqref{SS0.255} with \eqref{new1}, one concludes \begin{align*} (1-C_{N, m,k}\epsilon)\int_{\Omega} |\nabla u|^m |\nabla \zeta|^{m}\zeta^{4k-m} dx & \leq C_{N,\epsilon, m ,k} \int_{\Omega}|u|^{m}\left(|\nabla \zeta|^{2m} +|\nabla^{2} \zeta|^{m}\right)\zeta^{4k-2m}dx\\ &+C_{N, m, k}\epsilon\int_{\Omega}|\Delta u|^{m}\zeta^{4k} dx. \end{align*} This means that \eqref{new13} holds true for $\epsilon > 0$ small enough, hence for any $\epsilon > 0$.\qed \section{Proof of Theorem \ref{main4}.} \setcounter{equation}{0} In this section, we prove Theorem \ref{main4}. As already mentioned, we only need to handle the case $p\theta > 1$. We first use the classification of stable solutions, Proposition \ref{p12}, to obtain decay estimates for solutions of \eqref{1.1} which are stable outside a compact set. \begin{lem}\label{newla} Let $p, \theta > 0$ verify $p\theta > 1$ and \eqref{LEbis}. Let $(u,v)$ be a solution of \eqref{1.1} which is stable outside a compact set. Then there exists a constant $C$ such that \begin{align}\label{new5} \sum_{k\leq 2}\Big[ |x|^{\alpha+k} |\nabla^k u(x)|+|x|^{\beta+k} |\nabla^k v(x)|\Big]\leq C,\quad \forall\; x \in \mathbb R^N. \end{align} \end{lem} \textbf{Proof.} Assume that $(u, v)$ is stable outside $B_{R_0}$.
Denote $$W(x)=\sum_{k\leq 2}\left[ |\nabla^k u(x)|^{{\frac{1}{\alpha+k}}}+ |\nabla^k v(x)|^{\frac{1}{\beta+k}}\right].$$ Suppose that \eqref{new5} does not hold true. Let $d(x) = \|x\|-R_0$; then $$\sup_{\mathbb R^N\backslash B_{R_0}}W(x)d(x) =\infty,$$ or equivalently, there exists a sequence $(x_n)$ such that $\|x_n\| > R_0$ and $W(x_{n})d(x_{n})>n$ for $n \geq 1$. Since $(u, v)$ is smooth in $\mathbb{R}^N$, we have $d(x_{n})\rightarrow \infty$. By the doubling lemma \cite{pqs}, there exists another sequence $(y_{n})$ such that for any $n \geq 1$, $\|y_n\| > R_0$, \begin{enumerate} \item [$(i)$] $W(y_{n})d(y_{n})\geq n$; \item [$(ii)$] $W(y_{n})\geq W(x_{n})$; \item [$(iii)$] $W(z)\leq 2 W(y_{n})$ for $|z|> R_0$ such that $|z-y_{n}|\leq \frac{n}{W(y_{n})}.$ \end{enumerate} Consider now the sequence of rescaled functions \begin{align*} \widetilde u_{n}(x) = \lambda_{n}^{\alpha}u(y_n + \lambda_{n}x), \quad \widetilde v_{n}(x) = \lambda_{n}^{\beta} v(y_n + \lambda_{n}x), \quad \mbox{with }\; \lambda_{n}=W(y_{n})^{-1}. \end{align*} Each $(\widetilde u_n, \widetilde v_n)$ is again a solution to \eqref{1.1}. Moreover, $$W_n(x) := \sum_{k\leq 2}\left( |\nabla^k \widetilde u_n(x)|^{\frac{1}{\alpha+k}}+ |\nabla^k \widetilde v_n(x)|^{{\frac{1}{\beta+k}}}\right) = \lambda_n W(y_n + \lambda_n x), \quad \forall\; x \in \mathbb R^N.$$ By $(i)$, we have $B_{n\lambda_n}(y_{n}) \subset \mathbb R^N\backslash B_{R_0}$, and we can readily check that $(\widetilde u_n, \widetilde v_n)$ is stable in $B_{n}$ since $(u,v)$ is stable in $\mathbb R^N\backslash B_{R_0}$. Using $(iii)$, there holds, for all $n \geq 1$, \begin{align}\label{newAnewest5} W_n(x) \leq 2W_n(0) = 2\quad \mbox{in }\; B_n. \end{align} From \eqref{newAnewest5} and standard elliptic theory, up to a subsequence, $(\widetilde u_n, \widetilde v_n)$ converges to $(u_\infty, v_\infty)$ in $C^2_{loc}({\mathbb R}^N)$.
Therefore $$\sum_{k\leq 2}\left( |\nabla^k u_\infty(0)|^{\frac{1}{\alpha+k}}+ |\nabla^k v_\infty(0)|^{{\frac{1}{\beta+k}}}\right) = 1.$$ So $(u_{\infty}, v_\infty)$ is nontrivial. Clearly, $(u_\infty, v_\infty)$ is a smooth positive solution to \eqref{1.1}. Using elliptic theory again, it is not difficult to see that $(u_\infty, v_\infty)$ is stable in $\mathbb R^N$. However, this contradicts Proposition \ref{p12} since $p, \theta$ verify \eqref{LE}. Hence the hypothesis was wrong, i.e.~the estimate \eqref{new5} holds true. \qed \medskip Another tool is the following classical Pohozaev identity (see \cite{em, PJ, sou}). \begin{lem}\label{newlem1.1} Let $(u,v)$ be a solution to \eqref{1.1}. Then, for any regular bounded domain $\Omega$, \begin{align}\label{new6} \begin{split} &\; \frac{2(p+1)- pN}{p+1}\int_{\Omega} v^{p+1} dx +\frac{N}{\theta+1} \int_{\Omega} u^{\theta+1} dx \\ =& \;\frac{1}{\theta+1}\int_{\partial\Omega}u^{\theta+1}(\nu\cdot x)d\sigma-\frac{p}{p+1} \int_{\partial\Omega}v^{p+1}(\nu\cdot x)d\sigma +\int_{\partial\Omega}\frac{\partial v}{\partial \nu}(\nabla u\cdot x)d\sigma-\int_{\partial\Omega}v\frac{\partial(\nabla u\cdot x)}{\partial \nu}d\sigma. \end{split} \end{align} \end{lem} We then claim \begin{lem}\label{newl.552.GR7a} Let $p, \theta > 0$ satisfy $p\theta > 1$ and \eqref{LEbis}. If $(u,v)$ is a solution of \eqref{1.1} which is stable outside a compact set, then $v \in L^{p+1}(\mathbb{R}^N)$, $u\in L^{\theta+1}(\mathbb{R}^N)$ and \begin{align}\label{new8newest0} \frac{2(p+1)-pN }{p+1}\int_{\mathbb{R}^N}v^{p+1}dx + \frac{N}{\theta+1}\int_{\mathbb{R}^N}u^{\theta+1}dx = 0. \end{align} \end{lem} \noindent{\bf Proof.} By \eqref{new5}, we have (noticing that $\alpha(\theta+1)=(p+1)\beta=2+\alpha+\beta$) \begin{align*} u^{\theta+1}(x)+v^{p+1}(x) \leq C\left(1+ |x|\right)^{-(2 + \alpha+\beta)} \quad \mbox{in }\; \mathbb R^N.
\end{align*} By \eqref{LEbis}, $v \in L^{p+1}(\mathbb{R}^N)$ and $u\in L^{\theta+1}(\mathbb{R}^N)$. Using Lemma \ref{newlem1.1} with $\Omega =B_{R}$, we deduce that \begin{align}\label{newac2k.5} \begin{split} & \frac{2(p+1)-pN }{p+1}\int_{B_{R}}v^{p+1}dx+ \frac{N}{\theta+1}\int_{B_{R}}|u|^{\theta+1}dx\\ =&\;\int_{\partial B_{R}}\left[R\frac{\partial u}{\partial r}\frac{\partial v}{\partial r}-\frac{pR}{p+1}v^{p+1} -v\frac{\partial (\nabla u\cdot x)}{\partial r}+\frac{R}{\theta+1}|u|^{\theta+1}\right]d\sigma. \end{split} \end{align} Using again \eqref{new5} and $N < 2 + \alpha+\beta$, we deduce that $$ \int_{\partial B_{R}}\left[R\frac{\partial u}{\partial r}\frac{\partial v}{\partial r}-\frac{pR}{p+1}v^{p+1} -v\frac{\partial (\nabla u\cdot x)}{\partial r}+\frac{R}{\theta+1}|u|^{\theta+1}\right]d\sigma\rightarrow 0, \quad \mbox{as }\; R\rightarrow \infty.$$ Taking the limit $R \to\infty$ in \eqref{newac2k.5}, the claim follows. \qed \medskip\noindent {\bf Proof of Theorem \ref{main4} completed.} We are now in a position to conclude. Suppose that $(u, v)$ is a solution to \eqref{1.1} stable at infinity with $p\theta > 1$ verifying \eqref{LEbis}. Choose a cut-off function $\phi_0 \in C_c^\infty(B_2)$ verifying $0 \leq \phi_0 \leq 1$ and $\phi_0=1$ in $B_1$. Denote $\zeta = \phi_0(R^{-1}x)$ and $A_R= B_{2R}\backslash B_R$. By the system \eqref{1.1}, there holds \begin{align*} \int_{B_{2R}}v^{p+1} \zeta dx-\int_{B_{2R}} u^{\theta+1} \zeta dx & = \int_{B_{2R}}u\zeta \Delta v dx - \int_{B_{2R}} v\zeta \Delta u dx\\ & = \int_{B_{2R}}v\Big(2\nabla u\cdot \nabla \zeta + u\Delta\zeta\Big) dx\\ & \leq \frac{C}{R^{2}} \int_{A_R}uv dx + \frac{C}{R} \int_{A_R}v |\nabla u| dx.
\end{align*} Using \eqref{new5} and letting $R\rightarrow \infty$, since $N < 2 + \alpha+\beta$, we obtain $$\int_{\mathbb{R}^N}v^{p+1} dx=\int_{\mathbb{R}^N}u^{\theta+1} dx.$$ Substituting this in \eqref{new8newest0}, \begin{align*} \left(\frac{2(p+1) - pN}{p+1}+\frac{N}{\theta+1}\right)\int_{\mathbb{R}^N}u^{\theta+1}dx=0. \end{align*} Since \eqref{LEbis} implies that $$\frac{2(p+1) - pN}{p+1}+\frac{N}{\theta+1} = 2 - \frac{(p\theta - 1)N}{(p+1)(\theta + 1)} = 2 - \frac{2N}{2+\alpha + \beta} > 0,$$ we get $u \equiv 0$ in $\mathbb R^N$, which is absurd, so we are done.\qed \section{\bf Proof of Theorem \ref{main3}.} \setcounter{equation}{0} The approach is similar to that for Theorem \ref{main4}. We first derive some integral estimates thanks to Lemma \ref{lemnewBN}. Suppose that $u$ is stable outside the ball $B_{R_0}$. Let $R > R_0 + 3$ and $\zeta \in C^2_c(\mathbb R^N\backslash B_{R_0})$ verifying $0 \leq \zeta \leq 1$ and $$\zeta(x) =\left\{ \begin{array}{ll} 0 \quad \mbox{for}\; \|x\|\leq R_{0}+1,\; \|x\|\geq 2R,\\ 1\quad \mbox{for}\; R_{0}+2 \leq \|x\| \leq R. \end{array} \right.$$ Clearly, we can assume that there exists $C > 0$ independent of $R$ such that $$\|\zeta\|_{C^2(B_{R_0+2})} \leq C \quad \mbox{and} \quad R|\nabla \zeta(x)| + R^2|\nabla^2 \zeta(x)|\leq C \;\; \mbox{in } A_R = B_{2R}\backslash B_R.$$ Applying the estimate \eqref{new7} with $\zeta$, we readily get \begin{align}\label{AvxZK} \begin{split} & \; \int_{R_{0}+2\leq \|x\|\leq R} |\Delta u|^m dx + \int_{R_{0}+2\leq \|x\|\leq R} |u|^{\theta+1} dx \leq C\left(1 + R^{N-\frac{2m(\theta + 1)}{\theta -(m-1)}}\right). \end{split} \end{align} Using \eqref{new8} and letting $R \to \infty$, we then have \begin{align} \label{new11} u \in L^{\theta + 1}(\mathbb R^N) \quad \mbox{and}\quad \Delta u \in L^m(\mathbb R^N).
\end{align} By H\"older's inequality, there holds \begin{align*} R^{-2m} \int_{B_{R}} |u|^mdx \leq CR^{\frac{N(\theta+1-m)}{\theta+1}-2m}\left(\int_{B_R} |u|^{\theta+1}dx\right)^{\frac{m}{\theta+1}}. \end{align*} On the other hand, by a standard scaling argument, there exists $C > 0$ such that for any $R > 0$, any $u \in W^{2, m}(A_R)$ with $A_R = B_{2R}\backslash B_R$, \begin{align*} R^{-m} \int_{A_{R}} |\nabla u|^mdx \leq C \int_{A_{R}} |\Delta u|^mdx + CR^{-2m} \int_{A_{R}} |u|^mdx. \end{align*} Therefore, under the assumptions of Theorem \ref{main3}, we get \begin{align} \label{new9} R^{-2m} \int_{A_{R}} |u|^mdx + R^{-m} \int_{A_{R}} |\nabla u|^mdx \to 0 \quad \mbox{as }\; R \to \infty. \end{align} Let $\zeta(x) = \phi_0(R^{-1}x)$ with a standard cut-off function $\phi_0 \in C_c^2(B_2)$, $\phi_0\equiv 1$ in $B_1$. Applying the estimate \eqref{new4} and using \eqref{new11}--\eqref{new9}, there holds \begin{align} \label{new10} \int_{\mathbb R^N}|\nabla^2 u|^m dx < \infty. \end{align} However, as we have mentioned, the weak solutions of \eqref{1} do not in general belong to $C^2$, so we cannot use a standard Pohozaev identity similar to \eqref{new6} because of the boundary terms. We show here a variant of the Pohozaev identity, whose proof is given in the appendix for the convenience of the readers. \begin{lem}\label{lem1h.1} Let $u$ be a weak solution to \eqref{1} with $m > 2$. Then for any $\psi \in C_c^{2}(\Omega)$, \begin{align}\label{new12} \begin{split} &\; \frac{N}{\theta+1}\int_{\Omega} |u|^{\theta+1}\psi dx - \frac{N-2m}{m}\int_{\Omega} |\Delta u|^{m}\psi dx \\ =& \;-\frac{1}{\theta+1}\int_{\Omega} |u|^{\theta+1}(\nabla \psi\cdot x) dx + \frac{1}{m}\int_{\Omega} (\nabla\psi\cdot x)|\Delta u|^{m} dx\\ & \; - \int_{\Omega} |\Delta u|^{m-2}\Big[2\Delta u(\nabla u\cdot \nabla \psi) + 2\Delta u\, \nabla^{2}u(x, \nabla\psi) + \Delta u(\nabla u\cdot x)\Delta \psi \Big]dx.
\end{split} \end{align} \end{lem} This implies that if $u$ is a weak solution of \eqref{1}, stable at infinity with $1 < m-1<\theta$ and $N$ verifying \eqref{new8}, then \begin{align}\label{8newest0} \frac{N-2m }{m}\int_{\mathbb{R}^N}|\Delta u|^{m}dx=\frac{N}{\theta+1}\int_{\mathbb{R}^N}|u|^{\theta+1}dx. \end{align} Indeed, let $\psi$ in \eqref{new12} be defined by $\psi(x) = \phi_0(R^{-1}x)$ with a standard cut-off function $\phi_0 \in C_c^2(B_2)$, $\phi_0\equiv 1$ in $B_1$. Denote the right hand side in \eqref{new12} by $I_R$. Remarking that $\nabla \psi \ne 0$ only in $A_R = B_{2R}\backslash B_R$ and $\|\nabla^k \psi\|_\infty \leq C_k R^{-k}$, we readily obtain \begin{align*} |I_R| \leq C\int_{A_R} \Big(|u|^{\theta+1} + |\Delta u|^m\Big) dx + \frac{C}{R}\int_{A_R}|\Delta u|^{m-1}|\nabla u|dx + C\int_{A_R}|\Delta u|^{m-1}|\nabla^2 u|\, dx. \end{align*} Thanks to the estimates \eqref{new11}--\eqref{new10} and H\"older's inequality, clearly $\lim_{R\to \infty}I_R = 0$, hence we get \eqref{8newest0}. \medskip On the other hand, using $u \psi$ as a test function in \eqref{1}, we get \begin{align*} \int_{B_{2R}}|\Delta u|^{m} \psi dx-\int_{B_{2R}} |u|^{\theta+1} \psi dx &\leq C\int_{B_{2R}}|u||\Delta u|^{m-1} |\Delta \psi| dx + C\int_{B_{2R}}|\Delta u|^{m-1} |\nabla u||\nabla\psi| dx\\ & \leq \frac{C}{R^2}\int_{A_R}|u||\Delta u|^{m-1}dx + \frac{C}{R}\int_{A_R}|\Delta u|^{m-1}|\nabla u|dx. \end{align*} Applying H\"older's inequality and \eqref{new11}--\eqref{new9}, and letting $R \to \infty$, we obtain \begin{align}\label{llfou} \int_{\mathbb{R}^N}|u|^{\theta+1}dx=\int_{\mathbb{R}^N} |\Delta u|^{m}dx. \end{align} Combining \eqref{8newest0} and \eqref{llfou}, one obtains \begin{align*} \left(\frac{N-2m }{m}-\frac{N}{\theta+1}\right)\int_{\mathbb{R}^N}|u|^{\theta+1}dx=0. \end{align*} We are done, since \eqref{new8} implies that $\frac{N-2m }{m}-\frac{N}{\theta+1} < 0$.\qed \section{Appendix} \setcounter{equation}{0} We prove here Lemma \ref{lem1h.1}.
Let $\psi\in C_c^{2}(\Omega)$. Multiplying equation \eqref{1} by $(\nabla u\cdot x) \psi$ and integrating by parts, we get \begin{align*} & \int_{\Omega}|u|^{\theta-1}u (\nabla u\cdot x)\psi dx\\ = & \; \int_{\Omega}|\Delta u|^{m-2}\Delta u\Delta(\nabla u\cdot x\psi)dx\\ = & \; \int_{\Omega}\; |\Delta u|^{m-2}\Delta u \Big[(\nabla(\Delta u)\cdot x)\psi+2\Delta u \psi+2\nabla(\nabla u\cdot x)\cdot\nabla \psi+(\nabla u\cdot x)\Delta\psi \Big]dx. \end{align*} Direct calculation yields $\nabla(\nabla u\cdot x)\cdot\nabla \psi = \nabla^{2}u (x, \nabla \psi) + (\nabla u\cdot \nabla \psi)$ and \begin{align*} \int_{\Omega}\; |\Delta u|^{m-2}\Delta u \Big[(\nabla(\Delta u)\cdot x)\psi+2\Delta u \psi\Big]dx &= \int_{\Omega} \frac{\nabla |\Delta u|^m}{m}\cdot x \psi dx+ 2\int_{\Omega}|\Delta u|^{m}\psi dx\\ &=\frac{2m-N }{m}\int_{ \Omega}|\Delta u|^m\psi dx-\frac{1}{m}\int_{\Omega}|\Delta u|^m(\nabla \psi\cdot x)dx. \end{align*} Moreover, \begin{align*} \int_\Omega |u|^{\theta-1}u (\nabla u\cdot x)\psi dx & = - \frac{1}{\theta + 1}\int_\Omega |u|^{\theta +1} {\rm div}(\psi x)dx \\ &=-\frac{N}{\theta+1} \int_{\Omega}|u|^{\theta+1} \psi dx - \frac{1}{\theta+1} \int_{\Omega}|u|^{\theta+1} x\cdot \nabla \psi dx. \end{align*} Therefore, the claim follows by regrouping the above equalities. \qed \vskip 1cm \section*{Acknowledgment} The authors are partly supported by the CNRS-DGRST Project No.~EDC26348. \section*{References}
\section{INTRODUCTION} Under high pressure, elemental calcium undergoes a series of structural phase transitions. In particular, one can distinguish seven phases in the range of pressure ($p$) from $0$ to $241$ GPa \cite{Olijnyk}, \cite{Yabuuchi}, \cite{Sakata} (please see Figure \fig{fig1} (A) for the details). The first two phases, namely Ca-I and Ca-II, have been classified as fcc and bcc structures, respectively \cite{Olijnyk}, \cite{Yabuuchi}. The third phase (Ca-III) was originally linked with the sc structure; however, recent reports suggest other assignments. On the basis of theoretical studies, Teweldeberhan {\it et al.} proposed the {\it Cmmm} structure \cite{Teweldeberhan}. Nakamoto {\it et al.} also argue in favor of the {\it Cmmm} structure \cite{Nakamoto}. On the other hand, Mao {\it et al.} have predicted the transition from the sc-like structure to the monoclinic phase at $30$ K and $p\simeq 40$ GPa \cite{Mao}. It should be underlined that the stability of the sc structure in the region of existence of the Ca-III phase is confirmed by the results of Errea {\it et al.} and Yao {\it et al.}, at least at the temperature of $300$ K \cite{Errea}, \cite{Yao}. The existence of the phases Ca-IV and Ca-V has been experimentally examined in the papers \cite{Yabuuchi} and \cite{Nakamoto}. Fujihisa {\it et al.} have proposed for them the following assignment: the Ca-IV structure is characterized by the $P4_{1}2_{1}2$ and Ca-V by the {\it Cmca} space group, respectively \cite{Fujihisa}. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{{pssb.201248032_Fig1}.eps} \caption{(A) The sequence of the structural phase transitions in calcium determined on the basis of the experimental data.
(B) The dependence of the critical temperature on the pressure: stars - Okada {\it et al.} \cite{Okada}, squares - Yabuuchi {\it et al.} \cite{Yabuuchi1}, circles - Sakata {\it et al.} \cite{Sakata}.} \label{fig1} \end{figure} In the year $2010$, Nakamoto {\it et al.} discovered the new Ca-VI phase with the {\it Pnma} structure \cite{Nakamoto2}. Further, in the year $2011$, Sakata {\it et al.} reported the existence of the host-guest phase Ca-VII \cite{Sakata}. We note that a high-pressure phase of host-guest character had been previously predicted by Arapan {\it et al.} and then by Ishikawa {\it et al.} \cite{Arapan}, \cite{Ishikawa}. The first mention of the existence of the pressure-induced superconducting state in calcium was provided by Dunn and Bundy in 1981 \cite{Dunn}. Fifteen years later, Okada {\it et al.} determined the dependence of the critical temperature ($T_{C}$) on the pressure up to $150$ GPa \cite{Okada} (please see Figure \fig{fig1} (B) for details). In the year 2006, Yabuuchi {\it et al.} repeated the experimental studies of Okada \cite{Yabuuchi1}. It was found that the critical temperature increases with $p$ much faster than in the results of Okada. The last notable experimental results have been obtained by Sakata {\it et al.} \cite{Sakata}. On the basis of Figure \fig{fig1} (B), it can be easily noticed that at $p=216$ GPa the critical temperature reaches $29$ K (the highest observed $T_{C}$ among all elements). However, this result has been challenged by Andersson \cite{Andersson}. In the presented paper, we have determined all relevant thermodynamic parameters of the superconducting state that is induced in calcium under the pressure of $161$ GPa. We draw the readers' attention to the fact that the pressure of $161$ GPa represents the highest value of $p$ considered by Yabuuchi {\it et al.} \cite{Yabuuchi1}.
Additionally, the high value of the critical temperature at $p=161$ GPa, which is equal to $\sim 25$ K, has been recently confirmed by the results obtained by Sakata {\it et al.} \cite{Sakata}. For the purpose of this paper, we have assumed that the phase Ca-VI is characterized by the $Pnma$ crystal structure. In support of this assumption we refer to the results presented in \cite{Nakamoto2}, \cite{Yin} and \cite{Aftabuzzaman}. \section{THE ELIASHBERG EQUATIONS} On the imaginary axis ($i\equiv\sqrt{-1}$), the order parameter ($\Delta_{n}\equiv\Delta\left(i\omega_{n}\right)$) and the wave function renormalization factor ($Z_{n}\equiv Z\left(i\omega_{n}\right)$) can be calculated by using the Eliashberg equations \cite{Eliashberg}: \begin{equation} \label{r1} \Delta_{n}Z_{n}=\frac{\pi}{\beta} \sum_{m=-M}^{M} \frac{K\left(\omega_{n}-\omega_{m}\right)-\mu^{*}\theta\left(\omega_{c}-|\omega_{m}|\right)} {\sqrt{\omega_m^2+\Delta_m^2}} \Delta_{m}, \end{equation} \begin{equation} \label{r2} Z_n=1+\frac {\pi}{\beta\omega _n }\sum_{m=-M}^{M} \frac{K\left(\omega_{n}-\omega_{m}\right)} {\sqrt{\omega_m^2+\Delta_m^2}}\omega_m. \end{equation} In equations (\ref{r1}) and (\ref{r2}) the symbol $\omega_{n}\equiv \frac{\pi}{\beta}\left(2n-1\right)$ denotes the $n$-th Matsubara frequency, where $\beta\equiv 1/k_{B}T$, and $k_{B}$ is the Boltzmann constant. The electron-phonon pairing kernel is given by the expression: \begin{equation} \label{r3} K\left(\omega_{n}-\omega_{m}\right)\equiv 2\int_0^{\Omega_{\rm{max}}}d\Omega\frac{\alpha^{2}F\left(\Omega\right)\Omega} {\left(\omega_n-\omega_m\right)^2+\Omega ^2}, \end{equation} where $\Omega_{\rm{max}}$ is the maximum phonon frequency ($\Omega_{\rm{max}}=71.37$ meV), and $\alpha^{2}F\left(\Omega\right)$ denotes the Eliashberg function, which models the electron-phonon interaction in detail.
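As an illustration of the kernel \eqref{r3}, it can be evaluated by simple numerical quadrature once a model spectral function is chosen. The sketch below uses a toy Einstein-like spectrum, not the $\alpha^{2}F\left(\Omega\right)$ of \cite{Yin}; the coupling $\lambda=1.27$ matches the value quoted later in the text, while the Einstein frequency of $30$ meV is an arbitrary illustrative choice. At zero frequency transfer the kernel reduces to the coupling constant, $K(0)=2\int d\Omega\, \alpha^{2}F(\Omega)/\Omega = \lambda$.

```python
import numpy as np

# Toy illustration (not the calculation of the paper): the pairing kernel
# K(w_n - w_m) is evaluated by quadrature for a model Einstein-like
# spectrum a2F(W) ~ (lam*W_E/2) * delta(W - W_E), broadened into a
# narrow Gaussian so that the integral is well defined.
LAM, W_E, WIDTH, W_MAX = 1.27, 30.0, 0.5, 71.37  # energies in meV

def a2F(W):
    """Gaussian-broadened Einstein spectrum."""
    norm = LAM * W_E / 2.0
    return norm * np.exp(-(W - W_E) ** 2 / (2.0 * WIDTH ** 2)) \
        / (np.sqrt(2.0 * np.pi) * WIDTH)

def kernel(dw, n=20000):
    """K(dw) = 2 * int_0^Wmax a2F(W) * W / (dw**2 + W**2) dW."""
    W = np.linspace(1e-6, W_MAX, n)
    step = W[1] - W[0]
    return 2.0 * np.sum(a2F(W) * W / (dw ** 2 + W ** 2)) * step

# K(0) recovers lambda, and K decays with the Matsubara-frequency
# difference, which is what cuts off the pairing at high frequencies.
print(round(kernel(0.0), 2))
```

The decay of $K$ with $|\omega_{n}-\omega_{m}|$ on the scale of the phonon frequencies is precisely the retardation effect discussed below in connection with the Coulomb pseudopotential.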
In the presented paper, the form of the $\alpha^{2}F\left(\Omega\right)$ function has been taken from Yin {\it et al.} \cite{Yin}. The depairing interaction between electrons is described with the use of the Coulomb pseudopotential ($\mu^{*}$). The symbol $\theta$ denotes the Heaviside function and $\omega_{c}$ is the phonon cut-off frequency: $\omega_{c}=3\Omega_{\rm{max}}$. In the presented paper the Eliashberg equations have been solved for $2201$ Matsubara frequencies ($M=1100$). In this case, the obtained solutions are stable for the temperatures greater than or equal to $T_{0}=5$ K. A detailed discussion of the numerical method has been presented in \cite{Szczesniak1a}-\cite{Szczesniak1e}. \section{THE COULOMB PSEUDOPOTENTIAL} The physical value of the Coulomb pseudopotential ($\mu^{*}_{C}$) can be defined by using the condition: $\left[\Delta_{m=1}\right]_{T=T_{C}}=0$, where the critical temperature is equal to the experimental value ($T_{C}=25$ K) \cite{Yabuuchi1}. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{{pssb.201248032_Fig2}.eps} \caption{(A) The dependence of the order parameter on $\omega_{m}$ for selected values of the Coulomb pseudopotential ($T=T_{C}$). (B) The maximum value of the order parameter as a function of the Coulomb pseudopotential.} \label{fig2} \end{figure} In Figure \fig{fig2} (A) we have presented the dependence of the order parameter on $\omega_{m}$ for selected values of $\mu^{*}$. One can notice that together with the increase of the Coulomb pseudopotential, the largest value of the order parameter ($\Delta_{m=1}$) decreases. Additionally, in Figure \fig{fig2} (B) we have outlined the complete form of the function $\Delta_{m=1}\left(\mu^{*}\right)$. On the basis of the obtained results, we have found that the physical value of the Coulomb pseudopotential is equal to $0.24$. 
The above result means that the depairing electron correlations in calcium are relatively strong (for classical low-temperature superconductors $\mu^{*}_{C}$ is about $0.1$ \cite{Carbotte}). It can be noted that a similar non-standard value of $\mu^{*}_{C}$ has been obtained for lithium and $\rm CaLi_{2}$ \cite{Jishi}, \cite{Profeta}, \cite{Luders}, \cite{Szczesniak1f}, \cite{Szczesniak1g}. For example, the properties of the superconducting state in the fcc phase of lithium for the pressure values $22.3$ GPa ($T_{C}= 7.27$ K) and $29.7$ GPa ($T_{C}=13.93$ K) have been specified in the paper \cite{Szczesniak1f}. It has been shown that the physical value of the Coulomb pseudopotential increases with $p$ from $0.22$ to $0.36$. In the case of $\rm CaLi_{2}$, the parameter $\mu^{*}_{C}$ is equal to $0.23$ ($p=45$ GPa and $T_{C}=12.9$ K) \cite{Szczesniak1g}. The high values of $\mu^{*}_{C}$ in Ca, Li, and $\rm CaLi_{2}$ are difficult to explain in the framework of the classical Morel-Anderson (MA) model \cite{Morel}, where: $\mu^{*}_{C}=\mu\left[1+\mu\ln\left(\omega_{e}/\omega_{ph}\right)\right]^{-1}$. The symbol $\mu$ is defined by: $\mu\equiv\rho\left(0\right)U_{C}$, where $\rho\left(0\right)$ is the electronic density of states at the Fermi level and $U_{C}$ is the Coulomb potential; $\omega_{e}$ and $\omega_{ph}$ denote the characteristic electron and phonon frequency, respectively. Since $\omega_{e}\gg\omega_{ph}$, the MA pseudopotential is of the order of $0.1$ and $\mu^{*}_{C}\ll\mu$. It can be noted that the MA model corresponds to treating the irreducible vertex to the first order in $U_{C}$. Recently, Bauer, Han, and Gunnarsson have extended the MA theory to the second order in $U_{C}$. The main result is that the retardation effects lead to the reduction $\mu\rightarrow\mu^{*}_{C}$ also in the higher order calculation, but not as efficiently as in the first order \cite{Bauer}.
The model presented in the paper \cite{Bauer} is probably the most advanced attempt to explain the high values of $\mu^{*}_{C}$ in Ca, Li, and $\rm CaLi_{2}$. In particular, Bauer {\it et al.} have given the following expression for the physical value of the Coulomb pseudopotential: \begin{equation} \label{rDodatkowe} \mu^{*}_{C}=\frac{\mu+a\mu^{2}}{1+\mu\ln\left[\frac{\omega_{e}}{\omega_{ph}}\right]+a\mu^{2}\ln\left[\frac{\alpha\omega_{e}}{\omega_{ph}}\right]}, \end{equation} where $a=1.38$ and $\alpha\simeq 0.10$. On the basis of the equation (\ref{rDodatkowe}) one can easily estimate the value of $U_{C}$ for the real materials. In the paper, we assume the following: $\omega_{e}=W$ ($W$ is the half-band width), $\omega_{ph}=\omega_{{\rm ln}}$ ($\omega_{{\rm ln}}\equiv \exp\left[\frac{2}{\lambda}\int^{\Omega_{\rm{max}}}_{0}d\Omega\frac{\alpha^{2}F\left(\Omega\right)} {\Omega}\ln\left(\Omega\right)\right]$), and $\rho\left(0\right)=1/2W$ (the constant DOS). In the literature, the values of all the important parameters are provided only for ${\rm CaLi_{2}}$ ($p=45$ GPa). In particular: $\mu_{C}^{*}=0.23$, $W=1991$ meV, and $\omega_{\rm ln}=17.02$ meV \cite{Szczesniak1g}, \cite{Xie}. The result is the following: $U_{C}=2803$ meV. Next, we address an important issue, namely, how big is the error bar of the calculated physical value of the Coulomb pseudopotential ($\Delta\mu^{*}_{C}$). First of all, we can notice that in the framework of the presented analysis, the value of $\mu^{*}_{C}$ depends on the shape of the Eliashberg function and the accuracy of the experimental value of $T_{C}$. For calcium the appropriate Eliashberg function, taken from \cite{Yin}, has been calculated by using the linear-response method (full-potential LMTART code \cite{Savrasov}). In that paper, the Eliashberg function error bar has been omitted ($\left[\Delta\mu^{*}_{C}\right]_{\alpha^{2}F\left(\Omega\right)}=0$). 
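The estimate of $U_{C}$ quoted above can be cross-checked numerically: with $\mu=U_{C}/2W$, Eq. (\ref{rDodatkowe}) becomes a scalar equation for $U_{C}$ that is easily solved by bisection. The sketch below (an illustration, not the procedure of the original works) uses the $\rm CaLi_{2}$ parameters listed in the text.

```python
import math

# Numerical cross-check of the U_C estimate for CaLi_2 quoted above.
# Inputs from the text: mu*_C = 0.23, omega_e = W = 1991 meV,
# omega_ph = omega_ln = 17.02 meV, rho(0) = 1/(2W) so mu = U_C/(2W);
# the Bauer-Han-Gunnarsson constants are a = 1.38 and alpha = 0.10.
A, ALPHA = 1.38, 0.10
W_HALF, OMEGA_LN = 1991.0, 17.02

def mu_star_C(U_C):
    """Physical Coulomb pseudopotential from the second-order formula."""
    mu = U_C / (2.0 * W_HALF)
    ratio = W_HALF / OMEGA_LN
    return (mu + A * mu ** 2) / (1.0 + mu * math.log(ratio)
                                 + A * mu ** 2 * math.log(ALPHA * ratio))

def solve_UC(target=0.23, lo=0.0, hi=20000.0):
    """Bisection; mu_star_C grows with U_C on this interval."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mu_star_C(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

print(round(solve_UC()))  # in meV; compare with U_C = 2803 meV in the text
```

Note that the result corresponds to $\mu\simeq 0.70$, illustrating how strongly retardation reduces the bare Coulomb repulsion even at second order.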
On the other hand, the value of the critical temperature has been measured with an accuracy of about $\pm 1$ K \cite{Yabuuchi1}. On the basis of these facts, we have obtained: $\left[\Delta\mu^{*}_{C}\right]_{T_{C}}=\pm 0.02$. \section{THE CRITICAL TEMPERATURE} \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{{pssb.201248032_Fig3}.eps} \caption{The dependence of the critical temperature on the Coulomb pseudopotential. The filled circles denote the results obtained by using the Eliashberg equations; the arrow indicates the experimental value of the critical temperature ($\mu^{*}_{C}=0.24$). The solid line denotes the results obtained with the help of the modified Allen-Dynes formula. Finally, the dashed and dotted lines have been generated based on the classical Allen-Dynes formula and the McMillan expression, respectively.} \label{fig3} \end{figure} In the framework of the presented formalism, the exact value of the critical temperature should be obtained on the basis of the Eliashberg equations. However, for data interpretation it is far more convenient to use a simple formula that explicitly reproduces the results of the advanced numerical calculations. Two basic formulas for determining the value of the critical temperature are known in the literature. The first one has been introduced by McMillan \cite{McMillan}; the second one is the Allen-Dynes expression \cite{AllenDynes}. Unfortunately, in the case of calcium, both formulas considerably underestimate the critical temperature. For this reason, the Allen-Dynes formula has been modified in such a way that it correctly reproduces the numerical results.
Particularly, in order to achieve the proper values of the fitting parameters, the dependence of the critical temperature on the Coulomb pseudopotential has been analyzed on the level of the Eliashberg equations (only $\alpha^{2}F\left(\Omega\right)$ has been considered as the physical input parameter). Next, the least-squares method was applied. The obtained result is presented below: \begin{equation} \label{r4} k_{B}T_{C}=f_{1}\left(\mu^{*}\right)f_{2}\left(\mu^{*}\right)\frac{\omega_{\rm ln}}{1.45}\exp\left[\frac{-1.03\left(1+\lambda\right)}{\lambda-\mu^{*}\left(1+0.06\lambda\right)}\right], \end{equation} where the functions $f_{1}\left(\mu^{*}\right)$ and $f_{2}\left(\mu^{*}\right)$ are expressed by the formulas: \begin{equation} \label{r5a} f_{1}\left(\mu^{*}\right)\equiv\left[1+\left(\frac{\lambda}{\Lambda_{1}\left(\mu^{*}\right)}\right)^{\frac{3}{2}}\right]^{\frac{1}{3}}, \end{equation} and \begin{equation} \label{r5b} f_{2}\left(\mu^{*}\right)\equiv 1+\frac{\left(\frac{\sqrt{\omega_{2}}}{\omega_{\rm{ln}}}-1\right)\lambda^{2}}{\lambda^{2}+\Lambda^{2}_{2}\left(\mu^{*}\right)}. \end{equation} The parameters, that depend on the Eliashberg function, can be determined on the basis of the expressions: $\omega_{2}\equiv\frac{2}{\lambda}\int^{\Omega_{\rm{max}}}_{0}d\Omega\alpha^{2}F\left(\Omega\right)\Omega$ and $\lambda\equiv 2\int^{\Omega_{\rm{max}}}_{0}d\Omega\frac{\alpha^{2}F\left(\Omega\right)}{\Omega}$. For calcium under the pressure at $161$ GPa, we have respectively: $\sqrt{\omega_{2}}=34.36$ $\rm{meV}$ and $\lambda=1.27$. The fitting functions $\Lambda_{1}\left(\mu^{*}\right)$ and $\Lambda_{2}\left(\mu^{*}\right)$ are presented in the following forms: \begin{equation} \label{r6} \Lambda_{1}\left(\mu^{*}\right)\equiv 0.145(1+115.862\mu^{*}), \end{equation} and \begin{equation} \label{r7} \Lambda_{2}\left(\mu^{*}\right)\equiv 5.185(1-2.247\mu^{*})\left(\frac{\sqrt{\omega_2}}{\omega_{\ln}}\right). 
\end{equation} In Figure \fig{fig3} we have presented the numerical solutions obtained with the use of the Eliashberg equations and the modified Allen-Dynes formula. Additionally, for comparison purposes, we have depicted the results based on the classical formulas derived by Allen-Dynes and McMillan. On the basis of Figure \fig{fig3}, one can observe that the expression \eqref{r4} perfectly reproduces the exact Eliashberg numerical predictions. The constants in the equation \eqref{r4} deviate notably from the original parameterization. This situation is connected with the fact that the analysis based on the real-axis Eliashberg equations suggests only the semiphenomenological form of the $T_{C}$-formula: $k_{B}T_{C}=\frac{\omega_{\rm ln}}{a}\exp\left[\frac{-b\left(1+\lambda\right)}{\lambda-\mu^{*}\left(1+c\lambda\right)}\right]$ (see the detailed discussion in \cite{Carbotte}, p. 1051-1052). In the case of calcium under the pressure of $161$ GPa, the values of the Allen-Dynes parameters are inappropriate. Thus, the constants ($a$, $b$, $c$) should be fitted to the data taken from the exact solutions of the Eliashberg equations. We can notice that the change of the coefficient dividing $\omega_{\ln}$ in the expression \eqref{r4} from $1.2$ to $1.45$ slightly lowers the effective phonon frequency; the two remaining parameters ($1.03$ and $0.06$), in comparison to the classical parameterization, increase the value of the effective electron-phonon coupling constant. Moreover, the parameterization of the strong-coupling correction function ($f_{1}\left(\mu^{*}\right)$) and the shape correction function ($f_{2}\left(\mu^{*}\right)$) also deviates from the original form. The obtained result indicates that for the high-pressure superconducting state in calcium the shape correction function has greater significance than in classical superconductors. The value of the critical temperature for $p=160$ GPa has also been calculated in the paper \cite{Yin}.
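For completeness, a minimal numerical sketch of the modified parameterization \eqref{r4}--\eqref{r7} is given below. The values $\lambda=1.27$ and $\sqrt{\omega_{2}}=34.36$ meV are taken from the text, while the value of $\omega_{\rm ln}$ used here is only an illustrative placeholder, since its numerical value for Ca at $161$ GPa is not quoted in this section.

```python
import math

KB = 8.617333e-2  # Boltzmann constant in meV/K

# Sketch of the modified Allen-Dynes parameterization given above.
# lam and sqrt(omega_2) are the calcium values quoted in the text;
# OMEGA_LN is an assumed placeholder value, not a quoted input.
LAM, SQRT_W2, OMEGA_LN = 1.27, 34.36, 30.0  # energies in meV

def tc_modified(mu_star, lam=LAM, w_ln=OMEGA_LN, rw2=SQRT_W2):
    """Critical temperature (K) from the modified Allen-Dynes formula."""
    lam1 = 0.145 * (1.0 + 115.862 * mu_star)
    lam2 = 5.185 * (1.0 - 2.247 * mu_star) * (rw2 / w_ln)
    f1 = (1.0 + (lam / lam1) ** 1.5) ** (1.0 / 3.0)            # strong coupling
    f2 = 1.0 + (rw2 / w_ln - 1.0) * lam ** 2 / (lam ** 2 + lam2 ** 2)  # shape
    expo = -1.03 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.06 * lam))
    return f1 * f2 * (w_ln / 1.45) * math.exp(expo) / KB

# T_C decreases monotonically with the Coulomb pseudopotential,
# reproducing the trend of Figure 3.
print(tc_modified(0.10) > tc_modified(0.24) > 0.0)
```

With the placeholder $\omega_{\rm ln}$ the sketch yields $T_{C}$ of the order of tens of kelvins at $\mu^{*}=0.24$; the quantitative comparison with the measured $25$ K of course requires the actual $\omega_{\rm ln}$ computed from $\alpha^{2}F\left(\Omega\right)$.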
By using the Allen-Dynes formula, the authors qualitatively reconstructed the experimental value of $T_{C}$. However, in the examined case the physical value of the Coulomb pseudopotential has been strongly lowered ($\mu^{*}_{C}\sim 0.15$). Finally, we underline that calculations of $\lambda$ other than that presented in the paper \cite{Yin} exist in the literature. In particular, Lei {\it et al.} have suggested a very large value of the electron-phonon coupling constant ($\lambda=3.75$ for $p=155$ GPa and the sc structure) \cite{Lei}. On the other hand, Aftabuzzaman and Islam have predicted $\lambda=0.903$ for $p=161$ GPa and the {\it Pnma} structure \cite{Aftabuzzaman}. The latter result is similar to that obtained by Yin {\it et al.} \cite{Yin}. \section{THE CHARACTERISTICS OF THE SOLUTIONS ON THE IMAGINARY AXIS} The form of the order parameter on the imaginary axis for selected values of the temperature has been presented in Figure \fig{fig4} (A). It can be seen that, as $\omega_{m}$ increases, the absolute values of $\Delta_{m}$ decrease and saturate. It should be underlined that the negative values taken by the order parameter are connected with the non-zero value of the Coulomb pseudopotential. Analyzing the temperature dependence of the order parameter, we found that the absolute values of the function $\Delta_{m}$ decrease with increasing temperature. This means that, as the temperature grows, fewer Matsubara frequencies give a significant contribution to the Eliashberg equations. The full dependence of the maximum value of the order parameter ($\Delta_{m=1}$) on the temperature has been plotted in Figure \fig{fig4} (B). We can observe that, to a good approximation, the values $2\Delta_{m=1}\left(T\right)$ reproduce the temperature dependence of the energy gap at the Fermi level.
\begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{{pssb.201248032_Fig4}.eps} \caption{(A) The dependence of the order parameter on $\omega_{m}$ for selected values of the temperature. (B) The maximum value of the order parameter as a function of the temperature.} \label{fig4} \end{figure} In Figure \fig{fig5} (A) we have presented the form of the wave function renormalization factor on the imaginary axis. As for the order parameter, the increase of $\omega_{m}$ causes the successive values of the function $Z_{m}$ to decrease. For high values of $\omega_{m}$, the function $Z_{m}$ saturates and takes the value equal to one. Further, Figure \fig{fig5} (B) presents the full dependence of the maximum value of the wave function renormalization factor on the temperature. It can be noted that the presented function determines, to a good approximation, the temperature dependence of the electron effective mass. Moreover, from the obtained results we can conclude that the electron effective mass takes a high value in the entire region of existence of the superconducting state. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{{pssb.201248032_Fig5}.eps} \caption{(A) The dependence of the wave function renormalization factor on $\omega_{m}$ for selected values of the temperature. (B) The maximum value of the wave function renormalization factor as a function of the temperature.} \label{fig5} \end{figure} \section{THE PHYSICAL VALUE OF THE ORDER PARAMETER} In order to determine the physical value of the order parameter for a chosen temperature, the solutions of the Eliashberg equations on the imaginary axis ($i\omega_{n}$) should be analytically continued to the real axis ($\omega$). In the presented paper we have used the method introduced by Beach {\it et al.} \cite{Beach}.
The form of the order parameter on the real axis is reconstructed using the function: \begin{equation} \label{r8} \Delta\left(\omega\right)=\frac{p_{\Delta 1}+p_{\Delta 2}\omega+...+p_{\Delta r}\omega^{r-1}} {q_{\Delta 1}+q_{\Delta 2}\omega+...+q_{\Delta r}\omega^{r-1}+\omega^{r}}, \end{equation} where $p_{\Delta j}$ and $q_{\Delta j}$ denote numerical coefficients, and $r=550$. The dependence of the real and imaginary part of the order parameter on the frequency for selected values of the temperature has been presented in Figure \fig{fig6}. Additionally, the rescaled Eliashberg function has been plotted. On the basis of the presented results, one can observe that in the range of low frequencies (from $0$ to about $20$ meV), only the real part of the order parameter takes non-zero values. From the physical point of view, this result signifies the absence of damping effects. For higher frequencies (from about $20$ meV to about $40$ meV), the real part of the order parameter takes relatively high values, which are clearly induced by the characteristic peaks in the Eliashberg function. Furthermore, we notice that in the discussed energy range the imaginary part of the order parameter becomes non-zero and increases strongly with frequency. For even higher frequencies (above $40$ meV), the real part of the order parameter begins to vanish. This fact is related to the extinction of the Eliashberg function itself. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{{pssb.201248032_Fig6}.eps} \caption{The real and imaginary part of the order parameter on the real axis for selected values of the temperature.
The rescaled Eliashberg function has also been presented.} \label{fig6} \end{figure} The physical value of the order parameter for a chosen temperature should be determined on the basis of the expression \cite{Eliashberg}, \cite{Carbotte}: \begin{equation} \label{r9} \Delta\left(T\right)={\rm Re}\left[\Delta\left(\omega=\Delta\left(T\right),T\right)\right]. \end{equation} In the case of superconductors, the most interesting quantity is the value of the order parameter at zero kelvin ($\Delta\left(0\right)\simeq\Delta\left(T_{0}\right)$). On the basis of simple calculations we have made the following estimation: $\Delta\left(0\right)=4.32$ meV. Let us mention that knowledge of the values of $\Delta\left(0\right)$ and $T_{C}$ allows one to calculate the dimensionless ratio: $R_{1}\equiv 2\Delta\left(0\right)/k_{B}T_{C}$. In the case of calcium we have obtained: \begin{equation} \label{r10} R_{1}=4.01. \end{equation} The above result indicates that $R_{1}$ considerably exceeds the value predicted by the BCS theory: $\left[R_{1}\right]_{\rm BCS}=3.53$ \cite{BCS}. \section{THE ELECTRON EFFECTIVE MASS} The influence of the electron-phonon interaction on the electron effective mass ($m^{*}_{e}$) can be determined on the basis of the expression: $m^{*}_{e}={\rm Re}\left[Z\left(0\right)\right]m_{e}$, where the symbol $Z\left(0\right)$ denotes the value of the wave function renormalization factor on the real axis and $m_{e}$ is the bare electron mass. The form of the wave function renormalization factor on the real axis has been calculated with the use of the analytical continuation method: \begin{equation} \label{r11} Z\left(\omega\right)=\frac{p_{Z 1}+p_{Z 2}\omega+...+p_{Z r}\omega^{r-1}} {q_{Z 1}+q_{Z 2}\omega+...+q_{Z r}\omega^{r-1}+\omega^{r}}, \end{equation} where $p_{Z j}$ and $q_{Z j}$ are numerical coefficients, and $r=550$.
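Equations (\ref{r8}) and (\ref{r11}) are rational functions of the frequency and are cheap to evaluate once the coefficients are known. The helper below is an illustrative sketch only: it uses a toy coefficient set with $r=3$ for readability, whereas the actual continuation uses $r=550$ coefficients obtained from the imaginary-axis solutions:

```python
def pade_eval(p, q, omega):
    """Evaluate f(w) = (p[0] + p[1]*w + ... + p[r-1]*w^(r-1)) /
                       (q[0] + q[1]*w + ... + q[r-1]*w^(r-1) + w^r)
    for a real or complex frequency w, with r = len(p) = len(q).
    Horner's scheme avoids forming large powers of w explicitly."""
    num = 0.0
    for coef in reversed(p):
        num = num * omega + coef
    den = 1.0  # the leading w^r term has unit coefficient
    for coef in reversed(q):
        den = den * omega + coef
    return num / den

# Toy example with r = 3: f(w) = (1 + 2w) / (1 + w + w^3), evaluated just
# above the real axis, as one does for Delta(omega) and Z(omega).
value = pade_eval([1.0, 2.0, 0.0], [1.0, 1.0, 0.0], 1.0 + 1e-6j)
```

At $\omega = 1$ the toy function equals $3/3 = 1$, so the small imaginary offset leaves the result essentially unchanged; the same evaluation applied with the fitted coefficients yields the curves shown in the figures.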
In Figure \fig{fig7} we have presented the shape of the functions Re$\left[Z\left(\omega\right)\right]$ and Im$\left[Z\left(\omega\right)\right]$ for the critical temperature. As in the case of the order parameter, at low frequencies only the real part of the wave function renormalization factor is non-zero. In the energy range around $20$ meV we observe a characteristic, though not very strong, amplification of Re$\left[Z\left(\omega\right)\right]$, which is clearly correlated with the peaks of the Eliashberg function. Additionally, the function Im$\left[Z\left(\omega\right)\right]$ becomes non-zero there. In the range of high frequencies, Re$\left[Z\left(\omega\right)\right]$ decreases together with the increase of $\omega$. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{{pssb.201248032_Fig7}.eps} \caption{The real and imaginary part of the wave function renormalization factor on the real axis. Additionally, the rescaled Eliashberg function has been outlined. The inset represents the dependence of $m^{*}_{e}/m_{e}$ on the temperature.} \label{fig7} \end{figure} Next, the dependence of the ratio $m^{*}_{e}/m_{e}$ on the temperature has been determined. The results are presented in the inset of Figure \fig{fig7}. We have found that the electron effective mass is large in the entire range in which the superconducting state exists, and reaches its maximum of $2.32$ at $T=T_{C}$. We notice that for $T=T_{C}$ the value of the ratio $m^{*}_{e}/m_{e}$ can be calculated with great accuracy by using the simple formula: $m^{*}_{e}/m_{e}\simeq 1+\lambda=2.27$. The consistency between the exact numerical result and the analytical approach attests to the reliability of the presented analysis. From the physical point of view, this result is particularly important, since it can be verified in a simple way once the Sommerfeld coefficient is measured in the future.
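The agreement between the exact Eliashberg value and the analytic estimate $m^{*}_{e}/m_{e}\simeq 1+\lambda$ quoted above can be verified with elementary arithmetic (both numbers are taken from the text):

```python
# Cross-check of the estimate m*_e / m_e ~= 1 + lambda at T = T_C.
lam = 1.27                            # electron-phonon coupling constant (from the text)
mass_ratio_analytic = 1.0 + lam       # = 2.27, the simple analytic formula
mass_ratio_exact = 2.32               # maximum of Re[Z(0)], inset of Figure 7
relative_error = abs(mass_ratio_exact - mass_ratio_analytic) / mass_ratio_exact
```

The relative deviation is only about two percent, which quantifies the "great accuracy" claimed above.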
\section{THE THERMODYNAMIC CRITICAL FIELD AND THE SPECIFIC HEAT} The thermodynamic critical field ($H_{C}$) and the difference between the specific heat in the superconducting and normal state ($\Delta C\equiv C^{S}-C^{N}$) can be calculated on the basis of the free energy difference ($\Delta F\equiv F^{S}-F^{N}$): \begin{eqnarray} \label{r12} \frac{\Delta F}{\rho\left(0\right)}&=&-\frac{2\pi}{\beta}\sum_{m=1}^{M} \left(\sqrt{\omega^{2}_{m}+\Delta^{2}_{m}}- \left|\omega_{m}\right|\right)\\ \nonumber &\times&\left(Z^{{\rm S}}_{m}-Z^{N}_{m}\frac{\left|\omega_{m}\right|} {\sqrt{\omega^{2}_{m}+\Delta^{2}_{m}}}\right). \end{eqnarray} The dependence of the free energy difference on the temperature has been presented in Figure \fig{fig08}. We can see that in the whole range of existence of the superconducting phase, the value of the ratio $\Delta F/\rho\left(0\right)$ is negative. From the physical point of view, this means that the superconducting state is thermodynamically stable. The thermodynamic critical field should be determined on the basis of the expression: \begin{equation} \label{r13} \frac{H_{C}}{\sqrt{\rho\left(0\right)}}= \sqrt{-8\pi\left[\Delta F/\rho\left(0\right)\right]}. \end{equation} The influence of the temperature on the value of the ratio $H_{C}/\sqrt{\rho\left(0\right)}$ has been presented in Figure \fig{fig08}. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{{pssb.201248032_Fig8}.eps} \caption{The ratios $\Delta F/\rho\left(0\right)$ and $H_{C}/\sqrt{\rho\left(0\right)}$ as a function of the temperature.} \label{fig08} \end{figure} The difference of the specific heat has been determined on the basis of the formula: \begin{equation} \label{r14} \frac{\Delta C}{k_{B}\rho\left(0\right)} =-\frac{1}{\beta}\frac{d^{2}\left[\Delta F/\rho\left(0\right)\right]} {d\left(k_{B}T\right)^{2}}.
\end{equation} Additionally, the values of the specific heat in the normal state have also been determined: \begin{equation} \label{r15} \frac{C^{N}}{ k_{B}\rho\left(0\right)}=\frac{\gamma}{\beta}, \end{equation} where $\gamma\equiv\frac{2}{3}\pi^{2}\left(1+\lambda\right)$. In Figure \ref{fig09}, we have plotted the dependence of the specific heat in the superconducting state and in the normal state on the temperature. The characteristic "jump", which appears at the critical temperature, can easily be noticed. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{{pssb.201248032_Fig9}.eps} \caption{The specific heat in the superconducting state and in the normal state as a function of the temperature.} \label{fig09} \end{figure} On the basis of the specified thermodynamic functions, we have calculated the values of the dimensionless ratios: $R_{2}\equiv \Delta C\left(T_{C}\right)/C^{N}\left(T_{C}\right)$ and $R_{3}\equiv T_{C}C^{N}\left(T_{C}\right)/H^{2}_{C}\left(0\right)$. We have obtained the following: \begin{equation} \label{r16} R_{2}=2.17, \end{equation} and \begin{equation} \label{r17} R_{3}=0.158. \end{equation} Taking into account the results above, we can state that the values of the considered ratios significantly diverge from the values predicted by the classical BCS theory. In particular: $\left[R_2\right]_{\rm BCS}=1.43$ and $\left[R_3\right]_{\rm BCS}=0.168$. \section{SUMMARY} In the paper, we have determined all relevant thermodynamic parameters of the superconducting state in calcium at the pressure of $161$ GPa. We have conducted all numerical calculations in the framework of the Eliashberg formalism, where the electron-phonon spectral function $\alpha^{2}F\left(\Omega\right)$ has been taken from the paper \cite{Yin}. On the basis of the exact numerical results, we can state that the depairing electron correlations in calcium are relatively strong ($\mu^{*}_{C}=0.24$).
In the next step, the values of the parameters in the Allen-Dynes formula have been calculated. It has been shown that the critical temperature is properly determined by the modified Allen-Dynes expression. Furthermore, we have proven that the thermodynamic properties of the superconducting state significantly diverge from the predictions based on the simple BCS theory. In particular, the following values of the thermodynamic ratios have been obtained: $R_{1}=4.01$, $R_{2}=2.17$, and $R_{3}=0.158$. In the last step, we have shown that the electron effective mass is large in the entire region where the superconducting state exists, with $\left[m^{*}_{e}\right]_{\rm max}=2.32m_{e}$ at $T=T_{C}$. \begin{acknowledgments} The authors would like to thank Professor K. Dzili{\'{n}}ski, Professor Z. B{\c{a}}k, and Professor A. Khater for providing excellent working conditions and the financial support. D. Szcz{\c{e}}{\'s}niak would like to acknowledge the financial support under the "Young Scientists" program, provided by the Dean of the Faculty of Mathematics and Science JDU Professor Z. St{\c{e}}pie{\'n} (grant no. DSM/WMP/1/2011/17). Some calculations have been conducted on the Cz{\c{e}}stochowa University of Technology cluster, built in the framework of the PLATON project, no. POIG.02.03.00-00-028/08 - the service of the campus calculations U3. \end{acknowledgments}
\section{Introduction} Since the earliest ``artificial neuron'' model, proposed on the basis of studies of the human brain and nervous system, neural networks have achieved a series of developments and even notable successes in some commercially essential areas, such as image captioning, machine translation and video games. However, several shortcomings of modern deep neural networks are emphasised by Garnelo and Shanahan \cite{garnelo2019reconciling}: \begin{itemize} \item Data inefficiency and high sample complexity. In order to be effective, contemporary neural models usually need large volumes of training data; for example, BERT~\cite{Devlin_Chang_Lee_Toutanova_2018} and GPT2~\cite{radford2019language} are pre-trained on enormous amounts of data. \item Poor generalisation. It is a challenge to predict correct answers when neural models are evaluated on examples outside of the training distribution, and even small, imperceptible changes to the inputs can entirely derail predictions~\cite{goodfellow2014explaining}. \end{itemize} Meanwhile, the ability of NLU systems (and neural networks more generally) to generalise systematically and robustly is increasingly being questioned \cite{bahdanau2018systematic}. Evidence includes the brittleness of NLU systems to adversarial examples \cite{DBLP:journals/corr/JiaL17}, the failure to exhibit reasoning and generalisation capabilities while instead exploiting statistical artefacts in datasets \cite{Gururangan_Swayamdipta_Levy_Schwartz_Bowman_Smith_2018}, the tendency of large pre-trained models to incorporate the statistics of natural language rather than to reason \cite{DBLP:journals/corr/abs-1802-05365}, and the substantial gap in generalisation and robustness between state-of-the-art NLU models and a Graph Neural Network (GNN) model \cite{sinha2019clutrr}.
It seems that modern neural models capture the wrong patterns and do not understand the content of the data, which is far from the reasoning ability expected of models inspired by the human brain. In other words, both of these aspects should be emphasised so that we can build better models. In order to evaluate and compare each model's capacity for systematic generalisation and robust reasoning, we use the benchmark suite named Compositional Language Understanding and Text-based Relational Reasoning (CLUTRR), which contains a broad set of semi-synthetic stories involving hypothetical families; the goal is to infer the relationship between two entities given a story \cite{sinha2019clutrr}. In fact, \cite{sinha2019clutrr} already analysed several models, including well-known strong baselines such as BERT and Graph Attention Networks (GAT) \cite{velivckovic2017graph}. They found that existing NLU systems show poor results both in generalisation and in robustness compared with GAT, which works directly on symbolic inputs. For instance, the line graphs in that paper show that, in most situations, BERT has the lowest accuracy among all text-based models as well as the graph neural model GAT; and when trained on noisy data, only GAT can effectively capture robust reasoning strategies. Both of these phenomena show that there is a gap between unstructured text inputs and structured symbolic inputs. Therefore, motivated by these unexpected results and inspired by neuro-symbolic reasoning models \cite{garcez2015neural,evans2018learning,garnelo2019reconciling}, we explore two types of models, the graph-based model and the sequence-based model. Each of them has a different form of symbolic input, and they are trained over the CLUTRR datasets to evaluate their generalisation and robustness performance compared with text-based models.
Briefly speaking, graph-based models are models that take a graph as input: in the CLUTRR datasets, entities and relationships are modelled as nodes and edges to form graphs. To deal with graph-type input, we usually use GNN models such as GAT or Graph Convolutional Networks (GCN) \cite{kipf2016semi}. In fact, their architectures need to be modified so that we can feed the multi-relational graph into the models. Sequence-based models are revised versions of typical sequence encoding models such as Convolutional Neural Networks (CNN) \cite{LeCun1989} and Recurrent Neural Networks (RNN) \cite{Rumelhart_Hinton_Williams_1986}. Their entities and relationships are modelled as graphs and then concatenated into sequences. \section{Related Work} \textbf{Neuro-symbolic Models.} Recent advances in deep learning show an improvement in expressive capacity, and significant results have been achieved in some perception tasks, such as action recognition and event detection. Nevertheless, it is widely accepted that an intelligent system needs to combine both perception and cognition. Highly cognitive tasks, such as abstracting, reasoning and explaining, are closely related to symbol systems, which usually cannot adapt to complex high-dimensional spaces. Neuro-symbolic models combine the advantages of deep learning models with symbolic methods, thereby significantly reducing the search space of symbolic methods such as program synthesis \cite[cf.]{gan2017vqs,yi2018neural}. There are many applications and methods of neuro-symbolic reasoning in natural language processing (NLP), typically involving complex question answering (QA): given a complicated question, the answer is inferred from the context.
Usually, the process is to split the tricky question into several sub-questions and then to use the neuro-symbolic reasoning model to obtain the results, which requires the abilities of question understanding, information extraction from context, and symbolic reasoning. For instance, Gupta et al.~\cite{gupta2019neural} proposed Neural Module Networks (NMN) to solve tricky question answering tasks, where the reasoning process over texts involves both natural language reasoning and symbolic reasoning. Natural language reasoning can be seen as the process of information extraction from texts, and symbolic reasoning is the reasoning process based on the extracted, structured information. Compared with NMN, which customises modules based on tasks, Compositional Attention Networks (MAC) \cite{Hudson_Manning_2018} is a soft-attention architecture (i.e. computing attention weights over all the data) providing a more universal and reusable architecture with shared parameters, and it is end-to-end differentiable. The Neural State Machine (NSM) \cite{hudson2019learning} has a reasoning mechanism similar to that of MAC, but it represents the given content as a probability distribution over a scene graph, which shows powerful generalisation capacity across multiple tasks. The Neuro-Symbolic Concept Learner (NS-CL) \cite{mao2019neuro} has three modules: extracting and expressing targets as fixed-length vectors, comparing the similarity of objects and contents, and execution and analysis with a curriculum learning approach; the modules are efficient and even achieve good results with a smaller quantity of data.
By combining neural networks with symbolic systems, models can be improved effectively where they are weakest: promoting data efficiency, since modules are reusable across multiple tasks; becoming interpretable or human-understandable to some extent, owing to the human-like reasoning process; and facilitating generalisation capacity, thanks to high-level, abstract representations. In our work, we modify the architectures of GNN and sequence encoding models (i.e. CNN, RNN) to achieve the neuro-symbolic reasoning process (see Section \ref{char3}), and based on their types of input, we separate them into two classes: the graph-based model and the sequence-based model. \section{Assessing Systematic Generalisation \label{char3}} Two types of model, the E-GNN (graph-based) model and the L-Graph (sequence-based) model, are evaluated on the CLUTRR datasets for both systematic generalisation and robust reasoning performance. \subsection{Datasets} As in \cite{sinha2019clutrr}, we use the same pre-generated datasets \footnote{https://github.com/facebookresearch/clutrr/} to evaluate our models, which also makes it easier and more transparent to compare our models' performance with that in the paper. There are six datasets in total, separated into two groups: two of them (named "data\_089907f8" and "data\_db9b8f04") are used for systematic generalisation evaluation, and the rest (named "data\_7c5b0e70", "data\_06b8f2a1", "data\_523348e6" and "data\_d83ecc3e") are used for robust reasoning evaluation. We test our models' systematic generalisation capacity with clauses of length $k$ = 2, 3, $\cdots$, 10 for both datasets, but the training process has two regimes: clauses of length $k$ = 2, 3 in "data\_089907f8" and $k$ = 2, 3, 4 in "data\_db9b8f04". An example of a train and test instance in CLUTRR is shown in \textbf{Fig. \ref{Fig.ttex}}. \begin{figure}[H] \includegraphics[width=1\textwidth]{ttex.png} \caption{An example of a train and test instance in CLUTRR.
Training is over an instance with clauses of length k=2 (left), and testing over an instance with clauses of length k=10 (right). All names are replaced with capital letters, since they are interchangeable and irrelevant to the reasoning process; relationships are shown beside the line between two nodes. The task (or query) is to identify the relationship between the two nodes linked by the red dashed line, based on the given supporting facts.} \label{Fig.ttex} \end{figure} \subsection{E-GNN \label{Graph-based}} In order to handle multi-relational graphs and better learn the relationship between two entities, we modified the standard Graph Neural Network (GNN) architecture. Entities and relationships are modelled as nodes and edges, respectively, in the graph, and we concatenate relationship information onto the entities so that the model can also consider the edge attributes during encoding. In other words, the graph-based mechanism embeds a query on the basis of the supporting facts and seeks the target, so we can present this type of model in the form: $$\hat{y} = softmax\left(\mathbf{W}[emb(\mathcal{F}) \, \Vert \, emb(\mathcal{Q}\, \vert \, \mathcal{F})]\right)$$, where $emb(\cdot)$ denotes an embedding process working on a set of ground facts, namely the supporting facts ($\mathcal{F}$) or a query conditioned on the supporting facts ($\mathcal{Q\, \vert \,F}$); $"\, \Vert \,"$ denotes the concatenation of two items; $\mathbf{W}$ represents a weight matrix; and $\hat{y}$ represents the distribution over all possible relationships obtained after the softmax function, from which we choose the relationship type with the highest probability as the predicted target. After processing with GNN models, $emb(\mathcal{F})$ is the representation of each node in the graph with shape [$B \, \times \, N \, \times \, emb_{dim}$], where $B$ is the batch size, $N$ is the number of nodes and $emb_{dim}$ is the embedding dimension. $emb(\mathcal{Q\, \vert \,F})$ is the representation of each query, and it comes from $emb(\mathcal{F})$.
Here, we gather the representations of the nodes appearing in the query from $emb(\mathcal{F})$, giving shape [$B \, \times \, (2*emb_{dim})$], where "2" stands for the two entities/nodes in a query. Besides, when concatenating, $emb(\mathcal{F})$ is reshaped into [$B \, \times \, emb_{dim}$] by averaging over the nodes, and we obtain the shape [$B \, \times \, (3*emb_{dim})$] in the end. To discuss this in more detail, we need to look at the message passing framework used in the GNN model. \subsubsection{Message Passing Framework} When handling graph data, we typically perform the convolution operation in a neighbourhood aggregation or message passing scheme, which can be expressed as $$\mathbf{x}_i^{(k)} = \textbf{UPDATE} \left(\mathbf{x}_i, \textbf{ AGGR}_{j\in \mathcal{N}(i)} \textbf{ MESSAGE}^{(k)} (\mathbf{x}_i^{(k-1)}, \mathbf{x}_j^{(k-1)}, \mathbf{e}_{ij}) \right) $$ , where $\mathbf{x}_i^{(k-1)}$ denotes the state of the current node $i$ in the $(k-1)^{th}$ layer; $\mathbf{x}_j^{(k-1)}$ denotes the state of the neighbour node $j$ in the $(k-1)^{th}$ layer; $\mathbf{e}_{ij}$ denotes the edge features from node $j$ to node $i$; $\mathcal{N}(i)$ denotes the neighbourhood set of node $i$ (i.e. 1-hop neighbours); $\textbf{MESSAGE}$ denotes a differentiable message-generation function, which can be realised in various ways (e.g. multilayer perceptrons) and produces the embedding vector of each pair of nodes and their edge; $\textbf{AGGR}$ denotes the aggregation function (e.g. sum, mean, max), which is differentiable and permutation invariant; $\textbf{UPDATE}$ denotes the state update function, which usually performs bias-term addition, linear transformation or multi-head processing (e.g. concatenation) to update what we obtain from the aggregation process.
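A minimal, illustrative sketch of this edge-aware message passing step follows; plain Python with sum aggregation and identity weights stands in for the learned $\mathbf{\Theta}$ and $\textbf{UPDATE}$, so this is a deliberate simplification of the trained models:

```python
def message_passing_step(x, edges):
    """One round of the scheme above with MESSAGE = x_j || e_ij,
    AGGR = elementwise sum, and UPDATE = identity (no learned weights).

    x:     dict mapping node -> feature list
    edges: list of (src, dst, edge_feature_list) triples
    Returns a dict mapping node -> aggregated [x_j || e_ij] message vector.
    """
    dim = len(next(iter(x.values()))) + len(edges[0][2])
    out = {node: [0.0] * dim for node in x}
    for src, dst, e in edges:
        msg = x[src] + e  # list concatenation, mirroring (Theta x_j) || e_ij
        out[dst] = [a + m for a, m in zip(out[dst], msg)]
    return out

# Tiny kinship-style graph: A -(mother)-> B, C -(father)-> B.
x = {"A": [1.0, 0.0], "B": [0.0, 1.0], "C": [1.0, 1.0]}
edges = [("A", "B", [1.0, 0.0]),   # edge feature encoding "mother"
         ("C", "B", [0.0, 1.0])]   # edge feature encoding "father"
h = message_passing_step(x, edges)
```

Node B receives and sums two concatenated messages, while A and C, having no incoming edges in this toy graph, keep zero messages; in the real model a learned update matrix then projects these vectors to the required dimension.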
In short, this framework has two steps: first, gather the state messages from the neighbour nodes and generate the current node's message by applying the chosen aggregation function; second, based on the message obtained so far, update the state of the target node. In our work, the modification consists of two parts. In the $\textbf{MESSAGE}$ part, we have $$\mathbf{\Theta}\mathbf{x}_j \rightarrow (\mathbf{\Theta}\mathbf{x}_j) \, \Vert \, \mathbf{e_{ij}}$$, where $\mathbf{\Theta}$ is the weight matrix and $" \, \Vert \,"$ denotes concatenation. We concatenate the edge representation $\mathbf{e_{ij}}$ to each corresponding node $\mathbf{x}_j$, and then in the $\mathbf{UPDATE}$ part an edge update matrix is multiplied in to obtain the dimension we expect (i.e. the number of relationship types). The graph-based models used in this work are: Graph Convolutional Networks (GCN) \cite{kipf2016semi}, Graph Attentional Networks (GAT) \cite{velivckovic2017graph}, Simple Graph Convolutional Networks (SGCN) \cite{wu2019simplifying}, Attention-based Graph Neural Networks (AGNN) \cite{thekumparampil2018attention}, and a special one, Relational Graph Convolutional Networks (RGCN) \cite{schlichtkrull2018modeling}. Note that RGCN does not need to be modified, since it already considers the relationship between two entities; we include it here as one kind of graph-based model. \subsection{L-Graph}\label{Sequence-based} Instead of conditioning the query representation on the supporting facts as in the graph-based model, we process the facts and the query independently. Inputs to the sequence-based encoder are subject-predicate-object (SPO) triples, obtained by linearising a relational graph extracted from the supporting facts within a story, as in \cite{minervini2020learning}. The supporting facts within a story can in fact be seen as a reasoning path; we only pay attention to its nodes (i.e. entities) and edges (i.e. relationships).
Therefore, the subject and object correspond to two entities, and the predicate is the relationship between these two entities. After extracting all SPO triples in a story, we directly put them (i.e. the SPO sequence) into standard sequence models (e.g. CNN, RNN), since they are ordered and can be seen as a short, key sentence compared to the original story. Similarly, we represent the query as an SO sequence, which is similar to an SPO sequence but lacks the predicate/target (i.e. the relationship), and put it into the sequence models as well. Finally, after concatenation and the softmax function, we predict the relationship by choosing the highest value. To be specific, the sequence-based model does not process the query conditioned on the supporting facts; in contrast, the supporting facts and the query are handled independently: $$\hat{y} = softmax\left(\mathbf{W}[emb(\mathcal{F}) \, \Vert \, emb(\mathcal{Q})]\right)$$ where $"emb"$ denotes an embedding process working on a set of ground facts, namely the supporting facts ($\mathcal{F}$) or a query ($\mathcal{Q}$); $"\, \Vert \,"$ denotes the concatenation of the two items; $\mathbf{W}$ represents a weight matrix; and $\hat{y}$ represents the distribution over all possible relationships obtained after the softmax function, from which we choose the relationship type with the highest probability as the predicted target. Here, $emb(\mathcal{F})$ results from processing the SPO sequences with sequence encoding models, such as CNN and RNN, while $emb(\mathcal{Q})$ is obtained in the same way but without the predicate in the inputs (i.e. from the SO sequences). The shape of both is [$B \, \times \, emb_{dim}^*$], where $B$ is the batch size and $emb_{dim}^*$ is the embedding dimension for each story (consisting of SPO sequences and SO sequences) under the given sequence model, giving [$B \, \times \, (2*emb_{dim}^*)$] after concatenation.
For example, when the LSTM \cite{Hochreiter_Schmidhuber_1997} is used with hidden size = 100, we get [$B \, \times \, 100$] for both processes and [$B \, \times \, 200$] after concatenation, and if we use the Bidirectional Long Short-Term Memory Networks (Bi-LSTM) \cite{thireou2007bidirectional}, $emb_{dim}^*$ becomes twice that of the LSTM. We consider several sequence encoding models, namely Recurrent Neural Networks (RNN) \cite{Rumelhart_Hinton_Williams_1986}, Long Short-Term Memory Networks (LSTM) \cite{Hochreiter_Schmidhuber_1997}, Gated Recurrent Units (GRU) \cite{cho2014learning}, Bidirectional Recurrent Neural Networks (Bi-RNN) \cite{schuster1997bidirectional}, Bidirectional Long Short-Term Memory Networks (Bi-LSTM) \cite{thireou2007bidirectional}, Bidirectional Gated Recurrent Units (Bi-GRU) \cite{vukotic2016step}, Convolutional Neural Networks (CNN) \cite{LeCun1989}, CNN with Highway Encoders (CNNH) \cite{kim2015character}, and Multi-Headed Attention Networks (MHA) \cite{vaswani2017attention}. All related code can easily be built on "torch.nn" or the NLP research library "AllenNLP" \cite{Gardner2017AllenNLP}. \section{Experimental Results \label{char4}} We build several baselines for evaluating systematic generalisation and robust reasoning capacity on the pre-generated CLUTRR datasets. The models' performance is compared against each other, including the "text-based" models mentioned in \cite{sinha2019clutrr}. The prefix "graph\_" used for the sequence-based model (L-Graph) distinguishes it from the text-based model. \subsection{Systematic Generalisation Evaluation} We have conducted experiments on two pre-generated CLUTRR datasets, "data\_089907f8" and "data\_db9b8f04", to compare each model's systematic generalisation capacity. Due to limited space, only partial results under the maximising-validation-accuracy metric on "data\_db9b8f04" are illustrated.
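Before turning to the numbers, the prediction head shared by both model families, described in the previous section (concatenate the fact and query embeddings, apply a linear layer $\mathbf{W}$, then softmax), can be sketched in plain Python; the dimensions and weights below are toy values for illustration only:

```python
import math
import random

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [v / s for v in exps]

def predict_relation(emb_facts, emb_query, W):
    """y_hat = softmax(W [emb(F) || emb(Q)]): score every relation type
    from the concatenated fact and query embeddings."""
    h = emb_facts + emb_query  # concatenation of the two embeddings
    logits = [sum(w * v for w, v in zip(row, h)) for row in W]
    return softmax(logits)

# Toy setup: 2-dim fact/query embeddings and 3 relation types; W is a
# random 3 x 4 matrix standing in for the learned classifier weights.
random.seed(0)
W = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
probs = predict_relation([0.5, -0.2], [0.1, 0.9], W)
predicted = max(range(len(probs)), key=lambda r: probs[r])
```

The predicted relationship is simply the index with the highest probability, exactly the "choose the highest value" step described for both E-GNN and L-Graph.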
Numerical details for all models under both the minimising-validation-loss metric and the maximising-validation-accuracy metric are not shown here, nor are the optimal hyperparameters. \begin{figure}[H] \centering \includegraphics[width=1\textwidth]{1.2.pdf} \caption{Systematic generalisation performance of graph-based models when trained on clauses of length k = 2, 3, 4.\\ In ascending order, the mean test accuracies of the graph-based models are: 0.7255 (gcn), 0.7812 (sgcn), 0.7858 (gat), 0.8258 (rgcn) and 0.8442 (agnn); the test accuracies for clause length $k$ = 10 are: 0.3196 (gcn), 0.4591 (gat), 0.4975 (sgcn), 0.571 (rgcn) and 0.617 (agnn).} \label{Fig.1.2} \end{figure} The gap between the two types of model is quite evident in \textbf{Fig. \ref{Fig.1.2}}. Graph-based models achieve accuracy above 0.9 on clause lengths 2, 3 and 4, and then drop significantly up to clause length 10. Among them, AGNN can be regarded as the best model; it obtains the highest value in most situations, even at length 10 with a figure of 0.617. In contrast to the monotonic decline of the graph-based models, the text-based models' test accuracy remains level over lengths from 4 to 10. Although the graph-based models perform much better than the text-based ones in most scenarios, they decrease very fast: at most about 0.6 accuracy can be obtained at length 10, and the worst model reaches only about 0.3.
\begin{figure}[H] \centering \includegraphics[width=1\textwidth]{2.2.pdf} \caption{Systematic generalisation performance of common sequence models when trained on clauses of length k = 2, 3, 4.\\ In ascending order, the mean test accuracies of the sequence-based models are: 0.3023 (graph\_cnn), 0.3265 (graph\_cnnh), 0.3764 (graph\_boe), 0.9447 (graph\_lstm), 0.9571 (graph\_rnn) and 0.9721 (graph\_gru); the test accuracies at clause length $k$ = 10 are: 0.1473 (graph\_cnn), 0.1509 (graph\_boe), 0.171 (graph\_cnnh), 0.8679 (graph\_lstm), 0.9116 (graph\_rnn) and 0.9241 (graph\_gru).} \label{Fig.2.2} \end{figure} RNN, LSTM and GRU are much more stable, with high test accuracy in \textbf{Fig. \ref{Fig.2.2}} (trained on clauses of length $k$ = 2, 3, 4) and nearly 0.9 test accuracy on stories of clause length 10. However, BOE, CNN and CNNH decline at a remarkable speed, from nearly 1.0 test accuracy at clause length 2 to only about 0.1 at clause length 3, whereas the text-based models do not show such a drop. \subsection{Robust Reasoning Evaluation} We also conducted experiments on four pre-generated CLUTRR datasets, "data\_7c5b0e70", "data\_06b8f2a1", "data\_523348e6" and "data\_d83ecc3e", to compare each model's robust reasoning capacity. Numerical details for all models under both the minimising-validation-loss metric and the maximising-validation-accuracy metric, as well as the optimal hyperparameters, are again omitted. \section{Conclusions} In this paper, we implemented and evaluated two neuro-symbolic reasoning architectures for testing the systematic generalisation and inductive reasoning capabilities of NLU systems, namely the E-GNN (graph-based) model and the L-Graph (sequence-based) model. Both types of model were trained on the CLUTRR datasets, and we presented an extensive set of experimental results covering both generalisation and robustness.
Most models perform well on inputs with short clause lengths; however, as the clause length increases at test time, the performance of all models declines monotonically, especially for the graph-based models, which confirms the challenge of "zero-shot" systematic generalisation \cite{lake2018generalization,sodhani2018training}. Among them, the sequence-based models with recurrent architectures outperform the graph-based models, the typical sequence models and the multi-head attention model in terms of generalisation capacity, retaining around 90\% accuracy on clauses of length 10 when trained on clauses of length 2, 3 and 4. Despite that, their robustness is not as good as that of the graph-based models, and they sometimes even fail to recover any correct kinship. These results highlight several points: graph-based models with attention architectures can capture the patterns in the data and resist diverse kinds of noise; sequence-based models can exploit SPO sequences to predict the kinship between entities, but this strategy has a fatal flaw in that it is very sensitive to the order of the sequences; bidirectional settings and the multi-head attention architecture offered little advantage over the original architectures; and although the models were trained on data with added noise, most of them can pick up some linguistic cues (e.g. gender) and predict well under that type of noise on clauses of length $k$ = 2, while testing on clauses of length $k$ = 3 remains a challenge for some models. Structured input plays a significant role in all these models. Compared with text-based models, the performance of models with structured inputs improves significantly; in particular, models with recurrent networks, such as LSTM and GRU, show remarkably high accuracy in the systematic generalisation tests.
Moreover, this phenomenon also confirms that there is a gap between reasoning models trained on structured inputs and those trained on unstructured natural language. \bibliographystyle{splncs04}
\section{Introduction} Massive multiple-input multiple-output (MIMO) is a key component of 5G since this technology can improve the ergodic spectral efficiency (SE) by orders of magnitude compared with single-antenna systems. This is achieved by equipping each base station (BS) with a large number of antennas so that the system can spatially multiplex tens of users on the same time and frequency resource \cite{Marzetta2016a,le2020pareto}. The quasi-orthogonality of the user channels allows simple linear beamforming to yield an SE close to the channel capacity in single-cell systems, where orthogonal pilot signals are assumed to be available for all users. In cellular Massive MIMO systems this assumption is impractical, because the pilot overhead is directly proportional to the total number of users \cite{Chien2018a} while the coherence interval is limited. A small set of orthogonal pilot signals must therefore be reused among the cells, which results in mutually correlated interference, known as pilot contamination, that degrades the ergodic SE \cite{Bjornson2017bo}. A key observation for mitigating pilot contamination is that some users cause more severe contamination to each other when they use the same pilot. Such pairings should be avoided in the pilot assignment: the system can reuse the pilot signals in a way that gives these users priority for orthogonal pilots. Nonetheless, an optimal pilot assignment is expensive to find since it requires solving a combinatorial problem, so heuristic algorithms with affordable complexity are necessary in practice to mitigate pilot contamination at a reasonable cost. It should be noted that most previous works only consider the pilot assignment for either the uplink or the downlink transmission; see \cite{Chien2018a, Jin2015a} and references therein.
The authors of \cite{marinello2017joint} were the first to propose a heuristic pilot assignment algorithm taking both uplink and downlink into account, but it is based on the asymptotic SE for uncorrelated Rayleigh fading, which behaves differently from the capacity regime with a limited number of BS antennas and spatially correlated channels. Moreover, the number of pilot-assignment combinations grows rapidly with the total number of users in all cells, making exhaustive evaluation intractable. Motivated by the fact that uncorrelated channels rarely appear in practice \cite{Gao2015a}, pilot assignment for spatially correlated channels was considered in \cite{Chien2018a, You2015a}, but pilot assignment that jointly enhances the uplink and downlink SE has not been considered in this context. In this paper, we assign the pilot signals to maximize the minimum weighted sum of the uplink and downlink SE per user with spatially correlated channels. This optimization problem is flexible since the weights can be used to assign different priorities to the downlink and uplink. The problem is based only on statistical channel information, so the obtained solution can be utilized as long as the statistics remain the same. Since this optimization problem is combinatorial and NP-hard, we propose a heuristic pilot assignment that works well for systems with a limited number of BS antennas. The obtained pilot assignment solution outperforms the benchmarks from previous works. \textit{Notation}: Bold upper- and lower-case letters denote matrices and vectors, respectively. The notation $(\cdot)^H$ is the Hermitian transpose. $\mathbb{E} \{ \cdot \}$ is the expectation of a random variable, while $\mathcal{CN}(\cdot,\cdot)$ denotes the circularly symmetric complex Gaussian distribution. The identity matrix of size $M\times M$ is denoted by $\mathbf{I}_M$.
Finally, $\mathrm{tr}(\cdot)$ is the trace operator, $\| \cdot \|$ is the Euclidean norm, and $\mathcal{O}(\cdot)$ denotes the big-$\mathcal{O}$ notation. \section{System Model} \label{Section: System Model} We consider a multi-cell Massive MIMO system comprising $L$ cells, each with an $M$-antenna BS serving $K$ single-antenna users. The system uses a time-division duplexing protocol. Let $\tau_c$ be the length of each coherence block, whereof $\tau_p$ symbols are used for the uplink training and the remainder for the data transmission. We denote by $\gamma^{\mathrm{ul}}$ and $\gamma^{\mathrm{dl}}$ the fractions of the $\tau_c - \tau_p$ symbols used for the uplink and downlink data transmission, satisfying $\gamma^{\mathrm{ul}} + \gamma^{\mathrm{dl}} =1$. The set $\mathcal{S}$ contains all tuples of cell and user indices in the system, \begin{equation} \mathcal{S} = \left\{ (i,t): i \in \{1, \ldots, L\}, t \in \{1, \ldots, K \} \right\}. \end{equation} The channel between user~$t$ in cell~$i$ and BS~$l$ is denoted as $\mathbf{h}_{i,t}^l \in \mathbb{C}^M$ and is assumed to follow correlated Rayleigh fading: $\mathbf{h}_{i,t}^l \sim \mathcal{CN} (\mathbf{0}, \mathbf{R}_{i,t}^l)$, where $\mathbf{R}_{i,t}^l \in \mathbb{C}^{M \times M}$ is the channel correlation matrix. All BSs know the channel statistics, but need to estimate the channel realizations in each coherence block.\footnote{For simplicity, we assume that the channel correlation matrices are known. In practical systems, they can be estimated by averaging over many instantaneous channel realizations.} \subsection{Uplink training phase} Let us introduce a set $\mathcal{P}$ of $\tau_p$ mutually orthogonal pilot signals, $K \leq \tau_p \leq KL,$ reused among the users. The pilot signal assigned to user~$t$ in cell~$i$ is denoted as $\pmb{\psi}_{i,t}$.
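As a side note, correlated Rayleigh channel realizations $\mathbf{h}_{i,t}^l \sim \mathcal{CN}(\mathbf{0}, \mathbf{R}_{i,t}^l)$ can be generated by coloring i.i.d. complex Gaussian vectors with a matrix square root of the correlation matrix; a minimal NumPy sketch with toy dimensions and an illustrative correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_rayleigh(R, n, rng):
    """Draw n realizations of h ~ CN(0, R) via the Hermitian square root of R."""
    w, U = np.linalg.eigh(R)
    R_half = U @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ U.conj().T
    z = (rng.standard_normal((R.shape[0], n))
         + 1j * rng.standard_normal((R.shape[0], n))) / np.sqrt(2.0)
    return R_half @ z  # each column is one channel realization

M = 4
# Toy correlation matrix (any positive semidefinite R would do here).
R = 0.5 ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))
H = correlated_rayleigh(R, 100_000, rng)
R_emp = H @ H.conj().T / H.shape[1]  # empirical covariance, close to R
```

The empirical covariance `R_emp` approaches `R` as the number of realizations grows.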
In the uplink training phase, the received baseband pilot signal $\mathbf{Y}_{p,l} \in \mathbb{C}^{M \times \tau_p}$ at BS~$l$ is \begin{equation}\label{eq:Ylk} \mathbf{Y}_{p,l} = \sum_{(i,t) \in \mathcal{S} } \mathbf{h}_{i,t}^l \pmb{\psi}_{i,t}^H + \mathbf{N}_{p,l}, \end{equation} where $\mathbf{N}_{p,l}$ is an $M \times \tau_p$ noise matrix with independent elements distributed as $\mathcal{CN}(0, \sigma_{\mathrm{ul}}^2)$, with $\sigma_{\mathrm{ul}}^2$ being the noise variance in the uplink. The channel estimate $\hat{\mathbf{h}}_{l,k}^l$ of $\mathbf{h}_{l,k}^l$ is obtained from \eqref{eq:Ylk} by MMSE estimation \cite{Bjornson2017bo} as \begin{equation} \label{eq:ChannelEst} \begin{split} \hat{\mathbf{h}}_{l,k}^l &= \| \pmb{\psi}_{l,k} \|^2 \mathbf{R}_{l,k}^l \mathbf{F}_{l,k}^{-1} \mathbf{Y}_{p,l} \pmb{\psi}_{l,k}, \end{split} \end{equation} where $\mathbf{F}_{l,k}$ is given as \begin{equation} \mathbf{F}_{l,k} = \sum_{(i,t) \in \mathcal{S} } \mathbf{R}_{i,t} ^l | \pmb{\psi}_{i,t}^H \pmb{\psi}_{l,k} |^2 + \sigma_{\mathrm{ul}}^2 \| \pmb{\psi}_{l,k} \|^2 \mathbf{I}_M. \end{equation} For all $l,k,$ the channel estimates are distributed as \begin{equation} \label{eq:DistChannelEst} \hat{\mathbf{h}}_{l,k}^l \sim \mathcal{CN} \left( \mathbf{0}, \| \pmb{\psi}_{l,k} \|^4 \mathbf{R}_{l,k}^l \mathbf{F}_{l,k}^{-1} \mathbf{R}_{l,k}^l \right). \end{equation} The channel estimate in \eqref{eq:ChannelEst}, together with its statistics in \eqref{eq:DistChannelEst}, is used to formulate the linear processing vectors for the data transmission and to compute closed-form ergodic SE expressions for each user. \subsection{Data transmission} In the uplink data transmission, the $K$ users in each cell simultaneously send data to the serving BS. Specifically, user $k$ in cell $l$ sends a complex data symbol $s_{l,k}$ with $\mathbb{E} \{ |s_{l,k} |^2\} = 1$.
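To make the estimator concrete, the following NumPy sketch applies \eqref{eq:ChannelEst} in a toy single-user setting (the multi-user sum in \eqref{eq:Ylk} is dropped for brevity; all numbers are illustrative) and checks the empirical estimation error against the value implied by \eqref{eq:DistChannelEst}:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative toy setting: one user, M antennas, pilot length tau_p.
M, tau_p, sigma2_ul = 4, 2, 0.5
psi = np.zeros(tau_p, dtype=complex)
psi[0] = np.sqrt(tau_p)                      # ||psi||^2 = tau_p
R = (0.5 ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))).astype(complex)

# F for a single user, cf. eq. (4): R |psi^H psi|^2 + sigma^2 ||psi||^2 I.
norm2 = np.linalg.norm(psi) ** 2
F = R * norm2 ** 2 + sigma2_ul * norm2 * np.eye(M)

# Monte Carlo over channel and noise realizations.
n = 100_000
w, U = np.linalg.eigh(R)
R_half = U @ np.diag(np.sqrt(np.maximum(w.real, 0.0))) @ U.conj().T
H = R_half @ (rng.standard_normal((M, n)) + 1j * rng.standard_normal((M, n))) / np.sqrt(2)
N = (rng.standard_normal((M, tau_p, n)) + 1j * rng.standard_normal((M, tau_p, n))) \
    * np.sqrt(sigma2_ul / 2)
Yp_psi = H * norm2 + np.einsum('mtn,t->mn', N, psi)    # Y_p psi from eq. (2)
H_hat = norm2 * (R @ np.linalg.inv(F)) @ Yp_psi        # MMSE estimate, eq. (3)

# Empirical MSE vs. tr(R) - ||psi||^4 tr(R F^{-1} R) implied by eq. (5).
mse_emp = np.mean(np.sum(np.abs(H - H_hat) ** 2, axis=0))
mse_theory = np.trace(R - norm2 ** 2 * R @ np.linalg.inv(F) @ R).real
```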
The received signal $\mathbf{y}_l \in \mathbb{C}^M$ at BS~$l$ is \begin{equation} \mathbf{y}_l = \sum_{(i,t) \in \mathcal{S} } \sqrt{p_{i,t}^{\mathrm{ul}}} \mathbf{h}_{i,t}^l s_{i,t} + \mathbf{n}_l, \end{equation} where $p_{i,t}^{\mathrm{ul}}$ is the transmit data power and $\mathbf{n}_l \in \mathbb{C}^M$ is the uplink Gaussian noise with $\mathbf{n}_l \sim \mathcal{CN}(\mathbf{0}, \sigma_{\mathrm{ul}}^2 \mathbf{I}_M )$. We assume that BS~$l$ detects the desired signal from its user~$k$ by utilizing the maximum-ratio combining vector \begin{equation} \mathbf{v}_{l,k} = \hat{\mathbf{h}}_{l,k}^l. \end{equation} The desired signal is then obtained from \begin{equation} \label{eq: DetectedSignal} \begin{split} & \mathbf{v}_{l,k}^{H} \mathbf{y}_l = \sum_{(i,t) \in \mathcal{S} } \sqrt{p_{i,t}^{\mathrm{ul}}} \hat{\mathbf{h}}_{l,k}^{l,H} \mathbf{h}_{i,t}^l s_{i,t} + \hat{\mathbf{h}}_{l,k}^{l,H} \mathbf{n}_l. \end{split} \end{equation} In the downlink data transmission, BS~$l$ transmits a signal $\mathbf{x}_l \in \mathbb{C}^M$ to its $K$ users, which is formulated as \begin{equation} \mathbf{x}_l = \sum_{t=1}^K \sqrt{p_{l,t}^{\mathrm{dl}}} \mathbf{w}_{l,t} q_{l,t}, \end{equation} where $p_{l,t}^{\mathrm{dl}}$ is the power allocated to the data symbol $q_{l,t}$ with $\mathbb{E} \{ |q_{l,t}|^2 \} = 1$. The maximum ratio (MR) precoding vector \begin{equation} \label{eq:NormalizedMR} \mathbf{w}_{l,t} = \frac{\hat{\mathbf{h}}_{l,t}^l}{\sqrt{ \mathbb{E} \left\{ \| \hat{\mathbf{h}}_{l,t}^l \|^2 \right\}}} = \frac{\hat{\mathbf{h}}_{l,t}^l}{ \sqrt{ \| \pmb{\psi}_{l,t} \|^4 \mathrm{tr} ( \mathbf{R}_{l,t}^l \mathbf{F}_{l,t}^{-1} \mathbf{R}_{l,t}^l)}}, \end{equation} is used.
The received signal at user $k$ in cell $l$ is a superposition of the signals from all $L$ BSs as \begin{equation} \label{eq:DLReceivedSig} \begin{split} &r_{l,k} = \sum_{(i,t) \in \mathcal{S} } \sqrt{p_{i,t}^{\mathrm{dl}}} \left(\mathbf{h}_{l,k}^{i} \right)^H \mathbf{w}_{i,t} q_{i,t} + n_{l,k}, \end{split} \end{equation} where $n_{l,k}$ denotes the additive noise, which is distributed as $n_{l,k} \sim \mathcal{CN}(0, \sigma_{\mathrm{dl}}^2)$ with $\sigma_{\mathrm{dl}}^2$ being the noise variance in the downlink. By applying the standard Massive MIMO methodology \cite{Bjornson2017bo} to \eqref{eq: DetectedSignal} and \eqref{eq:DLReceivedSig}, the closed-form expressions of the ergodic uplink and downlink SEs in Lemma~\ref{lemma:ClosedForm} are obtained. \begin{lemma} \label{lemma:ClosedForm} The closed-form expressions of the uplink and downlink ergodic SEs of user $k$ in cell $l$ are, respectively, \begin{align} R_{l,k}^{\mathrm{ul}} &= \gamma^{\mathrm{ul}} \left(1 - \frac{\tau_p}{\tau_c} \right) \log_2 \left( 1 + \mathrm{SINR}_{l,k}^{\mathrm{ul}} \right), \label{eq:ClosedULRate}\\ R_{l,k}^{\mathrm{dl}} &= \gamma^{\mathrm{dl}} \left(1 - \frac{\tau_p}{\tau_c} \right) \log_2 \left(1 + \mathrm{SINR}_{l,k}^{\mathrm{dl}} \right), \label{eq:DLCorrelatedRate} \end{align} where the effective SINR values are given in \eqref{eq:ULSINR} and \eqref{eq:DLSINR}.
\begin{figure*} \begin{equation} \label{eq:ULSINR} \mathrm{SINR}_{l,k}^{\mathrm{ul}} = \frac{p_{l,k}^{\mathrm{ul}} \| \pmb{\psi}_{l,k} \|^4 \mathrm{tr} ( \mathbf{R}_{l,k}^l \mathbf{F}_{l,k}^{-1} \mathbf{R}_{l,k}^l ) }{ \sum_{(i,t) \in \mathcal{S} \setminus \{(l,k) \}} p_{i,t}^{\mathrm{ul}} | \pmb{\psi}_{l,k}^H \pmb{\psi}_{i,t} |^2 \frac{| \mathrm{tr} \left( \mathbf{R}_{i,t}^l \mathbf{F}_{l,k}^{-1} \mathbf{R}_{l,k}^l \right) |^2}{\mathrm{tr} (\mathbf{R}_{l,k}^{l} \mathbf{F}_{l,k}^{-1} \mathbf{R}_{l,k}^{l} )} + \sum_{(i,t) \in \mathcal{S} } p_{i,t}^{\mathrm{ul}} \frac{\mathrm{tr} ( \mathbf{R}_{i,t}^l \mathbf{R}_{l,k}^{l} \mathbf{F}_{l,k}^{-1} \mathbf{R}_{l,k}^{l} ) }{\mathrm{tr} (\mathbf{R}_{l,k}^{l} \mathbf{F}_{l,k}^{-1} \mathbf{R}_{l,k}^{l} )} + \sigma_{\mathrm{ul}}^2} \end{equation} \begin{equation} \label{eq:DLSINR} \mathrm{SINR}_{l,k}^{\mathrm{dl}} = \frac{p_{l,k}^{\mathrm{dl}} \| \pmb{\psi}_{l,k} \|^4 \mathrm{tr} ( \mathbf{R}_{l,k}^l \mathbf{F}_{l,k}^{-1} \mathbf{R}_{l,k}^l ) }{ \sum_{(i,t) \in \mathcal{S} \setminus \{(l,k) \}} p_{i,t}^{\mathrm{dl}} |\pmb{\psi}_{l,k}^H \pmb{\psi}_{i,t}|^2 \frac{ | \mathrm{tr} ( \mathbf{R}_{i,t}^{i} \mathbf{F}_{i,t}^{-1} \mathbf{R}_{l,k}^{i}) |^2 }{\mathrm{tr} \left(\mathbf{R}_{i,t}^i \mathbf{F}_{i,t}^{-1} \mathbf{R}_{i,t}^i\right) } + \sum_{(i,t) \in \mathcal{S}} p_{i,t}^{\mathrm{dl}} \frac{ \mathrm{tr} ( \mathbf{R}_{i,t}^{i} \mathbf{F}_{i,t}^{-1} \mathbf{R}_{i,t}^{i} \mathbf{R}_{l,k}^{i}) }{\mathrm{tr} (\mathbf{R}_{i,t}^i \mathbf{F}_{i,t}^{-1} \mathbf{R}_{i,t}^i ) } + \sigma_{\mathrm{dl}}^2} \end{equation} \hrulefill \end{figure*} \end{lemma} \begin{proof} The proof follows along the lines of Corollaries 4.5 and 4.9 in \cite{Bjornson2017bo}, except for the different notation and the fact that the pilot reuse pattern is arbitrary and not defined in advance. \end{proof} In the SINR expressions, the numerator represents an array gain, as the trace of the covariance matrix is proportional to the number of BS antennas.
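These closed-form SEs can be evaluated directly from the channel statistics. Below is an illustrative NumPy sketch of the uplink SINR in \eqref{eq:ULSINR}; the dictionary-based bookkeeping of the statistics is our own convention, not from the paper:

```python
import numpy as np

def ul_sinr(lk, Rs, psis, p_ul, sigma2_ul):
    """Uplink SINR of eq. (12) for user lk = (l, k).

    Rs[(i, t)] holds R_{i,t}^l (statistics seen at BS l), psis[(i, t)] the
    pilot vector and p_ul[(i, t)] the uplink power; a sketch, not optimized.
    """
    Rlk, psi_lk = Rs[lk], psis[lk]
    M = Rlk.shape[0]
    F = sum(Rs[s] * np.abs(psis[s].conj() @ psi_lk) ** 2 for s in Rs) \
        + sigma2_ul * np.linalg.norm(psi_lk) ** 2 * np.eye(M)
    Finv = np.linalg.inv(F)
    gain = np.trace(Rlk @ Finv @ Rlk).real
    num = p_ul[lk] * np.linalg.norm(psi_lk) ** 4 * gain
    coh = sum(p_ul[s] * np.abs(psis[s].conj() @ psi_lk) ** 2
              * np.abs(np.trace(Rs[s] @ Finv @ Rlk)) ** 2 / gain
              for s in Rs if s != lk)
    noncoh = sum(p_ul[s] * np.trace(Rs[s] @ Rlk @ Finv @ Rlk).real / gain
                 for s in Rs)
    return num / (coh + noncoh + sigma2_ul)

# Two users: sharing a pilot (contamination) vs. orthogonal pilots.
M = 4
Rs = {(1, 1): np.eye(M), (2, 1): 0.5 * np.eye(M)}
p = {(1, 1): 1.0, (2, 1): 1.0}
e0, e1 = np.eye(2)[0] + 0j, np.eye(2)[1] + 0j
shared = ul_sinr((1, 1), Rs, {(1, 1): e0, (2, 1): e0}, p, 0.1)
orth = ul_sinr((1, 1), Rs, {(1, 1): e0, (2, 1): e1}, p, 0.1)
```

In the toy example, giving the two users orthogonal pilots removes the coherent interference term and improves the SINR, i.e., `orth > shared`.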
The first term of the denominator represents coherent interference originating from pilot contamination caused by pilot reuse, and it grows with the number of BS antennas. The remaining terms of the denominator are noncoherent interference and noise. While the uplink SINR of each user depends only on that user's own channel estimate, the downlink SINR contains a superposition of the channel estimation qualities of all users. The denominators of the SINR expressions in \eqref{eq:ULSINR} and \eqref{eq:DLSINR} thus indicate the different contributions of a pilot reuse pattern to the uplink and downlink data transmission. This coupled nature motivates a pilot assignment that jointly optimizes both SEs per user, instead of either the uplink or the downlink SE as in most previous works. \begin{figure}[t] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=2.3in]{Fig1} \caption{The proposed pilot assignment for the $K$ users in cell~$l$.} \label{FigPilotAssigment} \end{figure} \section{Max-Min Fairness Optimization} This section studies the pilot assignment for the weighted max-min sum-SE-per-user fairness problem with uplink and downlink transmit power constraints. Due to the inherent non-convexity, we propose a heuristic algorithm that obtains a good local solution with tolerable computational complexity.
\subsection{Problem formulation} By introducing the weights $\{w_{l,k}^{\mathrm{ul}}, w_{l,k}^{\mathrm{dl}} \}$ that prioritize the uplink and downlink transmission of an arbitrary user~$k$ in cell~$l$, the optimization problem is formulated for a given set of orthogonal pilot signals as \begin{subequations} \label{Problem:DataOptimizationv1} \begin{align} \underset{ \substack{ \{ p_{l,k}^{\mathrm{ul}} \geq 0 \}, \{p_{l,k}^{\mathrm{dl}} \geq 0 \}, \\ \{ \pmb{\psi}_{l,k} \in \mathcal{P} \} }}{\mathrm{maximize}} & \quad \underset{(l,k)}{\mathrm{min}} \quad w_{l,k}^{\mathrm{ul}} R_{l,k}^{\mathrm{ul}} + w_{l,k}^{\mathrm{dl}} R_{l,k}^{\mathrm{dl}} \\ \textrm{subject to} & \quad p_{l,k}^{\mathrm{ul}} \leq P_{\mathrm{max},l,k}^{\mathrm{ul}} \;, \forall l,k, \label{eq:ULPowerConst}\\ & \quad \sum_{k=1}^K p_{l,k}^{\mathrm{dl}} \leq P_{\mathrm{max},l}^{\mathrm{dl}} \;, \forall l, \label{eq:DLPowerConst} \end{align} \end{subequations} where $P_{\mathrm{max},l,k}^{\mathrm{ul}}$ and $P_{\mathrm{max},l}^{\mathrm{dl}}$ are the maximum powers that each user and each BS can allocate in the uplink and downlink, respectively. Problem~\eqref{Problem:DataOptimizationv1} is combinatorial, and its optimal pilot solution is obtained by an exhaustive search over all possible pilot assignments. For a pilot length of $\tau_p = K$, there are $(K!)^{L-1}$ different pilot assignments, which is impossible to evaluate in a large-scale system \cite{Chien2018a}. We notice that, by introducing weights and considering spatially correlated fading, problem~\eqref{Problem:DataOptimizationv1} is a generalization of previous works that only focus on uncorrelated Rayleigh channels for either the uplink or the downlink transmission \cite{Xu2015a}. \subsection{Heuristic pilot assignment with fixed data powers} \label{SubSec:PilotAssignment} A low-complexity heuristic pilot assignment algorithm is proposed, in which the user having the lowest weighted sum SE is prioritized.
For a given set of transmit power coefficients, problem~\eqref{Problem:DataOptimizationv1} becomes \begin{equation} \label{Problem:PilotAssignment} \begin{aligned} & \underset{ \{ \pmb{\psi}_{l,k} \} \in \mathcal{P} }{\mathrm{maximize}} && \underset{(l,k)}{\mathrm{min}} \quad w_{l,k}^{\mathrm{ul}} R_{l,k}^{\mathrm{ul}} + w_{l,k}^{\mathrm{dl}} R_{l,k}^{\mathrm{dl}}. \end{aligned} \end{equation} We assume that all $KL$ users first randomly select the pilot signals such that there is no pilot contamination inside a cell. After that, cell~$l$ reallocates the pilot signals to its $K$ users given the pilot assignment information from the other cells. If we define the weighted sum SE of user~$k$ in cell~$l$ as \begin{equation} f_{l,k} = w_{l,k}^{\mathrm{ul}} R_{l,k}^{\mathrm{ul}} + w_{l,k}^{\mathrm{dl}} R_{l,k}^{\mathrm{dl}}, \end{equation} then BS~$l$ can sort all $K$ users in ascending order as \begin{equation} \label{eq:Increasingcost} f_{l,1'} \leq f_{l,2'} \leq \ldots \leq f_{l,K'}, \end{equation} where $\{ 1', 2', \ldots, K'\}$ is a permutation of the set $\{1, 2, \ldots, K\}$. With this notation, user~$k'$ in cell~$l$ has the weighted sum SE $f_{l,k'}, \forall k' = 1, \ldots, K$. We now compute the normalized mean square error (NMSE) of user $k$ in cell~$l$, in accordance with the estimate covariance in \eqref{eq:DistChannelEst}, as \begin{equation} \label{eq:PilotContCost} \begin{split} g_{l,k} &= \frac{ \mathbb{E} \{ \| \mathbf{e}_{l,k}^l \|^2 \} }{\mathbb{E} \{ \| \mathbf{h}_{l,k}^l \|^2 \} } = 1 - \frac{\| \pmb{\psi}_{l,k} \|^4 \mathrm{tr} ( \mathbf{R}_{l,k}^l \mathbf{F}_{l,k}^{-1} \mathbf{R}_{l,k}^l )}{\mathrm{tr} (\mathbf{R}_{l,k}^l )}, \end{split} \end{equation} and then the channel estimation qualities of the $K$ users in cell~$l$ are sorted in ascending order as \begin{equation} \label{eq:PilotContOrder} g_{l,1''} \leq g_{l,2''} \leq \ldots \leq g_{l,K''}, \end{equation} where $\{ 1'', 2'', \ldots, K''\}$ is a permutation of the set $\{1, 2, \ldots, K\}$.
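A small sketch of the NMSE metric and of matching the two orderings for one toy cell (illustrative numbers; `pilot_of` is a hypothetical array holding each user's current pilot index):

```python
import numpy as np

def nmse(R, F, psi_norm2=1.0):
    """NMSE of user (l, k); the ||psi||^4 factor follows the estimate
    covariance in eq. (5)."""
    return 1.0 - psi_norm2 ** 2 * (np.trace(R @ np.linalg.inv(F) @ R)
                                   / np.trace(R)).real

# Sanity check: R = I, F = 2I, unit-norm pilot -> NMSE = 1 - 1/2.
assert abs(nmse(np.eye(2), 2 * np.eye(2)) - 0.5) < 1e-12

# Matching the two orderings for one toy cell with K = 3 users.
f = np.array([0.9, 0.2, 0.5])        # weighted sum SEs f_{l,k}
g = np.array([0.3, 0.05, 0.6])       # NMSEs g_{l,k}
pilot_of = np.array([0, 1, 2])       # hypothetical current pilot per user
order_f = np.argsort(f)              # worst weighted sum SE first
order_g = np.argsort(g)              # best channel-estimation quality first
new_pilot = np.empty_like(pilot_of)
new_pilot[order_f] = pilot_of[order_g]  # user k' receives the pilot of user k''
```

The user with the lowest weighted sum SE thus ends up with the pilot enjoying the lowest NMSE.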
The pilot signal $\pmb{\psi}_{l,k''}$ currently used by user~$k''$ is reassigned to user~$k'$. The intuition is to dedicate pilot signals that are subject to less pilot contamination to users with worse conditions, i.e., those with a smaller $f_{l,k'}$. Our proposed pilot assignment for the users in cell~$l$ is illustrated in Fig.~\ref{FigPilotAssigment}. This process is implemented cell by cell in an iterative algorithm (see Algorithm~\ref{Algorithmv1}). Since the pilot assignment is performed per cell, a new pilot reuse set may harm the minimum weighted sum SE in the entire system. To avoid this issue, we introduce a backtracking condition for assigning the pilot signals at iteration~$n$, as in Lemma~\ref{Lemma:BackTrackingCond}. \begin{lemma} \label{Lemma:BackTrackingCond} If the pilot signals are only assigned to the $K$ users in cell~$l$ when the objective function in \eqref{Problem:PilotAssignment} does not decrease, then the proposed iterative pilot assignment approach converges to a fixed point. \end{lemma} \begin{proof} Let us denote by $h_{l}^{\ast, (n)}$ and $h_{l}^{\ast, (n-1)}$ the minimum weighted sum SE per user after and before BS~$l$ reassigns the pilot signals, i.e., \begin{align} &h_l^{\ast, (n)} = \underset{(l',k)}{\mathrm{min}} \quad w_{l',k}^{\mathrm{ul}} R_{l',k}^{\mathrm{ul}, (n)} + w_{l',k}^{\mathrm{dl}} R_{l',k}^{\mathrm{dl}, (n)}, \label{eq:hln1}\\ &h_l^{\ast, (n-1)} = \underset{(l',k)}{\mathrm{min}} \quad w_{l',k}^{\mathrm{ul}} R_{l',k}^{\mathrm{ul},(n-1)} + w_{l',k}^{\mathrm{dl}} R_{l',k}^{\mathrm{dl}, (n-1)}, \label{eq:hln2} \end{align} where $R_{l',k}^{\mathrm{ul}, (n)}, R_{l',k}^{\mathrm{dl}, (n)}, R_{l',k}^{\mathrm{ul}, (n-1)},$ and $R_{l',k}^{\mathrm{dl}, (n-1)}$ are the uplink and downlink SEs at iterations~$n$ and $n-1$, respectively.
A criterion is then used to decide whether the reassignment is accepted, by checking the backtracking condition \begin{equation} \label{eq:AssignmentCriterion} h_l^{\ast, (n)} \geq h_l^{\ast, (n-1)}, \end{equation} which ensures a nondecreasing objective function in problem~\eqref{Problem:PilotAssignment}. We stress that the condition~\eqref{eq:AssignmentCriterion} needs to be checked in each iteration due to the non-convexity of \eqref{eq:hln1} and \eqref{eq:hln2}. Moreover, the limited power budgets in \eqref{eq:ULPowerConst} and \eqref{eq:DLPowerConst} ensure that this objective function is bounded from above for any set of pilot and data power coefficients in the feasible domain. Consequently, problem \eqref{Problem:PilotAssignment} converges to a fixed point, which concludes the proof. \end{proof} While assigning the pilot signals to the users over the cells, the proposed approach is stopped when the variation between two consecutive iterations is small, i.e., when \begin{equation} \label{eq:StoppingCriterion} \sum_{l=1}^L \left| h_l^{\ast, (n)} - h_l^{\ast, (n-1)} \right| \leq \epsilon, \end{equation} where $\epsilon \geq 0$ is a given accuracy. The proposed pilot assignment approach is applied to all the cells as in Algorithm~\ref{Algorithmv1}. \subsection{Data power control} \label{Subsec:ULDLPowerControl} For a given pilot assignment, problem \eqref{Problem:DataOptimizationv1} reduces to a data power control problem. To keep the complexity low, the uplink and downlink data power control can be optimized separately.
We therefore present a unified framework that applies to both links: letting $ \alpha \in \{ \mathrm{ul}, \mathrm{dl} \}$, the max-min fairness problem is formulated as \begin{equation} \label{Problem:DataOptimizationv2} \begin{aligned} & \underset{ \{ p_{l,k}^{\alpha} \geq 0 \} }{\mathrm{maximize}} && \underset{(l,k)}{\mathrm{min}} \quad w_{l,k}^{\alpha} R_{l,k}^{\alpha} \\ & \textrm{subject to} & & \mbox{Constraints in } \eqref{eq:Powerbudget}, \end{aligned} \end{equation} where the power budget constraints are \begin{equation} \label{eq:Powerbudget} \begin{cases} p_{l,k}^{\mathrm{ul}} \leq P_{\mathrm{max},l,k}^{\mathrm{ul}} \;, \forall l,k, &\mbox{for the uplink}, \\ \sum_{k=1}^K p_{l,k}^{\mathrm{dl}} \leq P_{\mathrm{max},l}^{\mathrm{dl}} \;, \forall l, &\mbox{for the downlink}. \end{cases} \end{equation} By adopting the epigraph representation \cite{Boyd2004a}, \eqref{Problem:DataOptimizationv2} is equivalently reformulated as \begin{equation} \label{Problem:DataOptimizationv5} \begin{aligned} & \underset{\{ p_{l,k}^{\alpha} \geq 0 \}, \xi }{\mathrm{maximize}} && \xi \\ & \textrm{subject to} && \mathrm{SINR}_{l,k}^{\alpha} \geq \xi \;, \forall l,k, \\ &&& \mbox{Constraints in } \eqref{eq:Powerbudget}, \end{aligned} \end{equation} where the new optimization variable $\xi \in \{ \xi^{\mathrm{ul}}, \xi^{\mathrm{dl}} \}$ is the minimum SINR per user. In \eqref{Problem:DataOptimizationv5}, the objective function and the uplink power constraints are monomials, while the SINR constraints and the downlink power constraints can be recast as posynomial constraints.
Consequently, \eqref{Problem:DataOptimizationv5} is a geometric program whose global optimum can be attained by a general-purpose optimization toolbox \cite{Chien2018a}.\footnote{We have also implemented the pilot assignment and the data power control iteratively, but no further improvement was observed.} Our proposal to obtain a local solution to \eqref{Problem:DataOptimizationv1} is summarized in Algorithm~\ref{Algorithmv1}. The computational complexity of the pilot assignment stems from sorting the users and from computing $KL$ matrix inverses; the matrix inversions dominate since each BS has many antennas. The pilot assignment therefore has a complexity in the order of $\mathcal{O} \left( \nu N_1 L^2 K M^3 \right)$, where $N_1$ is the number of iterations required to reach the fixed point and the constant $\nu$ stands for the cost of computing a matrix inverse \cite{krishnamoorthy2013matrix}. Next, the data power control by the interior-point method has a computational complexity in the order of $\mathcal{O} \left( 2N_2^\alpha\max \left\{ 2L^3 K^3, F_1 \right\} \right)$, where $F_1$ is the cost of estimating the first and second derivatives of the SINR constraints in \eqref{Problem:DataOptimizationv5}. Consequently, the total computational complexity of Algorithm~\ref{Algorithmv1} is in the order of $\mathcal{O} \left( \nu N_1 L^2 K M^3 + 2N_2^\alpha\max \left\{ 2L^3 K^3, F_1 \right\} \right)$. Since the matrix inverses $\mathbf{F}_{l,k}^{-1}, \forall l,k,$ dominate, the computational complexity of the pilot assignment together with either the uplink or the downlink data power control is $\mathcal{O} \left( \nu N_1 L^2 K M^3 + N_2^\alpha\max \left\{ 2L^3 K^3, F_1 \right\} \right)$.
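As an alternative to a general-purpose GP solver, the epigraph form \eqref{Problem:DataOptimizationv5} can also be handled by bisection over the common SINR target $\xi$ with a standard fixed-point feasibility check. The sketch below uses a simplified uplink interference model $\mathrm{SINR}_k = p_k a_k / (\sum_t b_{k,t} p_t + \sigma^2)$ and is our own illustration, not the solver used in the paper:

```python
import numpy as np

def required_powers(xi, a, B, sigma2, iters=500):
    """Fixed-point iteration p_k = xi (sum_t B[k,t] p_t + sigma2) / a_k;
    converges to the minimal feasible powers when the target xi is feasible."""
    p = np.zeros(len(a))
    for _ in range(iters):
        p = xi * (B @ p + sigma2) / a
    return p

def max_min_sinr(a, B, sigma2, p_max, hi=1e6, tol=1e-6):
    """Bisection over the common SINR target xi (the epigraph variable)."""
    lo = 0.0
    while hi - lo > tol:
        xi = 0.5 * (lo + hi)
        with np.errstate(over='ignore', invalid='ignore'):
            p = required_powers(xi, a, B, sigma2)
        if np.all(np.isfinite(p)) and np.all(p <= p_max):
            lo = xi          # feasible: raise the target
        else:
            hi = xi          # infeasible: lower the target
    return lo

# Two symmetric users with unit cross-interference: max-min SINR is 0.5.
a = np.array([1.0, 1.0])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
xi_star = max_min_sinr(a, B, sigma2=1.0, p_max=np.array([1.0, 1.0]))
```

The feasibility check relies on the standard-interference-function property of the fixed-point map, so the returned target is monotonically tightened by the bisection.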
\begin{algorithm}[t] \caption{An approach finding a fixed point to \eqref{Problem:DataOptimizationv1}}\label{Algorithmv1} \begin{algorithmic}[1] \State \textbf{Input} Set $\{ P_{\max,l,k}^{\mathrm{ul}}$, $P_{\max,l}^{\mathrm{dl}} \}, \epsilon, h_l^{\ast,(0)} =0, h_l^{\ast,(1)} =1, \forall l, $ and a preliminary pilot assignment $\{ \pmb{\psi}_{l,k}^{\ast} \}$ obtained by randomization; select initial transmit powers $\{p_{l,k}^{\mathrm{ul}}, p_{l,k}^{\mathrm{dl}} \}$; set $n=0$. \While{\eqref{eq:StoppingCriterion} unsatisfied} \State Set $n = n + 1$ \For{$l=1, \ldots, L$} \State BS~$l$ computes \eqref{eq:Increasingcost} and \eqref{eq:PilotContOrder}, then assigns the pilot signals as in Fig.~\ref{FigPilotAssigment} \State BS~$l$ verifies the backtracking condition~$h_l^{\ast, (n)} \geq h_l^{\ast, (n-1)}$. \textbf{If} it is satisfied, \textbf{then} update $h_l^{\ast,(n)}$ and broadcast the new pilot assignment $\{\pmb{\psi}_{l,k}^{\ast}\}_{k=1}^K$; \textbf{otherwise} keep the previous one. \State BS~$l$ checks the stopping condition. \textbf{If} it is not satisfied, \textbf{then} continue by setting $l = l+1$; \textbf{otherwise} go to Step~9. \EndFor \State Update the cost $\sum_{l=1}^L \big| h_l^{\ast, (n)} - h_l^{\ast, (n-1)} \big|$ for the stopping criterion \eqref{eq:StoppingCriterion}. \EndWhile \State Solve problem \eqref{Problem:DataOptimizationv5} to obtain the optimal data powers $\{ p_{l,k}^{\ast, \mathrm{ul}},p_{l,k}^{\ast, \mathrm{dl}} \}, \forall l,k.$ \State \hspace{-0.5cm}\textbf{Output} $\{ p_{l,k}^{\ast, \mathrm{ul}} $, $p_{l,k}^{\ast, \mathrm{dl}} \}$, and $\{ \pmb{\psi}_{l,k}^{\ast} \}, \forall l,k$.
\end{algorithmic} \end{algorithm} \begin{figure}[t] \centering \includegraphics[trim=0.6cm 0.1cm 0.6cm 0.5cm, clip=true, width=3.2in]{FigCovgK4B4} \caption{The convergence of the proposed pilot assignment for a network with $4$ users per cell.} \label{FigConvergence} \end{figure} \section{Numerical Results} \label{Sec:NumericalResults} We consider a network with $4$ square cells covering an area of $0.5$ km$^{2}$; the wrap-around technique is applied at the edges to avoid boundary effects, so each BS has eight neighbors. In each cell, a BS with $200$ antennas is located at the center and serves $K$ uniformly distributed users whose minimum distance to the serving BS is $35$~m. There are $K$ orthogonal pilot signals, and the maximum powers are $P_{\mathrm{max},l,k}^{\mathrm{ul}} = 200$ mW and $P_{\mathrm{max},l}^{\mathrm{dl}} = 200K$ mW, giving equal total power budgets for the uplink and downlink data transmission in line with the uplink-downlink duality \cite{Boche2002a}. The noise variance is $-96$ dBm, corresponding to a noise figure of $5$ dB. The large-scale fading coefficients are \begin{equation} \beta_{l,k}^j [\mathrm{dB}] = -148.1 - 37.6 \log_{10}(d_{l,k}^j/1 \textrm{km}) + z_{l,k}^j, \end{equation} where $d_{l,k}^j$ in km is the distance between user~$k$ in cell~$l$ and BS~$j$ \cite{Chien2018a}. The shadow fading $z_{l,k}^j$ follows a log-normal distribution with standard deviation $7$~dB.
The covariance matrix of the channel between user~$k$ in cell~$l$ and BS~$j$ is given by the exponential correlation model, which models a uniform linear array, as \begin{equation} \mathbf{R}_{l,k}^j = \beta_{l,k}^j \begin{bmatrix} 1 & r_{l,k}^{j,\ast} & \cdots & \big( r_{l,k}^{j,\ast} \big)^{M-1} \\ r_{l,k}^{j}& 1 & \cdots & \big( r_{l,k}^{j,\ast} \big)^{M-2} \\ \vdots & \vdots & \ddots & \vdots \\ \big( r_{l,k}^{j} \big)^{M-1}& \big( r_{l,k}^{j} \big)^{M-2} & \cdots & 1 \end{bmatrix}, \end{equation} where the spatial correlation is $r_{l,k}^{j}= \mu e^{j \theta_{l,k}^j}$, with correlation magnitude $\mu \in [0,1]$ and $\theta_{l,k}^j$ being the incidence angle of the user to the array boresight. By setting the weights $w_{l,k}^{\mathrm{ul}}, w_{l,k}^{\mathrm{dl}} \in \{ 0, 1 \}, \forall l,k,$ (i.e., $w_{l,k}^{\mathrm{dl}} = 0$ if only the uplink data transmission is considered; $w_{l,k}^{\mathrm{ul}} = 0$ if only the downlink data transmission is considered; $w_{l,k}^{\mathrm{ul}} = w_{l,k}^{\mathrm{dl}} = 1$ if both are considered), the following benchmarks are used for comparison:\footnote{Exhaustive search is not included for comparison due to its prohibitive complexity: one realization of user locations would require evaluating the SE of $(K!)^{L-1} = 373,248,000$ combinations of pilot signals.} \begin{itemize} \item[$1)$] \textit{Random pilot assignment (Denoted as ``Ran. Pi. Assign." in the figures):} The pilot signals are randomly assigned to all users, as used in, for example, \cite{Chien2018a}. \item[$2)$] \textit{Greedy pilot assignment (Denoted as ``Gre. Pi. Assign." in the figures):} The pilot signals are assigned based on the similarity between the covariance matrices, as proposed in \cite{You2015a}. \item[$3)$] \textit{Uplink pilot assignment (Denoted as ``UL. Pi. Only" in the figures):} The pilot signals are assigned based on the uplink SE only.
\item[$4)$] \textit{Downlink pilot assignment (Denoted as ``DL. Pi. Only" in the figures):} The pilot signals are assigned based on the downlink SE only. \item[$5)$] \textit{Pilot assignment for the joint UL/DL SE enhancement (Denoted as ``Joint UL/DL Pi. Assign." in the figures):} The pilot signals are assigned based on the weighted sum SE per user. \end{itemize} \begin{figure}[t] \centering \includegraphics[trim=0.6cm 0.2cm 0.6cm 0.7cm, clip=true, width=3.2in]{FigCovgK6B6} \caption{The convergence of the proposed pilot assignment for a network with $6$ users per cell.} \label{FigConvergencev1} \end{figure} \begin{figure}[t] \centering \includegraphics[trim=0.6cm 0.2cm 0.6cm 0.7cm, clip=true, width=3.2in]{FigCDFK4B4NoPowerCtrRelatedWork} \caption{The cumulative distribution function of the minimum sum SE per user without data power control.} \label{FigCDFNoPowerCtr} \end{figure} \begin{figure}[t] \centering \includegraphics[trim=4cm 9.2cm 3.8cm 9.7cm, clip=true, width=3.2in]{FigCDFK4B4PowerCtrRelatedWork} \caption{The cumulative distribution function of the minimum sum SE per user with data power control.} \label{FigCDFPowerCtr} \end{figure} \begin{figure}[t] \centering \includegraphics[trim=0.6cm 0.2cm 0.6cm 0.7cm, clip=true, width=3.2in]{FigAverageOptSumSEPerUser} \caption{The minimum sum SE per user versus the number of users per cell with data power control.} \label{FigAverageSE} \end{figure} Figs.~\ref{FigConvergence} and \ref{FigConvergencev1} display the convergence of the proposed pilot assignment for a network with $4$ users and $6$ users per cell, respectively. Convergence is reached within $8$ iterations for all the considered scenarios. When each BS serves $4$ users, the sum SE per user converges to about $1.4$ b/s/Hz when utilizing either the uplink or downlink SE as the utility metric to assign the pilot signals.
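The exponential spatial correlation model used above for the covariance matrices can likewise be sketched directly from its definition (illustrative parameter values; the function is our own shorthand, not part of the simulation code):

```python
import numpy as np

def exp_correlation_cov(beta, mu, theta, M):
    """Covariance R_{l,k}^j of a uniform linear array with M antennas:
    entry (m, n) equals beta * r^(m-n) for m >= n with r = mu * e^{j theta},
    and the Hermitian conjugate above the diagonal."""
    r = mu * np.exp(1j * theta)
    diff = np.arange(M)[:, None] - np.arange(M)[None, :]
    return beta * np.where(diff >= 0, r ** diff, np.conj(r) ** (-diff))

R = exp_correlation_cov(beta=1.0, mu=0.5, theta=np.pi / 6, M=4)
# R is Hermitian with unit diagonal and positive semi-definite for 0 <= mu < 1
```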
When the number of users per cell increases to $6$, however, relying on the downlink SE to assign the pilot signals yields a $2\%$ higher sum SE per user than utilizing the uplink SE. Fig.~\ref{FigCDFNoPowerCtr} shows the cumulative distribution function (CDF) of the minimum weighted SE per user for a network where each cell has $4$ users. The greedy pilot assignment gives a $1.2 \times$ better SE than the random assignment. Assigning the pilot signals based on either the uplink or downlink SE gives almost equivalent performance, and both outperform the random pilot assignment by $1.5 \times$. Meanwhile, the improvement of the joint pilot assignment is up to $39\%$ in the weighted minimum sum SE per user compared with the second-best benchmark, which confirms the effectiveness of the local solution obtained by Algorithm~\ref{Algorithmv1}. Finally, Fig.~\ref{FigCDFPowerCtr} demonstrates the benefits of data power control based on the proposed pilot assignment over the other benchmarks. We observe that the greedy pilot assignment outperforms the random pilot assignment by only $1.1\times$. Meanwhile, the joint pilot assignment obtains up to a $2.29 \times$ better weighted sum SE per user than the random pilot assignment. In addition, the data power control improves the sum SE per user by up to about $2\times$ compared with the fixed data power allocation. Fig.~\ref{FigAverageSE} plots the minimum sum SE per user versus the number of users per cell. Specifically, the minimum sum SE per user decreases when there are more users in the coverage area, since they generate more mutual interference. For instance, the minimum sum SE per user of the joint pilot assignment decreases by $1.7 \times$ as the number of users per cell increases from $2$ to $8$. We also observe the benefit of combining the joint pilot assignment and data power control, which results in a superior SE improvement of up to $1.4 \times$ compared with the random assignment.
\section{Conclusion} This paper has formulated and solved a max-min sum SE per user optimization problem considering both the pilot assignment and data power control for cellular Massive MIMO systems with correlated Rayleigh channels. We observed that the proposed pilot assignment significantly improves the minimum sum SE per user compared with related works. Interestingly, deploying only the uplink or downlink SE as side information to assign the pilots still yields a good sum SE for the weak users if the max-min fairness optimization is considered. \bibliographystyle{IEEEtran}
\section{Introduction\label{sec:1}} Vortical structures are ubiquitously observed in hydrodynamic and magnetohydrodynamic phenomena. The genesis of cyclones (typhoon, hurricane, tornado, etc.) is one of the open problems in atmospheric science. Small-scale vortical structures in turbulence are considered to be the cause of large-scale magnetic fields in geo- and astro-physical objects \cite{par1955,bra2005}. Recently the importance of the large-scale vortical motions in the dynamo process has been discussed \cite{yok2013}. It was also pointed out that a swirling structure may play an important role in channeling energy from the lower photosphere into the upper solar atmosphere \cite{wed2012}. To understand these processes better, the large-scale vorticity generation mechanisms in turbulence should be studied from various viewpoints. Turbulent kinetic helicity resulting from velocity--vorticity fluctuation correlation represents the topological or structural properties of turbulence. It has been noted that in the presence of helicity, a suppression of turbulent energy transfer may occur due to the topological constraint related to the possible conservation of kinetic helicity \cite{mof1992}. In the context of local turbulent transport, helicity is expected to play some role in momentum-transport suppression \cite{yok1993}. This is in contrast to the turbulent or eddy viscosity, which is expressed in terms of the turbulent energy (intensity information of turbulence), and represents an enhanced transport due to turbulence. In homogeneous isotropic turbulence studies, helicity has been discussed in the context of a relation to the inverse energy cascade from larger to smaller wavenumbers, or reduction of the turbulent energy cascade \cite{bri1973,kra1973}. 
Using a numerical simulation of a variant of the Eddy-Damped Quasi-Normalized Markovian (EDQNM) approximation closure equations, Andr\'{e} and Lesieur \cite{and1977} showed that helicity influences the energy transfer rate of turbulent energy towards small scales. Their results showed that helicity suppresses the energy transfer to the small scales in the early stage of evolution, but once the inertial range has been established, such suppression effects disappear. As for the recent studies on the inverse energy cascade and helicity in three-dimensional rotating and stratified turbulence, see a series of papers by Pouquet, et al.\ \cite{pou2013,mar2013a,mar2013b}. The relationship between the helicity density and the dissipation rate has been investigated in several homogeneous and pipe flow geometries. Using Direct Numerical Simulations (DNSs) of the Navier--Stokes equation in channel and Taylor--Green vortex flows, Pelz {\textit{et al.}}\ \cite{pel1985} examined the local helicity density $\langle {{\bf{u}} \cdot {\mbox{\boldmath$\omega$}}} \rangle$ [${\bf{u}}$: velocity, $\mbox{\boldmath$\omega$} (= \nabla \times {\bf{u}})$: vorticity]. They found that the alignment between the velocity and vorticity is stronger in the region where the dissipation rate is smaller. However, detailed numerical results in several homogeneous flows and fully developed turbulent channel flow by Rogers and Moin \cite{rog1987} showed no correlation between the relative helicity density $\langle {{\bf{u}} \cdot \mbox{\boldmath$\omega$}} \rangle / (|{\bf{u}}| |\mbox{\boldmath$\omega$}| )$ and the dissipation of turbulent energy. Wallace {\textit{et al.}}\ \cite{wal1992} performed an elaborate experimental study of the helicity density in a turbulent boundary-layer, a two-stream mixing-layer, and in grid-flow turbulence. 
They found that there is a tendency for the instantaneous velocity and vorticity to align in the shear flows, but concluded that there is little relationship between the small instantaneous dissipation and large helicity density except in the shear flows. Their results support the numerical results obtained by Rogers and Moin. In general, the second-order correlation tensor of the velocity for homogeneous and isotropic but non-mirrorsymmetric turbulence is expressed in terms of the energy (pure scalar) and helicity (pseudoscalar) profiles. Note that the helicity-related part never appears in the mirrorsymmetric case. It has been argued from the symmetry of the Reynolds-stress tensor that helicity itself cannot contribute to the Reynolds stress \cite{kra1974}. It has also been pointed out that the presence of turbulent helicity density alone is insufficient and some other factors breaking the symmetry, such as the compressibility \cite{moi1983,chk1988,kho1991,kit1994}, anisotropy \cite{fri1987}, mean flow, etc.\ are indispensable for the large-scale vortical flow generation. In this context, it is important to note that Gvaramadze {\textit{et al.}}\ \cite{gva1989} showed that even in incompressible turbulence, turbulent helicity may contribute to the generation of large-scale vortices through the coupling with the mean flow. Also Chkhetiany {\textit{et al.}}\ \cite{chk1994} showed the possibility of a spontaneous generation of vortical structures in homogeneous turbulent shear with helicity. Assuming the general form of the correlation functions for homogeneous isotropic and non-mirrorsymmetric turbulence profiles as the basic or lowest-order field in the framework of a closure theory for inhomogeneous turbulence, Yokoi and Yoshizawa \cite{yok1993} obtained an expression for the Reynolds stress from the fundamental fluid equations.
In this expression, the gradient of the turbulent helicity enters the Reynolds stress as a higher-order effect representing the mean-field inhomogeneities. In their formulation with a derivative expansion [see Eqs.~(\ref{eq:two-scale_diff_exp}) and (\ref{eq:diff_exp}) in \S~\ref{sec:3B}] with respect to the large-scale inhomogeneities, the gradient of helicity appears as the coupling coefficient for the mean vorticity in non-mirrorsymmetric turbulence. At the same time, it had been well recognized that the usual turbulence model with the eddy-viscosity expression for the Reynolds stress completely fails when it is applied to a turbulent swirling flow \cite{kob1987}. With the usual eddy-viscosity model, the dent or decelerated profile of the mean axial velocity near the center axis, imposed at the inlet, cannot be sustained and rapidly decays to the usual flat profile of the non-swirling pipe flow. The turbulent or eddy viscosity is too strong to sustain an inhomogeneity in the axial velocity profile. Yokoi and Yoshizawa \cite{yok1993} applied their turbulence model with the helicity effect implemented into the Reynolds stress to a turbulent swirling flow and succeeded in reproducing the dent profile in the downstream region found experimentally \cite{kit1991,ste1995}. In this sense, the inhomogeneous helicity effect has been confirmed at the level of turbulence model simulations or closure calculations. As has been referred to above, the theoretical derivation of the Reynolds-stress expression is very general and straightforward. It was based not on the heuristic assumptions for Reynolds stress modeling but on the generic expression for the turbulence fields that are isotropic and non-mirrorsymmetric in the lowest order correlation. However, the closure scheme itself contains several approximations \cite{yos1984,yok2013}. It is necessary to study carefully the inhomogeneous helicity effect in DNSs.
One of the most straightforward tests is to check the model expression for the Reynolds stress and compare it to the DNS result for the Reynolds stress. Since the transport coefficients in turbulence models are expressed in terms of turbulent statistical quantities such as the turbulent energy, its dissipation rate, the turbulent helicity, etc., we have to calculate the spatiotemporal evolution of the statistical quantities using DNS data. In the present work, we perform DNSs of inhomogeneous helical turbulence with the simplest possible flow geometry, and validate the turbulence model expression based on the theoretical investigation. The organization of this paper is as follows. After presenting the fundamental equations in \S~\ref{sec:2}, we summarize the helicity effects in inhomogeneous turbulence with special reference to the symmetry of Reynolds stress and its modeling in \S~\ref{sec:3}. In \S~\ref{sec:4}, the set-up of the numerical simulation and its results are presented. In \S~\ref{sec:5}, the helicity effect on turbulent momentum transport is discussed with a special reference to the vortex dynamo. Conclusions are given in \S~\ref{sec:6}. Details of the turbulence model with helicity and its application to a turbulent swirling flow, comparison with previous notions including the so-called $\Lambda$ effect and the Anisotropic Kinetic Alpha (AKA) effect are given in Appendices. \section{Fundamental equations\label{sec:2}} We consider an incompressible fluid in a rotating system. 
The velocity ${\bf{u}}$ obeys the incompressible Navier--Stokes equation \begin{equation} \frac{\partial{\bf{u}}}{\partial t} + ({\bf{u}} \cdot \nabla) {\bf{u}} = - \nabla p + {\bf{u}} \times 2 \mbox{\boldmath$\omega$}_{\rm{F}} + \nu \nabla^2 {\bf{u}} + {\bf{f}}_{\rm{e}} \label{eq:NS_eq} \end{equation} and the solenoidal condition \begin{equation} \nabla \cdot {\bf{u}} = 0, \label{eq:solenoidal_u} \end{equation} where $p$ is the pressure divided by fluid density with the centrifugal force included, $\nu$ the kinematic viscosity, $\mbox{\boldmath$\omega$}_{\rm{F}}$ the angular velocity of the system, and ${\bf{f}}_{\rm{e}}$ the external forcing which satisfies the solenoidal condition. Taking the curl of Eq.~(\ref{eq:NS_eq}) and using Eq.~(\ref{eq:solenoidal_u}), we obtain the equations for the vorticity $\mbox{\boldmath$\omega$} (= \nabla \times {\bf{u}})$ as \begin{equation} \frac{\partial \mbox{\boldmath$\omega$}}{\partial t} = \nabla \times \left[ { {\bf{u}} \times \left( { \mbox{\boldmath$\omega$} + 2 \mbox{\boldmath$\omega$}_{\rm{F}} } \right) } \right] + \nu \nabla^2 \mbox{\boldmath$\omega$} + \nabla \times {\bf{f}}_{\rm{e}} \label{eq:omega_eq} \end{equation} and \begin{equation} \nabla \cdot \mbox{\boldmath$\omega$} = 0. \label{eq:solenoidal_omega} \end{equation} We divide a flow quantity $f$ into the mean part $\langle {f} \rangle$ and the fluctuation around it, $f'$, as \begin{subequations} \begin{equation} f = F + f',\; F = \langle {f} \rangle \label{rey_decomp} \end{equation} with \begin{equation} f = ({\bf{u}}, p,\mbox{\boldmath$\omega$}),\; F = ({\bf{U}}, P,\mbox{\boldmath$\Omega$}),\; f' = ({\bf{u}}', p',\mbox{\boldmath$\omega$}').
\label{rey_decomp_flds} \end{equation} \end{subequations} Substituting Eq.~(\ref{rey_decomp}) into Eqs.~(\ref{eq:NS_eq})-(\ref{eq:solenoidal_omega}), we obtain the mean-field equations as \begin{equation} \frac{\partial{\bf{U}}}{\partial t} + ({\bf{U}} \cdot \nabla) {\bf{U}} = - \nabla P + {\bf{U}} \times 2 \mbox{\boldmath$\omega$}_{\rm{F}} - \nabla \cdot \mbox{\boldmath${\cal{R}}$} + \nu \nabla^2 {\bf{U}}, \label{eq:mean_vel_eq} \end{equation} \begin{equation} \frac{\partial \mbox{\boldmath$\Omega$}}{\partial t} = \nabla \times \left[ { {\bf{U}} \times \left( { \mbox{\boldmath$\Omega$} + 2 \mbox{\boldmath$\omega$}_{\rm{F}} } \right) } \right] + \nabla \times {\bf{V}}_{\rm{M}} + \nu \nabla^2 \mbox{\boldmath$\Omega$}, \label{eq:mean_vor_eq} \end{equation} \begin{equation} \nabla \cdot {\bf{U}} = \nabla \cdot \mbox{\boldmath$\Omega$} = 0, \label{eq:solenoidal_mean} \end{equation} where $\mbox{\boldmath${\cal{R}}$} = \{ {{\cal{R}}^{ij}} \}$ and ${\bf{V}}_{\rm{M}}$ are the Reynolds stress and the turbulent Vortex-Motive or Pondero-Motive Force (VMF or PMF), which are defined by \begin{equation} {\cal{R}}^{ij} = \left\langle {u'{}^i u'{}^j} \right\rangle, \label{eq:rey_strss_def_2} \end{equation} \begin{equation} {\bf{V}}_{\rm{M}} = \langle {{\bf{u}}' \times \mbox{\boldmath$\omega$}'} \rangle, \label{eq:vmf_def_2} \end{equation} respectively. If we compare Eqs.~(\ref{eq:mean_vel_eq}) and (\ref{eq:mean_vor_eq}) with Eqs.~(\ref{eq:NS_eq}) and (\ref{eq:omega_eq}), we see that the Reynolds stress and the VMF are the sole quantities that represent the effects of turbulence in the mean-field equations. It should be noted that in this work we adopt an external forcing that does not directly produce any mean flow ($\langle {{\bf{f}}_{\rm{e}}} \rangle = 0$).
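As a toy illustration of these definitions (synthetic random samples, not DNS data), the Reynolds stress and the VMF are simply one-point moments of the fluctuating fields:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ensemble of N synthetic fluctuation samples standing in for u' and w'
N = 100_000
u_p = rng.normal(size=(N, 3))   # velocity fluctuation u'
w_p = rng.normal(size=(N, 3))   # vorticity fluctuation w'

# Reynolds stress R^{ij} = <u'^i u'^j> and vortex-motive force <u' x w'>
R = (u_p[:, :, None] * u_p[:, None, :]).mean(axis=0)
V_M = np.cross(u_p, w_p).mean(axis=0)

K = 0.5 * np.trace(R)  # turbulent energy K = <u'^2>/2
```

For independent isotropic samples, $R$ approaches the identity, the VMF vanishes, and $K$ approaches $3/2$.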
\section{Helicity effect\label{sec:3}} \subsection{Symmetry of the Reynolds stress\label{sec:3A}} As was mentioned in \S~\ref{sec:1}, the presence of turbulent kinetic helicity alone is not sufficient for the helicity effect to appear in the mean momentum transport. As we will see later in Eq.~(\ref{eq:Rey_strss_model}), the inhomogeneity of the turbulent helicity ($\nabla H$) is a key ingredient. This point is easily understood if we consider the symmetry properties of the Reynolds stress tensor \begin{equation} {\cal{R}}^{ij}({\bf{x}},t) = \left\langle {u'{}^i({\bf{x}},t) u'{}^j({\bf{x}},t)} \right\rangle \label{eq:Rey_strss_def} \end{equation} (${\bf{u}}'$: velocity fluctuation). Helicity is a quantity which represents the breakage of mirror- or reflectional symmetry. A reflection with respect to a plane is equivalent to the combination of a pure (proper) rotation around the axis perpendicular to the plane and an inversion or parity transformation. Since proper rotations never change the mirrorsymmetry-related property of a vector, we can express the symmetry property of reflections in terms of that of inversions. (Note that the determinant of the transformation is always $+1$ for all proper rotations whereas the counterparts are $-1$ for reflections and inversions.) The velocity is a polar vector and has odd parity under inversion. Namely, with a reversal of the coordinate system: $x^i \longmapsto \tilde{x}^i = - x^i$ (a tilde denotes a quantity in the reversed frame), the velocity reverses its sign as $u^i({\bf{x}},t) \longmapsto \tilde{u}^i(\tilde{\bf{x}}, t) = - u^i({\bf{x}},t)$.
As a consequence, the Reynolds stress transforms under inversion as \begin{eqnarray} &&{\cal{R}}^{ij}({\bf{x}},t) \nonumber\\ &&\longmapsto \tilde{\cal{R}}^{ij}(\tilde{\bf{x}}, t) = \langle { \tilde{u}'{}^i(\tilde{\bf{x}},t) \tilde{u}'{}^j(\tilde{\bf{x}},t) } \rangle \nonumber\\ &&\hspace{20pt} = \langle { [-u'{}^i({\bf{x}},t)] [-u'{}^j({\bf{x}},t)] } \rangle = \langle { u'{}^i({\bf{x}},t) u'{}^j({\bf{x}},t) } \rangle \nonumber\\ &&\hspace{20pt}= {\cal{R}}^{ij}({\bf{x}},t). \label{eq:Rey_strss_parity} \end{eqnarray} Namely, the Reynolds stress is symmetric with respect to the inversion of the coordinate system and must have even parity. The mean vorticity $\mbox{\boldmath$\Omega$}$ ($= \nabla \times {\bf{U}}$, ${\bf{U}}$: mean velocity) is an axial- or pseudo-vector which does not change its sign (symmetric) under the inversion: \begin{eqnarray} &&\Omega^i({\bf{x}},t) \nonumber\\ &&\longmapsto \tilde{\Omega}^i(\tilde{\bf{x}},t) = \tilde{\epsilon}^{ijk} \frac{\partial \tilde{U}^k(\tilde{\bf{x}},t)}{\partial \tilde{x}^j} \nonumber\\ &&\hspace{20pt}= \epsilon^{ijk} \frac{\partial (-U)^k({\bf{x}},t)}{\partial (-x)^j} = \epsilon^{ijk} \frac{\partial U^k({\bf{x}},t)}{\partial x^j} \nonumber\\ &&\hspace{20pt}= \Omega^i ({\bf{x}},t) \label{eq:Omega_parity} \end{eqnarray} (Note that the alternate tensor has even parity: $\epsilon^{ijk} \longmapsto \tilde{\epsilon}^{ijk} = \epsilon^{ijk}$).
On the other hand, the turbulent helicity $H (= \langle {{\bf{u}}' \cdot \nabla \times {\bf{u}}'} \rangle)$ is a pseudoscalar which changes its sign (antisymmetric) under the inversion: \begin{eqnarray} &&H({\bf{x}},t) \nonumber\\ &&\longmapsto \tilde{H}(\tilde{\bf{x}},t) = \left\langle { \tilde{u}'{}^i(\tilde{\bf{x}},t) \tilde{\epsilon}^{ijk} \frac{\partial {\tilde{u}}'{}^k(\tilde{\bf{x}},t)}{\partial \tilde{x}^j} } \right\rangle \nonumber\\ &&\hspace{10pt}= \left\langle { -u'{}^i({\bf{x}},t) \epsilon^{ijk} \frac{\partial (-u'{}^k)({\bf{x}},t)}{\partial (-x)^j} } \right\rangle \nonumber\\ &&\hspace{10pt}= - \left\langle { u'{}^i({\bf{x}},t) \epsilon^{ijk} \frac{\partial u'{}^k({\bf{x}},t)}{\partial x^j} } \right\rangle = - H({\bf{x}},t). \label{eq:H_parity} \end{eqnarray} From Eq.~(\ref{eq:H_parity}) we infer an important point. In a mirrorsymmetric system, by definition, all the statistical quantities are symmetric under inversion (or reflection) as \begin{equation} F({\bf{x}},t) \longmapsto \hat{F}({\bf{x}},t) = F({\bf{x}},t). \label{eq:mirrorsym_system} \end{equation} On the other hand, any pseudoscalar changes its sign under the inversion (or reflection) as \begin{equation} F({\bf{x}},t) \longmapsto \hat{F}({\bf{x}},t) = - F({\bf{x}},t). \label{eq:pseudo_scalar} \end{equation} It follows from Eqs.~(\ref{eq:mirrorsym_system}) and (\ref{eq:pseudo_scalar}) that any pseudoscalar statistical quantity in a mirrorsymmetric system should satisfy \begin{equation} F({\bf{x}},t) = - F({\bf{x}},t) \label{eq:vanishing_peudoscalar_1} \end{equation} or equivalently, \begin{equation} F({\bf{x}},t) = 0. \label{eq:vanishing_peudoscalar_2} \end{equation} Hence, all pseudoscalar statistical quantities should vanish in a mirrorsymmetric system. In other words, a finite pseudoscalar indicates a broken mirrorsymmetry in the system. In this sense, a pseudoscalar statistical quantity can itself serve as a measure of the breakage of mirrorsymmetry.
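The parity argument can be checked numerically: constructing the inverted field $\tilde{\bf{u}}({\bf{x}}) = -{\bf{u}}(-{\bf{x}})$ on a periodic grid, the helicity density flips sign while the energy does not. The sketch below is our own self-contained illustration (the test field is a Beltrami field with $\nabla \times {\bf{u}} = {\bf{u}}$, so velocity and vorticity are perfectly aligned), using spectral differentiation for the curl:

```python
import numpy as np

n = 32
k1 = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)   # integer wavenumbers
KX, KY, KZ = np.meshgrid(k1, k1, k1, indexing='ij')

def curl(u):
    """Curl of a periodic vector field via FFT differentiation."""
    U = [np.fft.fftn(c) for c in u]
    d = lambda F, K: np.real(np.fft.ifftn(1j * K * F))
    return np.array([d(U[2], KY) - d(U[1], KZ),
                     d(U[0], KZ) - d(U[2], KX),
                     d(U[1], KX) - d(U[0], KY)])

x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
# A helical (Beltrami) test field with curl(u) = u
u = np.array([np.sin(Z), np.sin(X) + np.cos(Z), np.cos(X)])

def invert(u):
    """u(x) -> -u(-x); on the periodic grid, x -> -x reverses each axis
    while keeping the x = 0 plane fixed (hence the roll by one)."""
    flip = lambda f: np.roll(f[::-1, ::-1, ::-1], 1, axis=(0, 1, 2))
    return -np.array([flip(c) for c in u])

H  = np.mean(np.sum(u * curl(u), axis=0))                  # helicity density
Ht = np.mean(np.sum(invert(u) * curl(invert(u)), axis=0))  # after inversion
E  = np.mean(np.sum(u * u, axis=0)) / 2                    # energy density
Et = np.mean(np.sum(invert(u) * invert(u), axis=0)) / 2
# Ht = -H (pseudoscalar), while Et = E (pure scalar)
```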
Since the helicity $\int_V{{\bf{u}} \cdot \mbox{\boldmath$\omega$}}dV$, as well as the kinetic energy $\int_V {{\bf{u}}^2} dV$, is an inviscid invariant of the hydrodynamic equation, its local turbulent density, the turbulent helicity $H \equiv \langle {{\bf{u}}' \cdot \mbox{\boldmath$\omega$}'} \rangle$ [$\mbox{\boldmath$\omega$}' (= \nabla \times {\bf{u}}')$: vorticity fluctuation], is an important statistical quantity that represents structural properties of the turbulence. A positive (negative) sign of local helicity represents right-handed (left-handed) ``twistedness'' of the turbulence. The sign of $H$ is directly connected to the structural properties of turbulence. However, as explained below, $H$ itself cannot enter the Reynolds stress expression. We saw in Eq.~(\ref{eq:Omega_parity}) that the mean vorticity or rotation vector has even parity. So, the coupling coefficient for the mean vorticity or rotation should have even parity in order to attain the even parity for the Reynolds stress [Eq.~(\ref{eq:Rey_strss_parity})]. This suggests that the turbulent helicity with odd parity [Eq.~(\ref{eq:H_parity})] itself cannot enter the expression for the Reynolds stress as the coupling coefficient for the mean vorticity or the rotation velocity. This point is reflected later in the generic mathematical expression of the correlation in non-mirrorsymmetric isotropic turbulence, Eq.~(\ref{eq:iso_nonmirror}). \subsection{Helicity effect in the Reynolds stress\label{sec:3B}} Using the Two-Scale Direct-Interaction Approximation (TSDIA), a closure theory for inhomogeneous turbulence \cite{yos1984}, Yokoi and Yoshizawa \cite{yok1993} explored the effects of helicity in inhomogeneous turbulence. The TSDIA is a combination of the multiple-scale analysis and the DIA, an elaborated renormalized perturbation method in ${\bf{k}}$, or wavenumber space, for homogeneous turbulence at high Reynolds number. 
In this analysis, we introduce two scales for space and time variables with a scale parameter $\delta$ as \begin{equation} \mbox{\boldmath$\xi$} = {\bf{x}},\; {\bf{X}} = \delta {\bf{x}};\;\; \tau = t,\; T = \delta t, \label{eq:two-scale_variables} \end{equation} where ($\mbox{\boldmath$\xi$}, \tau$) and (${\bf{X}}, T$) are fast and slow variables, respectively. With a small $\delta$, the slow variables (${\bf{X}}, T$) are suitable for representing slow variations of fields since they vary only when ${\bf{x}}$ and $t$ vary strongly. Under Eq.~(\ref{eq:two-scale_variables}), a field quantity is divided into the mean $F$ and the fluctuation $f'$ as \begin{equation} f = F({\bf{X}};T) + f'(\mbox{\boldmath$\xi$},{\bf{X}}; \tau,T), \label{two-scale_fields} \end{equation} which represents the property that the mean field changes only with the slow variables while the fluctuation field depends on both the fast and slow variables. Also under Eq.~(\ref{eq:two-scale_variables}), we have \begin{equation} \nabla = \nabla_{\small{\mbox{\boldmath$\xi$}}} + \delta \nabla_{\bf{X}},\;\; \frac{\partial}{\partial t} = \frac{\partial}{\partial \tau} + \delta \frac{\partial}{\partial T}, \label{eq:two-scale_diff_exp} \end{equation} where $\nabla_{{\mbox{\boldmath$\xi$}}}^i = (\partial / \partial \xi^i)$ and $\nabla_{\bf{X}}^i = (\partial / \partial X^i)$. We see from Eq.~(\ref{eq:two-scale_diff_exp}) that a derivative with respect to the slow variables gives an $O(\delta)$ contribution (derivative expansion) \cite{nay1973}. The turbulence correlations such as the Reynolds stress are calculated with the aid of a perturbation expansion: \begin{equation} f'({\bf{k}},{\bf{X}}; \tau,T) = \sum_{n=0}^{\infty} \delta^n f'_n({\bf{k}},{\bf{X}}; \tau,T). \label{eq:diff_exp} \end{equation} The scale parameter $\delta$ is associated with the inhomogeneity of the large-scale fields. The $O(\delta^0)$ fields correspond to those in homogeneous turbulence.
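The derivative expansion in Eq.~(\ref{eq:two-scale_diff_exp}) is just the chain rule, which is easy to confirm numerically for a sample two-scale function (the test function below is our own arbitrary choice, not from the cited analysis):

```python
import numpy as np

delta = 0.05                                    # scale parameter
g   = lambda xi, X: np.sin(xi) * np.exp(-X)     # arbitrary smooth g(xi, X)
gxi = lambda xi, X: np.cos(xi) * np.exp(-X)     # exact partial d g / d xi
gX  = lambda xi, X: -np.sin(xi) * np.exp(-X)    # exact partial d g / d X

x0, h = 1.3, 1e-6
f = lambda x: g(x, delta * x)                   # f(x) = g(xi, X)|_{xi=x, X=delta x}

dfdx_num = (f(x0 + h) - f(x0 - h)) / (2 * h)    # total derivative d f / d x
dfdx_exp = gxi(x0, delta * x0) + delta * gX(x0, delta * x0)  # g_xi + delta g_X
# The two agree to the accuracy of the finite difference
```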
We also expand the lowest- and higher-order fields with respect to the rotation vector $\mbox{\boldmath$\omega$}_{\rm{F}}$ as \begin{eqnarray} {\bf{u}}'_0({\bf{k}},{\bf{X}}; \tau,T) &=& {\bf{u}}'_{\rm{B}}({\bf{k}},{\bf{X}}; \tau,T) \nonumber\\ && + \sum_{m=1}^{\infty} |\mbox{\boldmath$\omega$}_{\rm{F}}|^m {\bf{u}}'_{0m}({\bf{k}},{\bf{X}}; \tau,T), \label{eq:u0_expn} \end{eqnarray} \begin{equation} {\bf{u}}'_n({\bf{k}},{\bf{X}}; \tau,T) = \sum_{m=0}^{\infty} |\mbox{\boldmath$\omega$}_{\rm{F}}|^m {\bf{u}}'_{nm}({\bf{k}},{\bf{X}}; \tau,T)\;\; (n\ge 1). \label{eq:un_expn} \end{equation} The basic or lowest-order field ${\bf{u}}_{\rm{B}}$ corresponds to a homogeneous isotropic field, and the effects of inhomogeneity and anisotropy are systematically incorporated in the higher-order field calculations using the spectral and response functions. As for the statistical properties of the basic fields, we assume \begin{eqnarray} &&{\left\langle { u'_{\rm{B}}{}^\alpha({\bf{k}},{\bf{X}};\tau,T) u'_{\rm{B}}{}^\beta({\bf{k}}',{\bf{X}};\tau',T) } \right\rangle}/{\delta({\bf{k}} + {\bf{k}}')} \nonumber\\ &&\hspace{10pt} = D^{\alpha\beta}({\bf{k}}) Q_{\rm{B}}(k,{\bf{X}};\tau,\tau',T) \nonumber\\ &&\hspace{20pt} + \frac{i}{2} \frac{k^a}{k^2} \epsilon^{\alpha\beta a} H_{\rm{B}}(k,{\bf{X}};\tau,\tau',T), \label{eq:iso_nonmirror} \end{eqnarray} \begin{equation} \left\langle {G^{\alpha\beta}({\bf{k}},{\bf{X}};\tau,\tau',T)} \right\rangle = D^{\alpha\beta}({\bf{k}}) G(k,{\bf{X}};\tau,\tau',T), \label{eq:G_iso} \end{equation} where $Q_{\rm{B}}$ and $H_{\rm{B}}$ are the spectral density functions of the turbulent energy and helicity, respectively: \begin{equation} \frac{1}{2} \left\langle {{\bf{u}}'_{\rm{B}}{}^2} \right\rangle = \int d{\bf{k}} Q_{\rm{B}}(k; \tau, \tau), \label{eq:Q_B_spectrum} \end{equation} \begin{equation} \left\langle {{\bf{u}}'_{\rm{B}} \cdot \mbox{\boldmath$\omega$}'_{\rm{B}}} \right\rangle = \int d{\bf{k}} H_{\rm{B}}(k; \tau, \tau) \label{eq:H_B_spectrum} \end{equation} and
$D^{\alpha\beta}({\bf{k}}) (= \delta^{\alpha\beta} - k^\alpha k^\beta/k^2)$ is the solenoidal projection operator. It should be noticed that $Q_{\rm{B}}$ and $H_{\rm{B}}$ in Eq.~(\ref{eq:iso_nonmirror}) are normalized to satisfy Eqs.~(\ref{eq:Q_B_spectrum}) and (\ref{eq:H_B_spectrum}). Equations~(\ref{eq:iso_nonmirror}) and (\ref{eq:G_iso}) are the most general expressions for homogeneous, isotropic and non-mirrorsymmetric turbulence \cite{bat1953,mon1975}. We should note that these assumptions only apply to the basic or lowest-order fields of turbulence. The turbulent fields considered in this formulation are inhomogeneous and anisotropic; these effects enter through the higher-order fields. The Reynolds stress is calculated by \begin{eqnarray} \left\langle {u'{}^\alpha u'{}^\beta} \right\rangle &=& \left\langle {u'_{\rm{B}}{}^\alpha u'_{\rm{B}}{}^\beta} \right\rangle + \left\langle {u'_{\rm{B}}{}^\alpha u'_{01}{}^\beta} \right\rangle + \left\langle {u'_{01}{}^\alpha u'_{\rm{B}}{}^\beta} \right\rangle + \cdots \nonumber\\ &&+ \left\langle {u'_{\rm{B}}{}^\alpha u'_{10}{}^\beta} \right\rangle + \left\langle {u'_{10}{}^\alpha u'_{\rm{B}}{}^\beta} \right\rangle + \cdots. 
\label{eq:Rey_strss_expn} \end{eqnarray} It was shown by the analysis up to $O(\delta^1 |\mbox{\boldmath$\omega$}_{\rm{F}}|^1)$ that the Reynolds stress is expressed as \cite{yok1993} \begin{eqnarray} \lefteqn{ \left\langle {u'{}^\alpha u'{}^\beta} \right\rangle_{\rm{D}} = - \nu_{\rm{T}} {\cal{S}}^{\alpha\beta} }\nonumber\\ &&\hspace{10pt} + \left[ { \Gamma^\alpha \left( { \Omega^\beta + 2 \omega_{\rm{F}}^\beta } \right) + \Gamma^\beta \left( { \Omega^\alpha + 2 \omega_{\rm{F}}^\alpha } \right) } \right]_{\rm{D}}, \label{eq:Rey_strss_exprssn} \end{eqnarray} where ${\rm{D}}$ denotes the deviatoric or traceless part of tensor as ${\cal{A}}^{\alpha\beta}_{\rm{D}} = {\cal{A}}^{\alpha\beta} - (1/3) {\cal{A}}^{aa} \delta^{\alpha\beta}$, and ${\cal{S}}$ is the mean velocity strain defined by \begin{equation} {\cal{S}}^{\alpha\beta} = \frac{\partial U^\alpha}{\partial x^\beta} + \frac{\partial U^\beta}{\partial x^\alpha} - \frac{2}{3} \nabla \cdot {\bf{U}} \delta^{\alpha\beta}. \label{eq:mean_vel_strn_def} \end{equation} In Eq.~(\ref{eq:Rey_strss_exprssn}), the mean velocity-strain- and the mean vorticity and angular velocity-related coefficients, $\nu_{\rm{T}}$ and $\mbox{\boldmath$\Gamma$}$, are given by \begin{equation} \nu_{\rm{T}} = \frac{7}{15} \int {\rm{d}}{\bf{k}} \int_{-\infty}^{t} \!\!\!d\tau_1\ G(k;\tau,\tau_1) {Q}(k;\tau,\tau_1), \label{eq:nu_T_exprssn} \end{equation} \begin{equation} \mbox{\boldmath$\Gamma$} = \frac{1}{30} \int k^{-2} {\rm{d}}{\bf{k}} \int_{-\infty}^{t} \!\!\!d\tau_1\ G(k;\tau,\tau_1) \nabla {H}(k;\tau,\tau_1). \label{eq:Gamma_exprssn} \end{equation} The first term of Eq.~(\ref{eq:Rey_strss_exprssn}) corresponds to the usual eddy-viscosity representation of the Reynolds stress. The second term is the correction to the eddy-viscosity representation due to the mean vortical or rotational motion.
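Returning briefly to the statistical assumption of Eq.~(\ref{eq:iso_nonmirror}), its tensor algebra can be verified numerically for a single wavevector: the trace of the spectral tensor recovers $2Q$, the antisymmetric contraction $-i\epsilon^{\alpha\beta a}k^a Q^{\alpha\beta}$ recovers $H$, and solenoidality holds. The values below are illustrative only:

```python
import numpy as np

# Levi-Civita tensor
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

k = np.array([1.0, 2.0, -0.5])          # arbitrary wavevector
k2 = k @ k
D = np.eye(3) - np.outer(k, k) / k2     # solenoidal projection operator

Q, H = 0.8, 0.3                         # illustrative spectral densities
Qt = D * Q + 0.5j * np.einsum('abc,c->ab', eps, k) / k2 * H

trace = np.real(np.trace(Qt))                               # = 2 Q
H_rec = np.real(-1j * np.einsum('abc,c,ab->', eps, k, Qt))  # = H
div = np.einsum('a,ab->b', k, Qt)                           # = 0 (solenoidal)
```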
Equations~(\ref{eq:nu_T_exprssn}) and (\ref{eq:Gamma_exprssn}) show that we can express the turbulent transport coefficients if we know the propagators of the turbulent field such as the spectral functions of energy and helicity, ${{Q}}(k;\tau,\tau')$ and ${{H}}(k;\tau,\tau')$, and the response function $G(k;\tau,\tau')$, which represent how turbulence is distributed in scales and how much the present state is affected by the past, respectively. However, for practical purposes, the theoretical expressions for the transport coefficients $\nu_{\rm{T}}$ [Eq.~(\ref{eq:nu_T_exprssn})] and $\mbox{\boldmath$\Gamma$}$ [Eq.~(\ref{eq:Gamma_exprssn})] in terms of the time and spectral integrals of the propagators are too complicated. We need to reduce them into a more tractable form. In the simplest case, if the time integral of the response function can be separated from the spectral integral of the energy in Eq.~(\ref{eq:nu_T_exprssn}), the time integral of the response function just gives a time scale of turbulence, $\tau$: \begin{equation} \tau \simeq \int_{-\infty}^{t} \!\!\!d\tau_1\ G(k;\tau,\tau_1). \label{eq:time_G} \end{equation} Then, Eq.~(\ref{eq:nu_T_exprssn}) is reduced to the turbulence time scale multiplied by the turbulent energy $K$. Namely, the mixing-length expression for the turbulent viscosity: \begin{equation} \nu_{\rm{T}} \sim \tau K \sim \tau u^2 \sim \ell u. \label{nu_T_mxng_lngth} \end{equation} In other words, Eq.~(\ref{eq:nu_T_exprssn}) is a natural generalization of the simplest mixing-length expression for the turbulent viscosity. \subsection{Helicity turbulence model\label{sec:3C}} The Reynolds-stress expression (\ref{eq:Rey_strss_exprssn}) with the spectral expressions for the transport coefficients [Eqs.~(\ref{eq:nu_T_exprssn}) and (\ref{eq:Gamma_exprssn})] is too heavy for practical uses in astro/geophysical applications.
In order to construct simple expressions for the transport coefficients more generic than the mixing-length one, we use one-point turbulence statistical quantities which represent the statistical properties of turbulence. We choose the turbulent energy $K$, its dissipation rate $\varepsilon$, and the turbulent helicity $H$, defined by \begin{equation} K = \frac{1}{2} \left\langle {{\bf{u}}'{}^2} \right\rangle, \label{eq:K_def} \end{equation} \begin{equation} \varepsilon = \nu \left\langle { \frac{\partial u'{}^a}{\partial x^b} \frac{\partial u'{}^a}{\partial x^b} } \right\rangle, \label{eq:eps_def} \end{equation} \begin{equation} H = \left\langle { {\bf{u}}' \cdot \mbox{\boldmath$\omega$}' } \right\rangle. \label{eq:H_def} \end{equation} On the basis of the analytical expression Eq.~(\ref{eq:Rey_strss_exprssn}), the Reynolds stress is modeled as \cite{yok1993} \begin{eqnarray} &&\left\langle { u'{}^\alpha u'{}^\beta } \right\rangle_{\rm{D}} = - \nu_{\rm{T}} {\cal{S}}^{\alpha\beta} \nonumber\\ && \hspace{10pt} + \eta \left[ { \frac{\partial H}{\partial x^\alpha} \left( { \Omega^\beta + 2\omega_{\rm{F}}^\beta } \right) + \frac{\partial H}{\partial x^\beta} \left( { \Omega^\alpha + 2\omega_{\rm{F}}^\alpha } \right) } \right]_{\rm{D}} \label{eq:Rey_strss_model} \end{eqnarray} with the transport coefficients expressed in terms of the above turbulence statistical quantities [Eqs.~(\ref{eq:K_def})-(\ref{eq:H_def})]: \begin{equation} \nu_{\rm{T}} = C_\nu \tau K = C_\nu \frac{K}{\varepsilon} K, \label{eq:nu_T_K_eps} \end{equation} \begin{equation} \eta = C_\eta \tau \ell^2 = C_\eta \frac{K}{\varepsilon} \frac{K^3}{\varepsilon^2}, \label{eq:eta_K_eps} \end{equation} where $C_\nu$ and $C_\eta$ are model constants, whose values are to be optimized through the applications of the turbulence model to several flows \cite{lau1972}. An application of the helicity model to a turbulent swirling flow \cite{yok1993} suggests \begin{equation} C_\nu = 0.09,\;\; C_\eta = 0.003.
\label{eq:model_consts_hel} \end{equation} Note that, in Eqs.~(\ref{eq:nu_T_K_eps}) and (\ref{eq:eta_K_eps}), $\tau = K / \varepsilon$ and $\ell = K^{3/2} / \varepsilon$ are the time and length scales of turbulence, respectively. Equation~(\ref{eq:eta_K_eps}) corresponds to the modeling of the mean vorticity- and angular-velocity-related coefficient $\mbox{\boldmath$\Gamma$}$ [Eq.~(\ref{eq:Gamma_exprssn})] as \begin{equation} \mbox{\boldmath$\Gamma$} = \eta \nabla H = C_\eta \tau \ell^2 \nabla H = C_\eta \frac{K}{\varepsilon} \frac{K^3}{\varepsilon^2} \nabla H. \label{eq:Gamma_model} \end{equation} As is seen in Eqs.~(\ref{eq:nu_T_K_eps}), (\ref{eq:eta_K_eps}), and (\ref{eq:Gamma_model}), the transport coefficients in the Reynolds stress, $\nu_{\rm{T}}$, $\eta$, and $\mbox{\boldmath$\Gamma$}$, are expressed in terms of the turbulent statistical quantities, $K$, $\varepsilon$, and $H$. In order to close or construct a self-consistent turbulence model, we consider the transport equations of $K$, $\varepsilon$, and $H$. Details of the present helicity model with the transport equations of $K$, $\varepsilon$, and $H$ are presented in Appendix \ref{sec:appA1}. The helicity turbulence model was applied to a turbulent swirling pipe flow \cite{yok1993}. It was numerically shown that the model successfully reproduces basic behaviors of the turbulent swirling flow: (i) the stationary dent profile of the mean axial velocity in the central axis region; (ii) the radial profile of the mean circumferential velocity; (iii) the exponential decay of the swirl intensity defined by the axial flux of the mean angular momentum along the axial direction \cite{kit1991,ste1995}. These behaviors could not be reproduced by the standard $K - \varepsilon$ model with the eddy viscosity \cite{kob1987}. In this sense, the validity of the Reynolds-stress expression [Eqs.~(\ref{eq:Rey_strss_exprssn}) and (\ref{eq:Rey_strss_model})] is confirmed at the turbulence or closure model simulation level.
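The model coefficients above reduce to simple algebra in $K$ and $\varepsilon$: $\nu_{\rm{T}} = C_\nu K^2/\varepsilon$ and $\eta = C_\eta K^4/\varepsilon^3$. A minimal numerical sketch (the function name and sample values are ours, chosen only for illustration):

```python
def transport_coefficients(K, eps, C_nu=0.09, C_eta=0.003):
    """Helicity-model transport coefficients built from the turbulence
    time scale tau = K/eps and length scale ell = K^{3/2}/eps."""
    tau = K / eps
    ell = K**1.5 / eps
    nu_T = C_nu * tau * K        # C_nu K^2 / eps
    eta = C_eta * tau * ell**2   # C_eta K^4 / eps^3
    return nu_T, eta, tau

# With K = eps = 1 the coefficients equal the model constants,
# and eta/(nu_T tau^2) equals C_eta/C_nu = 1/30.
nu_T, eta, tau = transport_coefficients(1.0, 1.0)
print(nu_T, eta)  # 0.09 0.003
```

The dimensionless combination $\eta/(\nu_{\rm{T}} \tau^2)$, which recurs in the comparison with the DNS results below, is then just $C_\eta/C_\nu$.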
Details of the application of the helicity turbulence model to a swirling pipe flow are presented in Appendix~\ref{sec:appA2}. \section{Numerical simulations\label{sec:4}} It is necessary to study carefully the inhomogeneous helicity effect in the turbulent momentum transport using Direct Numerical Simulations (DNSs). For this purpose, we check the validity of the expression for the Reynolds stress [Eq.~(\ref{eq:Rey_strss_model})] using DNSs of a turbulent flow with inhomogeneous helicity. \subsection{Set-up\label{sec:4A}} In the present work, we adopt a set-up that is suitable for examining the deviatoric part of the Reynolds-stress expression [Eqs.~(\ref{eq:Rey_strss_exprssn}) and (\ref{eq:Rey_strss_model})]. Let us consider helical turbulence in a box with imposed rotation $\mbox{\boldmath$\omega$}_{\rm{F}}$ depicted in Fig.~\ref{fig:setup}. The axis of rotation $\mbox{\boldmath$\omega$}_{\rm{F}}$ is aligned with the $y$ axis as \begin{equation} \mbox{\boldmath$\omega$}_{\rm{F}} = (\omega_{\rm{F}}^x, \omega_{\rm{F}}^y, \omega_{\rm{F}}^z) = (0, \omega_{\rm{F}}, 0). \label{eq:omega_F_setup} \end{equation} \begin{figure}[b!] \includegraphics[width=.40\textwidth]{helicity_effect_r1_fig_01} \caption{Set-up of the turbulence with rotation $\mbox{\boldmath$\omega$}_{\rm{F}}$ (left) and schematic spatial profiles of turbulent helicity $H$ ($= \langle {{\bf{u}}' \cdot \mbox{\boldmath$\omega$}'} \rangle$) given by Eq.~(\ref{eq:H_profile}) (center) and its derivative $dH/dz$ (right). The helicity inhomogeneity is generated by an external forcing.} \label{fig:setup} \end{figure} The inhomogeneous helicity is sustained by an external forcing, leading to a spatial distribution of turbulent helicity schematically expressed as \begin{equation} H(z) = - \frac{1}{2} H_0 z (z^2 - 3z_0^2), \label{eq:H_profile} \end{equation} where $H_0$ is the peak magnitude of the turbulent helicity at positions $z = \pm z_0$.
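The cubic helicity profile above can be checked directly: with $z_0$ normalized to unity, $H$ attains extrema of magnitude $H_0$ at $z = \pm z_0$, where $dH/dz$ vanishes. A small sketch (variable and function names are ours):

```python
def H_profile(z, H0=1.0, z0=1.0):
    """Turbulent helicity profile H(z) = -(1/2) H0 z (z^2 - 3 z0^2)."""
    return -0.5 * H0 * z * (z**2 - 3.0 * z0**2)

def dH_dz(z, H0=1.0, z0=1.0):
    """dH/dz = -(3/2) H0 (z^2 - z0^2); vanishes at z = +/- z0."""
    return -1.5 * H0 * (z**2 - z0**2)

print(H_profile(1.0), H_profile(-1.0))  # 1.0 -1.0: extrema of magnitude H0
print(dH_dz(1.0), dH_dz(-1.0))          # both zero at the extrema
```

The derivative $dH/dz$ is largest in magnitude at $z = 0$, which is where the helicity-gradient term in the Reynolds stress is expected to be strongest.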
In the simulations, we use helically forced turbulence where the degree of helicity is modulated in the $z$ direction in a periodic fashion. The wavenumber of the forcing is $k_{\rm f}$ and that of the box is $k_1$, which is also the wavenumber of the helicity modulation in the $z$ direction. We consider averages over $x$, $y$, and $t$ over an interval during which the system is statistically steady. We denote these averages by $\langle \cdots \rangle$. In particular, we consider the Reynolds stress tensor component ${\cal{R}}^{yz}=\langle{u'{}^y u'{}^z}\rangle$, the mean flow $\langle {{\bf{u}}} \rangle = {\bf{U}} (z,t)$, and the helicity density $H = \langle {{\bf{u}}' \cdot \mbox{\boldmath$\omega$}'} \rangle$ with $\mbox{\boldmath$\omega$}' = \nabla \times {\bf{u}}'$ being the vorticity fluctuation. In all cases, the helicity of the mean flow, ${\bf{U} \cdot \mbox{\boldmath$\Omega$}}$ ($\mbox{\boldmath$\Omega$} = \nabla \times {\bf{U}}$), is negligible. We apply rotation in the $y$ direction as Eq.~(\ref{eq:omega_F_setup}). At the initial stage, we have no large-scale flow ${\bf{U}}$. It follows from Eq.~(\ref{eq:Rey_strss_model}) that, at the early stage of flow evolution, the $y$-$z$ component of the Reynolds stress may be given by \begin{equation} \langle {u'{}^y u'{}^z} \rangle = \eta 2 \omega_{\rm{F}}^y \frac{\partial H}{\partial z}. \label{eq:Rey_strss_num_early} \end{equation} Once the large-scale flow is generated, but the mean relative vorticity is still smaller than the rotation ($|\mbox{\boldmath$\Omega$}| \ll |2 \mbox{\boldmath$\omega$}_{\rm{F}}|$), the $y$-$z$ component of the Reynolds stress is written as \begin{equation} \langle {u'{}^y u'{}^z} \rangle = -\nu_{\rm T} \frac{\partial U^y}{\partial z} + \eta 2 \omega_{\rm{F}}^y \frac{\partial H}{\partial z}.
\label{eq:Rey_strss_num_dev} \end{equation} If $\langle{u'{}^y u'{}^z}\rangle=0$ in the statistically steady state, then \begin{equation} U^y=(\eta /\nu_{\rm T})\, 2 \omega_{\rm{F}}^y H, \label{eq:meanUy} \end{equation} which corresponds to a mean flow in the direction of the rotation axis. We consider three values for the scale separation ratio, $k_{\rm f}/k_1=5$, $15$, and $30$, and determine $\eta/\nu_{\rm{T}}$ using Eq.~(\ref{eq:meanUy}) by measuring $U^y$ and $H$ ($k_{\rm{f}}$: forcing wavenumber, $k_1$: wavenumber for system size). We express time in terms of \begin{equation} \tau=1/(u_{\rm rms}k_{\rm f}), \label{tau_def_num} \end{equation} which is also used as an estimate of the correlation time of the turbulence ($u_{\rm{rms}}$: root mean square velocity). Kinetic energy spectra, $E_{\rm K}(k,t)$, are normalized such that $\int E_{\rm K}(k,t)\,{\rm d} {} k=\bra{\bm{u}^2}/2$. All simulations are performed with the {\sc Pencil Code}\footnote{http://github.com/pencil-code/}, which uses a high-order finite difference method for solving the compressible hydrodynamic equations. We use a small Mach number so that the results are essentially the same as for a purely incompressible flow. \subsection{Numerical results\label{sec:4B}} The results are summarized in \Tab{tab:Summary}. All simulations show that the sign of $\eta$ is {\it positive}. We find that $\eta/(\nu_{\rm T}\tau^2)$ is in the range of $O(10^{-2})$ to $O(10^{-1})$, depending on Reynolds and Coriolis numbers (${\rm{Co}} = \omega_{\rm{F}} \tau$) as well as scale separation. Run~A shows clear generation of a mean flow as seen from Eq.~(\ref{eq:meanUy}). This equation is also used to determine $\eta/(\nu_{\rm T} \tau^2)$ as the correlation coefficient in $U^y$ vs.\ $2\omega_{\rm{F}}^y H$; see the last column of \Tab{tab:Summary}. \begin{table}[b!] \caption{ Summary of DNS results. The Reynolds number is defined by $\mbox{\rm Re} = u_{\rm{rms}} /(\nu k_{\rm{f}})$.
}\vspace{12pt}\centerline{\begin{tabular}{ccccc} Run & $k_{\rm f}/k_1$ & $\mbox{\rm Re}$ & $\mbox{\rm Co}$ & $\eta/(\nu_{\rm T} \tau^2)$ \\ \hline A & 15 & 60 & 0.74 & 0.22 \\ B1 & 5 & 150 & 2.6 & 0.27 \\ B2 & 5 & 460 & 1.7 & 0.27 \\ B3 & 5 & 980 & 1.6 & 0.51 \\ C1 & 30 & 18 & 0.63 & 0.50 \\ C2 & 30 & 80 & 0.55 & 0.03 \\ C3 & 30 & 100 & 0.46 & 0.08 \\ \label{tab:Summary}\end{tabular}} \end{table} \subsubsection{Mean flows\label{sec:4B1}} As we see from Eq.~(\ref{eq:meanUy}), the large-scale flow is expected to be generated in the direction of the rotation vector $\mbox{\boldmath$\omega$}_{\rm{F}}$ (or the large-scale vorticity $\mbox{\boldmath$\Omega$}$) mediated by the helicity effect. The shape of the mean axial velocity component $U^y$ is shown in Fig.~\ref{fig:velocity_contour}. A clear flow pattern with positive and negative velocity is seen, which corresponds to the velocity distribution given by Eq.~(\ref{eq:meanUy}). \begin{figure}[t!]\begin{center} \includegraphics[width=.40\textwidth]{helicity_effect_r1_fig_02} \end{center}\caption[]{ Axial flow component $U^y$ on the periphery of the domain for Run~B2 with $k_{\rm f}/k_1=5$ and $\mbox{\rm Re}=460$. } \label{fig:velocity_contour} \end{figure} In Fig.~\ref{fig:turb_mean_helicity}, we show the temporal evolution of the turbulent helicity $\langle {\bf{u}}' \cdot \mbox{\boldmath$\omega$}' \rangle$ and the mean-flow helicity ${\bf{U}} \cdot \mbox{\boldmath$\omega$}_{\rm{F}}$. In this simulation, the turbulent helicity $\langle {\bf{u}}' \cdot \mbox{\boldmath$\omega$}' \rangle$ is sustained by the external forcing from the beginning of the simulation. Its spatial distribution reflects the forcing, which is proportional to $\sin k_1 z$ so that $H>0$ for $z>0$ and $H<0$ for $z<0$. On the other hand, the mean-flow helicity ${\bf{U}} \cdot \mbox{\boldmath$\omega$}_{\rm{F}}$ is generated as the mean axial flow $U^y$ is induced by the inhomogeneous turbulent helicity effect.
The magnitude of ${\bf{U}} \cdot \mbox{\boldmath$\omega$}_{\rm{F}}$ reaches an equilibrium value around $t / \tau = 2000$. Its spatial distribution is consistent with the direction of the induced axial flow $U^y$. \begin{figure}[h!]\begin{center} \includegraphics[width=.40\textwidth]{helicity_effect_r1_fig_03} \end{center}\caption[]{ Turbulent helicity $\langle {\bf{u}}' \cdot \mbox{\boldmath$\omega$}' \rangle$ (top) and mean-flow helicity ${\bf{U}} \cdot \mbox{\boldmath$\omega$}_{\rm{F}}$ (bottom) for Run~C1 with $k_{\rm f}/k_1=30$ and $\mbox{\rm Re}=18$. } \label{fig:turb_mean_helicity} \end{figure} \subsubsection{Reynolds stress tensor\label{sec:4B2}} As noted in connection with Eqs.~(\ref{eq:Rey_strss_num_early}) and (\ref{eq:Rey_strss_num_dev}), at the early stage of development, we have no large-scale flows. In this case, the Reynolds stress should be represented only by the $\mbox{\boldmath$\omega$}_{\rm{F}}$- or rotation-related terms in Eqs.~(\ref{eq:Rey_strss_exprssn}) and (\ref{eq:Rey_strss_model}). First we examine this early stage of development by taking an average over time from $t/\tau = 40$ to $200$. The $y$-$z$ component of the Reynolds stress, $\langle {u'{}^y u'{}^z} \rangle$, in the early stage is shown in the top panel of Fig.~\ref{fig:uzuy_gradH_corr}. The spatially averaged magnitude of the Reynolds stress is drawn with the dot-dashed line. The top panel shows that the peak magnitude of the Reynolds stress normalized by the turbulent intensity $\langle u'{}^2 \rangle$ is about $0.01$. In the middle panel of Fig.~\ref{fig:uzuy_gradH_corr}, the helicity-related term $2 \omega_{\rm{F}}^y (\nabla H)^z$ is plotted against $z$; the basic spatial profile reflects the counterpart of the turbulent helicity schematically depicted in the center panel of Fig.~\ref{fig:setup}.
The spatial profile of the Reynolds stress $\langle {u'{}^y u'{}^z} \rangle$ is in remarkable agreement with the turbulent helicity gradient coupled with the rotation, $2 \omega_{\rm{F}}^y (\nabla H)^z$. This agreement confirms the validity of the model expression (\ref{eq:Rey_strss_model}) based on the theoretical result [Eq.~(\ref{eq:Rey_strss_exprssn})]. \begin{figure}[t!] \includegraphics[width=.40\textwidth]{helicity_effect_r1_fig_04} \caption{Reynolds stress $\langle {u'{}^y u'{}^z} \rangle$ (top), helicity-effect term $(\nabla H)^z 2\omega_{\rm{F}}^y$ (middle), and their correlation (bottom) for Run~A with $k_{\rm f}/k_1=15$ and $\mbox{\rm Re}=60$ at the early stage of development (averaged over time from $t/\tau = 40$ to $200$). } \label{fig:uzuy_gradH_corr} \end{figure} Next, we examine the correlation between the mean velocity and the helicity at the developed equilibrium stage reached around $t/\tau = 2000$. In the developed equilibrium state, the mean velocity should be related to the rotation and turbulent helicity as Eq.~(\ref{eq:meanUy}). In Fig.~{\ref{fig:uy_omega_yH_corr}}, we compare the mean axial velocity component $U^y$ and the turbulent helicity $H$. The correlation between the generated mean velocity and helicity is quite remarkable. This result also confirms the model expression for the Reynolds stress [Eq.~(\ref{eq:Rey_strss_model})]. \begin{figure}[t!] 
\includegraphics[width=.40\textwidth]{helicity_effect_r1_fig_05} \caption{Mean axial velocity $U^y$ (top), turbulent helicity multiplied by rotation $C_0^2 \langle {{\bf{u}}' \cdot \mbox{\boldmath$\omega$}'} \rangle = (2 \omega_{\rm{F}} \tau)^2 H$ (middle), and their correlation (bottom) for Run~A with $k_{\rm f}/k_1=15$ and $\mbox{\rm Re}=60$ at the developed equilibrium stage (averaged over time from $t/\tau =0$ to $2000$).} \label{fig:uy_omega_yH_corr} \end{figure} The ratio of the magnitudes of the helicity to the eddy-viscosity effects may be given by \begin{eqnarray} \lefteqn{ \frac{(\mbox{helicity effect})}{(\mbox{eddy-viscosity effect})} = \frac{|\eta 2\omega_{\rm{F}} \nabla H|}{|\nu_{\rm{T}} \nabla U|} }\nonumber\\ && \hspace{30pt} \sim \frac{\eta}{\nu_{\rm{T}}} \frac{\Omega_\ast}{{\cal{S}}} {|\nabla H|} \sim \frac{\eta}{\nu_{\rm{T}} \tau^2} \frac{\Omega_\ast}{{\cal{S}}}, \label{eq:Rey_strss_hel_ratio} \end{eqnarray} where $\Omega_\ast$ is the magnitude of the mean absolute vorticity $\mbox{\boldmath$\Omega$}_\ast$ ($\equiv \mbox{\boldmath$\Omega$} + 2 \mbox{\boldmath$\omega$}_{\rm{F}}$) and ${\cal{S}}$ the magnitude of the velocity strain. With the modeling of $\nu_{\rm{T}}$ [Eq.~(\ref{eq:nu_T_K_eps})] and $\eta$ [Eq.~(\ref{eq:eta_K_eps})], we see that $\eta/(\nu_{\rm{T}} \tau^2)$ in this ratio corresponds to the ratio of model constants $C_\eta / C_\nu$, which was estimated as \begin{equation} \frac{\eta}{\nu_{\rm{T}} \tau^2} \approx \frac{C_\eta}{C_\nu} = \frac{0.003}{0.09} \approx 0.03 \label{eq:Ratio_value} \end{equation} in the turbulent swirling flow model simulation \cite{yok1993} (see Appendix~\ref{sec:appA2}). The present DNS results for $\eta / (\nu_{\rm{T}} \tau^2)$ listed in Table~\ref{tab:Summary} should be compared with this estimate [Eq.~(\ref{eq:Ratio_value})]. The value of $C_\eta / C_\nu$ utilized in the previous work \cite{yok1993} is in the range of the value of $\eta/(\nu_{\rm{T}} \tau^2)$ in the present DNSs.
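The comparison just made can be stated concretely; the model-constant ratio and the DNS values of $\eta/(\nu_{\rm{T}} \tau^2)$ from the summary table give (a trivial check, with the table values transcribed by hand):

```python
C_eta, C_nu = 0.003, 0.09
model_ratio = C_eta / C_nu
print(round(model_ratio, 3))  # 0.033

# eta/(nu_T tau^2) for Runs A, B1-B3, C1-C3 from the summary table
dns_ratios = [0.22, 0.27, 0.27, 0.51, 0.50, 0.03, 0.08]
print(min(dns_ratios) <= model_ratio <= max(dns_ratios))  # True
```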
The agreement seems to be better in the case with a weaker rotation (smaller ${\rm{Co}}$) and a larger scale separation ($k_{\rm{f}} / k_1$). We should note that in the turbulent swirling flow, where the helicity turbulence model was applied, the ratio $\Omega_\ast / {\cal{S}}$ estimated from the scaled axial angular momentum flux was less than $0.2$ [$\int_0^a r U^\phi U^z 2\pi dr / (\pi a^2 U_{\rm{m}}^2) \simeq 0.18$ at inlet, $a$: pipe radius, $U^\phi$: circumferential velocity, $U^z$: axial velocity]. This is a much smaller rotation case compared with the present DNSs. Also note that the Reynolds number of the swirling flow was much larger ($\sim 5 \times 10^{4}$) and that the turbulent helicity was provided by a large-scale swirling flow, not by an external forcing. \subsubsection{Spectra\label{sec:4B3}} The spectra show a peak at the forcing wavenumber $k_{\rm f}$ and, at early times, a second peak at $k/k_{\rm f}\approx0.25$, which then gradually moves to smaller values of $k$; see \Fig{pkt_per288_theta90a_kf15x}. \begin{figure}[h!]\begin{center} \includegraphics[width=.40\textwidth]{helicity_effect_r1_fig_06} \end{center}\caption[]{ Inverse transfer seen in kinetic energy spectra at $t/\tau=100$, 200, 500, 1000, 2000, and 3500, for Run~A with $k_{\rm f}/k_1=15$, $\mbox{\rm Re}=60$, $\mbox{\rm Co}=0.7$, and $288^3$ meshpoints. The arrow denotes the temporal evolution. The bold line indicates the last time in the plot. }\label{pkt_per288_theta90a_kf15x}\end{figure} The inverse transfer behavior can also be seen at smaller scale separation. In \Fig{pkt_per288_thetam90a_kf5x_nu1em4} we show the spectra for Run~B2 with $k_{\rm f}/k_1=5$ and $\mbox{\rm Re}=460$. The flow takes the form of pairs of counterrotating vortices. This is shown in Fig.~\ref{fig:velocity_contour} for the same run.
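For reference, a shell-integrated kinetic energy spectrum with the normalization stated above, $\int E_{\rm{K}}\,{\rm{d}}k = \langle {\bf{u}}^2 \rangle/2$, can be formed as follows. This is a generic sketch for a $2\pi$-periodic box (our own illustration, not the {\sc Pencil Code} implementation):

```python
import numpy as np

def energy_spectrum(u):
    """Shell-integrated kinetic energy spectrum of a velocity field
    u with shape (3, n, n, n) on a 2*pi-periodic box (dk = 1), so that
    sum(E) * dk equals the mean kinetic energy <u^2>/2 (Parseval)."""
    n = u.shape[1]
    uk = np.fft.fftn(u, axes=(1, 2, 3)) / n**3
    ek = 0.5 * np.sum(np.abs(uk)**2, axis=0)  # energy per Fourier mode
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    shell = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)
    return np.bincount(shell.ravel(), weights=ek.ravel())

rng = np.random.default_rng(1)
u = rng.standard_normal((3, 16, 16, 16))
E = energy_spectrum(u)
print(np.isclose(E.sum(), 0.5 * np.mean(np.sum(u**2, axis=0))))  # True
```

The inverse transfer discussed in this subsection corresponds to spectral energy accumulating in the lowest shells of such a spectrum over time.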
\begin{figure}[h!]\begin{center} \includegraphics[width=.40\textwidth]{helicity_effect_r1_fig_07} \end{center}\caption[]{ Same as \Fig{pkt_per288_theta90a_kf15x}, but for Run~B2 with $k_{\rm f}/k_1=5$, $\mbox{\rm Re}=460$, and $t/\tau=50$, 100, 200, 500, and 1600, with $\mbox{\rm Co}=0.7$, and $288^3$ meshpoints. }\label{pkt_per288_thetam90a_kf5x_nu1em4}\end{figure} It is surprising at first sight that the inverse transfer behavior is {\it not} seen at larger scale separation. This is demonstrated in \Fig{pkt_per576_thetam90a_kf30x_nu5em5}, where we see the result for $k_{\rm f}/k_1=30$ and $\mbox{\rm Re}=80$. To test whether the inverse transfer might be the result of a subcritical bifurcation, we have performed an identical simulation, but with a different initial condition where an initial random flow with a $k^{-5/3}$ spectrum was used. The result is shown in \Fig{pkt_per576_thetam90a_kf30x_nu5em5_ini}, where we see that the initial power at large scales gradually disappears. This suggests that the large-scale flow only occurs at finite scale separation ratios between 5 and 15. \begin{figure}[h!]\begin{center} \includegraphics[width=.40\textwidth]{helicity_effect_r1_fig_08} \end{center}\caption[]{ Same as \Fig{pkt_per288_theta90a_kf15x}, but for Run~C2 with $k_{\rm f}/k_1=30$, $\mbox{\rm Re}=80$, and $t/\tau=50$, 200, 1300, and 4700, with $\mbox{\rm Co}=0.7$, and $576^3$ meshpoints. }\label{pkt_per576_thetam90a_kf30x_nu5em5}\end{figure} \begin{figure}[h!]\begin{center} \includegraphics[width=.40\textwidth]{helicity_effect_r1_fig_09} \end{center}\caption[]{ Same as \Fig{pkt_per576_thetam90a_kf30x_nu5em5}, but for Run~C3 with a finite amplitude initial condition, giving $\mbox{\rm Re}=100$, at $t/\tau=20$, 100, 200, 500, 1000, and 1700. Here, $k_{\rm f}/k_1=30$, $\mbox{\rm Co}=0.5$, and $576^3$ meshpoints. }\label{pkt_per576_thetam90a_kf30x_nu5em5_ini} \end{figure} These results show that the inverse cascade is less strong when the forcing is at smaller scales.
This spectral behavior may be attributed to the finite size of the simulation box. It may also be related to the fact that the relative importance of helicity to eddy-viscosity effects is scale dependent. The helicity effect (representing the transport suppression) is connected with the inverse cascade, whereas the eddy-viscosity effect (representing the transport enhancement) is connected with the normal cascade. From the spectral expression of the Reynolds stress [Eq.~(\ref{eq:Rey_strss_exprssn})], we estimate the relative importance of the helicity effect as \begin{equation} \frac{(\mbox{helicity effect})}{(\mbox{eddy-viscosity effect})} \sim \frac{\Omega_\ast}{{\cal{S}}} \frac{|H(k,t)|}{kE(k,t)}, \label{eq:relative_hel_effect} \end{equation} where $\Omega_\ast (= \sqrt{{\Omega_\ast}^{ab} \Omega_\ast^{ab}/2})$ and ${\cal{S}} (= \sqrt{{\cal{S}}^{ab} {\cal{S}}^{ab}/2})$ are the magnitudes of the absolute vorticity and strain tensors, respectively. If the turbulent helicity obeys the same scaling as the turbulent energy, $H(k,t) \sim E(k,t) \propto k^{-p}$ ($p$: power index), we have $|H(k,t)|/[k E(k,t)] \propto k^{-1}$ for the relative helicity. This suggests that the helicity effect is less relevant as we go to smaller scales. In this sense, it is important to see how the relative helicity $|H(k,t)|/[k E(k,t)]$ depends on the scale separation $k_{\rm{f}}/k_1$. Further examination of these results is left for future work. \section{Discussions\label{sec:5}} Here we discuss the present mean-flow generation mechanism by inhomogeneous turbulent helicity in the context of the vortex dynamo. Comparisons with some of the representative previous works on large-scale vorticity generation mechanisms such as the Anisotropic Kinetic Alpha (AKA) effect \cite{fri1987} and the $\Lambda$ effect \cite{rue1980,rue1989} are presented in Appendix~\ref{sec:appB}.
\subsection{Vortex dynamo due to turbulent helicity\label{sec:5A}} By applying the curl operation to the mean momentum equation, we obtain the equation for the mean vorticity $\mbox{\boldmath$\Omega$} (= \nabla \times {\bf{U}})$ as Eq.~(\ref{eq:mean_vor_eq}). There the turbulent Vortex-Motive Force (VMF) ${\bf{V}}_{\rm{M}} = \langle {{\bf{u}}' \times \mbox{\boldmath$\omega$}'} \rangle$ represents the effect of turbulence in the $\mbox{\boldmath$\Omega$}$ equation. The turbulent VMF is directly related to the Reynolds stress $\mbox{\boldmath${\cal{R}}$} = \{ {{\cal{R}}^{ij}} \}$ [Eq.~(\ref{eq:rey_strss_def_2})] as \begin{equation} V_{\rm{M}}^i = - \frac{\partial {\cal{R}}^{ij}}{\partial x^j} + \frac{\partial K}{\partial x^i}. \label{eq:vmf_rey_strss_rel} \end{equation} Note that the second or $\nabla K$ term does not contribute to the vorticity generation at all since $\nabla \times \nabla K =0$. Substitution of $\mbox{\boldmath${\cal{R}}$}$ [Eq.~(\ref{eq:Rey_strss_exprssn})] into Eq.~(\ref{eq:vmf_rey_strss_rel}) gives the VMF expression as \begin{equation} {\bf{V}}_{\rm{M}} = - D_\Gamma \left( { \mbox{\boldmath$\Omega$} + 2 \mbox{\boldmath$\omega$}_{\rm{F}} } \right) - \left[ {\left({ \mbox{\boldmath$\Omega$} + 2 \mbox{\boldmath$\omega$}_{\rm{F}} } \right) \cdot \nabla} \right] \mbox{\boldmath$\Gamma$} - \nu_{\rm{T}} \nabla \times \mbox{\boldmath$\Omega$} \label{eq:vmf_exprssn} \end{equation} with \begin{equation} D_\Gamma = \nabla \cdot \mbox{\boldmath$\Gamma$}. \label{eq:D_gamma} \end{equation} The third or $\nu_{\rm{T}}$-related term in Eq.~(\ref{eq:vmf_exprssn}) is the turbulent diffusion of $\mbox{\boldmath$\Omega$}$, representing the destruction of the large-scale vorticity due to turbulence. The first and second terms give the possibility of mean vorticity generation due to the inhomogeneity of the turbulent helicity.
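The remark that the $\nabla K$ term cannot generate vorticity ($\nabla \times \nabla K = 0$) can be illustrated with a quick finite-difference check on an arbitrary smooth scalar field (our own sketch, unrelated to the DNS code):

```python
import numpy as np

n, L = 32, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
K = np.sin(X) * np.cos(2.0 * Y) + np.cos(Z)  # arbitrary smooth "energy" field

d = L / n
gx, gy, gz = np.gradient(K, d, d, d)         # components of grad K
# curl(grad K): every component vanishes identically because the
# finite-difference operators along different axes commute
cx = np.gradient(gz, d, axis=1) - np.gradient(gy, d, axis=2)
cy = np.gradient(gx, d, axis=2) - np.gradient(gz, d, axis=0)
cz = np.gradient(gy, d, axis=0) - np.gradient(gx, d, axis=1)
print(max(np.abs(c).max() for c in (cx, cy, cz)) < 1e-10)  # True
```

Only the Reynolds-stress divergence in the VMF can therefore feed the mean vorticity equation, which is why the discussion focuses on the $D_\Gamma$- and $\mbox{\boldmath$\Gamma$}$-related terms.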
We consider a situation where the angular velocity is much larger than the mean vorticity: \begin{equation} |2 \mbox{\boldmath$\omega$}_{\rm{F}}| \gg |\mbox{\boldmath$\Omega$}|. \end{equation} In this case with the set-up we considered in the numerical simulation, the contribution from the second term vanishes since the direction of $\mbox{\boldmath$\omega$}_{\rm{F}}$ ($y$ direction) is perpendicular to the direction of the inhomogeneity ($z$ direction). We then have the turbulent VMF as \begin{equation} {\bf{V}}_{\rm{M}} = - D_\Gamma 2\mbox{\boldmath$\omega$}_{\rm{F}} - \nu_{\rm{T}} \nabla \times \mbox{\boldmath$\Omega$}. \label{eq:vmf_simple_geometry} \end{equation} In a special case with the spatial distribution of the turbulent kinetic helicity given by Eq.~(\ref{eq:H_profile}), we have \begin{equation} D_\Gamma \simeq C_\eta \tau \ell^2 \frac{\partial^2 H}{\partial z^2} = - 3 C_\eta \tau \ell^2 H_0 z. \label{eq:D_Gamma_special} \end{equation} In this case, the mean vorticity induction due to the first term in Eq.~(\ref{eq:vmf_simple_geometry}) is given by \begin{subequations}\label{eq:vort_ind_simple} \begin{equation} {\bf{I}}_{\rm{V}} = \nabla \times {\bf{V}}_{\rm{M}} = \nabla \times \left( { - D_\Gamma 2\mbox{\boldmath$\omega$}_{\rm{F}} } \right) = 2\mbox{\boldmath$\omega$}_{\rm{F}} \times \nabla D_\Gamma \label{eq:vort_ind_simple_vec} \end{equation} or in components, \begin{equation} I_{\rm{V}}^\alpha = \epsilon^{\alpha ba} 2\omega_{\rm{F}}^b \frac{\partial D_\Gamma}{\partial x^a} . \label{eq:vort_ind_simple_comp} \end{equation} \end{subequations} This leads to the mean vorticity generation in the $x$ direction as \begin{equation} I_{\rm{V}}^x = 6 C_\eta \tau \ell^2 H_0 \omega_{\rm{F}} = 6 C_\eta \frac{K}{\varepsilon} \frac{K^3}{\varepsilon^2} H_0 \omega_{\rm{F}}.
\label{eq:vort_ind_y_simple} \end{equation} Note that the large-scale vorticity is generated in the direction perpendicular to both the directions of the angular velocity ($y$ direction) and of the turbulent helicity inhomogeneity ($z$ direction). Equation~(\ref{eq:vort_ind_y_simple}) shows that rotation coupled with inhomogeneous turbulent kinetic helicity generates a large-scale vorticity component that is not in the rotation direction, and the magnitude of generation in this particular case is uniform in space. Equation~(\ref{eq:vmf_simple_geometry}) should be compared with the equilibrium expression~(\ref{eq:meanUy}), which shows that the large-scale flow is generated in the direction of the rotation vector with a coefficient proportional to the turbulent helicity. The mean velocity equation is obtained by uncurling Eq.~(\ref{eq:mean_vor_eq}). We see from Eq.~(\ref{eq:vmf_simple_geometry}) that the mean velocity generation is in the direction of the rotation vector with the proportionality coefficient $-2 D_\Gamma$. In the special case we considered in the numerical simulation, $D_\Gamma$ may be expressed as Eq.~(\ref{eq:D_Gamma_special}), which corresponds to Eq.~(\ref{eq:meanUy}). \subsection{Physical origin of the helicity effect\label{sec:5B}} Since the Reynolds stress $\mbox{\boldmath${\cal{R}}$} = \{ {\cal{R}}^{ij} \}$ is a rank two tensor, it is not so simple to draw an intuitive physical picture of each component of the Reynolds stress. On the other hand, the turbulent VMF ${\bf{V}}_{\rm{M}} = \langle {{\bf{u}}' \times \mbox{\boldmath$\omega$}'} \rangle$ is a vector so that it is easier to get a physical picture of ${\bf{V}}_{\rm{M}}$. The relationship between ${\bf{V}}_{\rm{M}}$ and $\mbox{\boldmath${\cal{R}}$}$ is given by Eq.~(\ref{eq:vmf_rey_strss_rel}). Here, we consider the turbulent VMF in the mean vorticity equation, instead of the Reynolds stress in the mean momentum equation, to understand the physical origin of the present helicity effect.
A mean velocity induction constant in space does not contribute to the generation of mean vorticity at all since $\mbox{\boldmath$\Omega$} = \nabla \times {\bf{U}}$. So, we focus our attention on the inhomogeneous helicity represented by $D_\Gamma = \nabla \cdot \mbox{\boldmath$\Gamma$} \propto \nabla^2 H$ [Eq.~(\ref{eq:D_gamma})]. The Laplacian of $H$, $\nabla^2 H$, quantifies how prominent the local $H$ is compared with the surrounding $H$ on average. The Laplacian may be estimated as \begin{equation} \nabla^2 H \simeq - \frac{\delta H}{\ell^2} = - \frac{\langle {{\bf{u}}' \cdot \delta \mbox{\boldmath$\omega$}'} \rangle}{\ell^2}, \label{eq:laplace_H} \end{equation} where $\ell$ is the helicity variation scale, $\delta H$ is the helicity variation relative to the average of the surroundings, and $\delta \mbox{\boldmath$\omega$}'$ is the vorticity fluctuation associated with $\delta H$. Positive $\delta H$ corresponds to a positive alignment of $\delta\mbox{\boldmath$\omega$}'$ with the velocity fluctuation ${\bf{u}}'$ in the statistical sense ($\delta H = \langle {{\bf{u}}' \cdot \delta \mbox{\boldmath$\omega$}'} \rangle > 0$) and vice versa. We consider a fluid element fluctuating with ${\bf{u}}'$ in the mean absolute vorticity $\mbox{\boldmath$\Omega$}_\ast$ (Fig.~\ref{fig:phys_hel_effect}). We further assume an inhomogeneous helicity density with $\nabla^2 H < 0$ [i.e., $\delta H > 0$ according to Eq.~(\ref{eq:laplace_H})] and that the relative helicity variation $\delta H$ varies in space ($\delta H_+ > \delta H_-$ in Fig.~\ref{fig:phys_hel_effect}). Equation~(\ref{eq:laplace_H}) indicates that $\delta \mbox{\boldmath$\omega$}'$ is statistically parallel to ${\bf{u}}'$ ($\langle {{\bf{u}}' \cdot \delta \mbox{\boldmath$\omega$}'} \rangle > 0$) although each realization is more random. In this figure, for the sake of simplicity, the direction of ${\bf{u}}'$ is drawn in the direction parallel to the gradient of $\delta H$.
Note that the present argument applies for any ${\bf{u}}'$ direction with respect to the gradient of $\delta H$. For a given ${\bf{u}}'$, the magnitude of $\delta \mbox{\boldmath$\omega$}'$ reflects that of $\delta H$. It is also worthwhile to remark that the spatial variation of $\nabla^2 H$ produces a non-uniform flow necessary for the induction of large-scale vorticity. \begin{figure}[t!] \includegraphics[width=.40\textwidth]{helicity_effect_r1_fig_10} \caption{Physical origin of the helicity effect.} \label{fig:phys_hel_effect} \end{figure} In the presence of absolute vorticity $\mbox{\boldmath$\Omega$}_\ast$, a fluid element moving with ${\bf{u}}'$ is subject to the Coriolis-like force to induce a flow modulation $\delta {\bf{u}}' = \tau {\bf{u}}' \times \mbox{\boldmath$\Omega$}_\ast$. Then, we have a contribution to the VMF and consequently to the mean velocity induction $\delta {\bf{U}} = \tau \langle {\delta{\bf{u}}' \times \delta \mbox{\boldmath$\omega$}'} \rangle$, whose direction is parallel to the mean absolute vorticity when $\nabla^2 H < 0$. As a result, mean vorticity is generated as $\delta \mbox{\boldmath$\Omega$} = \nabla \times \delta{\bf{U}}$, which is in the direction of $\mbox{\boldmath$\Omega$}_\ast \times \nabla (\nabla^2 H)$. This is in agreement with Eq.~(\ref{eq:vort_ind_simple}). Note that the direction of the gradient of $\nabla^2 H$ is opposite to that of $\delta H$ as in Eq.~(\ref{eq:laplace_H}). These arguments show that the basic elements of the present helicity effect are (i) local angular momentum conservation represented by the Coriolis force; and (ii) the presence of an inhomogeneous turbulent helicity. \section{Conclusions\label{sec:6}} The effect of kinetic helicity in the turbulent momentum transport was investigated. We assumed the generic statistical properties for the basic or lowest-order fields of homogeneous isotropic and non-mirrorsymmetric turbulence. 
It was shown that, as a higher-order or inhomogeneity contribution, the turbulent helicity gradient naturally enters the Reynolds-stress expression as the coupling coefficient of the mean vorticity and/or angular velocity. The inhomogeneous turbulent helicity coupled with the mean vorticity or rotation may contribute to the generation of large-scale flow. This mechanism was examined with the aid of DNSs of rotating turbulence with non-uniform helicity sustained by an external forcing. The numerical result showed a good correlation between the Reynolds stress and the helicity inhomogeneity coupled with the rotation in this simple flow geometry. This confirmed the inhomogeneous helicity effect in large-scale flow generation; a large-scale vortical motion is generated by inhomogeneous turbulent helicity in the presence of rotation. Unlike other vorticity-generation mechanisms such as baroclinicity, the present helicity effect can work even in incompressible turbulence. Since non-uniform turbulent helicity is easily generated by rotation in the presence of boundaries, this large-scale flow generation mechanism is expected to be ubiquitous in astro- and geo-physical phenomena. At the same time, density stratification, as well as rotation, is one of the main factors that produce turbulent helicity. In this sense, the large-scale flow generation due to the inhomogeneous helicity effect in compressible turbulence will provide an interesting subject for future investigation. \begin{acknowledgments} The authors would like to thank Jim Wallace and Fazle Hussain for invaluable comments on the experimental and numerical studies of helicity effects in turbulent flows. Their thanks are also due to Simon Candelaresi for useful discussions on the vortex dynamo. They are grateful to Robert Rubinstein and the anonymous referees for suggestions that substantially improved the presentation of the manuscript.
Support by the NORDITA Program on Magnetic Reconnection in Plasmas (2015) is also acknowledged. This work was supported by the Japan Society for the Promotion of Science (JSPS) Grants-in-Aid for Scientific Research (No.\ 24540228). \end{acknowledgments}
\section{Introduction} Neutron stars (NS) are gravitationally bound objects; therefore, a precise measurement of the mass and radius of a NS should provide a very fine probe for the equation of state (EOS) of dense matter. The first reasonable ideas about the composition of compact stars argued that matter under such extreme densities is mainly composed of neutrons with small fractions of protons and electrons. Further theoretical developments and modern experimental results opened the window to other possibilities. The density in the interior of a neutron star is about $3-10$ times the nuclear saturation density ($n_0 \sim 0.15\:$fm$^{-3}$). At such high densities, the matter in their interiors is likely to be in a deconfined and chirally restored quark phase \cite{weber}. The strange matter hypothesis was first proposed by Itoh and Bodmer \cite{itoh,bodmer} and was then improved by Witten \cite{witten}. It states that matter at extreme density and/or temperature is composed of almost equal numbers of up, down and strange quarks, called strange quark matter (SQM), which is also the ground state of strongly interacting matter at such extreme conditions. If this is true, then matter at such extreme conditions is likely to eventually convert to SQM. Such a high density scenario is present in the interior of a NS, and therefore normal nuclear matter there is likely to undergo a phase transition and convert to SQM. The strange matter hypothesis was first extensively studied in the simple MIT bag model by Farhi \& Jaffe \cite{farhi}. The conversion process and the phase transition were further analyzed by Alcock et al. \cite{alcock}. The phase transition in a NS may continue up to the surface of the star or may stop inside the star. Depending on where this transition stops, a quark star (QS) may be of two types, a strange star (SS) or a hybrid star (HS). SS are stars composed only of SQM, while a HS has a quark core and a hadronic exterior.
In the region between the quark core and the hadronic outer matter, there may exist a mixed phase region where both quarks and hadrons are present. Thus the observed pulsars are still very much model dependent. Recently, Demorest et al. \cite{Demorest10} found a new maximum mass limit for compact stars by measuring very precisely the mass of the millisecond pulsar PSR J1614-2230 to be $M=1.97 \pm 0.04\:$M$_\odot$. This value is much higher than any previously measured pulsar mass. This measurement has imposed a very severe constraint on the EOS of matter describing compact objects. Models of NS without hyperons can easily satisfy the new mass constraint. However, the presence of strangeness, either in the form of hyperons in nuclear matter or in the form of strange quarks in quark matter, cannot easily satisfy the mass limit. So, new studies have been carried out to make the hyperonic EOS and quark EOS satisfactorily explain the new mass constraint. Basically, to satisfy the new mass limit, one has to make the EOS stiffer, whereas strangeness usually softens it. In the hyperonic nuclear matter sector, recent studies have suggested that a stiffening of the hyperonic EOS is possible at par with the new experimental results \cite{Bednarek2011}. Authors have also revisited the role of vector meson-hyperon couplings \cite{Weissenborn2011b} and hyperon potentials \cite{Weissenborn2012} in calculating the maximum mass. Studies prior to the discovery of pulsar PSR J1614-2230 had suggested a stiffening of the quark matter EOS from the effect of strong interactions, such as one-gluon exchange or color superconductivity \cite{Lugones03,Ruester04,Horvath04,Alford07,Fischer10,Kurkela10a,Kurkela10b}, which can satisfy the new constraint. Ozel \cite{Ozel10} and Lattimer \cite{Lattimer10} gave the first studies on the implications of the new mass limit from PSR J1614-2230 for quark and hybrid stars in the quark bag model.
Recently, Bonanno \& Sedrakian \cite{Bonanno2012} have succeeded in obtaining massive HS. They employed a color-superconducting quark core and a very stiff hadronic EOS (like the NL3 hyperonic model or the GM3 nuclear model). In this work I perform an extensive study of hybrid star masses using a relativistic mean-field hadronic EOS together with a simple three-flavor MIT bag model quark EOS. The model of the HS has a mixed phase intermediate region. I also discuss how more precise astrophysical measurements of the mass and radius of neutron stars can help reveal the viability of exotic quark star models. The paper is organized as follows: In Section II, I describe the hadronic phase, and in Section III I describe the MIT bag model. The mixed phase EOS is constructed in Section IV, using the Glendenning construction. I present my plots and extensively describe my results for the EOS and the mass-radius curve in Section V. The maximum mass for the hybrid star is also calculated in this section. Finally, in Section VI, I summarize my results and draw important conclusions from them. \section*{Hadronic phase} At the outermost region of the star, at comparatively low densities, the matter is mainly composed of hadrons. I use the nonlinear relativistic mean field (RMF) model with hyperons (TM1 parametrization) to describe the hadronic phase EOS. In this model the baryons interact with mean meson fields \cite{boguta,glen91,sghosh,sugahara,schaffner}. The model Lagrangian density includes nucleons, the baryon octet ($\Lambda,\Sigma^{0,\pm},\Xi^{0,-}$) and leptons \begin{eqnarray} \label{baryon-lag} {\cal L}_H & = & \sum_{b} \bar{\psi}_{b}[\gamma_{\mu}(i\partial^{\mu} - g_{\omega b}\omega^{\mu} - \frac{1}{2} g_{\rho b}\vec \tau .
\vec \rho^{\mu}) \nonumber \\ & - & \left( m_{b} - g_{\sigma b}\sigma \right)]\psi_{b} + \frac{1}{2}({\partial_\mu \sigma \partial^\mu \sigma - m_{\sigma}^2 \sigma^2 } ) \nonumber \\ & - & \frac{1}{4} \omega_{\mu \nu}\omega^{\mu \nu}+ \frac{1}{2} m_{\omega}^2 \omega_\mu \omega^\mu - \frac{1}{4} \vec \rho_{\mu \nu}.\vec \rho^{\mu \nu} \nonumber \\ & + & \frac{1}{2} m_\rho^2 \vec \rho_{\mu}. \vec \rho^{\mu} -\frac{1}{3}bm_{n}(g_{\sigma}\sigma)^{3}- \frac{1}{4}c(g_{\sigma}\sigma)^{4} +\frac{1}{4}d(\omega_{\mu}\omega^{\mu})^2 \nonumber \\ & + & \sum_{L} \bar{\psi}_{L} [ i \gamma_{\mu} \partial^{\mu} - m_{L} ]\psi_{L}. \end{eqnarray} Leptons $L$ are non-interacting, but the baryons are coupled with the scalar $\sigma$ mesons, the isoscalar-vector $\omega_\mu$ mesons and the isovector-vector $\rho_\mu$ mesons. The model constants are fitted according to the experimental results on the bulk properties of nuclear matter \cite{glen91,schaffner}. The TM1 model explains nuclear saturation but cannot sufficiently model hyperonic matter, as it fails to reproduce the strong observed $\Lambda \Lambda$ attraction. This defect was remedied by Mishustin \& Schaffner \cite{schaffner} by the addition of the iso-scalar scalar $\sigma^*$ meson and the iso-vector vector $\phi$ meson, coupling only with the hyperons. The detailed EOS calculation can be found in the above mentioned references \cite{sugahara,schaffner}, and I do not repeat it here.
The total energy density takes the form \begin{eqnarray} \varepsilon & = & \frac{1}{2} m_{\omega}^2 \omega_0^2 + \frac{1}{2} m_{\rho}^2 \rho_0^2 + \frac{1}{2} m_{\sigma}^2 \sigma^2 + \frac{1}{2} m_{\sigma^*}^2 \sigma^{*2} + \frac{1}{2} m_{\phi}^2 \phi_0^2 +\frac{3}{4}d\omega_0^4+ U(\sigma) \nonumber \\ & & \mbox{} + \sum_b \varepsilon_b + \sum_l \varepsilon_l \,, \end{eqnarray} and the pressure can be represented as \begin{eqnarray} P= \sum_i \mu_i n_i - \varepsilon, \end{eqnarray} where $\mu_i$ and $n_i$ are the chemical potential and number density of particle species $i$. \section*{Quark phase} The quark phase is modeled according to the simple MIT bag model \cite{chodos}. The current masses of the up and down quarks are extremely small, e.g., $5$ and $10$ MeV respectively, whereas for the strange quark the current quark mass is not well established, and I vary it in my calculation. For the bag model the energy density and pressure can be written as \begin{eqnarray} \epsilon^Q &=& \sum_{i=u,d,s} \frac{g_i}{2 \pi^2} \int_0^{k_F^i} dk k^2\sqrt{m_i^2 + k^2}+ B_G\,,\label{edec}\\ P^Q &=& \sum_{i=u,d,s} \frac{g_i}{6\pi^2} \int_0^{k_F^i} dk \frac{k^4}{\sqrt{m_i^2 + k^2}}- B_G\,, \label{pdec} \end{eqnarray} where $k_F^i=\sqrt{\mu_i^2-m_i^2}$ and $g_i$ are the Fermi momentum and degeneracy factor of quarks of species $i$, and $B_G$ is the energy density difference between the perturbative vacuum and the true vacuum, {\rm i.e.}, the bag constant. In this sense $B_G$ can be considered as a free parameter. Both the hadronic and quark matter maintain baryon number conservation, and both are beta-equilibrated and charge neutral. \section*{Mixed phase} With the previously described hadronic and quark EOS, the Glendenning construction \cite{glen} gives the mixed phase regime. The mixed phase is the baryon density range where both quarks and hadrons are present. In the mixed phase the hadron and the quark phases are separately charged, but the mixed phase is charge neutral as a whole.
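As an illustrative numerical sketch of Eqs.~(\ref{edec}) and (\ref{pdec}) (not the code used in this work: assuming a common chemical potential $\mu$ for all flavors is a simplification that ignores beta equilibrium, while the quark masses and $B_G^{1/4}=170$ MeV are the values quoted in the text):

```python
import numpy as np

def _trapz(y, x):
    """Composite trapezoidal rule on a uniform grid."""
    return float(np.sum(y[1:] + y[:-1]) * (x[1] - x[0]) / 2.0)

def quark_eos(mu, masses=(5.0, 10.0, 150.0), B4=170.0, g=6):
    """Free-quark energy density and pressure of the MIT bag model.

    mu     : common quark chemical potential in MeV (simplification)
    masses : current quark masses (m_u, m_d, m_s) in MeV
    B4     : bag constant B_G^{1/4} in MeV
    g      : quark degeneracy factor (3 colors x 2 spins)
    Returns (epsilon, P) in MeV^4.
    """
    B = B4**4
    eps, P = B, -B                      # +B_G in epsilon, -B_G in P
    for m in masses:
        if mu <= m:
            continue                    # flavor not populated at this mu
        kF = np.sqrt(mu**2 - m**2)
        k = np.linspace(0.0, kF, 4001)
        E = np.sqrt(m**2 + k**2)
        eps += g / (2.0 * np.pi**2) * _trapz(k**2 * E, k)
        P   += g / (6.0 * np.pi**2) * _trapz(k**4 / E, k)
    return eps, P
```

A quick consistency check is the zero-temperature identity $\epsilon^Q + P^Q = \sum_i \mu_i n_i$ with $n_i = g_i (k_F^i)^3/(6\pi^2)$, in which the bag constant cancels.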
Thus the matter can be parametrized by the pair of electron and baryon chemical potentials $\mu_e$ and $\mu_n$. The pressures of the two phases are made equal to maintain mechanical equilibrium. To satisfy the chemical and beta equilibrium conditions, the chemical potentials of the different particles are related to each other. The Gibbs criterion gives the mechanical and chemical equilibrium between the two phases, and is written as \begin{equation} P_{\rm {HP}}(\mu_e, \mu_n) =P_{\rm{QP}}(\mu_e, \mu_n) = P_{\rm {MP}}. \label{e:mpp} \end{equation} The solution of the above equation gives the equilibrium chemical potentials of the mixed phase. As the two phases intersect, one can calculate the corresponding charge densities of the hadronic components $\rho_c^{\rm{HP}}$ and quark components $\rho_c^{\rm{QP}}$ separately in the mixed phase. The volume fraction $\chi$ occupied by quark matter in the mixed phase is given by \begin{equation} \chi \rho_c^{\rm{QP}} + (1 - \chi) \rho_c^{\rm{HP}} = 0. \label{e:vol} \end{equation} The mixed phase energy density $\epsilon_{\rm{MP}}$ and the number density $n_{\rm{MP}}$ can be written as \begin{eqnarray} \epsilon_{\rm{MP}} &=& \chi \epsilon_{\rm{QP}} + (1 - \chi) \epsilon_{\rm{HP}}, \\ n_{\rm{MP}} &=& \chi n_{\rm{QP}} + (1 - \chi) n_{\rm{HP}}. \label{e:mpep} \end{eqnarray} Therefore the EOS is now a system having a charge neutral hadronic phase at lower densities, a charge neutral mixed phase in the intermediate region and a charge neutral quark phase at higher densities.
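At a single $(\mu_e, \mu_n)$ point, the bookkeeping of Eqs.~(\ref{e:vol})--(\ref{e:mpep}) is straightforward; the following sketch uses made-up toy numbers for the two phases (they are not values computed from the EOSs above):

```python
def mixed_phase(rho_c_HP, rho_c_QP, eps_HP, eps_QP, n_HP, n_QP):
    """Glendenning mixing at one (mu_e, mu_n) point.

    Solves chi*rho_c_QP + (1 - chi)*rho_c_HP = 0 for the quark volume
    fraction chi (the two charge densities must have opposite signs),
    then mixes the energy and baryon densities linearly.
    """
    chi = rho_c_HP / (rho_c_HP - rho_c_QP)
    eps_MP = chi * eps_QP + (1.0 - chi) * eps_HP
    n_MP = chi * n_QP + (1.0 - chi) * n_HP
    return chi, eps_MP, n_MP

# toy example: positively charged hadronic phase, negatively charged quark phase
chi, eps_MP, n_MP = mixed_phase(rho_c_HP=0.03, rho_c_QP=-0.06,
                                eps_HP=350.0, eps_QP=500.0,
                                n_HP=0.30, n_QP=0.45)
```

With these toy charge densities the quark volume fraction comes out to $\chi = 1/3$, and the weighted charge of the mixed phase vanishes, as Eq.~(\ref{e:vol}) demands.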
\section*{Results} \begin{figure} \vskip 0.2in \centering \includegraphics[width=3.0in]{fig1.eps} \caption{Pressure as a function of energy density with bag pressures of $170$ and $180$MeV.} \label{fig1} \end{figure} \begin{figure} \vskip 0.2in \centering \includegraphics[width=3.0in]{fig2.eps} \caption{Pressure as a function of baryon density with bag pressures of $170$ and $180$MeV.} \label{fig2} \end{figure} The EOS is constructed to describe the properties of matter inside a NS; therefore, the EOS properties also determine the properties of the NS. The central region of the star has the maximum density (a few times $n_0$), and therefore the matter at the core is most likely to undergo a phase transition. Hence the central region would have stable strange matter (or a colour superconducting matter). As the density decreases radially outwards, some nuclear matter (nucleons) starts appearing, and so the intermediate region is likely to have a mixed phase. Much further outwards there is matter consisting only of nucleons. The crust, consisting mainly of free electrons and nuclei, completes the star structure. For the hadronic EOS, I assume a fixed TM1 parameter set, which satisfactorily explains the properties of hadronic matter at extreme conditions. I can control the quark EOS by changing the masses of the quarks and the bag constant. The masses of the light quarks are quite well bounded, and I take them to be $5$MeV (u) and $10$MeV (d). The mass of the s-quark is still not well established, but it is expected to lie between $100-300$MeV, and I vary it within this range. I also vary the bag constant ($B_G$) to regulate the mixed phase region. This parametrization of the EOS of the hadron and quark matter characterizes the matter in the mixed phase region. Using the Glendenning construction, I construct the mixed phase and plot curves of pressure against energy density, as seen in Fig.~\ref{fig1}.
In Fig.~\ref{fig1} I have plotted the mixed phase EOS with bag pressures of $170$MeV and $180$MeV. Strictly, the relation runs as ${B_G}^{1/4}=170\:$MeV, but for simplicity I will denote ${B_G}^{1/4}=170\:{\rm MeV}=B_g$. For this case the mass of the s-quark ($m_s$) is taken to be $150$MeV. With a constant bag pressure, values lower than this cannot generate a mixed phase region. I do not go above $B_g=180$MeV, as in that case the EOS becomes very flat and the maximum mass of the star becomes smaller. In the curves, the lower portion is the nuclear phase (dotted/dash line), the intermediate region is the mixed phase (bold line) and the higher region is the quark phase (dotted/dash line). Fig.~\ref{fig2} shows the pressure against baryon density for bag constants of $170$MeV and $180$MeV. The mixed phase starts at $0.2\:$fm$^{-3}$ and ends at $0.76\:$fm$^{-3}$ for bag pressure $170$MeV. For bag pressure $180$MeV the mixed phase region lies between $0.22\:$fm$^{-3}$ and $0.89\:$fm$^{-3}$. The curve with bag constant $170$MeV is much stiffer than the curve with bag pressure $180$MeV, because the bag pressure adds negatively to the matter pressure, making the effective pressure low. The above curves also show that as the bag pressure increases, the range of the mixed phase region also increases. As the variation of pressure with both energy density and baryon density is quite similar, from now on I only plot curves showing pressure as a function of energy density. \begin{figure} \vskip 0.2in \centering \includegraphics[width=3.0in]{fig8a.eps} \caption{Pressure against energy density plot with constant and varying bag pressure, having $B_g=170$MeV.} \label{fig4} \end{figure} With such a high bag pressure it is impossible to attain the mass limit set by PSR J1614-2230. Therefore I have to devise some other mechanism which would give a stiffer EOS, thereby increasing the maximum mass of the HS. For that, I assume a density dependent bag constant.
In the literature there are several attempts to understand the density dependence of $B_g$ \cite{adami,blaschke}; however, currently the results are highly model dependent and there is still no definite picture. I parametrize the bag constant in such a way that it attains a value $B_\infty$ asymptotically at very high densities. The range of values of $B_{\infty}$ obtained from experiments can be found in Burgio et al. \cite{burgio}, and I assume it to be $130$MeV, the lowest value mentioned there. With such assumptions I then construct a Gaussian parametrization given as \cite{burgio,ritam1207} \begin{eqnarray} B_{gn}(n_b) = B_\infty + (B_g - B_\infty) \exp \left[ -\beta \Big( \frac{n_b}{n_0} \Big)^2 \right] \:. \label{bag} \end{eqnarray} The lowest value of $B_{gn}$, which is its value at asymptotically high density in quark matter, is fixed at $130$MeV. The bag pressure quoted is the value of the bag constant at the start of the mixed phase region on the low density side ($B_g$ in the equation). As the density increases, the bag pressure decreases and reaches $130$MeV asymptotically; the rate of decrease is controlled by $\beta$. \begin{figure} \vskip 0.2in \centering \includegraphics[width=3.0in]{fig5.eps} \caption{Pressure against energy density plot with varying bag pressure, having $B_g=160$ and $150$MeV.} \label{fig5} \end{figure} \begin{figure} \vskip 0.2in \centering \includegraphics[width=3.0in]{fig6.eps} \caption{Pressure vs energy density plot showing explicitly the mixed phase region, for the varying bag pressure $B_g=150$MeV.} \label{fig6} \end{figure} In Fig.~\ref{fig4} I have plotted curves showing the difference in the slope of the curves with and without the variation of bag pressure (for $B_g=170$MeV). For the varying bag pressure the mixed phase region shrinks and becomes flatter, but the quark phase region becomes stiffer. The mixed phase region now only extends up to baryon density $0.53\:$fm$^{-3}$.
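Equation~(\ref{bag}) is a one-liner in practice; the sketch below uses the $B_g$ and $B_\infty$ values quoted in the text (as the quarter-root values in MeV), while the decrease rate $\beta$ is an illustrative choice, since its numerical value is not specified here:

```python
import numpy as np

def bag_constant(n_b, Bg=150.0, Binf=130.0, beta=0.5, n0=0.15):
    """Gaussian parametrization of the density dependent bag constant.

    n_b in fm^-3; Bg and Binf are B^{1/4} values in MeV; beta (assumed
    value) controls how fast B decreases from Bg towards Binf.
    """
    return Binf + (Bg - Binf) * np.exp(-beta * (n_b / n0) ** 2)
```

By construction the function returns $B_g$ at zero density, decreases monotonically, and saturates at $B_\infty = 130$ MeV well before the densities reached in the quark core.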
The change in the mixed phase region is about $30\%$. This is because, going to higher densities (or higher energy densities towards the core), the effective matter pressure increases with the decrease in bag pressure (the bag pressure adds negatively to the matter pressure). With such a density dependent bag constant I can have a significant mixed phase region with lower values of bag pressure. As shown in Fig.~\ref{fig5}, I can have a mixed phase region with bag pressure $B_g$ of $160$MeV and $150$MeV. For the $160$MeV EOS the s-quark mass is $m_s=150$MeV, and for the $150$MeV curve the s-quark mass is $m_s=300$MeV. With bag pressures $B_g$ of $160$ and $150$MeV, the mixed phase region is considerably smaller. For bag constant $160$MeV the mixed phase region starts at density $0.15\:$fm$^{-3}$ and ends at $0.36\:$fm$^{-3}$. With bag constant $150$MeV the mixed phase region starts at density $0.13\:$fm$^{-3}$ and ends at $0.3\:$fm$^{-3}$. In Fig.~\ref{fig6} I have separately plotted the EOS for $B_g=150$MeV, showing the mixed phase region clearly. As will be shown later, only with such a choice of quark matter parameters can I attain the mass limit set by PSR J1614-2230. Assuming the star to be stationary and spherical, the Tolman-Oppenheimer-Volkoff (TOV) equations \cite{shapiro} give the solution for the pressure $P$ and the enclosed mass $m$, \begin{widetext} \begin{eqnarray} {dP(r)\over{dr}} &=& -{ G m(r) \epsilon(r) \over r^2 } \, { \left[ 1 + {P(r) / \epsilon(r)} \right] \left[ 1 + {4\pi r^3 P(r) / m(r)} \right] \over 1 - {2G m(r)/ r} } \:, \\ {dm(r) \over dr} &=& 4 \pi r^2 \epsilon(r) \:, \end{eqnarray} \end{widetext} $G$ being the gravitational constant. Starting with a fixed central energy density $\epsilon(r=0) \equiv \epsilon_c$, I integrate radially outwards until the pressure on the surface equals the one corresponding to the density of iron.
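The outward integration just described can be sketched as follows. This is a minimal Euler scheme in geometrized units ($G=c=1$) with a dimensionless toy EOS (the linear massless-quark bag relation $\epsilon = 3P + 4B$ with $B=1$) and a tiny pressure cutoff standing in for the iron-surface condition; it is not the EOS or integrator used for the results below:

```python
import numpy as np

def tov_solve(P_c, eps_of_P, dr=1e-4, P_surf=1e-10):
    """Integrate the TOV equations outward from the center (G = c = 1).

    P_c      : central pressure
    eps_of_P : callable returning the energy density epsilon(P)
    Returns (R, M): the radius where P falls below P_surf, and m(R).
    """
    r, m, P = dr, 0.0, P_c
    while P > P_surf:
        eps = eps_of_P(P)
        # dP/dr = -(eps + P)(m + 4 pi r^3 P) / [r (r - 2m)]
        dP_dr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
        dm_dr = 4.0 * np.pi * r**2 * eps
        P += dP_dr * dr
        m += dm_dr * dr
        r += dr
    return r, m

# toy run: massless-quark bag EOS, eps = 3P + 4B with B = 1 (dimensionless)
R, M = tov_solve(P_c=5.0, eps_of_P=lambda P: 3.0 * P + 4.0)
```

In these scale-free units the solution can be rescaled to physical values once $B$ is fixed; any valid solution must keep the compactness $2GM/Rc^2$ below the Buchdahl bound of $8/9$.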
This gives the star's radius $R$ and the gravitational mass \begin{equation} M_G~ \equiv ~ m(R) = 4\pi \int_0^Rdr~ r^2 \epsilon(r) \:. \end{equation} For the NS crust, in the medium density range I add the hadronic EOS by Negele and Vautherin \cite{negele}, and for the outer crust I add the EOS by Feynman-Metropolis-Teller \cite{feynman} and Baym-Pethick-Sutherland \cite{baym}. \begin{figure} \vskip 0.2in \centering \includegraphics[width=3.0in]{fig7.eps} \caption{Mass-radius curve with constant and varying bag pressure, having $B_g=170$MeV.} \label{fig7} \end{figure} \begin{figure} \vskip 0.2in \centering \includegraphics[width=3.0in]{fig9.eps} \caption{Mass-radius curve with varying bag pressures, $B_g=160$ MeV and $150$ MeV.} \label{fig8} \end{figure} Fig.~\ref{fig7} shows the gravitational mass $M$ (in units of solar mass $M_{\odot}$) as a function of radius $R$, for constant and varying bag pressure $B_g=170$ MeV. As the bag pressure varies and decreases towards the center of the star (at higher densities), the curve becomes stiffer, as the effective matter pressure increases (the bag pressure being negative). I find that a flatter EOS corresponds to a flatter mass-radius curve, and therefore the maximum mass of the star with varying bag pressure is higher than that with the non-varying one. With such a varying bag constant I plot the mass-radius curves with $B_g=160$ MeV and $150$ MeV (Fig.~\ref{fig8}). With the same qualitative behavior, I find that the maximum mass of a mixed hybrid star obtained with $B_g=160$MeV is $1.84 M_{\odot}$. The maximum mass with $B_g=150$MeV and $m_s=300$MeV is $2.01 M_{\odot}$. The discovery of the high-mass pulsar PSR J1614-2230 \cite{Demorest10}, with a mass of about $1.97 M_{\odot}$, has set a stringent condition on the EOSs describing the interior of a compact star.
They \cite{Demorest10} quote typical values of the central density of J1614-2230, for the allowed EOSs, in the range 2$n_0$ - 5$n_0$, whereas consideration of the EOS-independent analysis of \cite{lattimer2005} sets the upper central density limit at $10n_0$. The maximum mass of a mixed phase EOS star with $m_s=150$MeV is calculated to be $1.84 M_{\odot}$. The maximum mass for the mixed hybrid star can be increased to $2.01 M_{\odot}$, with $m_s=300$MeV and a varying bag pressure of $B_g=150$MeV. Only such a choice of the quark matter parametrization can give rise to a star which satisfies the mass set by PSR J1614-2230. But with such a choice of parameters the mixed phase region is small. This maximum mass limit is for these hadronic and quark matter EOSs. Stiffer EOS sets (like the hadronic NL3 and the quark NJL model) for the mixed hybrid star can produce a much higher maximum mass \cite{lenzi}. From the figure it is also clear that the maximum mass of the star corresponds to a radius of about $10$km. Previous calculations have shown that the maximum mass of a NS has a radius greater than $12$km, whereas the maximum mass of a SS corresponds to a radius of less than $9$km. Therefore it is clear from my calculation that the mixed hybrid star has a radius, corresponding to the maximum mass, quite different from those of the neutron and strange star. They are not as compact as strange stars, and their radius lies between those of the nuclear and strange star. \section*{Summary and Conclusion} In this work I have studied the maximum mass of a hybrid star having a mixed phase region. With a hadronic matter EOS including hyperons, and remaining within the simple MIT bag model, I wanted to study which parameter values could give such high masses for a HS having a mixed phase region. The star has a dense quark core, a mixed phase intermediate region and a hadronic outer region. The hadronic and quark matter EOSs are simultaneously constructed according to the relativistic mean field approach and the MIT bag model.
The mixed phase is determined in accordance with the Glendenning construction. All the phases are in chemical and mechanical equilibrium, and they are charge neutral as a whole. With a constant bag pressure $B_g$ of $170$ or $180$MeV (and $m_s=150$MeV) I get an EOS with a considerable mixed phase region, but with such a parametrization the maximum mass of the star is about $1.5 M_{\odot}$. I therefore consider a density dependent bag pressure $B_g$, parametrized according to the Gaussian parametrization. The asymptotic value of the bag constant at high density is fixed at $130$MeV, which is its lowest value known from the experiments \cite{burgio}. With such a varying bag pressure I can have a mixed phase region with $B_g=160$MeV, but still the mass of the star is below $1.9 M_{\odot}$. To reach the mass limit set by PSR J1614-2230 for a mixed phase HS, I build the EOS with a bag pressure of $B_g=150$MeV, having s-quark mass $m_s=300$MeV. For such a choice of parameter values, the mixed phase region is small. Further lowering of the bag pressure is not possible, as then the mixed phase disappears. The maximum mass for a mixed hybrid star with the given set of parameters is $2.01 M_{\odot}$. Another important result of my calculation is that the HS with a mixed phase has a radius (for the maximum mass) quite different from that of a neutron or strange star, lying in between the two. After the discovery of PSR J1614-2230, setting the mass limit to $2 M_{\odot}$, new EOS models have been proposed. Weissenborn et al. \cite{Weissenborn2011a} showed that absolutely stable strange stars can have masses above $2 M_{\odot}$ if the effects of the strong coupling constant and color superconductivity are taken into account. Bednarek et al. \cite{Bednarek2011} argued that an EOS with hyperons having quartic terms involving the hidden-strangeness vector meson can reach such a limit. Masuda et al.
\cite{masuda} extended their calculation to hybrid stars, having a smooth crossover from hadronic to quark matter. For the mass to reach the maximum mass limit, they showed that the crossover has to take place at low density and the quark matter has to be strongly interacting. Using very stiff EOS sets (the hadronic NL3 and the quark NJL model), the maximum mass limit for the hybrid star can be raised much higher, as shown by Lenzi \& Lugones \cite{lenzi}. In my work, I have shown that the maximum mass limit can also be reached by a HS with a mixed phase, even with a simple hyperonic nuclear matter EOS and an MIT bag model quark matter EOS, if I assume a relatively low density dependent bag pressure. Observationally, a NS is characterised only by the signals coming to us from its surface. Developments have been made to measure accurately the masses of compact stars, but the same cannot be done for their radii. A reasonable measurement of the radius of a compact star could differentiate NS, SS and HS, as we have seen here that different EOSs of matter give different mass-radius relationships. As is clear from my calculation, and also from previous calculations, by suitable tuning of the parameters or by invoking new terms in the EOS calculations, the mass limit set by PSR J1614-2230 can be reached. Therefore, to have a full understanding of matter at extreme densities, we need results not only from astrophysical observations but also from earth based experiments.
\section{Introduction} Despite the tremendous progress made in communication, we are still in its infancy. As first envisioned in Claude E. Shannon's seminal work \cite{Shannon1948}, the current goal of communication is about how to accurately transfer and reconstruct information bits, while ignoring the \emph{meaning (or semantics)} of the bits and \emph{the effectiveness (or goal)} of transferring them \cite{Shannon1964}. Leveraging semantics in the post-Shannon communication era hinges upon discovering the hidden semantics of the transmitted bits from their raw data and understanding their impacts on accomplishing well-defined goals. This mandates the use of machine learning (ML) that has shown remarkable success in recognizing hidden patterns in various types of data, as well as in understanding complex relationships between the input data and its output representations \cite{Goyal2020,Scholkopf2021,Belfiore2021}. To tackle the semantic and effectiveness problems in the post-Shannon system design, a number of recent works can be found, which can be broadly categorized into two directions. One direction is \emph{semantics-empowered communication} whose goal is to transmit only the important data samples that can be reconstructed at the receiver \cite{Popovski2019,Kountouris2020,Maatouk2020,Yun2021,Uysal2021}. This however postulates that the semantics are already known and fixed, e.g., via constructing a knowledge base or a knowledge graph, which may not always be feasible particularly when the semantics vary over time and in different contexts. The other direction is referred to as \emph{emergent communication} that studies iterative communication between intelligent agents, through which semantics and their goal-oriented representations naturally emerge \cite{Foerster2016,Lazaridou2017,Lazaridou2020,Hoydis2020}. Nonetheless, while interesting, current solutions are based on heuristics and lack theoretical grounding. 
\begin{figure*} \centering \subfigure[System 1 SNC.]{\includegraphics[width=0.7\textwidth]{Fig_System1.pdf}\label{fig:system1}}\\ \subfigure[System 2 SNC.]{\includegraphics[width=0.7\textwidth]{Fig_System2.pdf}\label{fig:system2}} \caption{A schematic illustration of (a) System 1 semantics-native communication (SNC), whose semantic encoder conceptualizes and symbolizes a communicating entity and semantic decoder vice versa, and (b) System 2 SNC model, whose semantic encoder and decoder are instilled with contextual reasoning.} \label{Fig_Shannon_semantic} \vspace{-15pt} \end{figure*} To fill this void, the overarching goal of this article is to open up the black box of semantic communication between ML-driven agents (or between agent and human) by building a stochastic model of emergent semantics. To this end, inspired by linguistics and information theory, we propose a novel \emph{Semantics-Native Communication (SNC)} model that exploits the advantages of communicating semantics of entities enabled by ML. In the model, an entity of interest (e.g., abstract idea, physical phenomena and objects) is conceptualized and symbolized by a speaker as a \emph{semantic representation (SR)}, that can be decoded as the intended entity by its listener. As illustrated in Fig.~\ref{Fig_Shannon_semantic}, these \emph{semantics coding} operations are connected to the source/channel coding blocks of the classical Shannon communication model in a principled way, allowing us to analyze their in-depth operations and making inroads towards advancing the principles of semantic communication. Further building on and expanding the aforementioned SNC model, referred to as \emph{System~1 SNC}, we additionally put forward \emph{System~2 SNC}, spurred by the recent paradigm shift from System~1 ML to System~2 ML \cite{Bengio2019}. 
According to \cite{Bengio2019}, itself inspired by Daniel Kahneman's book \emph{Thinking, Fast and Slow} \cite{Kahneman2011}, System 1 ML is tantamount to fast and unconscious pattern recognition, as in current deep learning, under which the semantic coding in System 1 SNC (hereafter referred to as \emph{System 1 semantic coding}) falls. In contrast, System 2 ML is about slow and logical metacognition, which enables reasoning, planning and handling exceptions. Inspired by this, System 2 SNC incorporates \emph{System 2 semantic coding} to infuse reasoning into System 1 semantic coding, such that before every utterance each agent runs internal simulations by locally and iteratively reasoning about the communication context of its interlocutor, in a way that `\emph{I} think of \emph{You} thinking of \emph{Me} thinking of \emph{You} and so on.' Such \emph{contextual reasoning} or pragmatic reasoning \cite{Scholkopf2021,Zaslavsky2020,Wang2020,Kang2020} in System 2 semantic coding gives rise to a new emerging language between communicating agents. This language is specialized for its unique ways of semantic coding induced by different tasks and other communication contexts. As a result, compared to System 1 SNC that conveys all the semantics associated with the entity of interest, System 2 SNC significantly reduces the communication cost by sending only the most effective semantics to its interlocutors. \subsection{Related Works} \textbf{Semantics for Communication.}\quad The semantics and effectiveness problems were identified just a year after the inception of the Shannon communication model \cite{Shannon1964}. Spurred by advances in ML, the problem has recently been revisited through the lens of the \textit{semantics-empowered communication} framework that is broadly categorized into three directions.
The first direction is to filter out less important or uninformative data, and generate semantically meaningful information at the sender \cite{Popovski2019,Kountouris2020,Maatouk2020,Yun2021}. Here, the importance of data can be evaluated with goal-oriented metrics considering the effectiveness of information at the receiver, such as the age-of-information-based metrics (and variants) \cite{Kountouris2020,Maatouk2020}, attention-based similarities \cite{Yun2021}, and control-theoretic accuracy \cite{Popovski2019}. The second direction lies in embedding raw data into a lower-dimensional space, thereby compressing the information size. This includes the image-to-text conversion via Transformer \cite{Xie2021a,Xie2021b} and the transformation of non-linear system dynamics into linear ones via the Koopman operator with an auto-encoder \cite{Girgis2021}. Lastly, one can exploit a knowledge base as side information or a codebook for reducing the communication overhead, which is effective in speech-based communication, video streaming, and holographic communication \cite{Strinati2021,Shi2021}. Meanwhile, semantics has been taken into consideration in the context of \textit{emergent communication} frameworks to create new semantic vocabularies and syntax for ML-driven agents such that their communication becomes effective in their downstream tasks. Technically, these methods are based mostly on multi-agent reinforcement learning (MARL), where the interactions among agents induce the emergence of semantic communication. One prominent direction is the Differentiable Inter-Agent Learning (DIAL) framework under a continuous communication channel \cite{Foerster2016,Singh2019,Kim2019}, where different agents' ML models are concatenated into a single model that is trained using the backpropagation algorithm.
Another direction is Reinforced Inter-Agent Learning (RIAL) type methods under a discrete communication channel \cite{Foerster2016,Lowe2017,Lazaridou2017}, where the standard backpropagation algorithm is not applicable. In a nutshell, on the one hand, although the semantics-empowered communication frameworks are structured in a principled way, their applications are often restricted to specific data domains (e.g., images and natural languages) and/or environments (e.g., control systems), limiting their adoption in practice. On the other hand, although emergent communication frameworks are relatively flexible and applicable to most scenarios, their end-to-end operations are treated as a black-box ML process, calling for further improvements. Motivated by these opposite research directions and their limitations, in this work we develop a novel stochastic model of semantics-native communication (SNC), inspired by human cognition and linguistics. What is more, in stark contrast to existing semantics-empowered and emergent communication frameworks that commonly ignore communication context, we additionally propose System 2 SNC, which significantly improves communication efficiency and effectiveness by reasoning about the context of communicating agents, as elaborated next. \textbf{System 2 Contextual Reasoning for Communication.}\quad According to \cite{Bengio2019}, System 1 ML, which is commonly fast, intuitive and unconscious, is about recognizing patterns and correlations in raw data space. In contrast, System 2 ML, which is slow, logical and conscious, is about finding the underlying causation of the perceived correlations while reasoning on the data's SRs \cite{Scholkopf2021}. We are now at the cusp of two paradigm shifts, namely, the departure from model-based communication systems to System 1 ML aided communication systems \cite{Park2019,Park2020,Hoydis2020}, as well as from System 1 ML to System 2 ML \cite{Bengio2019}.
The next paradigm shift, towards System 2 ML based communication systems that harness logical reasoning for communication, is upon us. Among various types of reasoning \cite{Goyal2020,Scholkopf2021}, in this work we mainly focus on incorporating principles of \emph{contextual reasoning} (or pragmatic reasoning) into SNC. Contextual reasoning refers to the human ability to reason about the hidden meaning behind communicated utterances, such as linguistic ambiguity and the intention of the interlocutor, based on the local context of communication and social interactions \cite{Grice1975,Bell1995,Bell1999,Bell2001}. A well-known computational approach to contextual reasoning is the Rational Speech Act (RSA) framework \cite{Frank2012, Goodman2013, Kao2014, Goodman2016, Frank2016}, which formulates the speaker and listener as stochastic models and simulates human communication based on contextual reasoning. Recently, the connection between the RSA model and optimal transport was explored in \cite{Wang2020} and investigated from the information-theoretic rate-distortion point of view in \cite{Zaslavsky2020}. While these computational approaches are interesting, as the `RSA' literally implies, the model focuses mostly on the speaker, ignoring the listener and its interactions with the speaker. In contrast, we primarily focus on the contextual reasoning of both the speaker and the listener, as well as their interactions, and build a computational framework of such reasoning. \subsection{Contributions and Organization} The major contributions of this work are summarized as follows. \begin{itemize} \item We propose a novel stochastic model of System 1 SNC, and derive the expected SR bit length obtained after both semantic encoding and Shannon source encoding (see \textbf{Theorem 1}). \item Next, we propose System 2 SNC that infuses contextual reasoning into System 1 SNC.
System 2 SNC is derived by formulating contextual reasoning on top of System 1 SNC as an optimization problem and solving it. Furthermore, we prove the convergence of locally recurrent contextual reasoning to a common communication context (see \textbf{Theorem 2}). \item Finally, leveraging the proposed stochastic model, we show that the reliability of System 2 SNC increases with the number of meaningful concepts (see \textbf{Theorem~3}), and derive the expected SR bit length under System 2 SNC (see \textbf{Corollary~3}). \end{itemize} The rest of this article is organized as follows. The stochastic models of System 1 and System 2 SNC are formalized and analyzed in Sections \ref{sec:semantic_communication} and \ref{sec:RHSC}, respectively. The effectiveness of System 2 SNC is corroborated in Section \ref{sec:experiments} via experimental results based on the proposed stochastic model. Finally, this article is concluded by discussing several future research directions in Section \ref{sec:discussion}. \section{A Stochastic Model of System 1 Semantics-Native Communication}\label{sec:semantic_communication} The key element of SNC is \emph{semantics coding}, added before and after the source/channel coding of Shannon communication, as visualized in Fig.~\ref{Fig_Shannon_semantic}. Inspired by the dual process of human cognition \cite{Kahneman2011}, we consider the following two levels of semantic coding. System 1 semantic coding is assumed to be driven by System 1 ML, enabling each communicating agent to locally extract meaningful semantics from observations and to build an SR for accomplishing its desired task. System 2 semantic coding is rooted in System 2 ML, enabling contextual reasoning that improves the efficiency of semantic extraction and SRs, given a communication context.
The goal of this section is to build a stochastic model of System 1 semantic coding and thereby analyze the end-to-end operations of System 1 SNC, which in the next section will be used for analyzing System 2 SNC with System 2 semantic coding. To this end, we hereafter consider a point-to-point communication scenario between two agents who can communicate in both directions. For each direction, one agent becomes a \emph{speaker} and the other is its \emph{listener}. Consider a world that consists of a finite set $\mathcal{E}$ of \textit{entities} that can be referred to by the agents, such as abstract ideas, physical phenomena, and objects in the world. An entity cannot be directly recognized by the agents, but is instead recognized by sifting through its noisy \emph{observations} or raw data samples, where $\mathcal{O}$ denotes the set of all observations in the world. Like human communication, we consider that the speaker in SNC first forms its intention, and cognizes the \emph{intended entity} $e\in \mathcal{E}$ from a set $\mathcal{O}_e\subset \mathcal{O}$ of partial observations in the world. Then, the speaker maps $e$ into its SR, and communicates it to the listener. Each SR comprises a set of semantic symbols or vocabularies, where the finite set $\mathcal{S}$ of all semantic symbols is known to both speaker and listener. The goal of SNC is the successful recognition of the intended entity $e$ at the listener upon receiving the SR, which is equivalent to the successful inference of the speaker's intention by the listener. The SNC reliability is measured by the accuracy of the intended entity recognition at the listener. The speaker's mapping from the observations of entity $e$ to its SR is termed \emph{semantic encoding}, while the listener's mapping from the SR back to entity $e$ is termed \emph{semantic decoding}, as elaborated in the following subsections.
\begin{figure}\centering \includegraphics[width=0.7\textwidth]{Fig_triangles.pdf} \caption{An illustration of the multi-triangular semantic coding model in SNC, inspired by Ogden \& Richards' semantic triangle \cite{Ogden1923}.} \label{Fig_triangles} \vspace{-15pt} \end{figure} \subsection{System 1 Semantic Coding}\label{subsec:semantic_coding} In linguistics, human communication architectures are often explained using \emph{the triangle of meanings} \cite{Ogden1923,Cherry1966}. As visualized in Fig. \ref{Fig_triangles}, the vertices of this semantic triangle connect the three spaces of the entity (or observation of the entity), concept (or meaning), and symbol (or representation). The directed edge from an entity to its concept is called \emph{conceptualization}, and the edge from the concept to its symbol is called \emph{symbolization}, while their opposite directions imply deconceptualization and desymbolization, respectively. Adopting this model in System 1 SNC, we define a \emph{concept} as a unit of an agent's interpretation that partly or fully describes the intrinsic properties of an entity in the semantic domain, and a \emph{symbol} as a concept's representation that is the smallest meaningful unit in the semantic domain. We consider that each entity connotes one or multiple concepts from the finite set $\mathcal{C}$ of all concepts in the world, which is known to all agents. For simplicity, we assume that each concept is one-to-one mapped into a single symbol, and extending this to multi-symbol mapping is deferred to future work. Consequently, System 1 SNC can be summarized by a multi-triangular model with a shared intended entity, as illustrated in Fig. \ref{Fig_triangles}. Here, semantic encoding encompasses conceptualization and symbolization (i.e., entity-to-symbol mapping), while semantic decoding incorporates deconceptualization and desymbolization (i.e., symbol-to-entity mapping), as detailed next.
\subsubsection{Entity-Concept Mapping} Consider an agent who obtains an observation $o\in \mathcal{O}_e$ of an intended entity $e$. From the observation $o$, the agent finds multiple relevant concepts. The concepts found relevant by one agent are not always identical to those found by other agents carrying out different downstream tasks. To reflect this, we assume that each agent is in a task-specific \emph{state} $a\in \mathcal{A}$, where the number of tasks or states is finite. Let $X_c \in \{0,1\}$ be a binary random variable that indicates whether a concept $c\in \mathcal{C}$ is relevant ($X_c = 1$) or not ($X_c = 0$) to a certain entity or its observation, out of the finite set $\mathcal{C}$ of all concepts in the world. Then, we introduce a stochastic model to describe the relevance of concepts to an agent's observation, i.e., the \emph{observation-to-concept mapping (O2C)}, defined by: \begin{align} \label{eq:conceptualizer} p_{\scriptscriptstyle \mathbf{X}|O}(\vb*{x}|o;a)= \prod_{c\in\mathcal{C}}p_{\scriptscriptstyle X_c|O}(x_c|o;a),\quad \forall\vb*{x}=(x_1,x_2,\dots,x_{|\mathcal{C}|}) \in \{0,1\}^{|\mathcal{C}|}, \end{align} where $p_{\scriptscriptstyle X_c|O}(x_c|o;a)$ denotes a singular O2C about the concept $c$. The left-hand-side (LHS) of \eqref{eq:conceptualizer} states that the O2C is a conditional probability distribution of $\mathbf{X}$ given $o\in \mathcal{O}$, where $\mathbf{X} = (X_1,\dots,X_{|\mathcal{C}|})$ is the $|\mathcal{C}|$-tuple of concept-indicator random variables, and $O \in \mathcal{O}$ is the random variable of observations. The O2C model is parameterized by an agent's state $a\in\mathcal{A}$, reflecting the variation of conceptualization across agents. We consider that the relevance of concepts to a given observation is conditionally independent across concepts, yielding the right-hand-side (RHS) of \eqref{eq:conceptualizer}.
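To make the factorized O2C model in \eqref{eq:conceptualizer} concrete, the following Python sketch evaluates the product of per-concept Bernoulli terms and verifies that it forms a valid distribution over concept-indicator tuples. The concept names, the logistic form of the singular O2C, and the state-specific weights are hypothetical illustrations, not part of the model above.

```python
import numpy as np

# Hypothetical world: 3 concepts, observations encoded as 2-D feature vectors.
CONCEPTS = ["rabbit", "jumping", "ring"]

def singular_o2c(o, c_idx, a):
    """Singular O2C: probability that concept c_idx is relevant to
    observation o in task state a (toy logistic model, an assumption)."""
    w = a["weights"][c_idx]  # state-specific weight vector for this concept
    return 1.0 / (1.0 + np.exp(-np.dot(w, o)))

def o2c(x, o, a):
    """Factorized O2C: product over concepts of Bernoulli terms,
    evaluated at the concept-indicator tuple x."""
    p = 1.0
    for c_idx, x_c in enumerate(x):
        q = singular_o2c(o, c_idx, a)
        p *= q if x_c == 1 else (1.0 - q)
    return p

# A toy task state and a toy observation (hypothetical values).
state = {"weights": np.array([[2.0, 0.0], [0.0, 2.0], [-1.0, -1.0]])}
obs = np.array([1.0, 1.0])

# Probabilities over all 2^|C| concept-indicator tuples must sum to one.
total = sum(o2c(x, obs, state)
            for x in np.ndindex(*(2,) * len(CONCEPTS)))
print(round(total, 6))  # -> 1.0
```

Because the model factorizes across concepts, each singular O2C can be queried independently, while the joint distribution over all $2^{|\mathcal{C}|}$ indicator tuples remains normalized by construction.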
\begin{remark} (O2C as Quantization) Conceptualization can be seen as a process of quantizing an observation living in the infinite space of the world into concepts living in a finite space. The number $|\mathcal{C}|$ of concepts in the world determines the quantization resolution. With a larger $|\mathcal{C}|$, the concepts and their associated observations can be discriminated with higher fidelity. For a given number of concepts $|\mathcal{C}|$, the fidelity can be improved by concept disentanglement. \end{remark} \begin{remark} (O2C as ML) The O2C stochastic model can be interpreted as a stochastic soft-decision such as the normalized logits in knowledge distillation \cite{Hinton2015,Seo2020}, or the likelihood of a decision such as the stochastic policy in reinforcement learning (RL) \cite{Sutton2018}. Besides, the O2C defined over different agent states can be seen as a meta-learning model trained over different tasks or a multiple task-adaptive model, such as a slimmable neural network \cite{Yu2019} where each of its switchable model configurations is trained using an individual task. \end{remark} In reality, an observation often includes noise or \emph{nuisances} (e.g., rotations, translations, etc.) that are not relevant to the entity but nonetheless affect its observation. To model this, we introduce a random variable $N \in \mathcal{N}$ from an infinite set $\mathcal{N}$, referred to as a nuisance for the entity $E\in \mathcal{E}$, if the mutual information between $N$ and $E$ is zero, i.e., $\mathsf{I}(N;E) = 0$. To overcome such a nuisance, we consider that each agent observes the entity multiple times before conceptualization, as elaborated next.
First, consider a noise-free conceptualization described by an \emph{entity-to-concept mapping (E2C)}: \begin{align}\label{eq:e2c_cond} p_{\scriptscriptstyle \mathbf{X}|E}(\vb*{x}|e;a)=\prod_{c\in\mathcal{C}} p_{\scriptscriptstyle X_c|E}(x_c|e;a),\quad \forall\vb*{x}\in \{0,1\}^{|\mathcal{C}|}, \end{align} where $p_{\scriptscriptstyle X_c|E}(x_c|e;a)$ is a singular E2C about concept $c$. The LHS of \eqref{eq:e2c_cond} states that the E2C is a conditional probability distribution describing the relevant concepts to an entity $e\in\mathcal{E}$ found by an agent in state $a\in \mathcal{A}$. The RHS holds because of the conditional independence of the concept relevance given the entity and the agent's state. Suppose that the observation $O$ consists of a nuisance and an entity, i.e., $O = (N,E)$. Then, E2C can be retrieved by marginalizing the O2C in \eqref{eq:conceptualizer} as follows: \begin{align} \label{eq:E2C_1}\sum_{o\in\mathcal{O}_e}p_{\scriptscriptstyle \mathbf{X}|O}(\vb*{x}|o;a)p_{\scriptscriptstyle O}(o) &= \sum_{o\in\mathcal{O}}p_{\scriptscriptstyle \mathbf{X}|O}(\vb*{x}|o;a)p_{\scriptscriptstyle O|E}(o|e)\\ \label{eq:E2C_2}&= \sum_{n\in\mathcal{N}}p_{\scriptscriptstyle \mathbf{X}|N,E}(\vb*{x}|n,e;a)p_{\scriptscriptstyle N|E}(n|e)\\ \label{eq:E2C_3}&= p_{\scriptscriptstyle \mathbf{X}|E}(\vb*{x}|e;a), \end{align} where $\mathcal{O}_e \subset \mathcal{O}$ is the infinite subset of observations on entity $e\in\mathcal{E}$. Here, \eqref{eq:E2C_1} holds since the observations in $\mathcal{O}_e$ are exactly the realizations of $O$ given $E=e$. In \eqref{eq:E2C_2}, we use the assumption $O = (N,E)$, and \eqref{eq:E2C_3} follows from the marginalization over the nuisance set $\mathcal{N}$. As \eqref{eq:E2C_3} illustrates, nuisances can be offset by marginalizing over an infinite number of observations, so that O2C can ideally be recast as E2C.
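The nuisance marginalization in \eqref{eq:E2C_1}--\eqref{eq:E2C_3} can be sanity-checked numerically: averaging the O2C over many sampled observations of an entity approaches the exact E2C obtained by marginalizing over the nuisance. The sketch below uses a hypothetical discrete world (the nuisance distribution and the singular O2C table are arbitrary toy values, not from the text).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete world: 2 entities, 3 nuisances, 2 concepts; O = (N, E).
N_NUIS, N_ENT, N_CON = 3, 2, 2
p_n_given_e = np.array([[0.5, 0.3, 0.2],   # p_{N|E}(n|e) for entity e = 0
                        [0.2, 0.3, 0.5]])  # and for entity e = 1

# Hypothetical singular O2C table: p_{X_c|O}(1 | (n, e); a) for a fixed state a.
p_xc1 = rng.uniform(0.1, 0.9, size=(N_NUIS, N_ENT, N_CON))

def o2c(x, n, e):
    """Factorized O2C evaluated at the observation o = (n, e)."""
    q = p_xc1[n, e]
    return float(np.prod(np.where(np.array(x) == 1, q, 1.0 - q)))

def e2c_exact(x, e):
    """E2C obtained by marginalizing the O2C over the nuisance N."""
    return sum(p_n_given_e[e, n] * o2c(x, n, e) for n in range(N_NUIS))

def e2c_empirical(x, e, num_obs=50_000):
    """Monte Carlo estimate: empirical mean of the O2C over sampled
    observations of entity e (nuisances drawn from p_{N|E})."""
    ns = rng.choice(N_NUIS, size=num_obs, p=p_n_given_e[e])
    return float(np.mean([o2c(x, n, e) for n in ns]))

x, e = (1, 0), 0
gap = abs(e2c_exact(x, e) - e2c_empirical(x, e))
print(gap)  # small: the empirical mean converges to the exact marginal
```

With a finite number of observations the two quantities differ by a Monte Carlo error that shrinks as the sample count grows, which mirrors how a practical agent approximates E2C from finitely many observations.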
With a finite number $\big|\widetilde{\mathcal{O}}_e\big|$ of observations in practice, we instead assume that $p_{\scriptscriptstyle O|E}(o|e)$ in \eqref{eq:E2C_1} is uniform over $\widetilde{\mathcal{O}}_e$, and approximate E2C using the empirical mean of O2C, i.e., \begin{align}\label{eq:E2Capprox} p_{\scriptscriptstyle \mathbf{X}|E}(\vb*{x}|e;a) &\approx \frac{1}{\big|\widetilde{\mathcal{O}}_e\big|}\sum_{o\in\widetilde{\mathcal{O}}_e} p_{\scriptscriptstyle \mathbf{X}|O}(\vb*{x}|o;a), \end{align} where $\widetilde{\mathcal{O}}_e \subset \mathcal{O}_e$ is a finite set of observations on an entity $e\in \mathcal{E}$. Finally, given E2C, an agent can obtain a \emph{deconceptualizer (C2E)} that maps concepts back to an entity by using the Bayes rule: \begin{align} \label{eq:C2E}p_{\scriptscriptstyle E|\mathbf{X}}(e|\vb*{x};a)=\frac{p_{\scriptscriptstyle \mathbf{X}|E}(\vb*{x}|e;a) p_{\scriptscriptstyle E}(e)}{\sum_{e'\in\mathcal{E}}p_{\scriptscriptstyle \mathbf{X}|E}(\vb*{x}|e';a) p_{\scriptscriptstyle E}(e')},\quad \forall e\in\mathcal{E}, \end{align} where $p_{\scriptscriptstyle E}(e)$ is the prior distribution of entity $E$ at the agent. \subsubsection{Concept-Symbol Mapping} Even if the agents know the same concept, its representation at each agent may differ when the agents are developed in isolation, especially when they have learned the concept without supervision. A similar situation arises in humans: for example, two people thinking of the concept \textit{rabbit} might imagine differently shaped rabbits even though the rabbits are conceptually the same. In this respect, a \emph{concept-symbol mapping} is instrumental in harmonizing the agents by synchronizing the representation of concepts for semantic communication. To develop the concept-symbol mapping, there are two main issues that need to be addressed. First, a set of symbols $\mathcal{S}$ that is commonly known by the agents must be predetermined or emerge among them.
Second, a \textit{concept-to-symbol mapping (C2S)} $s:\mathcal{C} \rightarrow \mathcal{S}$ that maps concepts to symbols should be predetermined or emerge among the agents. Alternatively, a centralized unit can harmonize the agents by carefully designing the symbol set and communication protocol. However, if the agents must learn them in a distributed manner, the emergence of both the symbol set and the communication protocol falls under the framework of emergent communication \cite{Lazaridou2020}. Throughout this work, we suppose that for each concept there exists one symbol developed to describe it, and that the symbol set $\mathcal{S}$ is known to the agents. Moreover, C2S is assumed to be a deterministic one-to-one mapping $s:\mathcal{C}\rightarrow\mathcal{S}$ from the concept set $\mathcal{C}$ to the symbol set $\mathcal{S}$, where $s(c) \in \mathcal{S}$ is the symbol that represents the concept $c$; if $c \neq c'$ then $s(c) \neq s(c')$, and thereby $|\mathcal{C}| = |\mathcal{S}|$. Since the mapping is one-to-one, we also have a \textit{symbol-to-concept mapping (S2C)} $s^{-1}:\mathcal{S}\rightarrow\mathcal{C}$, which is the inverse function of the C2S such that $s^{-1}(s(c)) = c$. \subsection{Shannon Coding under System 1 Semantic Coding}\label{subsec:shannon_communication} As illustrated in Fig. \ref{Fig_Shannon_semantic}, in order to communicate the SR of an intended entity, traditional Shannon communication is applied after System 1 semantic coding in System 1 SNC.
To explain, the SR obtained via semantic encoding is first encoded by a source encoder followed by a channel encoder to gain minimality and sufficiency of the SR for a given source and channel, respectively, for efficient and effective communication.\footnote{We posit that source and channel coding are separately designed without loss of asymptotic optimality by assuming both the source and channel are discrete and memoryless \cite{Shannon1948,Vembu1994}.} Specifically, the length of the binary uniquely decodable source-coded SR of an intended entity quantifies the size of the SR in bits. The expected bit-length of SR in System 1 SNC can be derived as follows. \begin{theorem} (Bit-Length of SR in System 1 SNC)\label{prop:ShannonSNC} The expected bit-length of SR in System 1 SNC between agents in state $a\in\mathcal{A}$ is lower bounded as \begin{align}\label{eq:lowerboundSNC} \mathsf{L}_{\text{S$_1$}}(a) \geq -\sum_{c \in \mathcal{C}} p_{\scriptscriptstyle X_{c}}(1;a) \log_2 \frac{p_{\scriptscriptstyle X_c}(1;a)}{\sum_{c'\in\mathcal{C}}p_{\scriptscriptstyle X_{c'}}(1;a)}, \end{align} and upper bounded as \begin{align}\label{eq:upperboundSNC} \mathsf{L}_{\text{S$_1$}}(a) \leq \sum_{c \in \mathcal{C}} p_{\scriptscriptstyle X_{c}}(1;a) \left\lceil{-\log_2 \frac{p_{\scriptscriptstyle X_c}(1;a)}{\sum_{c'\in\mathcal{C}}p_{\scriptscriptstyle X_{c'}}(1;a)}}\right\rceil, \end{align} where $p_{\scriptscriptstyle X_c}(1;a) = \sum_{e\in\mathcal{E}}p_{\scriptscriptstyle X_c|E}(1|e;a)p_{\scriptscriptstyle E}(e)$ is the probability of extracting concept $c$ at an agent in state $a \in \mathcal{A}$ and $p_{\scriptscriptstyle E}(e)$ is the prior distribution of entity $E\in\mathcal{E}$. \end{theorem} \begin{IEEEproof} The proof is provided in Appendix \ref{appendix:proofofProp1}.
\end{IEEEproof} The above theorem states that the SR bit-length depends on the number of concepts extracted from the intended entity as well as the relative frequency of each concept's extraction among all entities. Meanwhile, although lossless source coding achieves both sufficiency and minimality of the SR in System 1 SNC over a noiseless channel, it lacks sufficiency in terms of communication accuracy under channel noise and fading. To overcome this, a channel encoder generates channel codes that achieve sufficiency by compromising minimality to ensure reliable communication of the source-coded SR. At the listener, the received channel code is successively decoded by the channel decoder and the source decoder to obtain the original symbolized concepts, followed by semantic decoding. \section{System 2 Semantics-Native Communication}\label{sec:RHSC} In human communication, rational speakers are self-conscious as to what they sound like, and change the way they talk depending on the target listener. In linguistics, this process is described as \emph{contextual reasoning} about the semantics in the local context of social interactions \cite{Bell1995, Bell1999, Bell2001}. Inspired by this, in this section we infuse contextual reasoning into System~1~SNC, and develop System 2 SNC with System 2 semantic coding. In System 2 semantic coding, before emitting utterances, each agent performs contextual reasoning that is equivalent to \emph{self-SNC} with a virtual agent that mimics and simulates its listener. The self-SNC procedure sifts through the semantics, yielding the most effective SR for its (physical) listener and improving communication efficiency, as we shall describe in the following subsections.
\begin{figure}[!t] \centering \includegraphics[width=0.7\textwidth]{Fig_example.pdf} \caption{A rabbit referential game example with three types of rabbit entities having different concepts.} \label{fig:example} \vspace{-15pt} \end{figure} \subsection{Contextual Reasoning for SNC} In linguistics, contextual reasoning is often computationally described using the RSA model \cite{Frank2012, Goodman2013, Kao2014, Goodman2016, Frank2016}. The RSA model is rooted in the Gricean view of language use \cite{Grice1975}, presuming that people are `rational' agents who can communicate effectively and efficiently based on reasoning. In a similar vein, the rationality of System 2 SNC lies in the beliefs that agents use to reason about each other for more effective and efficient SNC. To illustrate the importance of contextual reasoning, we begin by providing a motivating example of System 2 SNC. \subsubsection{A Rabbit Referential Game Example - System 1 vs. System 2 Agents} Consider an instance of a world with a speaker, a listener, and three different entities: a sitting \emph{rabbit} ($e_1$), a jumping \emph{rabbit} ($e_2$), and a \emph{rabbit} jumping through a ring ($e_3$). In addition, there exists a set $\mathcal{C} = \{c_\text{rabbit}, c_\text{jumping}, c_\text{ring}\}$ of the three atomic concepts that are equivalently developed at each agent, as illustrated in Fig.~\ref{fig:example}. Note here that `sitting' in $e_1$ is not a concept in $\mathcal{C}$ that exists in the world. Consider a communication-limited environment where the speaker must select only one of the given symbolized concepts, namely $\mathcal{S} = \{s_\text{rabbit}, s_\text{jumping}, s_\text{ring}\}$, to speak about an entity to the listener. Suppose that the speaker refers to $e_2$, and intends to communicate it to the listener. In this communication-limited environment, a n\"aive (System 1) speaker, who must speak all the symbolized concepts of the entity, cannot describe $e_2$ to its listener.
By contrast, a rational (System 2) speaker, assuming its listener is also rational, would select the symbol $s_\text{jumping}$ to speak about $e_2$. The rationale behind the selection is as follows. If the speaker utters $s_{\text{rabbit}}$, then a rational listener will infer $e_1$, since $c_{\text{rabbit}}$ is the only concept of $e_1$, and $s_{\text{rabbit}}$ is the most efficient and effective representation of $e_1$. Likewise, if the speaker utters $s_{\text{ring}}$, the listener will infer $e_3$, in that $c_{\text{ring}}$ is the unique concept of $e_3$. These two counter-examples justify the choice $s_\text{jumping}$ of the rational speaker. Meanwhile, upon receiving $s_{\text{jumping}}$, a n\"aive listener cannot identify which entity the speaker refers to unless it additionally receives $s_{\text{rabbit}}$. A rational listener, by contrast, can directly infer $e_2$ by reasoning in the same way as the rational speaker. In conclusion, the rational agents can exchange only a single symbol $s_{\text{jumping}}$ when referring to $e_2$, which significantly improves the communication efficiency without compromising accuracy. \subsubsection{Contextual Reasoning via self-SNC} As seen in the rabbit referential example, seeking the most meaningful concepts is crucial for improving the efficiency of SNC. One major challenge is that the meaningfulness of a concept is context-dependent, determined by its effectiveness in achieving a given goal of SNC. In the world of the rabbit referential example, all entities are associated with the concept $c_{\text{rabbit}}$, which can be effective for describing the entire world but is not meaningful for referring to a particular entity. On the contrary, $c_{\text{ring}}$ is a unique concept that is effective in referring to the entity $e_3$, but otherwise becomes meaningless.
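The rational speaker and listener of the rabbit example can be reproduced with the standard RSA recursion from the cited literature \cite{Frank2012,Goodman2013}. The sketch below is illustrative only (uniform entity priors and a single reasoning step are assumptions, and it is not yet the System 2 semantic coding model of this paper): a literal listener is derived from the entity-concept lexicon, a rational speaker reasons about that listener, and a rational listener reasons about that speaker.

```python
import numpy as np

# Rabbit referential game: rows = entities e1..e3, columns = symbols
# (s_rabbit, s_jumping, s_ring); entry 1 iff the concept holds for the entity.
lexicon = np.array([[1, 0, 0],   # e1: sitting rabbit
                    [1, 1, 0],   # e2: jumping rabbit
                    [1, 1, 1]])  # e3: rabbit jumping through a ring
prior = np.ones(3) / 3           # uniform prior over entities (assumption)

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

# Literal (System 1-like) listener: L0(e|s) proportional to lexicon(e,s)*p(e).
L0 = normalize(lexicon * prior[:, None], axis=0)

# Rational speaker: S1(s|e) proportional to L0(e|s),
# i.e., the speaker reasons about the literal listener.
S1 = normalize(L0, axis=1)

# Rational listener: L1(e|s) proportional to S1(s|e)*p(e),
# i.e., the listener reasons about the rational speaker.
L1 = normalize(S1 * prior[:, None], axis=0)

print(S1[1].argmax())     # best symbol for e2 -> 1 (s_jumping)
print(L1[:, 1].argmax())  # entity inferred from s_jumping -> 1 (e2)
```

As in the narrative above, the recursion concentrates the speaker's choice for $e_2$ on $s_\text{jumping}$ (since $s_\text{rabbit}$ and $s_\text{ring}$ are better spent on $e_1$ and $e_3$), and the rational listener inverts that reasoning to recover $e_2$ from the single symbol.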
Another challenge comes from the fact that the individual \emph{communication context} (CC) of every agent is unique, being formed by complex factors such as the configuration of the world and the agent's knowledge and beliefs. In SNC, the individual CC is affected by the agent's mapping between concepts and entities as well as by the prior beliefs about these concepts and entities, resulting in heterogeneous individual CCs for different agents. Contextual reasoning can overcome the aforementioned difficulties as follows. Assuming that the states of all agents are known, each agent can obtain the individual CCs, based on which it can apply conditional reasoning. A rational speaker, as an example, can reason that `if I were the rational listener, I would have inferred $e_3$ upon listening to $s_{\text{ring}}$; therefore, I should speak $s_{\text{ring}}$ to communicate $e_3$.' Similarly, a rational listener can reason that `if I were the rational speaker, I would have spoken $s_{\text{jumping}}$ to describe $e_2$; therefore, I will infer $e_2$ when listening to $s_{\text{jumping}}$.' Such contextual reasoning can be seen as a self-SNC between a rational agent and a virtual interlocutor built on the individual CC of its rational listener. As this self-SNC iterates, the individual CCs of the agent and its virtual interlocutor converge towards a focal point, referred to as a \emph{mutual CC}. The convergent mutual CC gives rise to the emergent E2C and C2E, i.e., System 2 semantic coding, as will be elaborated in the following subsection. \subsection{System 2 Semantic Coding}\label{subsec:RCRD} System 2 semantic coding is enabled by a \emph{rational conceptualizer (rE2C)} and a \emph{rational deconceptualizer (rC2E)} that emerge through self-SNC. While rE2C finds a meaningful concept to describe an entity, rC2E returns the intended entity given a meaningful concept. Here, we aim to build a stochastic model for rE2C and rC2E.
As opposed to E2C and C2E, rE2C and rC2E involve the self-SNC carried out locally at each agent, so that their stochastic models are parameterized by the states of both the speaker and the listener, as detailed next. Consider a speaker in state $a \in \mathcal{A}$ and a listener in state $a' \in \mathcal{A}$, and assume that both of them know each other's states. For a given entity $e\in\mathcal{E}$, the rE2C is denoted by \begin{align} p_{\scriptscriptstyle C|E}(c|e;a,a'),\;\forall c \in \mathcal{C}, \end{align} where $C\in \mathcal{C}$ is a random variable describing a meaningful concept. The rE2C is a conditional probability distribution of choosing $c \in \mathcal{C}$ as a meaningful concept to represent an entity $e \in \mathcal{E}$. On the other hand, for a given meaningful concept $c\in\mathcal{C}$, the rC2E is denoted by \begin{align}\label{eq:rC2E} p_{\scriptscriptstyle E|C}(e|c;a,a'),\; \forall e\in\mathcal{E}. \end{align} The rC2E is a conditional probability distribution for inferring $e\in\mathcal{E}$ from the meaningful concept $c\in \mathcal{C}$. In what follows, we show how to obtain the optimal rE2C and rC2E by self-SNC. Recall that the individual CC is affected by the mapping between concepts and entities, as well as the prior beliefs.
Let us define the individual CC of a speaker as a function of rE2C and the prior distribution $p_{\scriptscriptstyle E}(e)$ of entities $\forall e\in\mathcal{E}$, i.e., \begin{align}\label{eq:speakercontext} \mathsf{P}(e,c;a,a') = \frac{p_{\scriptscriptstyle C|E}(c|e;a,a') p_{\scriptscriptstyle E}(e)}{\sum_{e\in\mathcal{E}}\sum_{c\in\mathcal{C}}p_{\scriptscriptstyle C|E}(c|e;a,a') p_{\scriptscriptstyle E}(e)},\; \forall (e,c)\in\mathcal{E}\times\mathcal{C}, \end{align} and similarly, the individual CC of a listener as a function of rC2E and the prior distribution $p_{\scriptscriptstyle C}(c)$ of concepts $\forall c\in\mathcal{C}$, i.e., \begin{align}\label{eq:listenercontext} \mathsf{Q}(e,c;a,a') = \frac{p_{\scriptscriptstyle E|C}(e|c;a,a') p_{\scriptscriptstyle C}(c)}{\sum_{e\in\mathcal{E}}\sum_{c\in\mathcal{C}}p_{\scriptscriptstyle E|C}(e|c;a,a') p_{\scriptscriptstyle C}(c)},\; \forall (e,c)\in\mathcal{E}\times\mathcal{C}. \end{align} The individual CCs are both normalized to be in the form of a joint distribution over an entity $E\in\mathcal{E}$ and a meaningful concept $C\in\mathcal{C}$, parameterized by $a$ and $a'$. Now, denote by $\mathsf{M}(e,c;a,a')$, for all $(e,c)\in\mathcal{E}\times\mathcal{C}$, the mutual CC of the speaker and listener, and suppose that the speaker and listener independently and individually minimize an objective function defined by \begin{align}\label{eq:lossfunction} \mathsf{G} = \lambda\! \left[ \mathsf{H}(\mathsf{P},\mathsf{M}) \! - \! \frac{\mathsf{H}(\mathsf{P})}{\alpha} \right]\! +\! (1\!-\!\lambda) \!\left[ \mathsf{H}(\mathsf{Q},\mathsf{M}) \!-\! \frac{\mathsf{H}(\mathsf{Q})}{\beta} \right], \end{align} with respect to the individual CC of the speaker $\mathsf{P}$, that of the listener $\mathsf{Q}$, and their mutual CC $\mathsf{M}$, given parameters $\lambda \in (0,1)$ and $\alpha,\beta \geq 1$.
In \eqref{eq:lossfunction}, $\mathsf{H}(\mathsf{P}) = -\sum_{(e,c)}\mathsf{P}(e,c;a,a')\log \mathsf{P}(e,c;a,a')$ and $\mathsf{H}(\mathsf{Q}) = -\sum_{(e,c)}\mathsf{Q}(e,c;a,a')\log \mathsf{Q}(e,c;a,a')$ are the joint entropies of the entity and meaningful concept given the agents' states with respect to the individual CCs $\mathsf{P}$ and $\mathsf{Q}$, respectively. The term $\mathsf{H}(\mathsf{P},\mathsf{M}) = -\sum_{(e,c)}\mathsf{P}(e,c;a,a')\log \mathsf{M}(e,c;a,a')$ is the cross entropy of $\mathsf{M}$ and $\mathsf{P}$, and $\mathsf{H}(\mathsf{Q},\mathsf{M}) = -\sum_{(e,c)}\mathsf{Q}(e,c;a,a')\log \mathsf{M}(e,c;a,a')$ is the cross entropy of $\mathsf{M}$ and $\mathsf{Q}$. To illustrate the meaning of minimizing \eqref{eq:lossfunction}, first consider $\alpha = \beta = 1$, which reduces \eqref{eq:lossfunction} to a weighted sum of two KL-divergences, one between $\mathsf{P}$ and $\mathsf{M}$ and the other between $\mathsf{Q}$ and $\mathsf{M}$, such that \begin{align}\label{eq:lossfunctionKL} \mathsf{G}_{\scriptscriptstyle \alpha,\beta = 1} = \lambda\mathsf{D}_\text{KL}(\mathsf{P}||\mathsf{M}) + (1-\lambda)\mathsf{D}_\text{KL}(\mathsf{Q}||\mathsf{M}). \end{align} Minimizing \eqref{eq:lossfunctionKL} in terms of $\mathsf{P}$ and $\mathsf{Q}$, given $\mathsf{M}$, reduces the divergence between the two distributions $\mathsf{P}$ and $\mathsf{Q}$ and makes them move towards a focal point $\mathsf{M}$. Moreover, for fixed $\mathsf{P}$ and $\mathsf{Q}$, minimizing \eqref{eq:lossfunctionKL} in terms of $\mathsf{M}$ finds the division point on the line segment between $\mathsf{P}$ and $\mathsf{Q}$ (see Appendix \ref{appendix:proofofAM}). Meanwhile, minimizing \eqref{eq:lossfunctionKL} induces the maximization of both $\mathsf{H}(\mathsf{P})$ and $\mathsf{H}(\mathsf{Q})$.
For the fixed prior distributions $p_{\scriptscriptstyle E}(e)$ and $p_{\scriptscriptstyle C}(c)$, for all $e\in \mathcal{E}$ and $c\in \mathcal{C}$, respectively, maximizing $\mathsf{H}(\mathsf{P})$ increases the uncertainty of the rE2C conceptualization, while maximizing $\mathsf{H}(\mathsf{Q})$ increases the rC2E uncertainty. Thus, both are closely related to the performance of System 2 SNC, in the sense that the maximization of the former reduces communication efficiency by increasing the lower bound of the expected bit-length of SR (see Section \ref{subsec:shannon_communication}), while the maximization of the latter reduces the deconceptualization accuracy. Therefore, such factors are controlled by the hyperparameters $\alpha$ and $\beta$ in \eqref{eq:lossfunction}, which determine how rational the rE2C and rC2E are. For example, setting $\alpha>1$ promotes representational efficiency by suppressing the maximization of $\mathsf{H}(\mathsf{P})$, while setting $\beta >1$ promotes deconceptualization accuracy by suppressing the maximization of $\mathsf{H}(\mathsf{Q})$. However, setting $\alpha$ and $\beta$ too large does not always make System 2 SNC efficient and effective, since a large $\alpha$ may yield an rE2C that maps multiple different entities to the same meaningful concept, and a large $\beta$ may yield an rC2E that infers the same entity from multiple different meaningful concepts. Numerical experiments illustrating this aspect are shown in Section \ref{sec:experiments}. The minimization of \eqref{eq:lossfunction} is a variational problem, which can be solved by alternating minimization with respect to $\mathsf{P}$, $\mathsf{Q}$ and $\mathsf{M}$, in the order of $\mathsf{M} \!\rightarrow\! \mathsf{P} \!\rightarrow\! \mathsf{M} \!\rightarrow\! \mathsf{Q} \!\rightarrow\! \mathsf{M} \!\rightarrow\! \cdots$, as formalized in Theorem \ref{thm:theonly}.
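The entropy-suppressing role of the exponents can be seen in isolation: raising a distribution to a power greater than one and renormalizing, which is exactly the operation applied to the mutual CC in the alternating updates, concentrates the probability mass and lowers the Shannon entropy. A minimal sketch with purely illustrative numbers:

```python
import math

def sharpen(p, alpha):
    """Raise a distribution to the power alpha and renormalize."""
    powered = [x ** alpha for x in p]
    z = sum(powered)
    return [x / z for x in powered]

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

p = [0.5, 0.3, 0.2]   # illustrative distribution
for alpha in (1.0, 1.5, 2.0):
    print(alpha, round(entropy(sharpen(p, alpha)), 4))
# entropy drops as alpha grows: the exponent suppresses H(P),
# mirroring how alpha and beta temper uncertainty in the loss
```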
\begin{theorem} (Mutual CC Convergence) \label{thm:theonly} As the iteration step $t\rightarrow \infty$, the alternating iterations~of \begin{align} \label{eq:iteration_1}\mathsf{M}^{\scriptscriptstyle [t]}_{\scriptscriptstyle 1}(e,c;a,a') &= \lambda \mathsf{P}^{\scriptscriptstyle [t-1]}(e,c;a,a') \! + \! (1\!-\!\lambda)\mathsf{Q}^{\scriptscriptstyle [t-1]}(e,c;a,a'),\\ \label{eq:iteration_2}\mathsf{P}^{\scriptscriptstyle [t]}(e,c;a,a') &= \frac{\mathsf{M}^{\scriptscriptstyle [t]}_{\scriptscriptstyle 1}(e,c;a,a')^\alpha}{\sum_{(e,c)\in\mathcal{E}\times\mathcal{C}}\mathsf{M}^{\scriptscriptstyle [t]}_{\scriptscriptstyle 1}(e,c;a,a')^\alpha}, \\ \label{eq:iteration_3}\mathsf{M}^{\scriptscriptstyle [t]}_{\scriptscriptstyle 2}(e,c;a,a') &= \lambda \mathsf{P}^{\scriptscriptstyle [t]}(e,c;a,a') \! + \! (1\!-\!\lambda)\mathsf{Q}^{\scriptscriptstyle [t-1]}(e,c;a,a'),\; \text{and}\\ \label{eq:iteration_4}\mathsf{Q}^{\scriptscriptstyle [t]}(e,c;a,a') &= \frac{\mathsf{M}^{\scriptscriptstyle [t]}_{\scriptscriptstyle 2}(e,c;a,a')^\beta}{\sum_{(e,c)\in\mathcal{E}\times\mathcal{C}}\mathsf{M}^{\scriptscriptstyle [t]}_{\scriptscriptstyle 2}(e,c;a,a')^\beta} \end{align} converge to a common mutual CC $\mathsf{M}^{\scriptscriptstyle [*]} = \lim_{t\rightarrow\infty}\mathsf{M}_{\scriptscriptstyle 1}^{\scriptscriptstyle [t]} = \lim_{t\rightarrow\infty}\mathsf{M}_{\scriptscriptstyle 2}^{\scriptscriptstyle [t]}$ for all $e\in \mathcal{E}$ and $c\in\mathcal{C}$, which is a local minimum of \eqref{eq:lossfunction}. \end{theorem} \begin{IEEEproof} The proof is provided in Appendix \ref{appendix:proofofAM}. \end{IEEEproof} In other words, Theorem \ref{thm:theonly} states that the objective is non-increasing along the iterations, i.e., \begin{align} \mathsf{G}(\mathsf{P}^{\scriptscriptstyle [t-1]}, \mathsf{Q}^{\scriptscriptstyle [t-1]}) \geq \mathsf{G}(\mathsf{P}^{\scriptscriptstyle [t]}, \mathsf{Q}^{\scriptscriptstyle [t-1]}) \geq \mathsf{G}(\mathsf{P}^{\scriptscriptstyle [t]}, \mathsf{Q}^{\scriptscriptstyle [t]}) \quad \forall t\geq 1.
\end{align} Moreover, the solution of Theorem \ref{thm:theonly} is locally optimal, since \eqref{eq:lossfunction} is not jointly convex with respect to $\mathsf{P}$, $\mathsf{Q}$ and $\mathsf{M}$. For the special case when $\alpha=\beta=1$, we show the global optimality of the solution, as elaborated next. At $t=0$, rE2C and rC2E are initialized by E2C and C2E, respectively, i.e., \begin{align}\label{eq:initialrE2C} p^{\scriptscriptstyle [0]}_{\scriptscriptstyle C|E}(c|e;a,a') &= \frac{p_{\scriptscriptstyle X_c|E}(1|e;a)p_{\scriptscriptstyle E}(e)}{\sum_{e\in\mathcal{E}}p_{\scriptscriptstyle X_c|E}(1|e;a)p_{\scriptscriptstyle E}(e)} \quad \text{$\forall c\in\mathcal{C}$ and}\\ \label{eq:initialrC2E} p^{\scriptscriptstyle [0]}_{\scriptscriptstyle E|C}(e|c;a,a') &= \frac{p_{\scriptscriptstyle E|X_c}(e|1;a')p_{\scriptscriptstyle C}(c)}{\sum_{c\in\mathcal{C}}p_{\scriptscriptstyle E|X_c}(e|1;a')p_{\scriptscriptstyle C}(c)} \quad \text{$\forall e\in\mathcal{E}$}. \end{align} Substituting \eqref{eq:initialrE2C} and \eqref{eq:initialrC2E} respectively into the individual CCs \eqref{eq:speakercontext} and \eqref{eq:listenercontext} gives $\mathsf{P}^{\scriptscriptstyle [0]}$ and $\mathsf{Q}^{\scriptscriptstyle [0]}$ when initializing \eqref{eq:iteration_1} to \eqref{eq:iteration_4}. Denote by $\mathsf{G}^{\scriptscriptstyle [*]} = \mathsf{G}(\mathsf{P}^{\scriptscriptstyle [*]},\mathsf{Q}^{\scriptscriptstyle [*]})$ the value of \eqref{eq:lossfunction} at the found minimum, where $\mathsf{P}^{\scriptscriptstyle [*]} = \lim_{t\to\infty}\mathsf{P}^{\scriptscriptstyle [t]}$ and $\mathsf{Q}^{\scriptscriptstyle [*]} = \lim_{t\to\infty}\mathsf{Q}^{\scriptscriptstyle [t]}$ are the stationary points of $\mathsf{P}$ and $\mathsf{Q}$, respectively. Then, the mutual CC convergence in Theorem \ref{thm:theonly} can be recast as the individual CC convergence.
\begin{corollary}\label{cor:proofofPequalsQ} (Individual CC Convergence) For any parameters $\alpha, \beta \geq 1$ and $0 < \lambda < 1$, \begin{align} \mathsf{P}^{\scriptscriptstyle [*]}(e,c;a,a') = \mathsf{Q}^{\scriptscriptstyle [*]}(e,c;a,a') = \mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a') \end{align} holds for all $(e,c)\in\mathcal{E}\times\mathcal{C}$. \end{corollary} \begin{IEEEproof} The proof is provided in Appendix \ref{appendix:proofofPequalsQ}. \end{IEEEproof} \begin{remark}(Global Optimality) For $\alpha=\beta=1$, the loss function \eqref{eq:lossfunction} of Theorem \ref{thm:theonly} boils down to \eqref{eq:lossfunctionKL}, which is minimized when both KL-divergence terms become zero, since the KL-divergence is non-negative. The solution of Theorem \ref{thm:theonly} achieves this according to Corollary \ref{cor:proofofPequalsQ}, and is thus the global minimum. \end{remark} \noindent At the solution of Theorem \ref{thm:theonly}, note also that \eqref{eq:lossfunctionKL} is the $\lambda$-divergence between $\mathsf{P}^{\scriptscriptstyle [*]}$ and $\mathsf{Q}^{\scriptscriptstyle [*]}$. For $\lambda=0.5$, \eqref{eq:lossfunctionKL} yields the Jensen-Shannon divergence, whose minimum of zero is achieved by the solution of Theorem \ref{thm:theonly}.
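Both convergence claims can be checked numerically by transcribing \eqref{eq:iteration_1}-\eqref{eq:iteration_4} directly; the initial joint distributions below are illustrative. For $\alpha,\beta>1$ the two mutual-CC sequences merge with $\mathsf{P}\approx\mathsf{Q}\approx\mathsf{M}$, and for $\alpha=\beta=1$, $\lambda=0.5$ the Jensen-Shannon divergence between $\mathsf{P}$ and $\mathsf{Q}$ is driven to (numerically) zero:

```python
import math

def mix(p, q, lam):
    return [lam * a + (1 - lam) * b for a, b in zip(p, q)]

def power_norm(m, exp):
    powered = [v ** exp for v in m]
    z = sum(powered)
    return [v / z for v in powered]

def run(P, Q, lam, alpha, beta, iters=200):
    """Alternating updates (iteration_1)-(iteration_4) on a flattened (e, c) grid."""
    for _ in range(iters):
        M1 = mix(P, Q, lam)           # mutual CC before the speaker update
        P = power_norm(M1, alpha)     # speaker's individual CC update
        M2 = mix(P, Q, lam)           # mutual CC before the listener update
        Q = power_norm(M2, beta)      # listener's individual CC update
    return P, Q, M1, M2

def js_divergence(p, q):
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(x, y):
        return sum(a * math.log2(a / b) for a, b in zip(x, y) if a > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Illustrative initial individual CCs over a flattened 3x3 (entity, concept) grid.
P0 = [0.20, 0.05, 0.05, 0.05, 0.30, 0.05, 0.05, 0.05, 0.20]
Q0 = [0.10, 0.10, 0.10, 0.10, 0.20, 0.10, 0.05, 0.15, 0.10]

P, Q, M1, M2 = run(P0, Q0, lam=0.5, alpha=1.5, beta=1.5)
print(max(abs(a - b) for a, b in zip(M1, M2)))   # the two mutual CCs merge

P, Q, _, _ = run(P0, Q0, lam=0.5, alpha=1.0, beta=1.0)
print(js_divergence(P, Q))                        # (numerically) zero
```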
\begin{figure*} \centering \subfigure[]{\includegraphics[width=0.45\textwidth]{Fig_final_G_convergence.pdf}\label{fig:iteration}} \subfigure[]{\includegraphics[width=0.45\textwidth]{Fig_final_heatmap.png}\label{fig:heatmap}} \caption{Illustrations of (a) convergence of $\mathsf{G}$ through the recursion of rE2C and rC2E with different parameters $\alpha, \beta = 1.1, 1.5\text{ and } 2$, for some instance of the world with fixed $|\mathcal{E}| = |\mathcal{C}| = 100$ and $\lambda = 0.5$; (b) empirical distribution (row-stochastic) of the chosen meaningful concept for a given entity based on the stationary rE2C, with different parameters $\alpha = 1.1, 1.5 \text{ and } 2.0$, and fixed $|\mathcal{E}| = 10$, $|\mathcal{C}|=20$, $\beta = 1.5$, $\lambda = 0.5$.} \end{figure*} Next, the alternating iterations of \eqref{eq:iteration_1}-\eqref{eq:iteration_4} in Theorem \ref{thm:theonly} allow us to unravel the self-SNC operations between an agent and its virtual interlocutor as follows. \begin{corollary} ({self-SNC}) By recasting \eqref{eq:iteration_1}-\eqref{eq:iteration_4}, we can derive the rE2C at iteration step $t\geq 1$ \begin{align}\label{eq:RC} p_{\scriptscriptstyle C|E}^{\scriptscriptstyle [t]}(c|e;a,a') &= \frac{\mathsf{M}^{\scriptscriptstyle[t]}_{\scriptscriptstyle 1}(e,c;a,a')^\alpha }{\sum_{\forall c \in \mathcal{C}} \mathsf{M}^{\scriptscriptstyle[t]}_{\scriptscriptstyle 1}(e,c;a,a')^{\alpha}} \end{align} and the rC2E at iteration step $t$ \begin{align}\label{eq:RD} p_{\scriptscriptstyle E|C}^{\scriptscriptstyle [t]}(e|c;a,a') &= \frac{\mathsf{M}^{\scriptscriptstyle [t]}_{\scriptscriptstyle 2}(e,c;a,a')^\beta}{\sum_{\forall e \in \mathcal{E}} \mathsf{M}^{\scriptscriptstyle [t]}_{\scriptscriptstyle 2}(e,c;a,a')^\beta}, \end{align} for all $e \in \mathcal{E}$, $c \in \mathcal{C}$ given $a, a' \in \mathcal{A}$, which minimizes \eqref{eq:lossfunction} when $t\rightarrow \infty$.
\end{corollary} The obtained rE2C can be seen as a scoring function of the context-dependent meaningfulness of concepts for communicating an entity. On the other hand, the rC2E scores the correctness of the inferred intended entity given a meaningful concept. Furthermore, \eqref{eq:RC} and \eqref{eq:RD} form an iterative recursion, which implements the self-SNC between an agent and its virtual interlocutor as described earlier. An infinite recursion $t\rightarrow \infty$ of \eqref{eq:RC} and \eqref{eq:RD} induces the convergence of both mappings, as well as of the objective function \eqref{eq:lossfunction}, to a stationary point $\mathsf{G}^{\scriptscriptstyle [*]}$. For simplicity, the notations $p_{\scriptscriptstyle C|E}(c|e;a,a')$ for all $c\in\mathcal{C}$ and $p_{\scriptscriptstyle E|C}(e|c;a,a')$ for all $e\in\mathcal{E}$ are henceforth regarded as the stationary rE2C and rC2E, respectively, at the convergence of $\mathsf{G}^{\scriptscriptstyle [*]}$, unless stated otherwise. Experimental results showing the convergence dynamics of $\mathsf{G}$ with respect to the iteration step of the recursion are illustrated in Fig. \ref{fig:iteration} for different settings of the rationality parameters $\alpha, \beta > 1$. We find that the objective \eqref{eq:lossfunction} converges after some iterations, and that convergence is faster for larger $\alpha$ and $\beta$. However, as mentioned earlier, large values of $\alpha$ and $\beta$ do not always provide good solutions for reliable System 2 SNC; thus, $\alpha$ and $\beta$ should be carefully chosen. \begin{remark} As a special case, the rE2C and rC2E recursion can be recast as an iterative matrix scaling method termed Sinkhorn scaling \cite{Sinkhorn1964,Knopp1967} with parameters $\alpha = 1$, $\beta =1$ and $\lambda = 1$.
Since Sinkhorn scaling approximately solves the optimal transport (or earth mover's) problem \cite{Cuturi2013}, the recursion can be viewed as solving an optimal transport problem between the two priors $p_{\scriptscriptstyle E}$ and $p_{\scriptscriptstyle C}$, in which the rE2C is cast as a transport plan from the entity to the concept and the rC2E as that from the concept to the entity \cite{Wang2020}, which is basically an infinite recursion of the RSA model \cite{Frank2012, Goodman2013, Kao2014, Goodman2016, Frank2016}. \end{remark} \subsection{Reliable System 2 SNC with Multiple Meaningful Concepts}\label{sec:sequentialRHSC} The self-SNC based approach described above selects a single meaningful concept at a time; however, communicating multiple meaningful concepts can improve the reliability of System 2 SNC. To this end, there are two main directions: a planning-based method that selects multiple meaningful concepts at once, and a greedy method that selects meaningful concepts one-by-one. The former may give better performance thanks to optimal planning; however, it is computationally expensive since the selection of multiple meaningful concepts must be jointly designed. Moreover, it is hard to find the minimum number of concepts guaranteeing reliable communication, thereby calling for an exhaustive search to obtain the optimal plan. On the other hand, the latter is computationally cheap, since one meaningful concept is selected at a time and the algorithm can stop as soon as the communication reliability is guaranteed. Therefore, we adopt the latter greedy approach, which boils down to a problem of updating a pair of rE2C and rC2E after every meaningful concept selection and communication.
In brief, since the meaningful concepts communicated in the past affect both the beliefs (priors) of the speaker and listener about the intended entity and meaningful concepts in the present, rE2C and rC2E should be updated based on self-SNC under the updated beliefs. To illustrate, consider a speaker in state $a$ selecting two meaningful concepts $c_1, c_2 \in \mathcal{C}$ in a sequence to communicate with a listener in state $a'$. Suppose the stationary rE2C is obtained via self-SNC with initial priors $p_{\scriptscriptstyle E}(e)$, $\forall e\in\mathcal{E}$ and $p_{\scriptscriptstyle C}(c)$, $\forall c\in\mathcal{C}$. The first meaningful concept is chosen by $c_1 = \argmax_c p_{\scriptscriptstyle C|E}(c|e;a,a')$ and communicated in the form of a symbolized concept $s(c_1)$. At the listener, upon receiving $s(c_1)$, the prior distribution about the intended entity is updated by $p_{\scriptscriptstyle E}(e) \leftarrow p_{\scriptscriptstyle E|C}(e|c_1;a,a')$, $\forall e\in\mathcal{E}$. Furthermore, for the next meaningful concept selection, since $c_1$ henceforth is no longer meaningful, the prior distribution is updated by $p_{\scriptscriptstyle C}(c) \leftarrow \frac{p_{\scriptscriptstyle C}(c)}{\sum_{c\in \mathcal{C}\backslash c_1} p_{\scriptscriptstyle C}(c)}$, $\forall c \in \mathcal{C}\backslash c_1$ and $p_{\scriptscriptstyle C}(c_1) \leftarrow 0$. Then, rE2C and rC2E are also updated based on the updated prior distributions to select the next meaningful concept. 
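The two belief updates above are straightforward to implement; a small sketch with hypothetical numbers (four concepts, three entities):

```python
def update_beliefs(p_E_given_C, c1, p_C):
    """Belief updates after communicating concept c1 (hypothetical numbers)."""
    new_p_E = p_E_given_C[c1][:]        # listener's posterior over entities
    new_p_C = p_C[:]
    new_p_C[c1] = 0.0                   # c1 is henceforth not meaningful
    z = sum(new_p_C)
    new_p_C = [x / z for x in new_p_C]  # renormalize over remaining concepts
    return new_p_E, new_p_C

p_C = [0.25, 0.25, 0.25, 0.25]                    # uniform concept prior
p_E_given_C = [[0.7, 0.2, 0.1], [0.1, 0.6, 0.3],  # stationary rC2E rows
               [0.2, 0.2, 0.6], [0.3, 0.3, 0.4]]
p_E, p_C = update_beliefs(p_E_given_C, c1=1, p_C=p_C)
print(p_E, p_C)  # prior mass of c1 is redistributed over the remaining concepts
```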
\begin{algorithm}[t] \DontPrintSemicolon \algsetup{linenosize=\tiny} \small \KwInput{$\mathcal{E}$; $\mathcal{C}$; $p_{\scriptscriptstyle \mathbf{X}|E}(\vb*{x}|e;a)$, $p_{\scriptscriptstyle E|\mathbf{X}}(e|\vb*{x};a')$ $\forall{\vb*{x}}\in\{0,1\}^{|\mathcal{C}|}$, $\forall e\in\mathcal{E}$; $p_{\scriptscriptstyle E}(e)$, $\forall e\in\mathcal{E}$; $p_{\scriptscriptstyle C}(c)$, $\forall c \in\mathcal{C}$; $\alpha$, $\beta$, $\lambda$} \KwOutput{$K$ meaningful concepts $c_1$,$c_2$,\dots,$c_K$} Fix intended entity $\hat{e}\in\mathcal{E}$ of the speaker\; \For{$k=1$ to $K$} { \KwInitialize{$\mathsf{P}(e,c;a,a') \propto p_{\scriptscriptstyle X_c|E}(1|e;a)p_{\scriptscriptstyle E}(e)$, $\mathsf{Q}(e,c;a,a') \propto p_{\scriptscriptstyle E|X_c}(e|1;a')p_{\scriptscriptstyle C}(c)$, $\forall (e,c)\in\mathcal{E}\times\mathcal{C}$;} \Repeat{convergence} { $\mathsf{M}(e,c;a,a') \leftarrow \lambda \mathsf{P}(e,c;a,a') + (1-\lambda) \mathsf{Q}(e,c;a,a')$, $\forall (e,c)\in\mathcal{E}\times\mathcal{C}$\; $\mathsf{P}(e,c;a,a') \leftarrow \frac{\mathsf{M}(e,c;a,a')^\alpha}{\sum_{(e,c)\in\mathcal{E}\times\mathcal{C}}\mathsf{M}(e,c;a,a')^\alpha}$, $\forall (e,c)\in\mathcal{E}\times\mathcal{C}$\; $\mathsf{M}(e,c;a,a') \leftarrow \lambda \mathsf{P}(e,c;a,a') + (1-\lambda) \mathsf{Q}(e,c;a,a')$, $\forall (e,c)\in\mathcal{E}\times\mathcal{C}$\; $\mathsf{Q}(e,c;a,a') \leftarrow \frac{\mathsf{M}(e,c;a,a')^\beta}{\sum_{(e,c)\in\mathcal{E}\times\mathcal{C}}\mathsf{M}(e,c;a,a')^\beta}$, $\forall (e,c)\in\mathcal{E}\times\mathcal{C}$ } $p_{\scriptscriptstyle C|E}(c|e;a,a') \leftarrow \frac{\mathsf{M}(e,c;a,a')^\alpha}{\sum_{c\in\mathcal{C}}\mathsf{M}(e,c;a,a')^\alpha}$\; $p_{\scriptscriptstyle E|C}(e|c;a,a') \leftarrow \frac{\mathsf{M}(e,c;a,a')^\beta}{\sum_{e\in\mathcal{E}}\mathsf{M}(e,c;a,a')^\beta}$\; $c_k = \argmax_{c} p_{\scriptscriptstyle C|E}(c|\hat{e};a,a')$\; $\mathcal{C}_k = \mathcal{C}_{k-1}\backslash c_{k}$ ($\mathcal{C}_0 = \mathcal{C}$)\; $p_{\scriptscriptstyle E}(e) \leftarrow p_{\scriptscriptstyle E|C}(e|c_k;a,a')$, $\forall e\in\mathcal{E}$\; $p_{\scriptscriptstyle C}(c) \leftarrow \frac{p_{\scriptscriptstyle C}(c)}{\sum_{c\in \mathcal{C}_k} p_{\scriptscriptstyle C}(c)}$, $\forall c\in\mathcal{C}_k$, $p_{\scriptscriptstyle C}(c_k) \leftarrow 0$\; } \caption{Selecting $K$ Meaningful Concepts for System 2 SNC} \label{algo:1} \end{algorithm} Likewise, $k \geq 1$ meaningful concepts can be chosen by sequentially obtaining rE2Cs and corresponding rC2Es. The pseudo-code for obtaining sequential pairs of rE2C and rC2E is provided in Algorithm \ref{algo:1}. Meanwhile, Theorem \ref{thm:reliability} states that the accuracy of inferring the intended entity by rC2E improves as the number of communicated meaningful concepts increases. \begin{theorem}\label{thm:reliability} (Reliability Enhancement) In System 2 SNC, for a given intended entity $e\in\mathcal{E}$, the probability of successfully inferring $e$ with the stationary rC2E is non-decreasing with the number of communication rounds, i.e., \begin{align} p^{k-1}_{\scriptscriptstyle E|C}(e|c_{k-1};a,a') \leq p^{k}_{\scriptscriptstyle E|C}(e|c_{k};a,a') \end{align} for $k\geq 2$ and fixed parameters $\alpha,\beta\geq 1$ (excluding $\alpha=\beta = 1$) and $0<\lambda<1$ over communication rounds, where $p^{k}_{\scriptscriptstyle E|C}(e|c;a,a')$ and $c_k$ denote the $k$-th updated stationary rC2E and the $k$-th meaningful concept selected with the $k$-th updated stationary rE2C, respectively. \end{theorem} \begin{IEEEproof} The proof is provided in Appendix \ref{appendix:proofofreliability}. \end{IEEEproof} Note that the above theorem proves that the greedy approach is \emph{correct} in the sense that the greedy selection of meaningful concepts will eventually guarantee reliable communication.
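A compact Python sketch of the greedy loop of Algorithm \ref{algo:1} is given below. It is deliberately simplified: the listener's singular C2E is approximated by the Bayes posterior of the speaker's singular E2C, and the world, priors, and rationality parameters are all illustrative assumptions of this sketch rather than the paper's experimental setup:

```python
def normalize(v):
    z = sum(v)
    return [x / z for x in v] if z > 0 else v

def norm2d(m):
    z = sum(sum(row) for row in m)
    return [[v / z for v in row] for row in m]

def pow2d(m, exp):
    return [[v ** exp for v in row] for row in m]

def mix2d(p, q, lam):
    return [[lam * a + (1 - lam) * b for a, b in zip(rp, rq)]
            for rp, rq in zip(p, q)]

def self_snc(ext, p_E, p_C, alpha, beta, lam, iters=25):
    """Alternating self-SNC updates; returns the stationary rE2C and rC2E."""
    nE, nC = len(p_E), len(p_C)
    # Simplification: the listener's side is built from the same extraction
    # probabilities `ext` via Bayes (an assumption of this sketch).
    P = norm2d([[ext[e][c] * p_E[e] for c in range(nC)] for e in range(nE)])
    Q = norm2d([[ext[e][c] * p_E[e] * p_C[c] for c in range(nC)] for e in range(nE)])
    for _ in range(iters):
        P = norm2d(pow2d(mix2d(P, Q, lam), alpha))   # speaker update
        Q = norm2d(pow2d(mix2d(P, Q, lam), beta))    # listener update
    M = mix2d(P, Q, lam)
    rE2C = [normalize([M[e][c] ** alpha for c in range(nC)]) for e in range(nE)]
    rC2E = [normalize([M[e][c] ** beta for e in range(nE)]) for c in range(nC)]
    return rE2C, rC2E

def greedy_concepts(ext, e_hat, K, alpha=1.1, beta=1.1, lam=0.5):
    """Greedy selection of K meaningful concepts with belief updates."""
    nE, nC = len(ext), len(ext[0])
    p_E = [1.0 / nE] * nE                 # uniform prior over entities
    p_C = [1.0 / nC] * nC                 # uniform prior over concepts
    chosen = []
    for _ in range(K):
        rE2C, rC2E = self_snc(ext, p_E, p_C, alpha, beta, lam)
        c_k = max((c for c in range(nC) if p_C[c] > 0),
                  key=lambda c: rE2C[e_hat][c])
        chosen.append(c_k)
        p_E = rC2E[c_k][:]                # listener's updated entity belief
        p_C[c_k] = 0.0                    # c_k is no longer meaningful
        p_C = normalize(p_C)
    return chosen, p_E

# Illustrative singular-E2C extraction probabilities (4 entities, 6 concepts).
ext = [[0.9, 0.8, 0.1, 0.2, 0.1, 0.1],
       [0.9, 0.1, 0.8, 0.1, 0.2, 0.1],
       [0.1, 0.8, 0.1, 0.9, 0.1, 0.2],
       [0.1, 0.1, 0.2, 0.1, 0.9, 0.8]]
concepts, posterior = greedy_concepts(ext, e_hat=1, K=2)
print(concepts, posterior)
```

In this toy world, the first selected concept is shared by two entities, and the second selected concept disambiguates the intended entity, illustrating the reliability enhancement of the theorem.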
\subsection{Shannon Coding under System 2 Semantic Coding}\label{subsec:shannon_communication_RHSC} Let $K$ be the minimum number of meaningful concepts, i.e., $c_1,c_2,\dots,c_K$, that guarantees reliable communication in System 2 SNC. Then, the SR of an intended entity is the collection of $K$ symbolized meaningful concepts. As described in Section \ref{subsec:shannon_communication}, the SR is encoded via source and channel coding to ensure minimality and sufficiency for the physical transmission over a noiseless/noisy channel. Here, the (lossless) source coding depends on the distribution of the meaningful concept, which is characterized by the obtained stationary rE2C. To further illustrate, Fig. \ref{fig:heatmap} shows the empirical distribution of the meaningful concepts for a given intended entity under different values of the rationality parameter $\alpha$. Note that with increasing $\alpha$, the uncertainty of the meaningful concept for each entity is reduced, i.e., $\mathsf{H}(\mathsf{P})$ in \eqref{eq:lossfunction} is reduced. Reflecting this, the expected bit-length of SR in System 2 SNC can be derived as follows.
\begin{corollary} (Bit-Length of SR in System 2 SNC) The minimum expected bit-length of SR composed of $K$ symbolized meaningful concepts in System 2 SNC between a speaker in state $a\in\mathcal{A}$ and a listener in state $a'\in\mathcal{A}$ is lower bounded as \begin{align}\label{eq:irSNC_lower} \mathsf{L}_{\text{S$_2$}}(a,a') &\geq -\sum_{k = 1}^{K}\sum_{c\in\mathcal{C}} p^k_{\scriptscriptstyle C}(c;a,a') \log_2 p^k_{\scriptscriptstyle C}(c;a,a'), \end{align} and upper bounded as \begin{align}\label{eq:irSNC_upper} \mathsf{L}_{\text{S$_2$}}(a,a') &\leq \sum_{k = 1}^{K}\sum_{c\in\mathcal{C}} p^k_{\scriptscriptstyle C}(c;a,a') \left\lceil-\log_2 p^k_{\scriptscriptstyle C}(c;a,a')\right\rceil, \end{align} where \begin{align} p^k_{\scriptscriptstyle C}(c;a,a') = \sum_{e\in\mathcal{E}} p^k_{\scriptscriptstyle C|E}(c|e;a,a') p^{k-1}_{\scriptscriptstyle E|C}(e|c_{k-1};a,a') \end{align} is the marginalized rE2C over $\mathcal{E}$, and $c_{k-1}$ is the $(k-1)$-th selected meaningful concept. \end{corollary} \begin{IEEEproof} The proof is similar to the proof of Theorem \ref{prop:ShannonSNC} provided in Appendix \ref{appendix:proofofProp1}. \end{IEEEproof} In practice, it is hard to know in advance the number of meaningful concepts $K$ that need to be selected for reliable System 2 SNC. However, as mentioned earlier, the greedy algorithm selects the meaningful concepts one-by-one, thereby enabling early stopping of communication once the reliability is guaranteed. Thus, the number of meaningful concepts of an entity in System 2 SNC is upper bounded by the number of extracted concepts from the same entity in System 1 SNC. Such communication efficiency of System 2 SNC is further corroborated by the numerical results in the following section. \section{Experimental Results}\label{sec:experiments} This section provides experimental results to give more insights into the proposed System 1 and System 2 SNC concepts.
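Before turning to the large-scale experiments, the per-round bounds of the corollary above can be sanity-checked on a toy marginal: the entropy bounds the expected length from below, and the Shannon-code length $\sum_c p\,\lceil -\log_2 p\rceil \leq H + 1$ bounds it from above (the distribution below is illustrative):

```python
import math

def entropy(p):
    """H(p) in bits: the lower bound on the expected codeword length."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def shannon_code_length(p):
    """Expected length of a Shannon code, ceil(-log2 p) bits per symbol."""
    return sum(x * math.ceil(-math.log2(x)) for x in p if x > 0)

# Illustrative marginal distribution over concepts for one communication round.
p_C = [0.4, 0.3, 0.2, 0.1]
H = entropy(p_C)
L = shannon_code_length(p_C)
print(H, L)  # H <= L <= H + 1
```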
For the experiments, we fix the number of entities and concepts in the world to $|\mathcal{E}| = 100$ and $|\mathcal{C}| = 100$. The singular E2Cs, each of which indicates the probability distribution of whether a concept is extracted from an entity or not, are generated by the Dirichlet distribution with hyperparameter pair $(0.1,0.1)$. Note that the product of singular E2Cs yields the E2C \eqref{eq:e2c_cond}. Both prior distributions of entities and concepts are uniform at the beginning of the communication, and may vary during communication in System 2 SNC. For System 1 SNC, we introduce a criterion to decide the concept extraction from an entity, i.e., all concepts $c\in\mathcal{C}$ such that $p_{\scriptscriptstyle X_c|E}(1|e;a)\geq 0.9$ are extracted from a given intended entity $e\in\mathcal{E}$. We assume a binary erasure channel (BEC) between the agents with erasure probability $p_e$, where the erased bits are retransmitted based on feedback (e.g., hybrid ARQ) so as to achieve the BEC capacity $1-p_e$. In the experiments, the communication reliability $\gamma$ is the ratio of the number of the listener's correct inferences of the speaker's intended entity to the total number of communication rounds. \vspace{5pt}\noindent\textbf{Computation-Communication Trade-Off in System 2 SNC.}\quad Fig. \ref{fig:rSNCreliability} shows the impact of the number of self-SNC iteration steps on the reliability of System 2 SNC with a single meaningful concept. As shown in Fig. \ref{fig:rSNCreliability}, for a larger number of iteration steps, i.e., with larger computational effort, the reliability of System 2 SNC is higher, especially when both parameters $\alpha$ and $\beta$ approach $1$. This is related to the slow convergence of the self-SNC with $\alpha$ and $\beta$ close to $1$, as shown in Fig. \ref{fig:iteration}. Such a computation-communication trade-off can also be found in Fig. \ref{fig:srSNCreliability}.
For instance, with $\alpha, \beta = 1.1$, the communication reliability increases as the number of iteration steps increases. Moreover, the number of communicated meaningful concepts becomes smaller as the agents put more computational effort into self-SNC. \vspace{5pt}\noindent\textbf{Impact of $\alpha$ and $\beta$ on System 2 SNC.}\quad As previously mentioned and shown in Fig. \ref{fig:iteration}, for the objective $\mathsf{G}$ in \eqref{eq:lossfunction} with larger $\alpha$ and $\beta$, the alternating iteration \eqref{eq:iteration_1}-\eqref{eq:iteration_4} approaches the minimum faster. This trend can also be seen in Figs. \ref{fig:rSNCreliability} and \ref{fig:srSNCreliability}, where with larger $\alpha, \beta$, the reliability of System 2 SNC with a single meaningful concept stays constant after $20$ iteration steps, while with smaller $\alpha, \beta$ it varies as the number of iteration steps increases until convergence. However, Fig. \ref{fig:rSNCreliability_a} shows that the reliability $\gamma$ does not exceed $0.5$ when $\alpha,\beta=2$, even after convergence. On the other hand, when $\alpha$ and $\beta$ are smaller, though the iteration converges slowly, the communication reliability is high after convergence. Specifically, under our experimental setting, the reliability $\gamma$ of System 2 SNC approaches $1$ after $200$ iteration steps even with a single meaningful concept, when $\alpha$ and $\beta$ are close to $1$, as shown in Fig. \ref{fig:rSNCreliability_c}. This provides the insight that faster convergence is not always better. Rather, slower and steadier reasoning provides better rationality in SNC. It is also worth noting that at $\alpha, \beta = 1$, the communication reliability is low since minimizing \eqref{eq:lossfunctionKL} induces an uncertainty increase in both rE2C and rC2E.
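The BEC-with-feedback model from the experimental setup can be reproduced directly: each bit is resent until it is not erased, so the expected number of channel uses per information bit is $1/(1-p_e)$, the inverse of the BEC capacity. A minimal simulation (parameters illustrative):

```python
import random

def transmissions_per_bit(p_e, n_bits=20000, seed=0):
    """Simulate a BEC with perfect feedback: retransmit each erased bit."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_bits):
        sends = 1
        while rng.random() < p_e:   # bit erased -> retransmit
            sends += 1
        total += sends
    return total / n_bits

p_e = 0.2
avg = transmissions_per_bit(p_e)
print(avg, 1 / (1 - p_e))  # empirical average approaches 1/(1-p_e) = 1.25
```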
\begin{figure*} \centering \subfigure[$t = 20$]{\includegraphics[width=0.32\textwidth]{Fig_alpha_beta_depth_20.pdf}\label{fig:rSNCreliability_a}} \subfigure[$t = 100$]{\includegraphics[width=0.32\textwidth]{Fig_alpha_beta_depth_100.pdf}\label{fig:rSNCreliability_b}} \subfigure[$t = 200$]{\includegraphics[width=0.32\textwidth]{Fig_alpha_beta_depth_200.pdf}\label{fig:rSNCreliability_c}} \caption{Reliability $\gamma$ with respect to parameters $\alpha$ and $\beta$ ranging from $0.9$ to $2$ in System 2 SNC, for different self-SNC iteration depths $t = 20$, $100$ and $200$.} \label{fig:rSNCreliability} \end{figure*} \vspace{5pt}\noindent\textbf{Reliability-Latency Trade-Off in System 2 SNC.}\quad Fig. \ref{fig:srSNCreliability} illustrates the impact of the number of communication rounds on the reliability of System 2 SNC. Note that one meaningful concept is communicated in each round; the number of communication rounds therefore corresponds to the number of communicated meaningful concepts. Moreover, a larger number of communication rounds implies higher communication latency. In Fig. \ref{fig:srSNCreliability_a}, each communication round runs for $20$ iteration steps of \eqref{eq:iteration_1}-\eqref{eq:iteration_4}, and thus the reliability of System 2 SNC when $\alpha,\beta = 1.1$ is poor compared to the other settings. However, the reliability $\gamma$ approaches $1$ after $4$ communication rounds. As the number of iteration steps per communication round increases, the reliability of System 2 SNC with smaller $\alpha$ and $\beta$ improves owing to their convergence, as mentioned earlier. Furthermore, when $\alpha,\beta = 1.1$, the reliability $\gamma$ then approaches $1$ within two communication rounds, which means that allocating more computational effort improves the reliability-latency trade-off in System 2 SNC.
\begin{figure*} \centering \subfigure[$t=20$]{\includegraphics[width=0.32\textwidth]{Fig_sequential_alpha_beta_depth_20.pdf}\label{fig:srSNCreliability_a}} \subfigure[$t=100$]{\includegraphics[width=0.32\textwidth]{Fig_sequential_alpha_beta_depth_100.pdf}\label{fig:srSNCreliability_b}} \subfigure[$t=200$]{\includegraphics[width=0.32\textwidth]{Fig_sequential_alpha_beta_depth_200.pdf}\label{fig:srSNCreliability_c}} \caption{Reliability $\gamma$ versus communication rounds in System 2 SNC with different parameters $\alpha,\beta = 1.1$, $1.5$ and $2.0$, and different self-SNC iteration steps $t = 20$, $100$ and $200$ in each round.} \label{fig:srSNCreliability} \end{figure*} \vspace{5pt}\noindent\textbf{SR Length Comparison: System 1 SNC vs. System 2 SNC.}\quad The bit-length of the source-coded SR quantifies the size of the SR in bits. In this regard, Fig. \ref{fig:codelength_noiseless} compares the expected bit-length of SRs in System 1 and System 2 SNC with a noiseless channel between the communicating agents. Here, we consider cases in which SNC achieves a target reliability $\gamma = 1$. For System 1 SNC, the expected SR bit-length is related to the number of extracted concepts per entity; in the experiment, the SR length thus exceeds $300$ bits in System 1 SNC. On the other hand, the expected length of coded SRs in System 2 SNC is significantly smaller than that of System 1 SNC. One interesting aspect is that the average code length is always smaller when $\alpha,\beta = 2.0$ compared to the case $\alpha,\beta = 1.5$, even though it takes more communication rounds to reach reliability $\gamma = 1$, as shown in Fig. \ref{fig:srSNCreliability}. This is because, as also shown in Fig. \ref{fig:heatmap}, a larger value of $\alpha$ induces a reduction of uncertainty about which meaningful concepts should be chosen for System 2 SNC. Meanwhile, Figs.
\ref{fig:codelength_01} and \ref{fig:codelength_02} show the total SR length including retransmissions under a noisy channel scenario with $p_e = 0.1$ and $p_e = 0.2$, respectively. Since retransmissions with feedback can achieve the capacity of the BEC, i.e., $1-p_e$, the results show that the SR length required to achieve reliability $\gamma = 1$ in System 1 and System 2 SNC is $\frac{1}{1-p_e}$ times longer than that with a noiseless channel shown in Fig. \ref{fig:codelength_noiseless}. \begin{figure*} \centering \subfigure[$p_e = 0$ (noiseless)]{\includegraphics[width=0.32\textwidth]{Fig_codelength.pdf}\label{fig:codelength_noiseless}} \subfigure[$p_e = 0.1$]{\includegraphics[width=0.32\textwidth]{Fig_codelength_01.pdf}\label{fig:codelength_01}} \subfigure[$p_e = 0.2$]{\includegraphics[width=0.32\textwidth]{Fig_codelength_02.pdf}\label{fig:codelength_02}} \caption{SR length achieving reliability $\gamma = 1$ in System 1 and System 2 SNC, under noiseless and noisy ($p_e = 0.1,0.2$ binary erasure) channels, for different self-SNC iteration steps $t = 10$, $20$ and $100$.} \label{fig:codelength} \end{figure*} \vspace{5pt}\noindent\textbf{Robustness to Asynchronous Contextual Reasoning in System 2 SNC.}\quad Fig. \ref{fig:perturbation} illustrates the reliability of System 2 SNC under asynchronous self-SNC of the speaker and listener. Here, such asynchrony comes from the speaker and listener having different E2C and C2E when initializing their self-SNC procedures. For a fixed speaker's E2C and listener's C2E, the experiment considers a random perturbation on the speaker's E2C known at the listener and on the listener's C2E known at the speaker. The amount of perturbation is chosen randomly and uniformly over $[-\epsilon,+\epsilon]$ for each component of E2C and C2E.
When the perturbed E2C and C2E are directly used for initializing self-SNC at each agent, the reliability degrades significantly as $\epsilon$ increases, as shown by the blue curve (\emph{E2C and C2E without Quantization}) in Fig. \ref{fig:perturbation}. One way to ensure robustness against such perturbations is to quantize E2C and C2E before initializing the self-SNC at both speaker and listener. The quantization is done by applying the same decision criterion introduced for System 1 SNC to E2C and C2E. The yellow curve (\emph{E2C and C2E with Quantization}) in Fig. \ref{fig:perturbation} shows the reliability of System 2 SNC against perturbations when initializing the self-SNC with quantized E2C and C2E. Compared to the case without quantization, System 2 SNC with self-SNC initialized with quantized E2C and C2E is more robust to perturbations. \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{Fig_perturbation.pdf} \caption{Reliability $\gamma$ of System 2 SNC with different initializations versus perturbed communication context.} \label{fig:perturbation} \end{figure} \section{Conclusion and Discussion on Beyond Semantics-Native Communication}\label{sec:discussion} In this article, we first proposed System 1 SNC, a novel stochastic model of communication among agents. Moreover, by instilling reasoning into System 1 SNC, we developed a novel System 2 SNC model that extracts effective semantics for a given listener's communication context. Based on the proposed stochastic model, we numerically showed that System 2 SNC significantly reduces the SR bit-length while achieving high reliability. Our proposed SNC framework and its stochastic model can be extended towards developing more effective SRs, e.g., by considering \emph{invariance} against nuisance factors or the \emph{causal structure} of the agent tasks for more effective communications.
The proposed theoretical framework can also be extended towards more practical scenarios. To illustrate this potential, we conclude the article with two promising directions for future extensions of SNC. \subsection{Channel Coding Theory for SNC}\label{subsec:coding_discussion} One factor that the SNC reliability depends on is the extent to which the O2C approximates the noise-free E2C. Let us hypothetically assume that there exists an invariant concept indicator $\vb*{\hat{x}} = (\hat{x}_1,\hat{x}_2,\dots,\hat{x}_{|\mathcal{C}|}) \in \{0,1\}^{|\mathcal{C}|}$ that represents the entity $e$. Now consider a scenario for conceptualizing an observation $O\in\mathcal{O}_e$ at an agent in state $a\in\mathcal{A}$ as \begin{align} \label{eq:semantic_channel_1}p_{\scriptscriptstyle \mathbf{X}|O}(\vb*{x}|o;a) &= p_{\scriptscriptstyle \mathbf{X}|\mathbf{\hat{X}}}(\vb*{x}|\vb*{\hat{x}};a)\\ \label{eq:semantic_channel_2}&= \prod_{c\in\mathcal{C}}p_{\scriptscriptstyle X_c|\hat{X}_c}(x_c|\hat{x}_c;a), \end{align} for all $\vb*{x} \in \{0,1\}^{|\mathcal{C}|}$, where the observation is replaced by the invariant concept indicator in the RHS of \eqref{eq:semantic_channel_1}, and \eqref{eq:semantic_channel_2} follows from the conditional independence of the concept extractions given an entity. For better intuition, suppose the singular C2E $p_{\scriptscriptstyle X_c|\hat{X}_c}(x_c|\hat{x}_c)$ is identically distributed for all $c\in\mathcal{C}$, i.e., there is a common distribution $p_{\scriptscriptstyle X|\hat{X}}(x|\hat{x})$ for random variables $X,\hat{X}\in\{0,1\}$ and $x,\hat{x}\in\{0,1\}$.
Then, $p_{\scriptscriptstyle X|\hat{X}}(x|\hat{x})$ is analogous to the information-theoretic discrete memoryless channel (DMC), and \eqref{eq:semantic_channel_2} can be seen as the joint distribution of the received sequence $X_1,X_2,\dots,X_{|\mathcal{C}|}$ obtained by transmitting the sequence $\hat{x}_1,\hat{x}_2,\dots,\hat{x}_{|\mathcal{C}|}$ over $|\mathcal{C}|$ consecutive uses of a noisy channel with a fixed distribution. Hence, by viewing E2C as a kind of noisy channel, one can apply techniques from channel coding theory, such as error correction or detection schemes, to obtain higher communication reliability (or diversity gain) in SNC. One simple approach is repetition coding, i.e., repeatedly feeding an observation (or a set of observations) of the same target entity into semantic coding, as explained in \eqref{eq:E2Capprox}. \subsection{Theory-of-Mind between Heterogeneous Agents for SNC} Humans have an innate ability to infer and represent others' mental states, such as knowledge, intentions, and beliefs. For example, one can tell how others will think or act when one is in the same situation or has been through it in the past. Instilling the so-called \emph{Theory of Mind (ToM)} into ML-driven agents has recently been studied in the ML field \cite{Rabinowitz2018,Choudhury2019,Reddy2020}, and is instrumental in building upon and going beyond the SNC proposed in this article. In System 1 SNC, communicating agents should recognize each other's state to reach agreement on using a specific parameterized E2C and C2E, unless they are certainly in the same state. Likewise, in System 2 SNC, the agents should know each other's state to run self-SNC at both ends. One way of inferring the state of other agents is to observe them directly or to parse past communication trajectories.
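As a toy numerical illustration of this repetition-coding idea (our own sketch, not the paper's experiments), the snippet below models the singular concept channel as a binary symmetric channel with flip probability $p$ and decodes $n$ repeated observations of the same entity by majority vote; the flip probability and repetition factors are assumed values chosen for illustration only.

```python
import random

# Illustrative sketch (not the paper's code): treat the per-concept channel
# p(x | x_hat) as a binary symmetric channel with flip probability p_flip,
# and decode n_obs repeated observations of the same entity by majority vote.
def noisy_concept(x_true, p_flip, rng):
    return x_true ^ int(rng.random() < p_flip)

def majority_decode(x_true, p_flip, n_obs, rng):
    votes = sum(noisy_concept(x_true, p_flip, rng) for _ in range(n_obs))
    return int(2 * votes > n_obs)

rng = random.Random(0)
trials = 20000
err = {n: sum(majority_decode(1, 0.2, n, rng) != 1 for _ in range(trials)) / trials
       for n in (1, 3, 7)}
print(err)  # the empirical error rate shrinks as the repetition factor grows
```

The diversity gain mentioned above shows up directly: the decoding error rate drops from roughly the raw flip probability at $n=1$ to a much smaller value at $n=7$.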
For example, let $\pi_{\scriptscriptstyle \mathbf{X}|E}(\vb*{x}|e)$ for all $\vb*{x}\in\{0,1\}^{|\mathcal{C}|}$ be the empirical distribution of the interlocutor's E2C of an entity $e\in\mathcal{E}$, obtained by parsing past communication trajectories. Then, the state of the interlocutor can be estimated as \begin{align} \hat{a} = \argmin_{a\in\mathcal{A}} \sum_{\vb*{x}\in\{0,1\}^{|\mathcal{C}|}}\pi_{\scriptscriptstyle \mathbf{X}|E}(\vb*{x}|e)\log\frac{\pi_{\scriptscriptstyle \mathbf{X}|E}(\vb*{x}|e)}{p_{\scriptscriptstyle \mathbf{X}|E}(\vb*{x}|e;a)}, \end{align} where the RHS selects the agent state that minimizes the KL divergence (or another metric) between the empirical E2C and the E2Cs parameterized by the state set $\mathcal{A}$. However, if the agents have never experienced each other's state, they need to use the empirical E2C directly. In this case, the empirical E2C needs to be more precise than in the above approach, since it is used directly for communication, and errors might propagate, especially when using System 2 SNC. Meanwhile, if both communication ends have the same (or a similar) parametric family of E2Cs, the agents can produce a commonsense E2C, as well as a commonsense C2E, that does not depend on the agent state. One simple way is to marginalize the E2C and C2E over the agent set $\mathcal{A}$. For example, supposing the agent states are distributed uniformly, for a given parametric family of E2Cs within $\mathcal{A}$, the commonsense E2C is \begin{align} p_{\scriptscriptstyle \mathbf{X}|E}(\vb*{x}|e) = \frac{1}{|\mathcal{A}|}\sum_{a\in\mathcal{A}}p_{\scriptscriptstyle \mathbf{X}|E}(\vb*{x}|e;a),\; \forall \vb*{x}\in\{0,1\}^{|\mathcal{C}|}. \end{align} Using SNC based on the commonsense E2C and C2E may facilitate coordination and communication between agents that have never met before. However, its reliability depends on the similarity between the parametric families of E2Cs available at the speaker and the listener, respectively.
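Both constructions above, the KL-based state estimate and the commonsense E2C obtained by marginalizing over states, can be sketched in a few lines. The toy parametric family, the state names, and all probability values below are purely illustrative assumptions, not quantities from the paper.

```python
import math

# Sketch: estimate the interlocutor's state by matching the empirical E2C
# against a parametric family {p(x|e; a)} via KL divergence, and build a
# "commonsense" E2C by averaging the family over uniformly distributed states.
# Distributions are dicts mapping concept-indicator tuples to probabilities.
def kl(pi, p):
    return sum(pi[x] * math.log(pi[x] / p[x]) for x in pi if pi[x] > 0)

def estimate_state(pi_emp, family):
    return min(family, key=lambda a: kl(pi_emp, family[a]))

def commonsense_e2c(family):
    states = list(family)
    support = family[states[0]]
    return {x: sum(family[a][x] for a in states) / len(states) for x in support}

family = {  # hypothetical E2C of one entity under two agent states
    "a1": {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4},
    "a2": {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1},
}
pi_emp = {(0, 0): 0.12, (0, 1): 0.18, (1, 0): 0.32, (1, 1): 0.38}
print(estimate_state(pi_emp, family))   # the state whose E2C is KL-closest
print(commonsense_e2c(family))          # state-independent mixture E2C
```

Since the empirical distribution is a small perturbation of the state-\texttt{a1} E2C, the estimator returns that state, while the commonsense E2C is simply the uniform mixture of the two family members.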
\appendices \section{Proof of Theorem \ref{prop:ShannonSNC}}\label{appendix:proofofProp1} The relative frequency of extracting concept $c$ from the entities is \begin{align}\label{eq:frequency} f_c = \frac{p_{\scriptscriptstyle X_c}(1;a)}{\sum_{c'\in\mathcal{C}}p_{\scriptscriptstyle X_{c'}}(1;a)}, \end{align} for all $c\in\mathcal{C}$, with $\sum_{c\in\mathcal{C}} f_c = 1$, where $p_{\scriptscriptstyle X_c}(x_c;a) = \sum_{e\in\mathcal{E}}p_{\scriptscriptstyle X_c|E}(x_c|e;a)p_{\scriptscriptstyle E}(e)$. Then, from Kraft's inequality, the expected length of the coded symbols $s(1),s(2),\dots,s(|\mathcal{C}|)$ is lower bounded by the $d$-ary entropy of the probabilities $f_1,f_2,\dots,f_{|\mathcal{C}|}$, i.e., \begin{align}\label{eq:shannon_lower} \sum_{c\in\mathcal{C}}f_c \,\ell_{\text{S$_1$},c} \geq -\sum_{c \in \mathcal{C}} f_c \log_d f_c, \end{align} where $\ell_{\text{S$_1$},c}$ is the code length of the symbol $s(c)$ for all $c\in\mathcal{C}$. Thus, we have \begin{align} \mathsf{L}_{\text{S$_1$}}(a) &= \sum_{e\in\mathcal{E}} p_{\scriptscriptstyle E}(e;a) \sum_{c\in\mathcal{C}}p_{\scriptscriptstyle X_c|E}(1|e;a) \ell_{\text{S$_1$},c}\\ \label{eq:upperline}&= \sum_{c\in\mathcal{C}}p_{\scriptscriptstyle X_c}(1;a)\ell_{\text{S$_1$},c}\\ \label{eq:middleline}&= \sum_{c'\in\mathcal{C}}p_{\scriptscriptstyle X_{c'}}(1;a) \sum_{c\in\mathcal{C}}f_c\,\ell_{\text{S$_1$},c}\\ \label{eq:inequality_1}&\geq -\sum_{c'\in\mathcal{C}}p_{\scriptscriptstyle X_{c'}}(1;a) \sum_{c\in\mathcal{C}}f_c\,\log_d f_c\\ \label{eq:last_line_1}& = -\sum_{c\in\mathcal{C}}p_{\scriptscriptstyle X_{c}}(1;a)\log_d f_c, \end{align} where \eqref{eq:middleline} holds from \eqref{eq:frequency}, and the inequality \eqref{eq:inequality_1} holds from \eqref{eq:shannon_lower}. Consequently, substituting \eqref{eq:frequency} into \eqref{eq:last_line_1} yields the lower bound \eqref{eq:lowerboundSNC}.
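The quantities in this proof are easy to check numerically. The sketch below (our own illustration, with toy marginals $p_{\scriptscriptstyle X_c}(1;a)$ and $d=2$) computes the frequencies $f_c$, the $d$-ary entropy lower bound on the expected symbol length, and the expected length of a Shannon code with lengths $\lceil -\log_d f_c\rceil$, which sits between the entropy and entropy plus one.

```python
import math

# Numerical sketch of the bounds in the proof (toy values, d = 2):
# relative concept frequencies f_c from the marginals p_{X_c}(1; a),
# the d-ary entropy lower bound, and Shannon-code lengths ceil(-log_d f_c).
p_marg = [0.5, 0.25, 0.15, 0.1]          # assumed marginals p_{X_c}(1; a)
total = sum(p_marg)
f = [p / total for p in p_marg]          # frequencies; they sum to 1
d = 2
entropy_bound = -sum(fc * math.log(fc, d) for fc in f)
shannon_lengths = [math.ceil(-math.log(fc, d)) for fc in f]
expected_len = sum(fc * l for fc, l in zip(f, shannon_lengths))
print(entropy_bound, expected_len)       # lower bound <= achievable length
```

Running this confirms the familiar sandwich: the expected Shannon-code length never drops below the entropy and never exceeds it by more than one symbol.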
Meanwhile, since $-\log_d f_c$ is not always an integer, taking $\ell_{\text{S$_1$},c} = \lceil{-\log_d f_c}\rceil$ for all $c\in\mathcal{C}$ gives \begin{align}\label{eq:shannon_upper} \sum_{c\in\mathcal{C}}f_c \,\ell_{\text{S$_1$},c} \leq \sum_{c \in \mathcal{C}} f_c \lceil{-\log_d f_c}\rceil. \end{align} Thus, we also have an upper bound \begin{align} \label{eq:inequality_2}\text{\eqref{eq:middleline}} &\leq \sum_{c'\in\mathcal{C}} p_{\scriptscriptstyle X_{c'}}(1;a) \sum_{c\in\mathcal{C}} f_c\, \lceil{-\log_d f_c}\rceil\\ \label{eq:last_line_2}&= \sum_{c\in\mathcal{C}} p_{\scriptscriptstyle X_{c}}(1;a) \lceil{-\log_d f_c}\rceil, \end{align} where the inequality \eqref{eq:inequality_2} holds from \eqref{eq:shannon_upper}. Again, substituting \eqref{eq:frequency} into \eqref{eq:last_line_2} yields the upper bound \eqref{eq:upperboundSNC}, which ends the proof. \section{Proof of Theorem \ref{thm:theonly}}\label{appendix:proofofAM} We now prove that \eqref{eq:iteration_1} to \eqref{eq:iteration_4} minimize the objective \eqref{eq:lossfunction} as $t\rightarrow \infty$, for given parameters $\alpha,\beta \geq 0$ and states $a$, $a'$ of the two communicating agents. Let $\mathcal{P}$ be the set of all joint probability distributions of an entity $E$ and a concept $C$ parameterized by the agent states $a$, $a'$. This set is convex, since it is a probability simplex, which makes it possible to apply alternating optimization of \eqref{eq:lossfunction} over it. First, fix $\mathsf{P}^{\scriptscriptstyle [t-1]} \in \mathcal{P}$ and $\mathsf{Q}^{\scriptscriptstyle [t-1]} \in \mathcal{P}$, thereby making \eqref{eq:lossfunction} a functional of $\mathsf{M} \in \mathcal{P}$, i.e., $\mathsf{G}(\mathsf{M})$, which can easily be shown to be convex on $\mathcal{P}$.
Since $\mathsf{M}$ is a probability distribution, the constraint $\sum_{(e, c) \in \mathcal{E}\times \mathcal{C}} \mathsf{M}(e,c;a,a') = 1$ must be satisfied. Thus, we introduce a Lagrange multiplier $\gamma_{\scriptscriptstyle \mathsf{M}}$, and form a Lagrangian functional \begin{align}\label{eq:lagrangian_1} \mathsf{J}(\mathsf{M}) = \mathsf{G}(\mathsf{M}) - \sum_{(e,c)\in\mathcal{E} \times \mathcal{C}} \gamma_{\scriptscriptstyle \mathsf{M}} \mathsf{M}(e,c;a,a'). \end{align} Taking the derivative of \eqref{eq:lagrangian_1} with respect to $\mathsf{M}(e,c;a,a')$ gives \begin{align}\label{eq:difflagrangian_1} \frac{\partial \mathsf{J}(\mathsf{M}) }{\partial \mathsf{M}(e,c;a,a')} &= \left( \lambda \frac{\mathsf{P}^{\scriptscriptstyle [t-1]}(e,c;a,a')}{\mathsf{M}(e,c;a,a')} + (1-\lambda)\frac{\mathsf{Q}^{\scriptscriptstyle [t-1]}(e,c;a,a')}{\mathsf{M}(e,c;a,a')} \right) - \gamma_{\scriptscriptstyle \mathsf{M}}, \end{align} for all $(e,c) \in \mathcal{E}\times\mathcal{C}$. By equating the RHS of \eqref{eq:difflagrangian_1} to zero, we have \begin{align}\label{eq:equatinglagrangian_1} \mathsf{M}(e,c;a,a') &= \frac{\lambda \mathsf{P}^{\scriptscriptstyle [t-1]}(e,c;a,a') + (1 - \lambda) \mathsf{Q}^{\scriptscriptstyle [t-1]}(e,c;a,a')}{\gamma_{\scriptscriptstyle \mathsf{M}}}. \end{align} Note that $\gamma_{\scriptscriptstyle \mathsf{M}} = \sum_{(e,c)\in\mathcal{E}\times\mathcal{C}} ( \lambda \mathsf{P}^{\scriptscriptstyle [t-1]}(e,c;a,a') + (1 - \lambda) \mathsf{Q}^{\scriptscriptstyle [t-1]}(e,c;a,a')) = 1$, since $\gamma_{\scriptscriptstyle \mathsf{M}}$ is the normalization constant in this case, and $\mathsf{P}$, $\mathsf{Q}$ and $\mathsf{M}$ are defined on the same domain $\mathcal{E}\times\mathcal{C}$. Thus we obtain \eqref{eq:iteration_1}, i.e., the update of $\mathsf{M}$ at time $t\geq 0$ before $\mathsf{P}$ and $\mathsf{Q}$ are updated.
Now, fix $\mathsf{Q}^{\scriptscriptstyle [t-1]}$ and $\mathsf{M}^{\scriptscriptstyle [t]}_{\scriptscriptstyle 1}$, making \eqref{eq:lossfunction} a functional of $\mathsf{P} \in \mathcal{P}$, i.e., $\mathsf{G}(\mathsf{P})$, which can also easily be shown to be convex on $\mathcal{P}$. Under the constraint $\sum_{(e, c) \in \mathcal{E}\times \mathcal{C}} \mathsf{P}(e,c;a,a') = 1$, consider the Lagrangian functional \begin{align}\label{eq:lagrangian_2} \mathsf{J}(\mathsf{P}) = \mathsf{G}(\mathsf{P}) - \sum_{(e,c)\in\mathcal{E} \times \mathcal{C}} \gamma_{\scriptscriptstyle \mathsf{P}} \mathsf{P}(e,c;a,a'), \end{align} where $\gamma_{\scriptscriptstyle \mathsf{P}}$ is the Lagrange multiplier. Taking the derivative of \eqref{eq:lagrangian_2} gives \begin{align}\label{eq:difflagrangian_2} \frac{\partial \mathsf{J}(\mathsf{P})}{\partial \mathsf{P}(e,c;a,a')} = -\lambda\left(\frac{\log \mathsf{P}(e,c;a,a') + 1}{\alpha} - \mathsf{M}^{\scriptscriptstyle [t]}_{\scriptscriptstyle 1}(e,c;a,a') \right) - \gamma_{\scriptscriptstyle \mathsf{P}}, \end{align} for all $(e,c) \in \mathcal{E}\times\mathcal{C}$. By equating the RHS of \eqref{eq:difflagrangian_2} to zero, we have \begin{align} \log \mathsf{P}(e,c;a,a') = \alpha \mathsf{M}^{\scriptscriptstyle [t]}_{\scriptscriptstyle 1}(e,c;a,a') - \left(\frac{\alpha\gamma_{\scriptscriptstyle \mathsf{P}}}{\lambda} + 1\right). \end{align} Letting $\frac{\alpha\gamma_{\scriptscriptstyle \mathsf{P}}}{\lambda} + 1 = \log Z_{\scriptscriptstyle \mathsf{P}}$, we have \begin{align} \mathsf{P}(e,c;a,a') = \frac{\exp(\alpha \mathsf{M}^{\scriptscriptstyle [t]}_{\scriptscriptstyle 1}(e,c;a,a'))}{Z_{\scriptscriptstyle \mathsf{P}}}. \end{align} Here, $Z_{\scriptscriptstyle \mathsf{P}} = \sum_{(e,c)\in\mathcal{E}\times\mathcal{C}} \exp(\alpha \mathsf{M}^{\scriptscriptstyle [t]}_{\scriptscriptstyle 1}(e,c;a,a'))$ is the normalization constant. This yields \eqref{eq:iteration_2}.
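For intuition, the toy sketch below (our own illustration, not the paper's implementation) iterates a simplified variant of the alternating updates on a 4-point joint distribution: $\mathsf{M}$ is updated as the $\lambda$-mixture of $\mathsf{P}$ and $\mathsf{Q}$, while $\mathsf{P}$ and $\mathsf{Q}$ are taken in the normalized-power form $\mathsf{P}\propto\mathsf{M}^{\alpha}$, $\mathsf{Q}\propto\mathsf{M}^{\beta}$ that appears in the converged characterization of Corollary \ref{cor:proofofPequalsQ}. Both the concrete update form (updating $\mathsf{P}$ and $\mathsf{Q}$ from the same $\mathsf{M}$) and all constants are simplifying assumptions for illustration.

```python
# Toy sketch of the alternating updates on a 4-point joint distribution.
# The update form below (lambda-mixture for M; power-normalized P and Q)
# is a simplified illustration, not the paper's exact iteration.
def normalize_power(m, gamma):
    w = [v ** gamma for v in m]
    s = sum(w)
    return [v / s for v in w]

def alternate(m0, alpha, beta, lam, steps):
    m = list(m0)
    for _ in range(steps):
        p = normalize_power(m, alpha)   # P  proportional to M^alpha
        q = normalize_power(m, beta)    # Q  proportional to M^beta
        m = [lam * pi + (1 - lam) * qi for pi, qi in zip(p, q)]
    return m, p, q

m, p, q = alternate([0.4, 0.3, 0.2, 0.1], alpha=2, beta=3, lam=0.5, steps=100)
print(m)
```

For $\alpha,\beta>1$ the mass concentrates on a single component, and at convergence $\mathsf{P}$, $\mathsf{Q}$ and $\mathsf{M}$ coincide, consistent with Corollary \ref{cor:proofofPequalsQ} and Lemma \ref{lem:allnon-zero}.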
The derivation of \eqref{eq:iteration_3} follows the same process as that of \eqref{eq:iteration_1}, with the only difference that $\mathsf{P}^{\scriptscriptstyle [t-1]}$ is updated to $\mathsf{P}^{\scriptscriptstyle [t]}$. Moreover, \eqref{eq:iteration_4} can be derived by a process similar to that of \eqref{eq:iteration_2}, since $\mathsf{P}$ and $\mathsf{Q}$ enter \eqref{eq:lossfunction} symmetrically (up to the different constants $\alpha$ and $\beta$). This ends the proof. \section{Proof of Corollary \ref{cor:proofofPequalsQ}}\label{appendix:proofofPequalsQ} After the convergence of \eqref{eq:iteration_1}-\eqref{eq:iteration_4}, \begin{align}\label{eq:convergeM} \mathsf{M}^{\scriptscriptstyle[*]}(e,c;a,a') = \lambda\mathsf{P}^{\scriptscriptstyle[*]}(e,c;a,a') + (1-\lambda)\mathsf{Q}^{\scriptscriptstyle[*]}(e,c;a,a') \end{align} holds for all $(e,c)\in\mathcal{E}\times\mathcal{C}$. Dividing both sides of \eqref{eq:convergeM} by $\mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a')$ gives \begin{align}\label{eq:lineeq} 1 = \lambda x + (1-\lambda)y, \end{align} where $x = \frac{\mathsf{P}^{\scriptscriptstyle[*]}(e,c;a,a')}{\mathsf{M}^{\scriptscriptstyle[*]}(e,c;a,a')}$ and $y = \frac{\mathsf{Q}^{\scriptscriptstyle[*]}(e,c;a,a')}{\mathsf{M}^{\scriptscriptstyle[*]}(e,c;a,a')}$. Note that from \eqref{eq:iteration_2} and \eqref{eq:iteration_4}, after the convergence we have $\mathsf{P}^{\scriptscriptstyle [*]}(e,c;a,a') = \frac{\mathsf{M}^{\scriptscriptstyle[*]}(e,c;a,a')^{\alpha}}{\sum\mathsf{M}^{\scriptscriptstyle[*]}(e,c;a,a')^{\alpha}}$ and $\mathsf{Q}^{\scriptscriptstyle [*]}(e,c;a,a') = \frac{\mathsf{M}^{\scriptscriptstyle[*]}(e,c;a,a')^{\beta}}{\sum\mathsf{M}^{\scriptscriptstyle[*]}(e,c;a,a')^{\beta}}$, respectively. Since we consider parameters $\alpha,\beta \geq 1$, it always holds that $x \leq 1$ and $y \leq 1$.
On the $x$-$y$ coordinate plane, $(x,y) = (1,1)$ is easily seen to be the only point on the line \eqref{eq:lineeq} that satisfies $x \leq 1$ and $y \leq 1$. Thus, $\mathsf{P}^{\scriptscriptstyle[*]}(e,c;a,a')=\mathsf{M}^{\scriptscriptstyle[*]}(e,c;a,a')$ and $\mathsf{Q}^{\scriptscriptstyle[*]}(e,c;a,a')=\mathsf{M}^{\scriptscriptstyle[*]}(e,c;a,a')$, and hence $\mathsf{P}^{\scriptscriptstyle[*]}(e,c;a,a')=\mathsf{Q}^{\scriptscriptstyle[*]}(e,c;a,a')$. This holds for all $(e,c)\in\mathcal{E}\times\mathcal{C}$, which ends the proof. \section{Proof of Theorem \ref{thm:reliability}}\label{appendix:proofofreliability} To begin with, refer to the following lemma. \begin{lemma}\label{lem:allnon-zero} For any parameters $\alpha,\beta\geq1$ with $(\alpha,\beta) \neq (1,1)$, and $0<\lambda<1$, all non-zero components of the converged $\mathsf{M}^{\scriptscriptstyle [*]}$, i.e., all $\mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a') \neq 0$ with $(e,c)\in\mathcal{E}\times\mathcal{C}$, are equal to each other. \end{lemma} \begin{IEEEproof} From Theorem \ref{thm:theonly} and Corollary \ref{cor:proofofPequalsQ}, $\mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a') = \frac{\mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a')^\alpha}{\sum\mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a')^{\alpha}} = \frac{\mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a')^\beta}{\sum\mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a')^{\beta}}$. Since $\alpha,\beta \geq 1$ and $(\alpha,\beta) \neq (1,1)$, at least one of them is not equal to $1$; without loss of generality, say $\alpha > 1$. Then, for non-zero $\mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a')$, we have \begin{align}\label{eq:sumMM} \sum_{(e,c)\in\mathcal{E}\times\mathcal{C}}\mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a')^{\alpha} = \mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a')^{\alpha -1}.
\end{align} For $\mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a') = 1$, since $\sum_{(e,c)}\mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a') = 1$, it is the only non-zero component. For $\mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a') \neq 1$, since the LHS of \eqref{eq:sumMM} is the same for every $(e,c)$, we conclude that the components $\mathsf{M}^{\scriptscriptstyle [*]}(e,c;a,a') \neq 0$ with $(e,c)\in\mathcal{E}\times\mathcal{C}$ are equal to each other. \end{IEEEproof} Note that Lemma \ref{lem:allnon-zero} also applies to $\mathsf{P}^{\scriptscriptstyle [*]}$ and $\mathsf{Q}^{\scriptscriptstyle [*]}$, from Corollary \ref{cor:proofofPequalsQ}. Now let $\mathsf{P}^{\scriptscriptstyle [*]}_k(e,c;a,a') = p^k_{\scriptscriptstyle C|E}(c|e;a,a')p^{k-1}_{\scriptscriptstyle E|C}(e|c_{k-1};a,a')$ and $\mathsf{Q}^{\scriptscriptstyle [*]}_k(e,c;a,a') = p^{k}_{\scriptscriptstyle E|C}(e|c;a,a')p^{k}_{\scriptscriptstyle C}(c)$ for all $e\in\mathcal{E}$ and $c\in\mathcal{C}$ be the stationary individual CCs of the speaker and listener at the $k$-th communication round of System 2 SNC, where $c_{k-1}$ is the meaningful concept communicated at the $(k-1)$-th round. Then, from Corollary \ref{cor:proofofPequalsQ}, by equating $\mathsf{P}^{\scriptscriptstyle [*]}_k(e,c;a,a')$ and $\mathsf{Q}^{\scriptscriptstyle [*]}_k(e,c;a,a')$, and dividing both sides by $p^{k}_{\scriptscriptstyle C}(c)$ defined over $\mathcal{C}_{k-1}$, such that $\mathcal{C}_k = \mathcal{C}_{k-1}\backslash \{c_{k}\}$ for $k\geq 1$ and $\mathcal{C}_0 = \mathcal{C}$, we have \begin{align} p^k_{\scriptscriptstyle E|C}(e|c;a,a') = \frac{p^k_{\scriptscriptstyle C|E}(c|e;a,a')}{p^{k}_{\scriptscriptstyle C}(c)}p^{k-1}_{\scriptscriptstyle E|C}(e|c_{k-1};a,a').
\end{align} Since $p^{k}_{\scriptscriptstyle C|E}(c|e;a,a')$ is defined over the reduced set $\mathcal{C}_{k-1}$, and Lemma \ref{lem:allnon-zero} implies that all of its non-zero components are equal, we have $p^{k}_{\scriptscriptstyle C|E}(c|e;a,a')\geq p^{k}_{\scriptscriptstyle C}(c)$ for all $c\in\mathcal{C}_{k-1}$, as $p^{k}_{\scriptscriptstyle C}(c)$ is uniform by definition. Therefore, $p^{k}_{\scriptscriptstyle E|C}(e|c;a,a') \geq p^{k-1}_{\scriptscriptstyle E|C}(e|c_{k-1};a,a')$ for all $e\in\mathcal{E}$, which ends the proof. \bibliographystyle{IEEEtran}
\section{Acoustic Levitation}\label{sec:acoustic-levitation} \begin{figure}[b] \centering \includegraphics[width=1\textwidth]{figures/Acoustic_Levitation.png} \caption{(A) A user performing a pointing task on the real prototype (left), and in the \textit{Levitation Simulator}~\cite{Paneva20} (middle, right), using \textit{LeviCursor}~\cite{Bachynskyi18} for selecting targets in 3D space (marked in red). (B) A levitated particle traverses a periodic path at a frequency of 10Hz to reveal volumetric images in mid-air. Using the \textit{OptiTrap}~\cite{Paneva22} algorithm we can specify physically feasible trap trajectories that render generic shapes in optimal time. Figure adapted from~\cite{Paneva20} and~\cite{Paneva22}. } \label{fig:Acoustic_Lev} \end{figure} Some of the greatest visionaries in HCI imagined the interface of the future as ``a room where the computer can control the existence of matter''~\cite{Sutherland65}, and as ``a dynamic physical material that reflects the changes in digital states in real time''~\cite{Ishii12}. With acoustic levitation technology, we see vast potential to get a step closer to these great visions of the ultimate mixed reality, where the digital and the physical world are fully merged. Acoustic levitation displays offer a novel and innovative way of displaying and interacting with digital content in real physical space, by using sound waves to manipulate physical matter. The interface typically consists of two opposing phased arrays of transducers. Each transducer emits ultrasonic waves at 40\,kHz, a frequency inaudible to humans. By appropriately setting the phase and amplitude of each transducer, we can generate \textit{acoustic traps} at the points where the acoustic forces converge. In these nodes of low acoustic pressure, it is possible to suspend millimeter-sized particles in mid-air.
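The core phase computation behind such a phased array can be sketched in a few lines. The snippet below (a simplified illustration, not the device firmware) chooses each transducer's emission phase so that all waves arrive at a chosen focal point in phase; real levitation traps additionally superimpose a trap signature (e.g., a $\pi$ phase offset between the two opposing arrays), which is omitted here, and the array geometry is an assumed toy layout.

```python
import math

# Simplified single-focal-point phase computation for a phased array of
# 40 kHz ultrasonic transducers: each transducer's emission phase cancels
# its propagation delay so all waves arrive at the focus in phase.
SPEED_OF_SOUND = 343.0          # m/s in air
FREQ = 40_000.0                 # 40 kHz ultrasound
WAVELEN = SPEED_OF_SOUND / FREQ
K = 2 * math.pi / WAVELEN       # wavenumber

def focus_phases(transducer_positions, focal_point):
    phases = []
    for pos in transducer_positions:
        d = math.dist(pos, focal_point)
        phases.append((-K * d) % (2 * math.pi))  # compensate propagation delay
    return phases

# Toy 4 x 4 array in the z = 0 plane, 1 cm pitch, focus 8 cm above the centre
array = [(0.01 * i, 0.01 * j, 0.0) for i in range(4) for j in range(4)]
phases = focus_phases(array, (0.015, 0.015, 0.08))
print(phases[:4])
```

With these phases, the arrival phase $\phi_i + K d_i$ is the same (modulo $2\pi$) for every transducer, which is exactly the constructive-interference condition at the focal point.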
By moving the acoustic trap in 3D space, we can digitally control the position of the levitated physical matter, and in this manner generate dynamic visualizations in physical space without the need for wearables or any other gadgets. When developing and designing for radically novel interfaces such as acoustic levitation, in addition to the financial and time costs associated with user testing mentioned earlier, other challenges arise, such as availability and operability. Building and maintaining interactive levitation interfaces requires specific technical expertise, knowledge of acoustics, microsecond synchronization, and submillimeter calibration of the system components. In addition, the interface can be difficult to debug: in case of problems, the only observable effect is the levitating particle shooting out in an uncontrollable manner. This can pose a barrier for designers, artists, game developers, researchers, etc.~who want to start experimenting with this novel technology and test prototypes of their applications with users. To solve this problem we propose virtual prototyping, an approach that has proven successful in other disciplines, e.g., automotive, product design, and manufacturing~\cite{GomesDeSa99, Berg17}. We developed the \textit{Levitation Simulator}~\cite{Paneva20} -- an interactive simulation tool in VR that can be used to iteratively develop and prototype ideas for acoustic levitation interfaces, and even to conduct user tests and formal experiments. Only once the development has converged does the resulting system need to be validated on the real apparatus. The simulator consists of two modules -- an interaction and a simulation module. The interaction module is implemented within the Unity game engine, and it can receive user input via a motion capture system or VR controllers.
In the simulation module, we incorporated a model of a levitated particle moving in an acoustic field, which allowed for the simulation of physically accurate dynamics of the virtual particle. We validated the tool by performing a pointing study in the \textit{Levitation Simulator} and on the real prototype, using \textit{LeviCursor}~\cite{Bachynskyi18}, a levitated 3D physical cursor. Figure~\ref{fig:Acoustic_Lev}(A) shows a user performing repetitive aimed movements in mid-air between two three-dimensional spherical targets on the real prototype, and in the \textit{Levitation Simulator}. The results showed comparable performance. Further testing with gaming applications showed that the \textit{Levitation Simulator} can provide good predictions regarding user interaction and engagement with the real prototype. In future studies, the \textit{Levitation Simulator} can also be useful in exploring the multimodality of acoustic levitation displays, e.g., by augmenting the levitated particles with mid-air haptic feedback to potentially improve display accessibility~\cite{Paneva20haptiread, Carter13}. Modeling the underlying interface dynamics is useful not only for interaction design and testing, but also for optimizing the performance of the interface itself. We demonstrated this on an acoustic levitation interface that uses the persistence-of-vision effect to render smooth levitated graphics in real time, by rapidly moving a levitated particle along a periodic path. \textit{OptiTrap}~\cite{Paneva22} is an automated numerical approach that computes trap trajectories, i.e., the positions and timings of the acoustic traps, to generate physically feasible, nearly time-optimal paths that reveal generic mid-air shapes on the levitator. To achieve this, we derived a multi-dimensional model of the acoustic forces around a trap, and formulated and solved a non-linear path following problem.
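A minimal version of the kind of particle dynamics such a simulation must capture can be sketched as follows. This is a heavily simplified 1D damped-spring approximation of the acoustic force near a trap centre, with assumed constants and gravity omitted; the Levitation Simulator and OptiTrap use a richer, multi-dimensional force model.

```python
# Minimal 1D sketch of levitated-particle dynamics (our simplification):
# near the trap centre the acoustic force acts approximately like a damped
# spring pulling the particle towards the trap. All constants are assumed
# toy values; gravity compensation is omitted.
MASS = 7e-7        # kg, roughly a millimetre-sized polystyrene bead
STIFFNESS = 0.03   # N/m, assumed trap stiffness
DAMPING = 2e-5     # N*s/m, assumed air drag
DT = 1e-4          # s, simulation step (semi-implicit Euler)

def step(pos, vel, trap_pos):
    force = -STIFFNESS * (pos - trap_pos) - DAMPING * vel
    vel += force / MASS * DT
    pos += vel * DT
    return pos, vel

pos, vel = 0.0, 0.0
trap = 0.005                     # trap centre jumps 5 mm away
history = []
for _ in range(20000):           # simulate 2 s
    pos, vel = step(pos, vel, trap)
    history.append(pos)
print(history[-1])               # particle has settled near the trap centre
```

Even this crude model reproduces the qualitative behaviour that matters for trap-trajectory design: the particle oscillates about the moving trap and lags behind it, which is why feasible trap trajectories must respect the attainable accelerations.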
As a result of this trajectory optimization, we were able to render bigger and more complex shapes (e.g., involving sharp edges and sudden changes in curvature) than previously possible (see Figure~\ref{fig:Acoustic_Lev}(B)). On the example of acoustic levitation interfaces, we have demonstrated in practice that modeling, simulation, and optimization methods can be of great benefit for the efficient and agile development of innovative user interfaces. \section{Modeling Human Movement During Interaction}\label{sec:biomechanical-modeling} In addition to developing novel displays and interaction techniques, simulation and optimization can be used to model human decision-making processes during interaction with computers and virtual systems. A graphical representation of our proposed framework is given in Figure~\ref{fig:simulation_framework}. Using the terminology of an \textit{Optimal Control Problem (OCP)}~\cite{Diedrichsen10}, we assume that for a given interaction task, humans aim to find a sequence of valid controls (e.g., neuromuscular control signals) such that the resulting movement minimizes a given internalized cost function reflecting both task-specific goals (e.g., pointing at a specific target) and their individual preferences (e.g., using the right index finger for pointing). This is consistent with the general assumption of \textit{human rationality}, which states that given a set of options, humans will select the option that provides them with the greatest benefit~\cite{Silver21}. One challenge is thus to ``encode'' a given task instruction, or rather the internal objectives humans derive from it, into a mathematically precise cost function to be minimized (or, alternatively, a reward function to be maximized). The human biomechanical model, the input device, and the interface dynamics all have to be taken into account in this optimization.
These are included in the system dynamics, capturing all the constraints that stem from the user model (e.g., kinematics and perception), input and output devices (e.g., transfer functions), and interface dynamics. Solving the resulting OCP leads to a simulation of the complete human-computer loop that allows us to infer information such as movement times, cursor \textit{and} joint trajectories, or muscle expenditure. Depending on the complexity of the system dynamics and cost function, the OCP can be solved using different methods. Three of the most important approaches to obtain (approximately) optimal movement trajectories for a given task, technique, and user group are discussed below. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/MPC_flow_basic.pdf} \caption{ Our proposed optimization-based framework of human movement during interaction. The combination of system dynamics and cost/reward function results in an optimal control problem that can be solved using established methods. Figure adapted from~\cite{Klar2022}. } \label{fig:simulation_framework} \end{figure} \subsection{Linear-Quadratic Gaussian Regulator} In the case of linear system dynamics and a quadratic cost function, the \textit{Linear-Quadratic Regulator (LQR)} yields a unique optimal control policy $\pi$, mapping an arbitrary state $x$ to the control $u^{\star}=\pi(x)$ that is optimal to apply when in state $x$. An extension to stochastic (linear) system dynamics, where controls and states are perturbed by Gaussian noise at each simulation step, is called the \textit{Linear-Quadratic Gaussian Regulator (LQG)}. For both LQR and LQG, the control policy is again linear in the (expected) state, i.e., $\pi(x)=Lx$ holds for some matrix $L$ that can be computed once in advance (i.e., during the planning stage)~\cite{Todorov98}.
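The gain computation can be sketched for a deterministic toy case: a 1D ``cursor'' modelled as a double integrator, with effort cost during the movement and terminal distance and stability costs, mirroring the cost structure described below for the LQG pointing model. All weights and the horizon are assumed toy values, not the fitted parameters from the cited studies.

```python
# Illustrative finite-horizon discrete LQR (toy values, not fitted models):
# 1D cursor as a double integrator, effort cost during the movement, and
# terminal distance + stability costs. Gains come from a backward Riccati
# recursion; the policy u_t = -L_t x_t is linear in the state.
DT = 0.01
A = [[1.0, DT], [0.0, 1.0]]          # state: (position error, velocity)
B = [0.5 * DT * DT, DT]              # control: acceleration
R = 1e-5                             # effort weight
Q_TERM = [[1.0, 0.0], [0.0, 0.1]]    # terminal distance + stability weights

def lqr_gains(n_steps):
    """Backward Riccati recursion; gains are returned last step first."""
    S = [row[:] for row in Q_TERM]
    gains = []
    for _ in range(n_steps):
        SB = [S[i][0] * B[0] + S[i][1] * B[1] for i in range(2)]
        denom = R + B[0] * SB[0] + B[1] * SB[1]
        SA = [[sum(S[i][k] * A[k][j] for k in range(2)) for j in range(2)]
              for i in range(2)]
        L = [(B[0] * SA[0][j] + B[1] * SA[1][j]) / denom for j in range(2)]
        AtSA = [[sum(A[k][i] * SA[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
        AtSB = [A[0][i] * SB[0] + A[1][i] * SB[1] for i in range(2)]
        S = [[AtSA[i][j] - AtSB[i] * L[j] for j in range(2)] for i in range(2)]
        gains.append(L)
    return gains

x = [-0.1, 0.0]                      # start 10 cm from the target, at rest
for L in reversed(lqr_gains(100)):   # 1 s movement
    u = -(L[0] * x[0] + L[1] * x[1])
    x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
         A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
print(x)
```

Because the gains are computed once before the movement, the closed-loop control during execution is just a matrix-vector product per step, which is what makes LQR/LQG attractive for real-time simulation of the human-computer loop.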
In particular, the generated solution trajectories are \textit{closed-loop}, i.e., the control adapts to perturbations that may occur during execution (e.g., due to model inaccuracies, system and control noise, or unexpected deviations from predicted states). Since LQR and LQG allow the OCP to be solved analytically, the optimal control policy can be computed very fast (typically within a few seconds) once before the movement starts, and optimal closed-loop controls are available in real-time during the movement. We have introduced the LQR/LQG framework to the HCI community, and have shown its applicability to mouse pointing~\cite{Fischer20, Fischer22}. For the LQG with control and observation noise, we have shown that effort costs applied on a finite time horizon, together with terminal distance and stability costs, are sufficient to generate characteristic cursor trajectories (e.g., bell-shaped velocity profiles, with the speed-accuracy trade-off attributed to signal-dependent noise~\cite{HarrisWolpert98}). Moreover, reciprocal 1D mouse pointing trajectories were reproduced significantly better than with the minimum jerk model~\cite{Flash85} or pure dynamic models~\cite{Mueller17}, also capturing between-trial variability. However, the assumption of linear dynamics prevents the LQG controller from dealing with nonlinearities, as they e.g.~arise from the biomechanical constraints of human movements. To overcome these limitations, we proposed the use of \textit{Model Predictive Control (MPC)}~\cite{Klar2022}. \subsection{Model Predictive Control}\label{sec:MPC} MPC is a receding horizon approach that handles \textit{nonlinearities}, provides optimality and convergence guarantees in many cases, and is inherently \textit{robust} to uncertainties due to its closed-loop nature~\cite{GP17}. It has become a standard control method for (non)linear dynamical systems from both academic and application perspectives~\cite{QIN03}.
Instead of solving the original OCP that describes the entire interaction movement, a sequence of shorter OCPs is solved during motion execution. Thus, the movement time does not have to be fixed in advance and, in addition, the complexity of the OCP that we need to solve is reduced over time. External deviations that might occur during movement execution are taken into account by the receding horizon control principle. After solving an OCP, only the first part of the optimal control sequence is applied, the updated state of the system (which may deviate from the expected state) is observed, and a new OCP is set up starting from this state. This procedure of alternating planning and execution steps is continued until the interaction task is completed. The proposed MPC framework can be used with analytical (nonlinear) models or even ``black-box'' implementations. As an application, we investigated mid-air pointing using a biomechanical model of the upper extremity implemented in the fast physics engine MuJoCo~\cite{mujoco}. This model is based on a state-of-the-art OpenSim model by Seth et al.~\cite{seth2018opensim}, consists of a torso, right shoulder, and arm, and has seven independent joints that can be directly actuated via applied torques. We combined this physical model with a second-order model for aggregated muscles at the individual joints derived from van der Helm et al.~\cite{van2000musculoskeletal}. In order to assess how well our simulation reflects human movement, we captured motion data in a mid-air pointing user study featuring different pointing techniques. By comparing three cost functions, we found that the combination of distance, control, and joint acceleration costs best explained the experimentally observed user behaviour in terms of both end-effector trajectories and joint angle sequences~\cite{Klar2022}.
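The receding-horizon principle described above can be condensed into a short runnable sketch. The toy system, horizon, candidate control set, and cost weights below are illustrative assumptions; the cited work solves nonlinear OCPs over a biomechanical MuJoCo model rather than this brute-force search over a 1D double integrator.

```python
import itertools

# Toy receding-horizon (MPC) loop: at each step a short-horizon OCP is
# "solved" by brute force over a small candidate control set, only the first
# control is applied, and planning restarts from the newly observed state.
DT, HORIZON = 0.05, 6
CANDIDATES = (-1.0, 0.0, 1.0)            # candidate accelerations (m/s^2)

def rollout_cost(x, v, controls, target):
    cost = 0.0
    for u in controls:                   # simulate the short horizon
        v += u * DT
        x += v * DT
        cost += 1e-3 * u * u             # effort cost
    return cost + (x - target) ** 2 + 0.1 * v * v  # horizon-end distance + stability

def mpc_step(x, v, target):
    best = min(itertools.product(CANDIDATES, repeat=HORIZON),
               key=lambda seq: rollout_cost(x, v, seq, target))
    return best[0]                       # apply only the first planned control

x, v, target = 0.0, 0.0, 0.3
for _ in range(60):                      # alternate planning and execution
    u = mpc_step(x, v, target)
    v += u * DT
    x += v * DT
print(round(x, 3))
```

Note that, unlike the LQR case, no movement time is fixed in advance: the loop simply keeps replanning until the end-effector has reached and stabilized at the target.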
We have also demonstrated the ability to replicate the behavior of a \textit{specific} user, and to generate models of \textit{new} users by adjusting the model and/or control cost parameters. \subsection{Model-Free Reinforcement Learning} As an alternative approach to MPC, we have investigated the ability of \textit{model-free reinforcement learning (RL)} to simulate human movement during interaction. While RL methods have a long tradition in robotics and character animation, in the last few years they have been increasingly used to predict human behavior and motion in areas such as neuroscience, digital health, and sports~\cite{Bian20, Gottesman19, Liu22}. In contrast to LQG, and similar to MPC, RL can handle complex, nonlinear system dynamics. In contrast to MPC, the closed-loop policy learned by policy-gradient RL methods is not only (approximately) optimal \textit{for a single initial state}, but generalizes to \textit{arbitrary states} explored during training. Practically, an RL policy thus needs to be ``trained'' only once (which, however, might take up to hours or even days) and can then be applied in real-time during execution (similar to LQG). On the downside, there are far fewer theoretical optimality and stability guarantees for RL policies than for MPC and LQG, rendering RL approaches more ``experimental''. Using the same state-of-the-art model of the upper extremity as in Section~\ref{sec:MPC} to simulate mid-air pointing movements, we have shown that an RL policy trained to minimize constant time costs only is able to generate movements that capture well-established characteristics. In particular, the simulation trajectories follow Fitts’ Law in the case of aimed reaching, and the \nicefrac{2}{3} Power Law for ellipse drawing, while generating bell-shaped velocity and N-shaped acceleration profiles~\cite{Fischer21}.
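As an illustration of how such regularities can be checked, the sketch below fits Fitts' Law ($MT = a + b \cdot ID$, Shannon formulation) to movement times; the numerical values are hypothetical stand-ins for simulated reaching movements, not our study data:

```python
import numpy as np

def fitts_id(D, W):
    """Shannon formulation of the index of difficulty, in bits."""
    return np.log2(np.asarray(D, float) / np.asarray(W, float) + 1.0)

def fitts_fit(D, W, MT):
    """Least-squares fit of MT = a + b * ID; returns (a, b, r_squared)."""
    ID, MT = fitts_id(D, W), np.asarray(MT, float)
    X = np.column_stack([np.ones_like(ID), ID])
    (a, b), *_ = np.linalg.lstsq(X, MT, rcond=None)
    pred = a + b * ID
    r2 = 1.0 - np.sum((MT - pred) ** 2) / np.sum((MT - MT.mean()) ** 2)
    return a, b, r2

# Hypothetical (distance, width, movement-time) triples for demonstration.
D  = [0.1, 0.2, 0.4, 0.4, 0.8]
W  = [0.04, 0.04, 0.04, 0.02, 0.02]
MT = [0.42, 0.55, 0.71, 0.85, 1.02]
a, b, r2 = fitts_fit(D, W, MT)  # a near-linear MT-ID relation gives r2 close to 1
```

A high coefficient of determination for the linear MT-ID fit is the usual quantitative criterion for "follows Fitts' Law".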
Building on these results, we have presented \textit{User-in-the-Box}~\cite{uitb}, a modular simulation framework that makes it possible to combine a biomechanical model with one or multiple sensory input channels, an interaction task instance, and an RL method used to solve the resulting OCP. In this work we used a muscle-actuated MuJoCo version of the upper extremity model (with 5 shoulder and arm joints actuated via 26 muscles and tendons, and fixed torso and wrist), as well as RGB-D based vision, proprioceptive and haptic input channels, and task-specific reward functions. We simulated four movement-based interaction tasks of increasing difficulty: mid-air pointing, target tracking, choice reaction, and remote car control (see Figure~\ref{fig:uitb-tasks}). Our trained models can successfully complete the respective interaction tasks, while exhibiting characteristic movement regularities such as Fitts' Law, showing that we are able to simulate interactive motion of real users. \begin{figure} \centering \includegraphics[width=0.24\linewidth, clip]{figures/evaluate_ISO_pointing_bright_1_cropped.png} \hfill \includegraphics[width=0.24\linewidth, clip]{figures/evaluate_tracking_bright_cropped.png} \hfill \includegraphics[width=0.24\linewidth, clip]{figures/evaluate_button_bright_cropped.png} \hfill \includegraphics[width=0.24\linewidth, clip]{figures/evaluate_newest-location-no-termination_1.png} \caption{Using policies trained via RL, our simulation is capable of predicting motion in four interactive tasks with differing perceptual-motor requirements. Figure reprinted from~\cite{uitb}. } \label{fig:uitb-tasks} \end{figure} \section{Discussion}\label{sec:discussion} The approach presented here highlights the benefits that simulation and optimization can offer to the design, evaluation, and improvement of user interfaces. However, in order to realize the full potential of model-based simulation, important aspects need to be discussed.
\emph{Model accuracy.} Although we are able to implement models of increasing complexity, the question is what aspects qualify for a ``good enough'' model of the human \textit{and} the interface, given a specific interaction task and technique. Furthermore, what metrics are suitable and accepted by researchers and practitioners alike to validate a model? \emph{Generalizability.} Assuming we have created a ``good enough'' model for a specific interaction task and technique and went through the (painful) process of validating it, how easy is it to apply it to (slightly) different tasks or account for different user-specific characteristics? Do classes of cost/reward functions that encapsulate a more general setting exist, or are we stuck with tuning these functions for each task, user model, and interaction technique? \emph{Deployability.} We see a positive change within the HCI community of making code publicly available. Building on this, what tools and tutorials specifically created for interface designers are necessary to wisely augment the user-centered design process with predictions from model-based simulations and allow for virtual prototyping? We believe that an ongoing, goal-oriented dialogue between researchers and practitioners is essential to ensure easy deployability and leverage the benefits of both real and simulated user data. Moreover, an interdisciplinary approach would help make virtual prototyping more accessible to different audiences. \section{Introduction} As society becomes increasingly technologized, new tracking and display technologies, such as virtual, augmented, and mixed reality, have enabled massive growth in the design space of interaction techniques. However, interaction with technology still often feels lackluster and unnatural compared to how we interact with objects in nature. To design more natural interaction techniques, it is crucial to better understand interaction techniques and user intentions.
This becomes increasingly challenging, as not only the devices but also user preferences become more diverse. The traditional user-centered design process in Human-Computer Interaction (HCI) is focused on creating interfaces that work well for a specific user group. It tends to rely heavily on user feedback, which can be time-consuming, costly, and may not always provide actionable insights. As such, it is struggling to keep up. We propose to add into the mix a principled model-based design process, where simulation and optimization of the whole human-computer interaction loop are key. This does not replace the tried-and-tested approach of conducting user studies, but rather combines model-based simulation and optimization with user studies that need to be run much less frequently and at a much later stage. Simulation provides a virtual environment for testing the behavior and performance of HCI systems and interfaces. It allows designers to keep the prototype entirely virtual, and evaluate user experience and assess user engagement, for example, in Virtual Reality (VR). This helps to identify potential problems and to make improvements early on in the design process, before real prototypes are even built. Modeling the underlying interface dynamics is not only useful for the interaction design and testing, but also for optimizing the performance of the interface itself. In addition to modeling an interaction technique, we can build a generative user model. This helps in identifying potential barriers to usability and ``makes design and engineering more predictable and robust processes''~\cite{interact}. By simulating the user, we avoid physical, emotional, or ethical risks, and prevent causing stress to real users in exhausting user studies.
Finally, the ability of a simulation to match real user behavior is a strong indicator of whether we understand an interactive system; thus, building user models also supports the creation and validation of new theories in HCI. Below we report on our most recent advances in this domain. In Section 2, we investigate the development of future interactive systems for acoustic levitation interfaces, and provide examples of how modeling and simulation can help assess user engagement and test interaction parameters with virtual prototypes, and improve interface performance. In Section~\ref{sec:biomechanical-modeling}, we provide a comprehensive framework that encompasses the (biomechanical) simulation of users, where we focus on generating user \emph{movement}. In Section~\ref{sec:discussion}, we reflect on limitations and current open questions in the field.
\section{Introduction}\label{sec:intro} The discovery of the SM-like Higgs boson and nothing else presents a serious challenge to particle phenomenology. On one hand, the Standard Model (SM) is incomplete, as it fails to explain issues such as the hierarchy problem, neutrino masses, cosmological inflation and dark matter. On the other hand, a Higgs mass of $125$ GeV presents a problem for the SM ({\it e.g.}, electroweak vacuum instability), and for most of its extensions. So far, no clear directions for theoretical explorations or experimental solutions are indicated. Constructing and studying models which attempt to solve some of the outstanding problems of the SM emerges as a viable alternative. Out of these, supersymmetry presents a partial solution to the hierarchy problem and a clear one for dark matter. However, in its minimal incarnation, the minimal supersymmetric model (MSSM) requires squarks and gluinos in the multi-TeV range to explain such a low Higgs mass, raising a serious challenge for the LHC to find any signals. This issue may be resolved in models with extended gauge groups. In these models, additional $D$-term contributions to the Higgs mass matrices considerably weaken the MSSM mass limits \cite{Haber:1986gz,Cvetic:1997ky,Ma:2011ea}. Depending on their structure, these models can also resolve additional problems of the MSSM. For instance, models with left-right symmetry \cite{Mohapatra:1995xd} can yield neutrino masses via the seesaw mechanism \cite{Mohapatra:1980yp, Schechter:1980gr, Schechter:1981cv}. In \cite{Malinsky:2005bi}, an extended supersymmetric model based on $SU(3)_c \times SU(2)_L \times U(1)_R \times U(1)_{B-L} $ was proposed. The model can be embedded in an $SO(10)$ SUSY GUT, much like the left-right supersymmetric model, and generate a new seesaw mechanism for neutrino masses. The factor $U(1)_R$ can be thought of as a remnant of a more complete $SU(2)_R$.
Unlike the left-right supersymmetric model, which requires Higgs triplet representations with vacuum expectation values (VEV) $v_R \sim 10^{15}$ GeV for obtaining neutrino masses and gauge unification, the symmetry in this model can be broken by singlet Higgs bosons (thought of as remnants of a doublet representation in left-right models), with VEVs in the TeV range, while still allowing for gauge coupling unification. In \cite{Malinsky:2005bi}, the smallness of neutrino masses was explained via an inverse seesaw mechanism. The general features of the TeV scale soft-supersymmetry breaking parameters were explored in \cite{DeRomeri:2011ie}, outlining conditions for models with intermediate scales obtained from breaking $SO(10)$. The Higgs sector of the model was further explored, showing that a larger Higgs mass than that predicted by the MSSM can be obtained. The parameter space was further explored in \cite{Hirsch:2012kv}, where benchmarks, branching ratios, as well as lepton flavor violation constraints were analyzed. In this work, we concentrate on investigating, discriminating, and restricting the parameter space of the model using dark matter studies. We include up-to-date constraints on the spectrum coming from the Higgs signal strength and mass data, LHC restrictions on squark and gluino masses, constraints on flavor parameters from the $B$ sector, as well as recent lower limits on the $Z^\prime$ mass. Assuming universal scalar and gaugino masses, we show that the lightest supersymmetric particle (LSP) can be the sneutrino (which in this scenario differs from the usual case, being a mixture of the right-handed sneutrino and the scalar partner of the gauge-singlet fermion introduced to generate the inverse seesaw mechanism), or the lightest neutralino (which is favored to be a mixture of the two $U(1)$ binos). Relic density and indirect dark matter detection severely restrict the parameter space, as indeed does the recent limit on the $Z^\prime$ mass \cite{ATLAS:2017wce}.
Within the parameter space allowed by dark matter limits, we analyze the consequences on sparticle spectra, the neutral Higgs sector and on the anomalous magnetic moment of the muon, which shows a more than $3\sigma$ discrepancy with the SM prediction \cite{Bennett:2006fi}. Finally, we investigate the possibilities of testing the model at the LHC. Our work is organized as follows. We provide a brief description of the model in Sec. \ref{sec:model}, capitalizing on more complete descriptions which have appeared previously. In Sec. \ref{sec:scan} we describe in detail the parameters of the model and constraints imposed on them. Dark matter phenomenology is explored in Sec. \ref{sec:DMpheno}, for both neutralino LSP (Sec. \ref{subsec:neutralinoDM}) and sneutrino LSP (Sec. \ref{subsec:sneutrinoDM}). We then look at the consequences of our findings and compare the two scenarios in Sec. \ref{sec:comparisonLSP}, for the sparticle spectrum, the Higgs sector (Sec. \ref{subsec:Higgssector}) and the anomalous magnetic moment of the muon (Sec. \ref{subsec:muong2}), and show in Sec. \ref{subsec:zprime} that imposing the strict $Z^\prime$ mass limits basically rules out the sneutrino DM solutions. We discuss possibilities for detection in Sec. \ref{sec:collider} and conclude in Sec. \ref{sec:conclusion}. We leave some relevant formulas for the Appendix. \section{Model Description}\label{sec:model} In this section, we briefly describe the supersymmetric model under investigation. This model, based on $SU(3)_c \times SU(2)_L \times U(1)_R \times U(1)_{B-L} $ (hereafter referred to as the BLRSSM) was first introduced in \cite{Malinsky:2005bi} and further studied in \cite{DeRomeri:2011ie, Hirsch:2011hg, Hirsch:2012kv}.
The model emerges from the breaking of supersymmetric $SO(10)$ to the SM through the following intermediary steps, $$SO(10) \to SU(3)_C \times SU(2)_L \times SU(2)_R \times U(1)_{B-L} \to SU(3)_C \times SU(2)_L \times U(1)_R \times U(1)_{B-L} \to SU(3)_C \times SU(2)_L \times U(1)_{Y} .$$ The advantages of this model are \begin{itemize} \item It is obtained by breaking $SO(10)$ through a left-right symmetric model, thus inheriting some of its attractive features \cite{Mohapatra:1996vg,Mohapatra:1995xd}; \item It is able to explain neutrino masses by the inverse seesaw mechanism \cite{Malinsky:2005bi}; \item It preserves gauge coupling unification of the MSSM, even when the breaking scale in the last step is of the order of the electroweak scale \cite{DeRomeri:2011ie}; \item It resolves the MSSM Higgs mass problem by yielding larger Higgs masses through additional $D$-terms in the soft-breaking potential, without resorting to heavy particles \cite{DeRomeri:2011ie}; \item It could yield signals differentiating it from the MSSM, which may lie in different regions of SUSY parameter space; \item It could provide different dark matter candidates and phenomenology, which in turn inform the study of direct and indirect searches. \end{itemize} The particle content of the model contains, in addition to the SM particles: \begin{enumerate} \item In the fermionic/matter sector, an additional (right-handed) neutrino $N_i^c$, required for anomaly cancellation, and an additional singlet fermion $S$, needed for generating neutrino masses.
Both these fermions come in 3 families and are accompanied by their scalar partners; \item In the bosonic/Higgs sector, two new Higgs fields, $\mathcal{X}_{R}$ and $\mathcal{\overline{X}}_{R}$, remnants of $SU(2)_R$ doublets, needed to break $U(1)_R \times U(1)_{B-L} \to U(1)_Y$, and their fermionic partners; \item In the gauge sector, an additional neutral gauge field, $Z^\prime$, which emerges from the mixing of the neutral gauge fields of $SU(2)_L , U(1)_R $ and $ U(1)_{B-L} $, $(W^0, B_{R}, B_{B-L})$, and its fermionic partner. \end{enumerate} In a sense, the model described here is minimal; however, it requires an extra ${\cal Z}_2$ matter parity to avoid breaking of $R$-parity \cite{Hirsch:2012kv}. The superpotential in this model is described by \begin{eqnarray} W&=&\mu H_{u}H_{d}+Y_{u}^{ij}Q_{i}H_{u}u^{c}_{j}-Y_{d}^{ij}Q_{i}H_{d}d^{c}_{j}-Y_{e}^{ij}L_{i}H_{d}e^{c}_{j} \nonumber \\ &+&Y_{\nu}^{ij}L_{i}H_{u}N^{c}_{i}+ Y^{ij}_{s}N^{c}_{i}\mathcal{X}_{R} S - \mu_{R}\mathcal{\overline{X}}_{R} \mathcal{X}_{R} + \mu_{S}S S \, , \label{superpotential} \end{eqnarray} where the first line of Eq.~(\ref{superpotential}) contains the usual terms of the MSSM, while the second line includes the additional interactions from the right-handed neutrino $N^{c}_{i}$ and the singlet Higgs fields $\mathcal{\overline{X}}_{R}$, $\mathcal{X}_{R}$ with $-1/2$ and $+1/2$ $B-L$ charges, and $+1/2$ and $-1/2$ $R$ charges, respectively. The first term of the second line in the superpotential describes the neutrino Yukawa interactions, and $Y_{\nu}^{ij}$ is the Yukawa coupling associated with these interactions. In a similar manner, $Y^{ij}_{s}$ represents the Yukawa coupling among $N^{c}_{i}$, $\mathcal{X}_{R}$ and $S$. Moreover, $\mu_{R}$ is similar to the $\mu'$ term of the $B-L$ Supersymmetric Standard Model (BLSSM) and stands for the bilinear mixing between the $\mathcal{X}_{R}$ and $\mathcal{\overline{X}}_{R}$ fields.
Note that there is also a $\mu_{S}$ term to generate non-zero neutrino masses with the inverse seesaw mechanism, and, as is customary, it is restricted to small values, so that it gives no important contributions to any sector other than the neutrinos. Contrary to the BLSSM \cite{DelleRose:2017smp,Un:2016hji,Basso:2015xna}, where neutrinos have Majorana mass terms, the $N^{c}_{i}$ fields interact with $\mathcal{X}_{R}$ and $S$ through the $ Y^{ij}_{s}N^{c}_{i}\mathcal{X}_{R} S$ term, and lead to SM-singlet pseudo-Dirac mass eigenstates. Besides, the interaction of the $SU(2)_{L}$ singlet fields $\mathcal{X}_{R}$, $S$ and $N^{c}_{i}$ yields a significant contribution to the masses of the extra Higgs bosons. Implementing the inverse seesaw mechanism into the model allows $Y_{\nu}^{ij}$ and $Y^{ij}_{s}$ to be of the order of unity. Hence, the contribution from the right-handed neutrino sector to the Higgs boson mass cannot be neglected and yields a low-scale phenomenology different from the MSSM and the BLSSM with inverse seesaw mechanism \cite{Abdallah:2017gde,Khalil:2015wua,Khalil:2015naa}.
The soft-breaking Lagrangian terms in the model are \begin{eqnarray} -{\cal L}_{SB,W}&=&- B_\mu (H_u^0 H_d^0- H^-_d H_u^+) - B_{\mu_R} \mathcal{X}_{R} \mathcal{\overline{X}}_{R} + A_u ({\tilde u}^{\star}_{R, i} {\tilde u}_{L,j} H^0_u- {\tilde u}^{\star}_{R, i} {\tilde d}_{L,j} H^+_u) + A_d ({\tilde d}^{\star}_{R, i} {\tilde d}_{L,j} H^0_d-{\tilde d}^{\star}_{R, i} {\tilde u}_{L,j} H^-_d)\nonumber\\ &+&A_e ({\tilde e}^{\star}_{R, i} {\tilde e}_{L,j} H^0_d- {\tilde e}^{\star}_{R, i} {\tilde \nu}_{L,j} H^-_d) +A_\nu ({\tilde \nu}^{\star}_{R, i} {\tilde \nu}_{L,j} H^0_u- {\tilde e}^{\star}_{R, i} {\tilde \nu}_{L,j} H^-_u) + A_{s, ij}\mathcal{X}_{R} {\tilde \nu}_{R,i} {\tilde S}+ {\rm h.c.} \, ,\nonumber\\ -{\cal L}_{SB,\phi}&=& m_{\mathcal{X}_{R}}^2 | \mathcal{X}_{R} |^2 +m_{\mathcal{\overline{X}}_{R}}^2 | \mathcal{\overline{X}}_{R} |^2 +m_{H_d}^2 ( | H_d^0 |^2 +|H_d^-|^2) +m_{H_u}^2 (| H_u^0 |^2 +|H_u^+|^2) + m^2_{q,ij}({\tilde d}^\star_{L,i} {\tilde d}_{L,j}+ {\tilde u}^\star_{L,i} {\tilde u}_{L,j}) \nonumber \\ &+&m^2_{d,ij}{\tilde d}^\star_{R,i} {\tilde d}_{R,j} +m^2_{u,ij}{\tilde u}^\star_{R,i} {\tilde u}_{R,j} +m^2_{l,ij}({\tilde e}^\star_{L,i} {\tilde e}_{L,j}+{\tilde \nu}^\star_{L,i} {\tilde \nu}_{L,j}) + m^2_{e,ij}{\tilde e}^\star_{R,i} {\tilde e}_{R,j}+m^2_{\nu,ij}{\tilde \nu}^\star_{R,i} {\tilde \nu}_{R,j} +m^2_{s,ij}{\tilde S}^\star_{i} {\tilde S}_{j}\nonumber \\ -{\cal L}_{SB, \lambda}&=&\frac12 \left( M_1\lambda_B^2 +M_2 \lambda_W^2 +M_3 \lambda_g^2 + 2M_{B_R} \lambda_B \lambda_R +{\rm h.c.} \right)\, , \label{softbreaking} \end{eqnarray} which contain triple scalar interactions, scalar masses and masses for the gauginos of all gauge groups, denoted by $\lambda$'s. 
The $U(1)_R \times U(1)_{B-L}$ symmetry is broken spontaneously to $U(1)_Y$ by the vacuum expectation values (VEVs) of $\mathcal{X}_{R}$ and $\mathcal{\overline{X}}_{R}$ \begin{equation} \langle \mathcal{X}_{R} \rangle =\frac{v_{\mathcal{X}_{R}}}{\sqrt{2}}\, , \qquad \langle \mathcal{\overline{X}}_{R} \rangle =\frac{v_{\mathcal{\overline{X}}_{R}}}{\sqrt{2}}\, , \end{equation} while $SU(2)_L \times U(1)_Y$ is broken further to $U(1)_{EM}$ by the VEVs of the Higgs doublets \begin{equation} \langle H_d^0 \rangle =\frac{v_d}{\sqrt{2}}\, , \qquad \langle H_u^0 \rangle =\frac{v_u}{\sqrt{2}}\, . \end{equation} We denote $v_R^2= v_{\mathcal{X}_{R}}^2 + v_{\mathcal{\overline{X}}_{R}}^2$ and $\displaystyle \tan \beta_R= \frac{v_{\mathcal{X}_{R}}}{v_{\mathcal{\overline{X}}_{R}}}$, in analogy with $v^2=v_d^2+v_u^2$, $\displaystyle \tan \beta=\frac{v_u}{v_d}$. The spectrum for this model, including particle masses, the neutrino seesaw, the mixing of gauge bosons and the neutralino sector, has been discussed before \cite{Hirsch:2011hg}, and we do not repeat it here. In what follows we concentrate on scanning the model parameters, first by imposing Higgs sector, particle mass and other low-energy restrictions, and then looking for dark matter candidates and a resolution of the anomalous magnetic moment of the muon, thus restricting the parameter space to the region where these conditions are satisfied. \section{Scanning Procedure and Experimental Constraints} \label{sec:scan} We proceed to analyze the model by scanning the fundamental parameter space of the BLRSSM. We use the \textsc{SPheno} 3.3.3 package \cite{Porod:2003um,Porod:2011nf} obtained from the model implementation in \textsc{Sarah} 4.6.0 \cite{Staub:2008uz,Staub:2010jh}. This package employs renormalization group equations (RGEs), modified by the inverse seesaw mechanism, to evolve Yukawa and gauge couplings from $M_{{\rm GUT}}$ to the weak scale, where $M_{{\rm GUT}}$ is determined by the requirement of gauge coupling unification.
We do not strictly enforce unification of the couplings at $M_{{\rm GUT}}$, since a few percent deviation is allowed due to unknown GUT-scale threshold corrections \cite{Lucas:1995ic}. $M_{{\rm GUT}}$ is thus dynamically determined by the requirement of gauge unification, that is $ g_{L} = g_{R} = g_{B-L} \approx g_{3}$, with subindices denoting the gauge couplings associated with $SU(2)_{L}$, $SU(2)_{R}$, $U(1)_{B-L}$ and $SU(3)_{C}$, respectively. With boundary conditions determined at $M_{{\rm GUT}}$, all the soft supersymmetry breaking (SSB) parameters along with the gauge and Yukawa couplings are evolved to the weak scale. \begin{table} \setlength\tabcolsep{7pt} \renewcommand{\arraystretch}{1.4} \begin{tabular}{c|c||c|c} Parameter & Scanned range& Parameter & Scanned range\\ \hline $m_0$ & $[0., 3.]$~TeV & $v_{R}$ & $[6.5, 20.]$~TeV\\ $M_{1/2}$ & $[0., 3.]$~TeV & $diag(Y_{\nu}^{ij})$ & $[0.001, 0.99]$\\ $A_0/m_0$ & $[-3., 3.]$ & $diag(Y_{s}^{ij})$ & $[0.001, 0.99]$\\ $\tan\beta$ & $[0., 60.]$ & {\rm sign of} $\mu$ & {\rm positive} \\ $\tan\beta_R$ & $[1., 1.2]$ & {\rm sign of} $\mu_R$ & {\rm positive or negative} \\ \end{tabular} \caption{\label{tab:scan_lim} Scanned parameter space.} \end{table} We performed random scans over the parameter space, as illustrated in \autoref{tab:scan_lim}, imposing universal boundary conditions for scalar and gaugino masses. We comment briefly first on the parameters chosen, and then on the constraints included. Here $m_{0}$ corresponds to the mass term for all scalars, and $M_{1/2}$ represents the mass term for all gauginos, including the ones associated with the $U(1)_{B-L}$ and $U(1)_{R}$ gauge groups. In setting the ranges for the free parameters, we scan scalar and gaugino SSB mass terms between 0--3 TeV, a region which yields sparticle masses at the low scale, especially for the LSP.
Here $A_{0}$ is the trilinear scalar interaction coupling coefficient, and we adjusted its range to avoid charge and/or color breaking minima, which translates into $\lvert A_{0} \rvert \lesssim 3 m_{0}$ \cite{Kusenko:1996vp,Chattopadhyay:2014gfa}. Also, $\tan\beta$ is the ratio of vacuum expectation values of the MSSM Higgs doublets $v_u/v_d$, while $\tan\beta_{R}$, which denotes the ratio of vacuum expectation values $v_{\mathcal{X}_{R}}/v_{\mathcal{\overline{X}}_{R}}$, is also a free parameter in this model. Practically however, $\tan\beta_{R}$ is required to be close to 1, in order to prevent large $D$-term contributions to the sfermion masses and to avoid tachyonic solutions. The VEV $v_{R}$ is the vacuum expectation value which breaks the extra $U(1)_{B-L} \times U(1)_{R}$ symmetry. Since the breaking scale of the extra symmetry plays a crucial role in determining the mass of the $Z^\prime$, the gauge boson associated with the $U(1)_{B-L} \times U(1)_{R}$ symmetry, we scan $v_{R}$ between 6.5 and 20 TeV to obtain $Z^\prime$ boson masses consistent with the current experimental bounds. The parameter $\mu$ is the bilinear mixing of the MSSM doublet Higgs fields, while $\mu_{R}$ is the bilinear mixing of the $SU(2)_{R}$ remnant Higgs fields, which are singlets under the $SU(2)_{L}$ symmetry. The values of $\mu$ and $\mu_{R}$ can be determined by the radiative electroweak symmetry breaking (REWSB) but their signs cannot; thus, only their signs remain as free parameters. Since the model contributions to the muon anomalous magnetic moment are related to the sign of $\mu M_{1/2}$, we scan over positive $\mu$ values, but we accept both negative and positive solutions of $\mu_R$, while requiring solutions consistent with experimental predictions, and favoring solutions which improve upon the SM predictions for the muon $g-2$ factor. The superpotential of the model also includes a $\mu_S$ parameter, which yields non-zero neutrino masses via the inverse seesaw mechanism.
However, $\mu_S$ is constrained to be small, so that it cannot affect any supersymmetric particle masses or decays. We also fixed the top quark mass to its central value ($m_t$ = 173.3 GeV) \cite{Group:2009ad} in our scan. The Higgs boson mass is very sensitive to the top quark mass, and small changes in its value can shift the Higgs boson mass by 1-2 GeV \cite{Gogoladze:2011aa,Ajaib:2013zha}, although it does not significantly affect sparticle masses \cite{Gogoladze:2011db}. We scan both diag($Y_{\nu}^{ij}$) and diag($Y^{ij}_{s}$) between 0.001--0.99, though the inverse seesaw mechanism prefers values of order 1. \begin{table}{ \setlength\tabcolsep{7pt} \renewcommand{\arraystretch}{1.6} \begin{tabular}{l|c|c||l|c|c} Observable & Constraints & Ref. & Observable & Constraints & Ref.\\ \hline $m_{h_1} $ & $ [122,128] $ GeV & \cite{Chatrchyan:2012xdj} & $m_{\widetilde{t}_1} $ & $ \geqslant 730 $ GeV & \cite{Olive:2016xmw}\\ $m_{\widetilde{g}} $ & $ > 1.75 $ TeV & \cite{Olive:2016xmw} & $ m_{\chi_1^\pm} $ & $ \geqslant 103.5 $ GeV & \cite{Olive:2016xmw} \\ $m_{\widetilde{\tau}_1} $ & $ \geqslant 105 $ GeV & \cite{Olive:2016xmw} & $m_{\widetilde{b}_1} $ & $ \geqslant 222 $ GeV & \cite{Olive:2016xmw}\\ $m_{\widetilde{q}} $ & $ \geqslant 1400 $ GeV & \cite{Olive:2016xmw} & $m_{\widetilde{\tau}_1} $ & $ > 81 $ GeV & \cite{Olive:2016xmw} \\ $m_{\widetilde{e}_1} $ & $ > 107 $ GeV & \cite{Olive:2016xmw} & $m_{\widetilde{\mu}_1} $ & $ > 94 $ GeV & \cite{Olive:2016xmw} \\ $\chi^2(\hat{\mu})$ & $\leq 3 $ & - & BR$(B^0_s \to \mu^+\mu^-) $ & $[1.1,6.4] \times 10^{-9}$ & \cite{Aaij:2012nna} \\ $\displaystyle \frac{{\rm BR}(B \to \tau\nu_\tau)} {{\rm BR}_{SM}(B \to \tau\nu_\tau)} $ & $ [0.15,2.41] $ & \cite{Asner:2010qj} & BR$(B^0 \to X_s \gamma) $ & $ [2.99,3.87]\times10^{-4} $ & \cite{Amhis:2012bh}\\ $m_{Z^{\prime}} $ & $ > 3.5 $ TeV & \cite{ATLAS:2017wce} & $\Omega_{DM}h^{2} $ & [0.09-0.14] & \cite{Komatsu:2010fb,Spergel:2006hy} \\ \end{tabular}
\caption{\label{tab:constraints} Current experimental bounds imposed on the scan for consistent solutions.}} \end{table} In scanning the parameter space, we use the interface which employs the Metropolis-Hastings algorithm described in \cite{Belanger:2009ti}. All collected data points satisfy the requirement of REWSB. After collecting the data, we impose current experimental mass bounds on all the sparticles and the SM-like Higgs boson as highlighted in \autoref{tab:constraints}. Although we restrict the SM-like Higgs boson to lie between 122-128 GeV with 3 GeV uncertainty, we also employed the \textsc{HiggsBounds} 4.3.1 package \cite{Bechtle:2013wla} to compare our Higgs sector predictions with the experimental cross section limits from the LHC, and we require agreement with Higgs boson decay signal strengths at tree level, $h \to WW^\star$, $h \to ZZ^\star$ and $h \to b \bar {b}$. Thus, using the mass-centered $\chi^2 $, and selecting the parametrization for the Higgs mass uncertainty as ``box", we employed the \textsc{HiggsSignals} 1.4.0 package \cite{Bechtle:2013xfa} and kept the solutions which yield total $\chi^2(\hat{\mu}) \leqslant $ 3. Another constraint comes from rare $B$-decay processes, $ B_s \rightarrow \mu^+ \mu^- $ \cite{Aaij:2012nna}, $b \rightarrow s \gamma$ \cite{Amhis:2012bh} and $B_u\rightarrow\tau \nu_{\tau}$ \cite{Asner:2010qj}. The $B$-meson decay into muon pairs, in particular, constrains the parameter space, since there the SM predictions are consistent with the experimental measurements. The supersymmetric contributions are proportional to $(\tan\beta)^6/m_{A_i}^{4}$ and constrained to be small. Hence, $m_{A_i}$ has to be heavy enough ($m_{A_i}\sim$ TeV) to suppress the supersymmetric contributions for large $\tan\beta$ values.
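Conceptually, the Metropolis-Hastings-based scan can be sketched as below. The $\chi^2$ function here is a toy placeholder for the full chain (the \textsc{SPheno} spectrum calculation plus the constraints of \autoref{tab:constraints}), and the targets and scales are purely illustrative, not fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)
BOUNDS = {"m0": (0.0, 3000.0), "M12": (0.0, 3000.0), "tanb": (0.0, 60.0)}

def chi2(p):
    """Toy stand-in for 'compute the spectrum and score the point against
    the constraints'; it simply prefers an assumed best-fit region."""
    targets = {"m0": 2000.0, "M12": 1500.0, "tanb": 50.0}
    scales = {"m0": 500.0, "M12": 400.0, "tanb": 5.0}
    return sum(((p[k] - targets[k]) / scales[k]) ** 2 for k in BOUNDS)

def metropolis_scan(steps=5000, step_frac=0.05):
    point = {k: rng.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}
    c, chain = chi2(point), []
    for _ in range(steps):
        prop = {k: point[k] + step_frac * (hi - lo) * rng.standard_normal()
                for k, (lo, hi) in BOUNDS.items()}
        if any(not (lo <= prop[k] <= hi) for k, (lo, hi) in BOUNDS.items()):
            continue                     # stay inside the scanned box
        c_prop = chi2(prop)
        # Metropolis-Hastings acceptance with likelihood exp(-chi2 / 2)
        if c_prop < c or rng.random() < np.exp(-(c_prop - c) / 2.0):
            point, c = prop, c_prop
        chain.append(dict(point))
    return chain

chain = metropolis_scan()  # the chain concentrates around low-chi2 regions
```

Proposals outside the scanned box are rejected, and accepted points cluster around the regions of parameter space that best satisfy the (here, toy) constraints.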
In addition to these limitations, dark matter observations severely restrict the parameter space, requiring the LSP to be stable and electrically and color neutral, which excludes a significant portion of parameter space where the stau is the LSP. We concentrate on two different data sets, one with the neutralino being the LSP, and one where the sneutrino is the LSP, and we shall distinguish these two scenarios throughout our investigations. We employ the \textsc{micrOMEGAs} 4.3.1 package \cite{Belanger:2014vza} and tag the solutions which yield consistent relic density within the 20\% uncertainty range provided by the WMAP data \cite{Komatsu:2010fb,Spergel:2006hy}, as specified in \autoref{tab:constraints}. Apart from the relic abundance constraint, we do not impose any restriction from the dark matter experiments. All the experimental restrictions mentioned above are listed in \autoref{tab:constraints}. \section{Dark matter phenomenology} \label{sec:DMpheno} For either the neutralino or the sneutrino to be a viable dark matter candidate, it must yield the correct level of relic abundance for thermal dark matter production in the early Universe, determined very precisely from the amount of non-baryonic dark matter in the energy-matter content of the Universe, $\Omega_{DM}h^2=0.1199\pm 0.0027$ \cite{Ade:2013zuv}, with $\Omega_{DM}$ being the energy density of the dark matter with respect to the critical energy density of the universe, and $h$ the reduced Hubble parameter. In addition, as the lack of any dark matter signals in either direct or indirect dark matter detection experiments confronts our theoretical expectations, the candidates must satisfy increasingly severe constraints from experiments. The interaction of dark matter with detector nuclear matter can be spin-dependent or spin-independent.
The spin-dependent scattering can only happen for nuclei with an odd number of nucleons in the detector material, while in spin-independent (scalar) scattering, the scattering amplitudes of the DM off all the nucleons in the nucleus are added in phase. Consequently, in direct detection experiments, the experimental sensitivity to spin-independent (SI) scattering is much larger than the sensitivity to spin-dependent scattering, and thus we shall concentrate on the former. We proceed as follows. First, we analyze the consequences of having the lightest neutralino as the dark matter candidate. Using the results in the previous sections, we explore the parameter space of the model which is consistent with this assumption. We follow in the next subsection with the parameter restrictions for sneutrino dark matter. \subsection{Neutralino Dark Matter} \label{subsec:neutralinoDM} In this subsection, we concentrate on analyzing the consequences on the mass spectrum of the BLRSSM obtained by scanning over the parameter space given in \autoref{tab:scan_lim} where the lightest neutralino ($\widetilde{\chi}_1^0$) is always the LSP, and highlight the solutions compatible with the constraints shown in \autoref{tab:constraints}. We start with \autoref{fig:freeparams}, which displays the allowed parameter regions, with plots in the $ m_{0} - M_{1/2} $, $ m_0 - A_0/m_0 $ and $ M_{1/2} - \tan{\beta}$ planes. Throughout the graphs, all points satisfy REWSB. Blue points satisfy all experimental mass bounds, signal strengths of the SM-like Higgs boson and rare $B$-decay constraints given in \autoref{tab:scan_lim}. Red points obey the above mentioned constraints, as well as the relic density bounds, 0.09 $ \leq \Omega_{DM}h^{2} \leq$ 0.14.
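The relic-density tagging that separates consistent solutions can be expressed as a simple filter. This is only a sketch: in the actual analysis $\Omega_{DM}h^{2}$ is computed by \textsc{micrOMEGAs} for each spectrum, and the sample points below are hypothetical:

```python
def relic_density_ok(omega_h2, lo=0.09, hi=0.14):
    """WMAP-motivated relic density window (central value with ~20% uncertainty)."""
    return lo <= omega_h2 <= hi

# Hypothetical scan output: (LSP mass in GeV, predicted relic density).
solutions = [
    {"m_lsp": 350.0, "omega_h2": 0.11},   # inside the window: tagged consistent
    {"m_lsp": 900.0, "omega_h2": 0.02},   # under-abundant: not tagged
    {"m_lsp": 450.0, "omega_h2": 0.30},   # over-abundant: not tagged
]
consistent = [s for s in solutions if relic_density_ok(s["omega_h2"])]
```

Only the points passing this window, on top of all collider and flavor constraints, are kept as dark matter solutions.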
The $ m_{0} - M_{1/2} $ plane shows that solutions with $M_{1/2} \lesssim $ 800 GeV are excluded by the constraints in \autoref{tab:constraints}, and the requirement of a consistent relic density (red points) excludes a significant portion of the LHC allowed region (blue points). For $M_{1/2}\sim$ 1 TeV, $m_0$ is bounded between 2--3 TeV, and low $m_0$ values can survive for larger $M_{1/2}$. On the other hand, the $ m_0 - A_{0}/m_0 $ panel shows that the regions with larger $m_0$ values prefer positive values of the trilinear scalar interaction strength $A_0$, while almost all solutions with a consistent relic density have a positive $A_{0}$ parameter. Unlike the $B-L$ Supersymmetric Standard Model (BLSSM) \cite{Un:2016hji}, where negative $A_{0}$ solutions for $m_0 \geq$ 1 TeV do not satisfy REWSB, here all LSP constraints can be fulfilled for this portion of parameter space, while only the relic density constraint imposes positivity of $A_0$. The $ M_{1/2} -\tan{\beta}$ plot indicates that it is possible to find solutions with 0.09 $ \leq \Omega_{DM}h^{2} \leq$ 0.14 only for large $\tan \beta$ values, 40 $ \leq \tan\beta \leq$ 60, although it is easier to satisfy the LHC limitations for low $\tan\beta$ values. \begin{figure} \centering \includegraphics[scale=0.31]{m0_mhf} \includegraphics[scale=0.31]{m0_A0m0} \includegraphics[scale=0.31]{mhf_tanb} \caption{ Parameter scans for the neutralino LSP scenario. (Left) $ m_{0} $ vs $ M_{1/2} $, (center) $ m_{0} $ vs $A_{0}/m_{0}$ and (right) $ M_{1/2}$ vs $\tan{\beta}$. All points are consistent with REWSB and a neutralino LSP. Blue points satisfy all the experimental limits listed in \autoref{tab:constraints}.
Red points form a subset of blue, and represent solutions consistent with the relic density constraint.} \label{fig:freeparams} \end{figure} In \autoref{fig:sparticles}, we show specific results for the determination of the sparticle mass spectrum, with plots in the (top left) $ m_{\widetilde{t}_1} - m_{\widetilde{\chi}_1^0} $, (top right) $ m_{\widetilde{b}_1} - m_{\widetilde{\chi}_1^0} $, (middle left) $ m_{\widetilde{\chi}_1^{\pm}} - m_{\widetilde{\chi}_1^0}$ and (middle right) $ m_{\widetilde{\tau}_1} - m_{\widetilde{\chi}_1^0}$ planes. The color coding is the same as in \autoref{fig:freeparams}. Furthermore, the region in which the two masses are degenerate is displayed as a solid green line. We note that LSP neutralino solutions consistent with the relic density bound can be obtained only when 300 GeV $ \leq m_{\widetilde{\chi}_1^0} \leq $ 800 GeV. As can be seen from the $ m_{\widetilde{t}_1} - m_{\widetilde{\chi}_1^0} $ and $ m_{\widetilde{b}_1} - m_{\widetilde{\chi}_1^0} $ planes, we find that the stop and sbottom masses have to be at least $\sim$ 1.5 TeV and 2 TeV, respectively, to fulfill all the restrictions. Even though it is possible to find light stop solutions ($m_{\widetilde{t}_1} \leq $ 1 TeV) when 340 GeV $ \leq m_{\widetilde{\chi}_1^0} \leq $ 550 GeV, the relic density condition is not satisfied for these solutions. Moreover, unlike the results of the BLSSM \cite{Un:2016hji}, where the lightest chargino masses are always above 600 GeV, here the $ m_{\widetilde{\chi}_1^{\pm}} - m_{\widetilde{\chi}_1^0}$ plot shows that there is a region of parameter space where the lightest chargino is nearly degenerate with the lightest neutralino when 300 GeV $ \leq m_{\widetilde{\chi}_1^0} \leq $ 500 GeV. These solutions correspond to the case where the lightest chargino decays into the neutralino LSP and a $W/W^\star$ boson ($\widetilde{\chi}_1^{\pm} \to \widetilde{\chi}_1^0 + W^\pm(W^{\star \pm})$), and the branching ratio for this channel is almost 1.
The middle right panel, the $ m_{\widetilde{\tau}_1} - m_{\widetilde{\chi}_1^0}$ plane, illustrates the stau mass along with the LSP neutralino mass. There is a region of parameter space around $m_{\widetilde{\chi}_1^0} \sim 600 $ GeV where the stau mass is almost degenerate with the LSP neutralino, so that the stau becomes the next-to-lightest supersymmetric particle (NLSP), but for another region of the parameter space the stau can be much heavier than the neutralino LSP. The lightest stau NLSP solutions compatible with the relic density constraint occur around 500 GeV. One can choose one of these solutions and study the relevant neutralino annihilation processes mediated by a light stau \cite{Calibbi:2013poa}. The bottom plots in \autoref{fig:sparticles} show our results for the sparticle spectrum for the gluino and sneutrinos, with plots in the $ m_{\widetilde{q}} - m_{\widetilde{g}} $ (where $\widetilde{q}$ represents squarks from the first two families) and $ m_{\widetilde{\nu}_1} - m_{\widetilde{\chi}_1^0} $ planes. The $ m_{\widetilde{q}} - m_{\widetilde{g}} $ plane shows that the squark masses for the first two families and the gluino masses should be heavier than 2 TeV but lighter than 4 TeV (light blue points). Although the relic density condition and the current ATLAS experimental limit \cite{ATLAS:2017cjl} tightly constrain a crucial portion of the parameter space, most of the solutions are consistent with this experimental exclusion. Finally, the $ m_{\widetilde{\nu}_1} - m_{\widetilde{\chi}_1^0} $ plane reveals that it is hard to find solutions with the sneutrino as the NLSP if we require consistency with the relic density bound, and the lightest sneutrino solutions satisfying all bounds can be obtained at around 1 TeV. Note that the graphs also contain information on the composition of the neutralino LSP.
As can be seen from the gluino vs squarks panel, light red points, which represent mixed or higgsino-like neutralino LSP solutions consistent with the relic density bounds, are mostly found under the yellow curve (the excluded region). Light blue points, representing mixtures of $R$-bino and $B-L$ bino (gauginos of $U(1)_R$ and $U(1)_{B-L}$, respectively) neutralino LSPs, are mostly located within the $1\sigma$ error of the yellow line. \begin{figure} \centering \includegraphics[scale=0.40]{mchi1_masst1}\hspace{0.5cm} \includegraphics[scale=0.40]{mchi1_massb1}\\ \includegraphics[scale=0.40]{mcha1_mchi1}\hspace{0.5cm} \includegraphics[scale=0.40]{mchi1_massstau} \includegraphics[scale=0.40]{gluino_squarks_color}\hspace{0.5cm} \includegraphics[scale=0.40]{mchi1_massSv1} \caption{Plots in the (top left) $ m_{\widetilde{t}_1} - m_{\widetilde{\chi}_1^0} $, (top right) $ m_{\widetilde{b}_1} - m_{\widetilde{\chi}_1^0} $, (middle left) $ m_{\widetilde{\chi}_1^{\pm}} - m_{\widetilde{\chi}_1^0}$, (middle right) $ m_{\widetilde{\tau}_1} - m_{\widetilde{\chi}_1^0}$, (bottom left) $ m_{\widetilde{q}} - m_{\widetilde{g}} $, and (bottom right) $ m_{\widetilde{\nu}_1} - m_{\widetilde{\chi}_1^0} $ planes. The color coding is the same as in \autoref{fig:freeparams}. In the bottom left panel, the color coding represents the neutralino composition as indicated in the insert. The solid line in each plane indicates the degenerate mass region. } \label{fig:sparticles} \end{figure} To continue the investigation of the neutralino LSP composition, in \autoref{fig:DMneutralinolsp_2} we plot the correlation between the neutralino mass and the gaugino and higgsino mass ratios, with (top left) $ M_4/M_1 $, (top right) $ M_1/\mu $, (bottom left) $ M_2/\mu $, and (bottom right) the $\mu_R - \mu$ correlation, highlighting solutions with the correct relic density. The color coding is the same as in \autoref{fig:freeparams}.
According to the $ M_4/M_1 - m_{\widetilde{\chi}_1^0} $ plane, there must be a clear relation between the $B-L$ bino $ \widetilde{B}$ and $ \widetilde{B}_R $ masses, such that the ratio $ M_4/M_1 $ should be around 1.8, decreasing slightly as the neutralino LSP mass increases. The next two plots compare the bino-higgsino (top right) and wino-higgsino (bottom left) masses, respectively, through their mass ratios. In the top right plot, almost all solutions satisfying the LHC collider bounds, and {\it all} solutions satisfying the relic density constraints, have $ M_1/\mu \lesssim $ 1, that is, the bino mass is smaller than the higgsino mass parameter. The bottom left plane shows that, despite allowing for light higgsinos, the wino is mostly lighter than the higgsino over all the parameter space where the relic density bounds are satisfied. The $\mu_R - \mu$ plot (bottom right) shows that solutions prefer positive $ \mu_R $ over negative values, and $ \mu_R $ can take values in a large range between 500 GeV--7 TeV, while the relic density bound can only be fulfilled for low $\mu$ values. As can be seen from the $\mu_R - \mu$ plane, the relic density constraint can be satisfied mostly when $\mu \lesssim 0.5 $ TeV and $ 0.7 $ TeV $ \lesssim \mu \lesssim 1.5 $ TeV. The neutralino LSP content consistent with all constraints (including relic density) is as follows: its mass is constrained as $ 300 $ GeV $ \lesssim m_{\widetilde{\chi}_1^0} \lesssim 500 $ GeV, and for those parameter points the neutralino LSP is a $\widetilde{B}_R$-ino, $\widetilde{H}$-ino and $\widetilde{B}$-ino mixture; in this region the wino masses are heavier than the higgsino masses for solutions consistent with the relic density bound. Since $ M_1/\mu \lesssim $ 1, the bino contributes more than the higgsinos to the LSP neutralino.
In the region $ 500 $ GeV $ \lesssim m_{\widetilde{\chi}_1^0} \lesssim 800 $ GeV, the LSP neutralino is about a 60\% $\widetilde{B}_R$--40\% $\widetilde{B}$ admixture, consistent also with the top left plot in \autoref{fig:DMneutralinolsp}. \begin{figure} \centering \includegraphics[scale=0.40]{M4SUSYM1SUSY_mchi1}\hspace{0.5cm} \includegraphics[scale=0.40]{M1SUSYmu_mchi1} \includegraphics[scale=0.40]{M2SUSYmu_mchi1}\hspace{0.5cm} \includegraphics[scale=0.40]{MuR_Mu} \caption{Plots for the neutralino LSP mass and mass ratios: (top left) $ M_4/M_1 $, (top right) $ M_1/\mu $, (bottom left) $ M_2/\mu $, and (bottom right) $\mu_R - \mu$ correlations. The color coding is the same as in \autoref{fig:freeparams}.} \label{fig:DMneutralinolsp_2} \end{figure} In \autoref{fig:DMneutralinolsp} we present results specific to dark matter phenomenology, plotting the relic density and the spin-independent cross section as functions of the lightest neutralino mass. In addition, we plot the correlation between the lightest pseudoscalar and the third lightest neutral Higgs boson $h_3$, to highlight the fact that dark matter annihilation proceeds through these two funnels. We show the (top left) $\Omega_{DM} h^2 - m_{\widetilde{\chi}_1^0} $, (top right) $ \sigma_{nucleon}^{SI}- m_{\widetilde{\chi}_1^0}$, (bottom left) $m_{A_1} - m_{\widetilde{\chi}_1^0} $, and (bottom right) $ m_{h_3} - m_{A_1} $ plots. In the top left and top right planes the color coding is indicated in the insert, while for the bottom plots the color coding is the same as in \autoref{fig:DMneutralinolsp_2}. The top left plot confirms our previous finding that the LSP neutralino content between 500--800 GeV is composed of 60\% $\widetilde{B}_R$-ino and 40\% $\widetilde{B}$-ino, whereas when 300 GeV $ \lesssim m_{\widetilde{\chi}_1^0} \lesssim $ 500 GeV, its content is shared among the $\widetilde{B}_R$-ino, $\widetilde{H}$-ino and $\widetilde{B}$-ino.
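The qualitative composition statements above follow a simple counting rule: the lightest neutralino is dominated by whichever gaugino or higgsino mass parameter is smallest. The sketch below only illustrates that rule with hypothetical input values; the true composition is of course set by diagonalizing the full neutralino mass matrix, which this heuristic ignores:

```python
# Rough LSP-content heuristic: compare the soft gaugino masses and the
# higgsino mass parameters; the smallest one dominates the lightest
# neutralino. Off-diagonal mixing is neglected, so this is indicative only.
def dominant_content(M1, M2, M4, mu, mu_R):
    params = {
        "bino":       abs(M1),    # hypercharge bino ~B
        "wino":       abs(M2),    # SU(2)_L wino
        "BR-bino":    abs(M4),    # U(1)_R bino ~B_R
        "higgsino":   abs(mu),
        "R-higgsino": abs(mu_R),
    }
    return min(params, key=params.get)

# Illustrative values in the spirit of the text: M4/M1 ~ 1.8 and M1/mu < 1
# give a bino-dominated LSP with B_R-ino and higgsino admixture.
print(dominant_content(M1=450, M2=900, M4=810, mu=600, mu_R=2000))  # bino
```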
The top left plot shows the dependence of the relic density, and the top right plot the dependence of the spin-independent proton and neutron cross sections, on the neutralino LSP mass. The solid green line represents the current exclusion limit from the XENON1T experiment \cite{Aprile:2017iyp}. As can be seen from the graph, most solutions consistent with the relic density constraint lie below the XENON1T exclusion bound, specifically between $10^{-11}$ pb and $10^{-10}$ pb. Hence they can be probed by the next generation DM detectors such as XENONnT \cite{Aprile:2015uzo}, LZ and DARWIN \cite{Aalbers:2016jon}. Note that we also have a substantial number of solutions consistent with the relic density above the XENON1T exclusion limit. These solutions correspond to the region where 300 GeV $ \lesssim m_{\widetilde{\chi}_1^0} \lesssim $ 500 GeV and where the LSP content is a mixture of $\widetilde{B}_R$-ino, $\widetilde{H}$-ino and $\widetilde{B}$-ino. Thus all solutions surviving both the current XENON1T exclusion limit and the relic density constraint consist of LSP neutralinos with 500 GeV $ \lesssim m_{\widetilde{\chi}_1^0} \lesssim $ 800 GeV, and with a 60\% $\widetilde{B}_R$ and 40\% $\widetilde{B}$ admixture. Finally, the $ m_{A_1} -m_{\widetilde{\chi}_1^0}$ and $ m_{h_3} - m_{A_1} $ plots indicate the funnel channels for the LSP neutralino. The solid green line displays the degenerate mass region for the lightest CP-odd Higgs boson and the LSP neutralino, while the yellow shaded region indicates solutions with $ m_{A_1} = 2 m_{\widetilde{\chi}_1^0} $, within an 8\% error. As can be seen from the graph, two LSP neutralinos can annihilate through the lightest CP-odd Higgs boson or the neutral $h_3$ Higgs boson when 450 GeV $ \lesssim m_{\widetilde{\chi}_1^0} \lesssim $ 800 GeV. Solutions consistent with the relic density constraint can be found when $A_1$ is degenerate with $h_3$, with mass between 1 and 3 TeV.
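The shaded funnel band described above is the resonance condition $m_{A_1} \simeq 2 m_{\widetilde{\chi}_1^0}$ within an 8\% window. As a minimal sketch, the condition can be written as a predicate (here the tolerance is taken relative to $m_{A_1}$, which is an assumption; the text does not specify the normalization of the 8\% error):

```python
# A-funnel condition: s-channel annihilation through A1 (or h3) is
# resonantly enhanced when the mediator mass is close to twice the LSP mass.
def in_funnel(m_A1, m_chi, tol=0.08):
    """True if m_A1 matches 2*m_chi within the fractional tolerance tol."""
    return abs(m_A1 - 2.0 * m_chi) <= tol * m_A1

print(in_funnel(m_A1=1200.0, m_chi=600.0))  # exactly on resonance -> True
print(in_funnel(m_A1=1200.0, m_chi=560.0))  # off by 80/1200 ~ 6.7% -> True
print(in_funnel(m_A1=1200.0, m_chi=500.0))  # off by 200/1200 ~ 16.7% -> False
```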
In this energy range $A_1$ and $h_3$ provide the main funnel channels of this model. Apart from these, we have also verified the relation of the relic density with the IceCube confidence level exclusion and the neutrino flux, and all neutralino LSP solutions surviving the relic density and cross section bounds are within the 1\% confidence level of the experimental result. \begin{figure} \centering \includegraphics[scale=0.40]{neutralino_relic_DMcontent}\hspace{0.5cm} \includegraphics[scale=0.40]{nucleoncross_mchi1_color}\\ \includegraphics[scale=0.40]{mchi1_massAh3}\hspace{0.5cm} \includegraphics[scale=0.40]{massAh3_massh3} \caption{Dependence of: (top left) the relic density and (top right) the spin-independent cross section with nuclei on $ m_{\widetilde{\chi}_1^0} $, (bottom left) the lightest pseudoscalar Higgs mass on $ m_{\widetilde{\chi}_1^0} $, and (bottom right) the degeneracy between the lightest pseudoscalar mass and the third lightest neutral Higgs boson. Both of these provide the funnel channel for LSP neutralino annihilation. All points are consistent with the LHC and B-physics bounds. The color coding in the $ m_{\widetilde{\chi}_1^0} - m_{A_1} $ plot is the same as in \autoref{fig:freeparams}. The solid line shows the degenerate mass region in these plots. In addition, the shaded region represents $A_1$ funnel solutions where $ m_{A_1} = 2 m_{\widetilde{\chi}_1^0} $ within an 8\% error.} \label{fig:DMneutralinolsp} \end{figure} \clearpage \subsection{Sneutrino Dark Matter} \label{subsec:sneutrinoDM} The BLRSSM contains, in addition to the three left-handed sneutrinos, six additional singlet states: three right-handed sneutrinos and three $\widetilde S$, the scalar partners of $S$. The latter two provide candidates for sneutrino dark matter, as they do not suffer from too large an annihilation cross section (and thus too small a relic density) from interacting through the $Z$ or $W$ bosons.
Sneutrinos thus provide alternative candidates for dark matter in this model, and we analyze their consequences in this subsection. In the left and right panels of \autoref{fig:DMsneutrinolsp} we show the dependence of the relic density $\Omega_{DM} h^2 $ on the lightest scalar neutrino mass. The color bars to the right of each plot indicate the right-handed sneutrino and the $\widetilde S$ content, respectively. As can be seen from the plot, even though it is possible to find sneutrino LSP solutions for almost all values of $ m_{\widetilde{\nu}_1} $ between 0--1400 GeV, requiring consistency with the relic density bound constrains LSP sneutrinos to lie between 200--400 GeV. Thus the indication would be that the sneutrino LSP case allows lighter LSP masses compared to the neutralino LSP scenario. The right-handed content of the sneutrino LSP solutions varies between 45\%--80\%, while the $\widetilde S$ composition varies between 20\%--52\%. Imposing the relic density bounds, the mixed sneutrino LSP is roughly a 50--50\% mixture of the right-handed and $\widetilde S$ states. Thus the scalar partner of $S$, introduced for the neutrino seesaw, plays a crucial role in the sneutrino LSP composition. \begin{figure} \centering \includegraphics[scale=0.42]{relic_sneutrino_rightcontent}\hspace{0.5cm} \includegraphics[scale=0.42]{relic_sneutrino_Scontent}\\ \caption{ Dependence of the relic density $ \Omega_{DM} h^2 $ on the lightest sneutrino mass $m_{\widetilde{\nu}_1}$, showing the right-handed sneutrino composition (left panel) and the $\widetilde S$ composition (right panel). All points are consistent with REWSB, the LHC bounds, the B-physics constraints and a sneutrino LSP, while only the points between the two dashed lines satisfy the relic density constraint.} \label{fig:DMsneutrinolsp} \end{figure} In \autoref{fig:DMsneutrinolsp_2} we analyze the dependence of the spin-independent nucleon cross section on the sneutrino LSP mass, for both the proton, $ \sigma_{p}^{SI} $ (left panel), and the neutron (right panel).
The color coding is the same as in \autoref{fig:freeparams} and is also indicated in the legend of the plots. We note that both dark matter constraints (the relic density and $ \sigma_{p}^{SI} $) severely restrict the parameter space where the sneutrino is the LSP in this model. \begin{figure} \centering \includegraphics[scale=0.42]{protoncross_massSv1foroldZp}\hspace{0.5cm} \includegraphics[scale=0.42]{neutroncross_massSv1foroldZp} \caption{Dependence of the spin-independent cross section for the proton $\sigma_{p}^{SI} $ (left) and neutron $ \sigma_{n}^{SI} $ (right) as a function of the sneutrino LSP mass $m_{{\widetilde \nu}_1}$. All points are consistent with REWSB and a sneutrino LSP. The color coding in each plane is the same as in \autoref{fig:freeparams}. } \label{fig:DMsneutrinolsp_2} \end{figure} \newpage \section{Comparison of the two Dark Matter scenarios} \label{sec:comparisonLSP} In the previous section, we analyzed the DM phenomenology for both the neutralino LSP and sneutrino LSP scenarios in the BLRSSM. As discussed in detail, the BLRSSM provides quite different mass spectra for the two distinct variants of LSP, and these two different mass spectra change the low scale DM phenomenology in an important manner. While we found the sneutrino LSP scenario to be highly constrained and statistically unlikely, there are a few parameter points that survive the universal boundary conditions, so in this section we compare results for the two different LSP scenarios. In \autoref{fig:electroweakcompare} we plot the $ \mu - \mu_R $ and $ \tan\beta - M_2/\mu $ planes. Dark blue points satisfy the mass bounds and the constraints from the rare $B$-decays for the neutralino LSP solutions. Red points form a subset of dark blue, and represent neutralino LSP solutions which satisfy the relic density constraint.
Light blue solutions are consistent with the mass bounds and the constraints from the rare $B$-decays for sneutrino LSP solutions, while yellow points form a subset of light blue, and represent sneutrino LSP solutions consistent with the relic density constraint. The $ \mu - \mu_R $ plot compares the higgsino sectors of our model. We note that while the neutralino LSP solutions can allow values of $\mu_R$ up to 7--9 TeV, sneutrino LSP solutions prefer low $\mu_R$ values, mainly between 0--4 TeV for positive $\mu_R$. Even this range becomes narrow, around 1.5 TeV, for lighter higgsinos. For the sneutrino LSP solutions, $\mu < 1.5 $ TeV, and the $\mu_R$ values favor the region between 4--7 TeV. On the right panel, the $ \tan\beta - M_2/\mu $ plane shows the relative wino and higgsino mass ranges for the two LSP scenarios. From the plots, we conclude that for the sneutrino LSP, $M_2/\mu \lesssim $ 1 and the wino is always lighter than the higgsino over all the parameter space. For the neutralino LSP case, the higgsinos can be lighter or heavier than the winos. Also, $\tan \beta$ values for sneutrino LSP solutions are found anywhere in the 0--50 range, and solutions consistent with the relic density constraint can be obtained for either $M_2/\mu \lesssim $ 1 or $M_2/\mu \gtrsim $ 1. Requiring consistency with the relic density bound, solutions with $M_2/\mu \gtrsim $ 1 correspond to the neutralino LSP, with $\tan \beta$ values in the 10--50 range. Requiring compatibility with the relic density bound further constrains the region $M_2/\mu \lesssim $ 1 to correspond to the $\widetilde{B}-\widetilde{B}_R$ dominated neutralino LSP solution, where $\tan \beta$ should be between 40--60. \begin{figure} \centering \includegraphics[scale=0.42]{MuR_Mu_compare}\hspace{0.5cm} \includegraphics[scale=0.42]{M2SUSYmu_tanb} \caption{Dependence of the higgsino parameters $ \mu_R$ and $\mu$ (left), and of $ M_2/\mu $ on $\tan \beta$ (right).
All points are consistent with the LHC, B-physics, \textsc{HiggsBounds} and \textsc{HiggsSignals} bounds. Dark blue points display neutralino LSP solutions whereas light blue ones stand for sneutrino LSP solutions. Red points represent neutralino LSP solutions, while yellow ones stand for sneutrino LSP solutions, consistent in addition with the relic density bound.} \label{fig:electroweakcompare} \end{figure} In general the model clearly favors solutions with a neutralino LSP over those with a sneutrino LSP. \subsection{The neutral Higgs sector} \label{subsec:Higgssector} The choice of LSP affects the heavier states in the Higgs sector of the BLRSSM. For both neutralino and sneutrino LSP solutions, the lightest neutral Higgs boson can be lighter than 150 GeV. \autoref{fig:higgssec} shows the results for the Higgs masses for both LSP cases, with plots of $ m_{h_2}$ relative to $m_{h_1} $ (left) and of the dependence of $ m_{A_1}$ on $\tan \beta$ (right), where $A_1$ is the lightest pseudoscalar. The color coding is described in the legend of these planes. The left plot shows that while the two lightest neutral Higgs bosons can be degenerate when the LSP is the neutralino, degenerate solutions cannot be obtained for the sneutrino LSP, where the second lightest Higgs boson mass lies between 150--700 GeV. This phenomenon can be explained as due to the contributions obtained from different elements of the CP-even Higgs mass matrix. When $ m_{h_2} > $ 150 GeV, the dominant contribution comes from the $m_{RR}^2$ element of the CP-even Higgs mass matrix, corresponding to the singlet Higgs fields associated with $U(1)_R \times U(1)_{B-L}$. Thus there $h_2$ is mostly a singlet Higgs boson. The off-diagonal term $m_{LR}^2$, which provides the essential mixing between the two sectors, becomes important when $ m_{h_2} < $ 150 GeV.
For the sneutrino LSP solutions, the Yukawa coupling $Y_s$ is constrained to be small (as the sneutrino LSP mass is generated mostly through this term), unlike when the LSP is the neutralino. The small $Y_s$ coupling then imposes lighter $h_2$ masses, mostly generated by the singlet Higgs field $\mathcal{X}_{R}$. The other Higgs bosons can be quite heavy. This is seen also on the right-hand side of \autoref{fig:higgssec}, where we plot the dependence of the mass of the lightest pseudoscalar Higgs boson $A_1$ (degenerate with $h_3$) on $\tan \beta$. As before, the region $\tan \beta \sim$ 40--60 represents the mixed-bino neutralino LSP solutions, while for $\tan \beta<40$, regions with larger (smaller) $A_1$ mass correspond to the sneutrino (neutralino) LSP. Thus the second lightest Higgs boson is a singlet in both scenarios, but, while the sneutrino LSP scenario favors the 150--700 GeV mass range, for the neutralino LSP solutions the second Higgs mass can be much heavier than 700 GeV. \begin{figure} \centering \includegraphics[scale=0.42]{masshh2_mashh1_BOTH}\hspace{0.5cm} \includegraphics[scale=0.42]{massAh3_tanb_BOTH}\\ \caption{Dependence of $ m_{h_2}$ on $m_{h_1} $ (left) and dependence of $ m_{A_1}$ on $\tan \beta $ (right). The color coding is the same as in \autoref{fig:electroweakcompare}. In addition, the solid green line shows the degenerate mass region where $ m_{h_1} = m_{h_2} $.} \label{fig:higgssec} \end{figure} \subsection{The muon anomalous magnetic moment} \label{subsec:muong2} The experimental results for the muon anomalous magnetic moment pioneered by the BNL E821 experiment \cite{PhysRevD.73.072003,PhysRevD.80.052008} are being improved with updated results from the FNAL E989 \cite{Grange:2015fou} and J-PARC E34 \cite{Saito:2012zz} experiments.
However, the SM prediction for the muon anomalous magnetic moment \cite{Davier:2010nc}, $a_\mu=(g-2)_\mu/2$, indicates a 3.5$\sigma$ deviation from the experimental result, \begin{equation} \Delta a_\mu = a_\mu^{\rm exp} - a_\mu^{\rm SM} = (28.7 \pm 8.0) \times 10^{-10} \quad (1\sigma). \end{equation} The SM prediction is limited in precision by the evaluation of the hadronic vacuum polarization contributions. Calculations exist for the lowest-order contributions, evaluated using perturbative QCD and experimental cross section data for $e^+e^-$ annihilation into hadrons. However, the large discrepancy has motivated possible explanations within new physics scenarios. In the MSSM, if one of the smuons and the bino or wino soft masses are sufficiently light, supersymmetry can ameliorate this discrepancy. However, if the model is required to obey universality conditions at $M_{\rm GUT}$, obtaining the correct Higgs boson mass is the greatest challenge to explaining the muon $g-2$ anomaly. We can expect better results from the BLRSSM, since it includes an inverse seesaw mechanism and an extra gauge sector. The effect of the inverse seesaw mechanism can be read off from the RGEs for the smuons. As can be seen from the last two terms of \autoref{RGslepton}, the Yukawa coupling $Y_\nu$ helps decrease the smuon masses at low scales, as compared to models without the inverse seesaw. A similar effect can be read off from the RGEs of $\mu$ (\autoref{RGMu}) and the sneutrinos (\autoref{RGsneutrino}). The presence of another free Yukawa coupling $Y_s$, in addition to $Y_\nu$, contributes to evolving light sneutrino masses to the low scale via the RGEs, as can be seen from \autoref{RGsneutrino}. Here we investigate the effects on the muon $g-2$ anomaly for both the sneutrino and neutralino LSP cases. \autoref{fig:muog2} displays the correlations between $\Delta a_{\mu}$ and the relevant free parameters $ m_{0}$, $ M_{1/2} $, $ \tan \beta $ and $\mu$. The color coding is the same as in \autoref{fig:higgssec}.
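The 3.5$\sigma$ figure quoted above is simply the pull of the central value against the combined $1\sigma$ uncertainty in the equation above, and the same bands classify any new-physics contribution; a quick numerical check (the sample contribution at the end is an illustrative value, not a scan result):

```python
# Pull of the measured (g-2)_mu discrepancy, using the central value and
# combined 1-sigma uncertainty quoted in the text.
delta_amu = 28.7e-10    # central value of a_mu(exp) - a_mu(SM)
sigma     = 8.0e-10     # combined 1-sigma uncertainty

print(round(delta_amu / sigma, 2))   # ~3.59, quoted as a 3.5 sigma deviation

def band(a_susy):
    """Number of sigma by which a SUSY contribution misses the central value."""
    return abs(a_susy - delta_amu) / sigma

# An illustrative small contribution still sits ~1.8 sigma away, i.e. it
# barely brings the discrepancy inside the 2 sigma region:
print(round(band(14.0e-10), 2))
```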
In addition, the shaded regions show the 1, 2 and 3$\sigma$ deviations between the calculated contribution to the muon $g-2$ factor and its experimental value. The top left plot shows that $\Delta a_\mu$ favors low values of $ m_{0}$ (light scalar masses). Similarly, light gaugino masses (light electroweakinos) are also required to decrease the $\Delta a_\mu$ discrepancy, as seen from the top right plot. The need for light scalars and electroweakinos is compatible with the large $\tan \beta $ values (bottom left panel). Finally, $\Delta a_\mu$ depends sensitively on the $\mu$ parameter, as in the MSSM, and here the contribution to the muon $g-2$ factor drops sharply for $\mu > $ 1.5 TeV. This is a one-loop effect: as the $\mu$ term increases, the contributions where higgsinos run in the loop are suppressed, leaving the bino-smuon loop as the only effective contributing diagram. However, as the bino masses cannot be as low as the ${\tilde B}_R$ masses, the contribution from this channel is insufficient. Thus, against expectations, the inverse seesaw mechanism cannot sufficiently enhance $\Delta a_\mu$ within the universality conditions, and the corrections hardly reach the $2\sigma$ region. \begin{figure} \centering \includegraphics[scale=0.40]{DAMU_m0_BOTH}\hspace{0.5cm} \includegraphics[scale=0.40]{DAMU_mhf_BOTH}\\ \includegraphics[scale=0.40]{DAMU_tanb_BOTH}\hspace{0.5cm} \includegraphics[scale=0.40]{DAMU_Mu_BOTH} \caption{$\Delta a_\mu$ dependence on $ m_{0}$ (top left), $ M_{1/2}$ (top right), $ \tan{\beta}$ (bottom left) and $ \mu $ (bottom right). The color coding is the same as in \autoref{fig:higgssec}.
In addition, the shaded regions show the 1$\sigma$, 2$\sigma$ and 3$\sigma$ differences between the theoretical contribution to the muon $g-2$ factor and its experimental value.} \label{fig:muog2} \end{figure} \subsection{$Z^\prime$ mass constraints} \label{subsec:zprime} To highlight the differences between the two scenarios, we have kept the model as general as possible and have not imposed $Z^\prime$ mass bounds so far. We include here an investigation of the implications of the constraints imposed on the $Z^{\prime}$ mass by a recent new study at ATLAS \cite{ATLAS:2017wce}, requiring an increase in the lower bound for the BLRSSM to $M_{Z^{\prime}} > 3.9\,(3.6)$ TeV in the $ee\,(\mu \mu)$ channel. One must be careful when applying these bounds. First, the experiment assumes non-supersymmetric models, and thus a case where the $Z^\prime$ does not decay to supersymmetric particles, which would modify its total decay width and thus its branching ratios. Second, the parameter choice and unification scale are different from ours: the choice depends on the symmetry breaking scales and the assumed multiplet composition of the GUT parent. With this note of caution, we explore the parameter space here. First, we show some of the decay rates of the $Z^\prime$ boson in the BLRSSM. \autoref{fig:ZpBR} displays some of the important decay channels of the $ Z^{\prime} $, namely $ BR(Z^{\prime} \to ll) = BR(Z^{\prime} \to ee) + BR(Z^{\prime} \to \mu\mu) $, $ BR(Z^{\prime} \to \widetilde{l} \widetilde{l}) $, $ BR(Z^{\prime} \to qq) $ and $ BR(Z^{\prime} \to \widetilde{\chi} \widetilde{\chi}) $, all as functions of $m_{Z^\prime}$. Throughout, all points are consistent with the LHC, B-physics, \textsc{HiggsBounds} and \textsc{HiggsSignals} bounds. Dark blue points show neutralino LSP solutions whereas light blue ones stand for sneutrino LSP solutions.
The top left panel in \autoref{fig:ZpBR} exhibits the branching ratio of the $ Z^{\prime} $ into lepton pairs, while the top right panel shows the branching into their supersymmetric partners. As can be seen from the top left panel, the branching ratio of the $ Z^{\prime} $ into leptons ranges between $25\%-37\%$, while its decays into their supersymmetric partners, the sleptons, are low, in the range of 0\% to 8\%. It is interesting to note that these models, unlike $E_6$-derived models containing an extra $U(1)^\prime$ gauge group, are not likely to be leptophobic, as the branching ratio into leptons is significant throughout the parameter space investigated. The bottom panels of \autoref{fig:ZpBR} show the branching ratio into quarks (left) and into neutralinos and/or charginos (right). As usual, the largest branching ratio is hadronic (40\%-62\%), which, though significant, is not as large as for $U(1)^\prime$ models \cite{Araz:2017qcs}; this will likely adversely affect the $Z^\prime$ production cross section. The decay into two charginos or neutralinos occurs above their mass threshold and remains small throughout the whole parameter space (0\%-13\%). So it appears that the decay of the $Z^\prime$ boson is fairly consistent with a non-supersymmetric scenario. Based on this, we shall investigate the effects of setting the lower mass bound to $ m_{Z^{\prime}} > 3.5 $ TeV throughout our analyses. \begin{figure} \centering \includegraphics[scale=0.42]{BRZll_Z_BOTH}\hspace{0.5cm} \includegraphics[scale=0.42]{BRZpsleptons_Zp_BOTH}\\ \includegraphics[scale=0.42]{BRZpjets_Zp_BOTH}\hspace{0.5cm} \includegraphics[scale=0.42]{BRZpneutchar_Zp_BOTH} \caption{ Branching ratios of the $Z^\prime$ in the BLRSSM. (Top left) $BR(Z^{\prime} \to ll (ee+\mu \mu)) $, (top right) $BR(Z^{\prime} \to \widetilde{l} \widetilde{l}) $, (bottom left) $BR(Z^{\prime} \to q\bar{q}) $ and (bottom right) $BR(Z^{\prime} \to \widetilde{\chi} \widetilde{\chi}) $.
Neutralino LSP points are represented in dark blue, sneutrino LSP points in light blue. The solutions excluded by ATLAS-CONF-2017-027 are in the shaded green region.} \label{fig:ZpBR} \end{figure} With these constraints, we revisit the plots for the spin-independent cross section for the proton and neutron, respectively. While in \autoref{fig:DMsneutrinolsp_2} we considered $m_{Z^\prime} \ge 2.5$ TeV, and the spin-independent proton (or neutron) cross sections for sneutrino LSP solutions were consistent with the XENON1T experimental exclusion limit, imposing the new $Z^\prime $ mass limit excludes most of the parameter space for sneutrino LSP solutions, as shown in \autoref{fig:DMsneutrino_zprime}. Specifically, of about $10^6$ scanned parameter points, only 18 solutions compatible with the relic density bound are found, and only 10 of them survive the XENON1T experimental exclusion limit. Imposing the $Z^\prime$ mass constraints, the sneutrino LSP scenario thus emerges as extremely constrained and, realistically, ruled out. \begin{figure} \centering \includegraphics[scale=0.42]{protoncross_massSv1}\hspace{0.5cm} \includegraphics[scale=0.42]{neutroncross_massSv1} \caption{Dependence of the spin-independent cross section for the proton $\sigma_{p}^{SI} $ (left) and neutron $ \sigma_{n}^{SI} $ (right) as a function of the sneutrino LSP mass $m_{{\widetilde \nu}_1}$, for $m_{Z^{\prime}} \geq$ 3.5 TeV. All points are consistent with REWSB and a sneutrino LSP. The color coding in each plane is the same as in \autoref{fig:freeparams}. } \label{fig:DMsneutrino_zprime} \end{figure} \section{Collider Signals} \label{sec:collider} Lastly, we analyze the production and decays for this scenario at the LHC. We choose benchmarks from the parameter scan results which satisfy all experimental bounds, including the relic density constraint and the XENON1T exclusion limits, and favor light neutralino LSP solutions as the only ones surviving all constraints.
We proceed by exporting the BLRSSM to the UFO format \cite{Degrande:2011ua} and use the $\rm {MG5\_aMC@NLO}$ framework version 2.5.5 \cite{Alwall:2014hca} to simulate hard-scattering LHC collisions and evaluate the cross sections for various signals. For the calculation of the cross sections, we select four benchmarks which showcase different aspects of the model for detection at the LHC. The first benchmark, benchmark 1, has an $\widetilde{H}_R$-like neutralino LSP. (Even though the parameter scans allow higgsino-like and higgsino-bino mixed LSP neutralino solutions between 300-500 GeV, no benchmark in this range can be found, as these states are completely excluded by the XENON1T exclusion limit.) We thus select benchmarks with mixed $\widetilde{B}_R-\widetilde{B}$ content. For benchmarks 2-3, BR$(\widetilde{\chi}_2^0 \to \widetilde{\chi}_1^0 h_1) $ and BR$(\widetilde{\chi}_1^\pm \to \widetilde{\chi}_1^0 W^\pm) $ are almost unity. Sparticle masses are similar in both cases, with the exception of the lightest chargino, which is heavier for benchmark 3. Also, for benchmark 3, BR$(\widetilde{\tau}_1 \to \tau_1 \widetilde{\chi}_1^0) \sim 1$, while this is much smaller for benchmark 2. Benchmark 4 is selected for light stau masses, leading to increased stau-stau production cross sections. Note that all benchmarks satisfy all the constraints, including the Icecube22 exclusion. Our results are shown in \autoref{tab:cross-section}. Even though the LSP neutralino mass is quite light (67 GeV) for benchmark 1, we find that both the chargino-chargino and neutralino-chargino production cross sections are quite low, due to the fact that the neutralino is mostly higgsino. For the other benchmarks, with mixed bino neutralino content, the second-lightest neutralino and chargino masses are degenerate. We estimate the cross sections for chargino/neutralino and stau production, these being the most promising channels. 
The highest cross-section values for chargino-chargino and chargino-neutralino production are obtained for benchmark 2, whose neutralino and chargino masses are 470 GeV and 767 GeV, respectively. As can be seen from \autoref{tab:cross-section}, the chargino-chargino and neutralino-chargino production cross sections are 4.623 fb and 2.249 fb, respectively. The cross-section values decrease in benchmark 3 (with respect to benchmark 2), where the neutralino and chargino masses are 506 GeV and 954 GeV (versus 470 and 767 GeV), respectively. Finally, the last benchmark is selected to enhance $\sigma(pp \to \widetilde{\tau}_1 \widetilde{\tau}_1)$, where each stau can decay into a tau and an LSP neutralino $(\widetilde{\tau}_1 \to \tau_1 \widetilde{\chi}_1^0)$. Note that here the stau is the NLSP, and that the branching ratio of the stau into a tau and an LSP neutralino is 1. The relevant cross section is 0.5713 fb, a factor of 10 larger than for benchmarks 2 and 3, but still too small to be observed at the LHC. For all benchmarks, the $Z^{\prime}$ masses are above 4 TeV, consistent with the latest ATLAS result. Note that the gluino masses are about 2.5 TeV for benchmarks 2, 3 and 4, making gluino production testable at the HL-LHC or at next-generation colliders \cite{Baer:2017yqq,Baer:2016wkz}. Including all the constraints, we conclude that the production of supersymmetric particles in the BLRSSM falls below detector sensitivity, especially because the final signals will have even lower production cross sections, as they will be suppressed by the branching ratios of charginos/neutralinos into missing energy plus leptons. A way to improve our results is to relax some or most of the universality constraints and to look for effective cuts which would enhance the signal over the background. We shall return to this in a future work. 
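For definiteness, the enhancement of the stau pair-production cross section in benchmark 4 relative to benchmarks 2 and 3 follows directly from the values quoted in \autoref{tab:cross-section},
\begin{equation*}
\frac{0.5713~\text{fb}}{0.05059~\text{fb}} \approx 11 \,, \qquad
\frac{0.5713~\text{fb}}{0.06468~\text{fb}} \approx 9 \,,
\end{equation*}
i.e., roughly an order of magnitude in both cases.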
\begin{table} \centering \label{tab:cross-section} \begin{tabular}{|c|c|c|c|c|} \hline & Benchmark 1 & Benchmark 2 & Benchmark 3 & Benchmark 4 \\ \hline $m_0$ [GeV] & 1960 & 1831.4 & 2073 & 2099.4 \\ \hline $M_{1/2}$ [GeV] & 1723.7 & 1092.4 & 1166.8 & 1285.8 \\ \hline $\tan\beta$ & 50.9 & 45.3 & 58.6 & 53.6 \\ \hline $A_0$ [GeV] & 2816.4 & 826.5 & 1652.8 & 4972.9 \\ \hline $\langle v_R \rangle$ [GeV] & 11611 & 11711 & 12969 & 11283 \\ \hline $Y_\nu \hspace{0.3cm} (M_{\rm SUSY})$& 0.50716 & 0.0021115 & 0.24648 & 0.47416 \\ \hline $Y_s \hspace{0.3cm} (M_{\rm SUSY})$& 0.50672 & 0.62476 & 0.49036 & 0.59198 \\ \hline $\mu$ [GeV] & 2245.1 & 787.67 & 1305.67 & 2191.82 \\ \hline $\mu_R$ [GeV] & -64.2 & 1144.9 & 3817.63 & 4473.89 \\ \hline $m_{\widetilde{\chi}_1^0}$ [GeV] & \textbf{67} ($\widetilde{H}_R$-like)& \textbf{470} (mixed $\widetilde{B}_R-\widetilde{B}$) & \textbf{506} (mixed $\widetilde{B}_R-\widetilde{B}$) & \textbf{523} (mixed $\widetilde{B}_R-\widetilde{B}$) \\ \hline $m_{\widetilde{\chi}_2^0}$ [GeV] & \textbf{757} & \textbf{768} & \textbf{954} & \textbf{983} \\ \hline $m_{\widetilde{\chi}_1^\pm}$ [GeV] & \textbf{1421} & \textbf{767} & \textbf{954} & \textbf{983} \\ \hline $m_{h_2}$ [GeV] & 358 & 380 & 224 & 239 \\ \hline $m_{h_3}$ [GeV] & 2221 & 1018 & 1013 & 1049 \\ \hline $m_{A_1}$ [GeV] & 2149 & 1020 & 1017 & 1051 \\ \hline $m_{\widetilde{t}_1}$ [GeV] & 2893 & 1977 & 2209 & 2057 \\ \hline $m_{\widetilde{b}_1}$ [GeV] & 3257 & 2279 & 2458 & 2290 \\ \hline $m_{\widetilde{\tau}_1}$ [GeV] & 1154 & 1332 & 1064 & 559 \\ \hline $m_{\widetilde{\nu}_1}$ [GeV] & 1972 & 2036 & 1858 & 1332 \\ \hline $m_{Z^{\prime}} $ [GeV] & 4157 & 4182 & 4632 & 4090 \\ \hline $m_{\widetilde{g}}$ [GeV] & 3720 & 2473 & 2634 & 2675 \\ \hline $\Omega h^2 $ & 0.112621 & 0.103201 & 0.096158 & 0.090515 \\ \hline $\sigma^{SI}_{nucleon}$ [fb] & 2.0771$\times 10^{-11}$ & 1.80137 $\times 10^{-10}$ & 2.43005 $\times 10^{-11}$ & 2.40945 $\times 10^{-11}$ \\ \hline Icecube22 Exclusion CL $[\%]$ & 0.014308 & 
0.607672 & 0.029368 & 0.029803 \\ \hline $\sigma (pp \to \widetilde{\chi}_1^{\pm} \widetilde{\chi}_2^0) $ [fb] & 0.001017 & 2.249 & 1.543 & 1.104 \\ \hline $\sigma (pp \to \widetilde{\chi}_1^{+} \widetilde{\chi}_1^{-}) $ [fb] & 0.3289 & 4.623 & 2.598 & 1.941 \\ \hline $\sigma (pp \to \widetilde{\tau}_1 \widetilde{\tau}_1) $ [fb] & 0.0799 & 0.05059 & 0.06468 & 0.5713 \\ \hline BR$(\widetilde{\chi}_2^0 \to \widetilde{\chi}_1^0 h_1) $ & - & 0.936403 & 0.885733 & 0.133714 \\ \hline BR$(\widetilde{\chi}_1^\pm \to \widetilde{\chi}_1^0 W^\pm) $ & - & 0.998671 & 0.992318 & 0.151927 \\ \hline BR$(\widetilde{\tau}_1 \to \tau_1 \widetilde{\chi}_1^0) $ & - & 0.512641 & 0.991196 & 1.00 \\ \hline \end{tabular} \vskip0.1in \caption{Benchmarks for the BLRSSM with the relevant cross sections and branching ratios. In bold: the masses of the lightest chargino and the two lightest neutralino states.} \end{table} \section{Summary and Conclusion} \label{sec:conclusion} We analyzed the predictions for the mass spectrum in the BLRSSM framework with universal boundary conditions, highlighting the solutions consistent with the DM restrictions (relic density and spin-independent cross sections with nucleons) for both neutralino and sneutrino LSP scenarios. We found that the stop and sbottom masses are between 2-3 TeV, and that the chargino can be degenerate with the LSP neutralino between 300-500 GeV. In addition, the relic density constraint can be satisfied for masses in the range 300 $ \lesssim m_{\chi_1^0} \lesssim $ 800 GeV. $\widetilde{H}_R$-dominated LSP neutralino solutions can be obtained below 300 GeV; however, these solutions are ruled out by the XENON1T spin-independent cross section exclusion curve. When all DM constraints are taken into account, the model favours bino-dominated LSP neutralinos with masses in the range 500 $ \lesssim m_{\chi_1^0} \lesssim $ 800 GeV and with composition 60\% $\widetilde{B}_R$ - 40\% $\widetilde{B}$. 
We also found that, when the LSP is a neutralino, $A_1$ and $h_3$ act as funnel channels for its pair annihilation. In addition, the model allows in principle a sneutrino LSP, whose content can be either right-handed dominated or a ${\widetilde \nu}_R$-$\widetilde S$ mixture, with masses between 250-1300 GeV. In this sense, sneutrino LSP solutions can be lighter than the neutralino LSP ones. Purely right-handed dominated sneutrino LSP solutions have difficulty satisfying the relic density constraint, and only mixed ones survive. In addition, most of the sneutrino LSP solutions are consistent with the XENON1T spin-independent cross section exclusion curve. However, strict imposition of the $Z^\prime$ mass bounds basically rules out the sneutrino solutions, while not having any effect on the neutralino LSP parameter space. This is one of the most important predictions of the model. The parameter spaces corresponding to neutralino and sneutrino LSPs are quite different. If allowed, sneutrino LSP solutions favor a low singlet higgsino mass parameter, $\mu_R$, and a singlet-like second-lightest neutral Higgs boson, while neutralino LSP solutions favor larger $\mu_R$ values. Sneutrino LSP solutions are spread out over the whole range of $\tan \beta$, while neutralino solutions are restricted to $40 \lesssim \tan \beta \lesssim 60$. Neutralino LSP solutions allow for degenerate masses of the two lightest neutral Higgs bosons, while the sneutrino LSP, although favoring a light $m_{h_2}$, does not. The anomaly in the anomalous magnetic moment of the muon favors neutralino LSP solutions: for a large range of scalar masses, and a more restricted one for gauginos and higgsinos, the corrections are within $2 \sigma$ of the experimental result, while sneutrino LSP solutions can at best produce results within $3 \sigma$ of the desired values. We analyzed collider signatures of this scenario, including all constraints, and they are not promising. 
The largest cross sections are obtained for chargino/neutralino production, and they are at most ${\cal O}(4)$ fb, without including cascade decays into leptons, which would reduce them further. In the future, collider signals could be enhanced by relaxing some of the severe constraints on the model, such as the universality conditions, and by finding suitable cuts to enhance the signal over the background. This may extend the parameter space, allowing the sneutrino LSP back into consideration. Work in these directions is underway. \begin{acknowledgments} Part of the numerical calculations reported in this paper was performed using the National Academic Network and Information Center (ULAKBIM) of TUBITAK, High Performance and Grid Computing Center (TRUBA resources), and using High Performance Computing (HPC) resources managed by Calcul Qu{\'e}bec and Compute Canada. MF acknowledges NSERC for partial financial support under grant number SAP105354. \end{acknowledgments}
\section{Introduction}\label{c:sec1} In a recent work we parameterized the fully unintegrated, off-diagonal quark-quark correlator for a spin-0 hadron in terms of so-called generalized parton correlation functions (GPCFs)~\cite{Meissner:2008ay}. The GPCFs depend on the full 4-momentum of the quark and, in addition, on the momentum transfer to the hadron. As such they contain the maximum amount of information about the partonic structure of hadrons. The purpose of the present paper is to extend this analysis to the more interesting but at the same time more challenging case of a spin-1/2 hadron. Related work on the (simpler) unintegrated diagonal quark-quark correlator for a spin-1/2 hadron can be found in refs.~\cite{Goeke:2005hb,Collins:2007ph,Rogers:2008jk}. GPCFs are of particular interest because of their connection to the generalized parton distributions (GPDs)~\cite{Mueller:1998fv,Ji:1996ek,Radyushkin:1996nd,Goeke:2001tz,Diehl:2003ny, Belitsky:2005qn,Boffi:2007yc} and the transverse momentum dependent parton distributions (TMDs) \cite{Mulders:1995dh, Barone:2001sp, Bacchetta:2006tn,D'Alesio:2007jt}. Both GPDs and TMDs have been intensely studied during the last 15 years. While GPDs appear in the QCD-description of hard exclusive reactions such as deep virtual Compton scattering or hard exclusive meson production, TMDs can be measured in certain semi-inclusive reactions like semi-inclusive deep inelastic scattering (SIDIS) or the Drell-Yan (DY) process. These two types of parton distributions provide a 3-dimensional picture of the nucleon --- either in a mixed position-momentum representation or in pure momentum space. Moreover, they contain important information on the orbital motion of partons inside the nucleon. The important point is that both the GPDs and the TMDs appear as two different limiting cases of the GPCFs. Therefore, the GPCFs can be considered as {\it mother distributions} of GPDs and TMDs~\cite{Ji:2003ak,Belitsky:2003nz,Belitsky:2005qn}. 
Note that the GPCFs also have a direct connection to the so-called Wigner distributions --- the quantum mechanical analogues of classical phase space distributions --- of the hadron-parton system~\cite{Ji:2003ak,Belitsky:2003nz,Belitsky:2005qn}. In the present paper, as the major application of the classification of the GPCFs, we obtain new, model-independent information on the nontrivial relations between GPDs and TMDs which have been suggested in the literature~\cite{Burkardt:2002ks,Burkardt:2003uw,Burkardt:2003je,Diehl:2005jf, Burkardt:2005hp,Lu:2006kt,Meissner:2007rx,Pasquini:2008ax}. In order to study this point we exploit the connection between the GPCFs on the one hand and the GPDs and TMDs on the other, and explore, in particular, which GPDs and TMDs have the same {\it mother distributions}. The nontrivial relations between GPDs and TMDs have attracted a lot of attention in recent years. The most prominent case, first proposed in ref.~\cite{Burkardt:2002ks}, is the relation between the so-called Sivers TMD~\cite{Sivers:1989cc,Sivers:1990fh} and the GPD $E$. This connection provides a rather intuitive understanding of the Sivers single spin asymmetry in SIDIS which has been explored by the HERMES and the COMPASS experiments~\cite{Airapetian:2004tw,Alexakhin:2005iw,Ageev:2006da,Collaboration:2009ti}. Although in the meantime various nontrivial relations between GPDs and TMDs have been established in simple models (see~\cite{Meissner:2007rx} for an overview and~\cite{Pasquini:2008ax}), no model-independent relations have been obtained so far. In fact, our previous work on GPCFs showed that for spin-0 hadrons no model-independent relations between GPDs and TMDs can be established. In the present work we arrive at the same conclusion for spin-1/2 hadrons. A first account of the spin-1/2 case can be found in the conference contribution~\cite{Meissner:2008xs}. 
If the GPCFs are integrated over one light-cone component of the quark momentum one arrives at the so-called generalized transverse momentum dependent parton distributions (GTMDs) which can show up in the description of hard exclusive reactions. While quark GTMDs typically appear at subleading twist, and in cases where the standard collinear factorization cannot be applied (see, e.g., refs.~\cite{Vanderhaeghen:1999xj,Diehl:2007hd,Goloskokov:2007nt}), gluon GTMDs have been extensively used to describe processes at high energies (low $x$) like, for instance, diffractive vector meson~\cite{Martin:1999wb} and Higgs production at the Tevatron and the LHC~\cite{Khoze:2000cy,Albrow:2008pn,Martin:2009ku} in the framework of the so-called $k_T$ factorization. Also an approximate method for (theoretically) constraining the unpolarized gluon GTMD has been proposed~\cite{Martin:2001ms}. In the present work we will not further elaborate on the phenomenology of GTMDs, although it is an important topic (for related work see also refs.~\cite{Collins:2007ph,Rogers:2008jk}). The plan of the manuscript is as follows. In the next section the parameterization of the generalized quark-quark correlator for a spin-1/2 hadron in terms of GPCFs is presented. This parameterization forms the basis for the rest of the paper. In section~\ref{c:sec3} we consider the GTMDs. The results in that section follow in a straightforward way from those in section~\ref{c:sec2}. The TMD-limit and the GPD-limit for the GTMDs are investigated in section~\ref{c:sec4}, providing us with the first complete counting of GPDs beyond leading twist. In particular, we also explore which GPDs and TMDs have the same {\it mother distributions}. The outcome of this analysis allows us to investigate the model-independent status of possible nontrivial relations between GPDs and TMDs. Section~\ref{c:sec5} contains the conclusions. 
Details of the (technically demanding) derivation of the classification for the GPCFs can be found in appendix~\ref{c:app_par}. The exact relations between the GPCFs and the GTMDs defined in the manuscript are given in appendix~\ref{c:app_gtmd_gpcf}, while in appendix~\ref{c:app_gtmd_model} our model-independent study is supplemented by the calculation of the leading twist GTMDs in a simple diquark spectator model for the nucleon. \section{Generalized parton correlation functions}\label{c:sec2} \subsection{Definition} In this section we derive the structure of the generalized, fully-unintegrated quark-quark correlator for a spin-1/2 hadron which is defined as \begin{equation} W_{\lambda \lambda'}^{[\Gamma]}(P, k, \Delta, N; \eta) = \frac{1}{2} \int \frac{d^4 z}{(2\pi)^4} \, e^{i k \cdot z} \, \langle p', \lambda' | \, \bar{\psi}(-\tfrac{1}{2}z) \, \Gamma \, \mathcal{W}(-\tfrac{1}{2}z, \tfrac{1}{2}z \, | \, n) \, \psi(\tfrac{1}{2}z) \, | p, \lambda \rangle \,. \label{e:corr_gpcf} \end{equation} The correlator $W$ depends on the helicities $\lambda$ and $\lambda'$, the average momentum $P = (p+p')/2$ of the initial and final hadron, the momentum transfer $\Delta = p' - p$ to the hadron, and the average quark momentum $k$. (For the kinematics we also refer to figure~\ref{f:kinematics}.) The object $\Gamma$ is an element of the complete basis $\{1, \gamma_5, \gamma^\mu, \gamma^\mu\gamma_5, i\sigma^{\mu\nu}\}$ with $\sigma^{\mu\nu} = i [\gamma^{\mu},\gamma^{\nu}] / 2$. 
The Wilson line $\mathcal{W}$ ensures the color gauge invariance of the correlator in eq.~(\ref{e:corr_gpcf}) and is running along the path\footnote{The path of the Wilson line is chosen such that appropriate Wilson lines are obtained when taking the GPD-limit and the TMD-limit (see also section 2.4).} \begin{equation} -\tfrac{1}{2}z \;\to\; -\tfrac{1}{2}z + \infty \cdot n \;\to\; \tfrac{1}{2}z + \infty \cdot n \;\to\; \tfrac{1}{2}z \,, \label{e:path} \end{equation} with all four points connected by straight lines. It is now important to realize that the integration contour of the Wilson line not only depends on the coordinates of the initial and final points but also on the light-cone direction which is opposite to the direction of $P$~\cite{Goeke:2003az}. This induces a dependence on a light-cone vector $n$. In fact, instead of using $n$ a rescaled vector $\lambda n$ with some positive parameter $\lambda$ could be taken in order to specify the Wilson line. Therefore, the correlator actually only depends on the vector \begin{equation} N = \frac{M^2 \, n}{P \cdot n} \,, \label{e:direction} \end{equation} which is invariant under the mentioned rescaling. For convenience in~(\ref{e:direction}) the hadron mass $M$ is used such that $N$ has the same mass dimension as an ordinary 4-momentum. The parameter $\eta$ in~(\ref{e:corr_gpcf}) is defined through the zeroth component of $n$ according to \begin{equation} \eta = \text{sign}(n_0) \,, \end{equation} which means that we simultaneously treat future-pointing $(\eta = +1)$ and past-pointing $(\eta = - 1)$ Wilson lines. Keeping this dependence is particularly convenient once we make the projection of the correlator in~(\ref{e:corr_gpcf}) onto the correlator defining TMDs. 
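The rescaling invariance behind the definition~(\ref{e:direction}) can be verified in one line: replacing $n \to \lambda n$ with $\lambda > 0$ gives
\begin{equation*}
N \;\to\; \frac{M^2 \, (\lambda n)}{P \cdot (\lambda n)} = \frac{M^2 \, n}{P \cdot n} = N \,,
\end{equation*}
while $\eta = \text{sign}(n_0)$ is also unchanged, so the correlator indeed depends on the direction of the Wilson line only through $N$ and $\eta$.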
\FIGURE[t]{% \includegraphics{Fig1_Kinematics.eps} \caption{Kinematics for GPCFs.} \label{f:kinematics}} \subsection{Parameterization} In order to obtain the parameterization of the correlator in~(\ref{e:corr_gpcf}) in terms of GPCFs it is necessary to analyze its behavior under parity. One finds that \begin{align} &W_{\lambda \lambda'}^{[\Gamma]}(P, k, \Delta, N; \eta) \nonumber\\ &\quad = \frac{1}{2} \int \frac{d^4 z}{(2\pi)^4} \, e^{i k \cdot z} \, \langle p', \lambda' | \, \hat{P}^\dagger \hat{P} \, \bar{\psi}(-\tfrac{1}{2}z) \, \hat{P}^\dagger \hat{P} \, \Gamma \, \hat{P}^\dagger \hat{P} \, \mathcal{W}(-\tfrac{1}{2}z, \tfrac{1}{2}z \, | \, n) \, \hat{P}^\dagger \hat{P} \, \psi(\tfrac{1}{2}z) \, \hat{P}^\dagger \hat{P} \, | p, \lambda \rangle \nonumber\\ &\quad = \frac{1}{2} \int \frac{d^4 z}{(2\pi)^4} \, e^{i k \cdot z} \, \langle \bar{p}', \lambda_P' | \, \bar{\psi}(-\tfrac{1}{2}\bar{z}) \, \gamma_0 \, \Gamma \, \gamma_0 \, \mathcal{W}(-\tfrac{1}{2}\bar{z}, \tfrac{1}{2}\bar{z} \, | \, \bar{n}) \, \psi(\tfrac{1}{2}\bar{z}) \, | \bar{p}, \lambda_P \rangle \nonumber\\ &\quad = \frac{1}{2} \int \frac{d^4 z}{(2\pi)^4} \, e^{i \bar{k} \cdot z} \, \langle \bar{p}', \lambda_P' | \, \bar{\psi}(-\tfrac{1}{2}z) \, \gamma_0 \, \Gamma \, \gamma_0 \, \mathcal{W}(-\tfrac{1}{2}z, \tfrac{1}{2}z \, | \, \bar{n}) \, \psi(\tfrac{1}{2}z) \, | \bar{p}, \lambda_P \rangle \nonumber\\ &\quad = W_{\lambda_P \lambda_P'}^{[\gamma_0 \, \Gamma \, \gamma_0]} (\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; \eta) \,, \label{e:par} \end{align} where $\bar{P}^\mu = P_\mu = (P^0,-\vec{P})$ etc., while $\lambda_P$ and $\lambda_P'$ denote the parity-reversed helicities $\lambda$ and $\lambda'$. 
We now introduce the (dimensionless) matrix functions $\Gamma_\text{S}$, $\Gamma_\text{P}$, $\Gamma^\mu_\text{V}$, $\Gamma^\mu_\text{A}$, and $\Gamma^{\mu\nu}_\text{T}$ through \begin{align} W_{\lambda \lambda'}^{[1]}(P, k, \Delta, N; \eta) &= \bar{u}(p', \lambda') \, \Gamma_\text{S}(P, k, \Delta, N; \eta) \, u(p, \lambda) && \text{(scalar)}\label{e:sdb} \,,\\ W_{\lambda \lambda'}^{[\gamma_5]}(P, k, \Delta, N; \eta) &= \bar{u}(p', \lambda') \, \Gamma_\text{P}(P, k, \Delta, N; \eta) \, u(p, \lambda) && \text{(pseudoscalar)} \label{e:pdb} \,,\\ W_{\lambda \lambda'}^{[\gamma^\mu]}(P, k, \Delta, N; \eta) &= \bar{u}(p', \lambda') \, \Gamma^\mu_\text{V}(P, k, \Delta, N; \eta) \, u(p, \lambda) && \text{(vector)} \label{e:vdb} \,,\\ W_{\lambda \lambda'}^{[\gamma^\mu \gamma_5]}(P, k, \Delta, N; \eta) &= \bar{u}(p', \lambda') \, \Gamma^\mu_\text{A}(P, k, \Delta, N; \eta) \, u(p, \lambda) && \text{(axial vector)} \label{e:adb} \,,\\ W_{\lambda \lambda'}^{[i\sigma^{\mu\nu}]}(P, k, \Delta, N; \eta) &= \bar{u}(p', \lambda') \, \Gamma^{\mu\nu}_\text{T}(P, k, \Delta, N; \eta) \, u(p, \lambda) && \text{(tensor)} \label{e:tdb} \,. \end{align} From eq.~(\ref{e:par}) it follows for the scalar matrix function in eq.~(\ref{e:sdb}) \begin{align} &\bar{u}(p', \lambda') \, \Gamma_\text{S}(P, k, \Delta, N; \eta) \, u(p, \lambda) \nonumber\\ &\ =\bar{u}(\bar{p}', \lambda_P') \, \Gamma_\text{S}(\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; \eta) \, u(\bar{p}, \lambda_P) \nonumber\\ &\ =\bar{u}(p', \lambda') \, \hat{P}^\dagger \, \Gamma_\text{S}(\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; \eta) \, \hat{P} \, u(p, \lambda) \nonumber\\ &\ =\bar{u}(p', \lambda') \, \gamma_0 \, \Gamma_\text{S}(\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; \eta) \, \gamma_0 \, u(p, \lambda) \,. 
\label{e:par2} \end{align} Analogous results hold for the other matrix functions in eqs.~(\ref{e:pdb})--(\ref{e:tdb}), and one finds \begin{eqnarray} \Gamma_\text{S}(P, k, \Delta, N; \eta) &=& + \gamma_0 \, \Gamma_\text{S}(\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; \eta) \, \gamma_0 \,, \label{e:ws_parity}\\ \Gamma_\text{P}(P, k, \Delta, N; \eta) &=& - \gamma_0 \, \Gamma_\text{P}(\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; \eta) \, \gamma_0 \,, \label{e:wp_parity}\\ \Gamma^\mu_\text{V}(P, k, \Delta, N; \eta) &=& + \gamma_0 \, \Gamma^{\bar{\mu}}_\text{V}(\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; \eta) \, \gamma_0 \,, \label{e:wv_parity}\\ \Gamma^\mu_\text{A}(P, k, \Delta, N; \eta) &=& - \gamma_0 \, \Gamma^{\bar{\mu}}_\text{A}(\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; \eta) \, \gamma_0 \,, \label{e:wa_parity}\\ \Gamma^{\mu\nu}_\text{T}(P, k, \Delta, N; \eta) &=& + \gamma_0 \, \Gamma^{\bar{\mu}\bar{\nu}}_\text{T}(\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; \eta) \, \gamma_0 \label{e:wt_parity} \end{eqnarray} for their behavior under parity. It turns out that the general structure of the correlator $W$ can already be obtained on the basis of the parity constraints in~(\ref{e:ws_parity})--(\ref{e:wt_parity}). 
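The relative signs in~(\ref{e:ws_parity})--(\ref{e:wt_parity}) can be read off from the elementary conjugation properties of the basis matrices,
\begin{equation*}
\gamma_0 \, \gamma_5 \, \gamma_0 = - \gamma_5 \,, \qquad
\gamma_0 \, \gamma^\mu \, \gamma_0 = \gamma^{\bar{\mu}} \,, \qquad
\gamma_0 \, \gamma^\mu \gamma_5 \, \gamma_0 = - \gamma^{\bar{\mu}} \gamma_5 \,, \qquad
\gamma_0 \, i\sigma^{\mu\nu} \, \gamma_0 = i\sigma^{\bar{\mu}\bar{\nu}} \,,
\end{equation*}
where $\gamma^{\bar{\mu}} = \gamma_\mu$ in analogy to $\bar{P}^\mu = P_\mu$. These identities follow from $\gamma_0 \gamma^0 \gamma_0 = \gamma^0$, $\gamma_0 \gamma^i \gamma_0 = - \gamma^i$, and $\{\gamma_5, \gamma_0\} = 0$.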
One ends up with 64 linearly independent matrix structures multiplied by scalar functions (for the derivation see appendix~\ref{c:app_par}), \begin{align} &W_{\lambda \lambda'}^{[1]}(P, k, \Delta, N; \eta) \nonumber\\* &\quad = \bar{u}(p', \lambda') \, \bigg[ A^E_{1} + \frac{i\sigma^{k\Delta}}{M^2} \, A^E_{2} + \frac{i\sigma^{kN}}{M^2} \, A^E_{3} + \frac{i\sigma^{\Delta N}}{M^2} \, A^E_{4} \bigg] \, u(p, \lambda) \,, \label{e:ws_res}\\ &W_{\lambda \lambda'}^{[\gamma_5]}(P, k, \Delta, N; \eta) \nonumber\\* &\quad = \bar{u}(p', \lambda') \, \bigg[ \frac{i\varepsilon^{Pk\Delta N}}{M^4} \, A^E_{5} + \frac{i\sigma^{PN} \gamma_5}{M^2} \, A^E_{6} + \frac{i\sigma^{kN} \gamma_5}{M^2} \, A^E_{7} + \frac{i\sigma^{\Delta N} \gamma_5}{M^2} \, A^E_{8} \bigg] \, u(p, \lambda) \,, \label{e:wp_res}\\ &W_{\lambda \lambda'}^{[\gamma^\mu]}(P, k, \Delta, N; \eta) \nonumber\\* &\quad = \bar{u}(p', \lambda') \, \bigg[ \frac{P^\mu}{M} \, A^F_{1} + \frac{k^\mu}{M} \, A^F_{2} + \frac{\Delta^\mu}{M} \, A^F_{3} + \frac{N^\mu}{M} \, A^F_{4} + \frac{i\sigma^{\mu k}}{M} \, A^F_{5} + \frac{i\sigma^{\mu \Delta}}{M} \, A^F_{6} + \frac{i\sigma^{\mu N}}{M} \, A^F_{7} \nonumber\\* &\quad\hspace{2.55ex} + \frac{P^\mu \, i\sigma^{k\Delta}}{M^3} \, A^F_{8} + \frac{k^\mu \, i\sigma^{k\Delta}}{M^3} \, A^F_{9} + \frac{N^\mu \, i\sigma^{k\Delta}}{M^3} \, A^F_{10} + \frac{P^\mu \, i\sigma^{kN}}{M^3} \, A^F_{11} + \frac{k^\mu \, i\sigma^{kN}}{M^3} \, A^F_{12} \nonumber\\* &\quad\hspace{2.55ex} + \frac{N^\mu \, i\sigma^{kN}}{M^3} \, A^F_{13} + \frac{P^\mu \, i\sigma^{\Delta N}}{M^3} \, A^F_{14} + \frac{\Delta^\mu \, i\sigma^{\Delta N}}{M^3} \, A^F_{15} + \frac{N^\mu \, i\sigma^{\Delta N}}{M^3} \, A^F_{16} \bigg] \, u(p, \lambda) \,, \label{e:wv_res}\\ &W_{\lambda \lambda'}^{[\gamma^\mu \gamma_5]}(P, k, \Delta, N; \eta) \nonumber\\* &\quad = \bar{u}(p', \lambda') \, \bigg[ \frac{i\varepsilon^{\mu Pk\Delta}}{M^3} \, A^G_{1} + \frac{i\varepsilon^{\mu PkN}}{M^3} \, A^G_{2} + \frac{i\varepsilon^{\mu P\Delta N}}{M^3} 
\, A^G_{3} + \frac{i\varepsilon^{\mu k\Delta N}}{M^3} \, A^G_{4} \nonumber\\* &\quad\hspace{2.55ex} + \frac{i\sigma^{\mu P} \gamma_5}{M} \, A^G_{5} + \frac{i\sigma^{\mu k} \gamma_5}{M} \, A^G_{6} + \frac{i\sigma^{\mu N} \gamma_5}{M} \, A^G_{7} + \frac{k^\mu \, i\sigma^{PN} \gamma_5}{M^3} \, A^G_{8} + \frac{\Delta^\mu \, i\sigma^{PN} \gamma_5}{M^3} \, A^G_{9} \nonumber\\* &\quad\hspace{2.55ex} + \frac{N^\mu \, i\sigma^{PN} \gamma_5}{M^3} \, A^G_{10} + \frac{k^\mu \, i\sigma^{kN} \gamma_5}{M^3} \, A^G_{11} + \frac{\Delta^\mu \, i\sigma^{kN} \gamma_5}{M^3} \, A^G_{12} + \frac{N^\mu \, i\sigma^{kN} \gamma_5}{M^3} \, A^G_{13} \nonumber\\* &\quad\hspace{2.55ex} + \frac{P^\mu \, i\sigma^{\Delta N} \gamma_5}{M^3} \, A^G_{14} + \frac{\Delta^\mu \, i\sigma^{\Delta N} \gamma_5}{M^3} \, A^G_{15} + \frac{N^\mu \, i\sigma^{\Delta N} \gamma_5}{M^3} \, A^G_{16} \bigg] \, u(p, \lambda) \,, \label{e:wa_res}\\ &W_{\lambda \lambda'}^{[i\sigma^{\mu\nu}]}(P, k, \Delta, N; \eta) \nonumber\\* &\quad = (\delta^\mu_\rho \delta^\nu_\sigma - \delta^\nu_\rho \delta^\mu_\sigma) \, \bar{u}(p', \lambda') \, \bigg[ \frac{P^\rho k^\sigma}{M^2} \, A^H_{1} + \frac{P^\rho \Delta^\sigma}{M^2} \, A^H_{2} + \frac{P^\rho N^\sigma}{M^2} \, A^H_{3} + \frac{k^\rho \Delta^\sigma}{M^2} \, A^H_{4} \nonumber\\* &\quad\hspace{2.55ex} + \frac{k^\rho N^\sigma}{M^2} \, A^H_{5} + \frac{\Delta^\rho N^\sigma}{M^2} \, A^H_{6} + i\sigma^{\rho\sigma} \, A^H_{7} + \frac{P^\rho \, i\sigma^{\sigma k}}{M^2} \, A^H_{8} + \frac{N^\rho \, i\sigma^{\sigma k}}{M^2} \, A^H_{9} \nonumber\\* &\quad\hspace{2.55ex} + \frac{P^\rho \, i\sigma^{\sigma\Delta}}{M^2} \, A^H_{10} + \frac{N^\rho \, i\sigma^{\sigma\Delta}}{M^2} \, A^H_{11} + \frac{P^\rho \, i\sigma^{\sigma N}}{M^2} \, A^H_{12} + \frac{k^\rho \, i\sigma^{\sigma N}}{M^2} \, A^H_{13} + \frac{\Delta^\rho \, i\sigma^{\sigma N}}{M^2} \, A^H_{14} \nonumber\\* &\quad\hspace{2.55ex} + \frac{N^\rho \, i\sigma^{\sigma N}}{M^2} \, A^H_{15} + \frac{P^\rho k^\sigma \, i\sigma^{k\Delta}}{M^4} 
\, A^H_{16} + \frac{P^\rho N^\sigma \, i\sigma^{k\Delta}}{M^4} \, A^H_{17} + \frac{k^\rho N^\sigma \, i\sigma^{k\Delta}}{M^4} \, A^H_{18} \nonumber\\* &\quad\hspace{2.55ex} + \frac{P^\rho k^\sigma \, i\sigma^{kN}}{M^4} \, A^H_{19} + \frac{P^\rho N^\sigma \, i\sigma^{kN}}{M^4} \, A^H_{20} + \frac{k^\rho N^\sigma \, i\sigma^{kN}}{M^4} \, A^H_{21} + \frac{P^\rho \Delta^\sigma \, i\sigma^{\Delta N}}{M^4} \, A^H_{22} \nonumber\\* &\quad\hspace{2.55ex} + \frac{P^\rho N^\sigma \, i\sigma^{\Delta N}}{M^4} \, A^H_{23} + \frac{\Delta^\rho N^\sigma \, i\sigma^{\Delta N}}{M^4} \, A^H_{24} \bigg] \, u(p, \lambda) \,, \label{e:wt_res} \end{align} where we used $\varepsilon^{abcd} = \varepsilon^{\mu\nu\rho\sigma} a_\mu b_\nu c_\rho d_\sigma$ and $\sigma^{ab} = \sigma^{\mu\nu} a_\mu b_\nu$ to shorten the notation. Our treatment leading to~(\ref{e:ws_res})--(\ref{e:wt_res}) is analogous to what has already been done for a spin-0 hadron~\cite{Meissner:2008ay}. The functions $A^E_i$, $A^F_i$, $A^G_i$, and $A^H_i$ are independent and represent the GPCFs. They depend on all possible scalar products of the momenta $P$, $k$, $\Delta$, and $N$ as well as the parameter $\eta$. The various factors of $M$ are introduced in order to assign the same mass dimension to all GPCFs. Note that the parameterizations~(\ref{e:ws_res})--(\ref{e:wt_res}) are ambiguous in the sense that one can always rewrite them into other forms by means of the Gordon identities~(\ref{e:gi1})--(\ref{e:gi4}). However, the amount of structures as presented in eqs.~(\ref{e:ws_res})--(\ref{e:wt_res}) is minimized. For further details we refer to appendix~\ref{c:app_par}. \subsection{Properties} By applying hermiticity and time reversal to the correlator in~(\ref{e:corr_gpcf}) it is possible to derive some basic properties of the GPCFs. 
From hermiticity it follows that \begin{align} &\Big[ W_{\lambda \lambda'}^{[\Gamma]}(P, k, \Delta, N; \eta) \Big]^* \nonumber\\ &\quad = \frac{1}{2} \int \frac{d^4 z}{(2\pi)^4} \, e^{-i k \cdot z} \, \langle p', \lambda' | \, \bar{\psi}(-\tfrac{1}{2}z) \, \Gamma \, \mathcal{W}(-\tfrac{1}{2}z, \tfrac{1}{2}z \, | \, n) \, \psi(\tfrac{1}{2}z) \, | p, \lambda \rangle^* \nonumber\\ &\quad = \frac{1}{2} \int \frac{d^4 z}{(2\pi)^4} \, e^{-i k \cdot z} \, \langle p, \lambda | \, \bar{\psi}(\tfrac{1}{2}z) \, \gamma_0 \, \Gamma^\dagger \, \gamma_0 \, \mathcal{W}(\tfrac{1}{2}z, -\tfrac{1}{2}z \, | \, n) \, \psi(-\tfrac{1}{2}z) \, | p', \lambda' \rangle \nonumber\\ &\quad = \frac{1}{2} \int \frac{d^4 z}{(2\pi)^4} \, e^{i k \cdot z} \, \langle p, \lambda | \, \bar{\psi}(-\tfrac{1}{2}z) \, \gamma_0 \, \Gamma^\dagger \, \gamma_0 \, \mathcal{W}(-\tfrac{1}{2}z, \tfrac{1}{2}z \, | \, n) \, \psi(\tfrac{1}{2}z) \, | p', \lambda' \rangle \nonumber\\ &\quad = W_{\lambda' \lambda}^{[\gamma_0 \, \Gamma^\dagger \, \gamma_0]}(P, k, -\Delta, N; \eta) \,. \end{align} For the matrix functions in eqs.~(\ref{e:sdb})--(\ref{e:tdb}) this leads to \begin{eqnarray} \Big[ \Gamma_\text{S}(P, k, \Delta, N; \eta) \Big]^\dagger &=& + \gamma_0 \, \Gamma_\text{S}(P, k, -\Delta, N; \eta) \, \gamma_0 \,, \label{e:ws_hermiticity}\\ \Big[ \Gamma_\text{P}(P, k, \Delta, N; \eta) \Big]^\dagger &=& - \gamma_0 \, \Gamma_\text{P}(P, k, -\Delta, N; \eta) \, \gamma_0 \,, \label{e:wp_hermiticity}\\ \Big[ \Gamma^\mu_\text{V}(P, k, \Delta, N; \eta) \Big]^\dagger &=& + \gamma_0 \, \Gamma^\mu_\text{V}(P, k, -\Delta, N; \eta) \, \gamma_0 \,, \label{e:wv_hermiticity}\\ \Big[ \Gamma^\mu_\text{A}(P, k, \Delta, N; \eta) \Big]^\dagger &=& + \gamma_0 \, \Gamma^\mu_\text{A}(P, k, -\Delta, N; \eta) \, \gamma_0 \,, \label{e:wa_hermiticity}\\ \Big[ \Gamma^{\mu\nu}_\text{T}(P, k, \Delta, N; \eta) \Big]^\dagger &=& - \gamma_0 \, \Gamma^{\mu\nu}_\text{T}(P, k, -\Delta, N; \eta) \, \gamma_0 \,. 
\label{e:wt_hermiticity} \end{eqnarray} Applying the hermiticity constraints~(\ref{e:ws_hermiticity})--(\ref{e:wt_hermiticity}) to the decomposition in~(\ref{e:ws_res})--(\ref{e:wt_res}) one finds \begin{equation} X^*(P, k, \Delta, N; \eta) = \pm X(P, k, -\Delta, N; \eta) \,, \label{e:gpcf_hermiticity} \end{equation} where the plus sign holds for $X = A^E_{1}$, $A^E_{2}$, $A^E_{4}$, $A^E_{8}$, $A^F_{1}$, $A^F_{2}$, $A^F_{4}$, $A^F_{6}$, $A^F_{8}$, $A^F_{9}$, $A^F_{10}$, $A^F_{14}$, $A^F_{16}$, $A^G_{1}$, $A^G_{3}$, $A^G_{4}$, $A^G_{5}$, $A^G_{6}$, $A^G_{7}$, $A^G_{8}$, $A^G_{10}$, $A^G_{11}$, $A^G_{13}$, $A^G_{15}$, $A^H_{2}$, $A^H_{4}$, $A^H_{6}$, $A^H_{7}$, $A^H_{8}$, $A^H_{9}$, $A^H_{12}$, $A^H_{13}$, $A^H_{15}$, $A^H_{19}$, $A^H_{20}$, $A^H_{21}$, $A^H_{22}$, $A^H_{24}$ and the minus sign for all the other GPCFs. In addition, time reversal leads to \begin{align} &\Big[ W_{\lambda \lambda'}^{[\Gamma]}(P, k, \Delta, N; \eta) \Big]^* \nonumber\\ &\quad = \frac{1}{2} \int \frac{d^4 z}{(2\pi)^4} \, e^{-i k \cdot z} \, \langle p', \lambda' | \, \bar{\psi}(-\tfrac{1}{2}z) \, \Gamma \, \mathcal{W}(-\tfrac{1}{2}z, \tfrac{1}{2}z \, | \, n) \, \psi(\tfrac{1}{2}z) \, | p, \lambda \rangle^* \nonumber\\ &\quad = \frac{1}{2} \int \frac{d^4 z}{(2\pi)^4} \, e^{-i k \cdot z} \, \langle p', \lambda' | \, \hat{T}^\dagger \hat{T} \, \bar{\psi}(-\tfrac{1}{2}z) \, \hat{T}^\dagger \hat{T} \, \Gamma \, \hat{T}^\dagger \hat{T} \, \mathcal{W}(-\tfrac{1}{2}z, \tfrac{1}{2}z \, | \, n) \, \hat{T}^\dagger \hat{T} \, \psi(\tfrac{1}{2}z) \, \hat{T}^\dagger \hat{T} \, | p, \lambda \rangle \nonumber\\ &\quad = \frac{1}{2} \int \frac{d^4 z}{(2\pi)^4} \, e^{-i k \cdot z} \, \langle \bar{p}', \lambda_T' | \, \bar{\psi}(\tfrac{1}{2}\bar{z}) \, (-i\gamma_5 C) \, \Gamma^* \, (-i\gamma_5 C) \, \mathcal{W}(\tfrac{1}{2}\bar{z}, -\tfrac{1}{2}\bar{z} \, | \, -\bar{n}) \, \psi(-\tfrac{1}{2}\bar{z}) \, | \bar{p}, \lambda_T \rangle \nonumber\\ &\quad = \frac{1}{2} \int \frac{d^4 z}{(2\pi)^4} \, e^{i \bar{k} 
\cdot z} \, \langle \bar{p}', \lambda_T' | \, \bar{\psi}(-\tfrac{1}{2}z) \, (-i\gamma_5 C) \, \Gamma^* \, (-i\gamma_5 C) \, \mathcal{W}(-\tfrac{1}{2}z, \tfrac{1}{2}z \, | \, -\bar{n}) \, \psi(\tfrac{1}{2}z) \, | \bar{p}, \lambda_T \rangle \nonumber\\ &\quad = W_{\lambda_T \lambda_T'}^{[(-i\gamma_5 C) \, \Gamma^* \, (-i\gamma_5 C)]} (\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; -\eta) \,, \end{align} where $C$ is the charge conjugation matrix, while $\lambda_T$ and $\lambda_T'$ denote the time-reversed helicities $\lambda$ and $\lambda'$. Analogous to eq.~(\ref{e:par2}) one finds for the matrix functions in eqs.~(\ref{e:sdb})--(\ref{e:tdb}) \begin{eqnarray} \Big[ \Gamma_\text{S}(P, k, \Delta, N; \eta) \Big]^* &=& (-i\gamma_5 C) \, \Gamma_\text{S}(\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; -\eta) \, (-i\gamma_5 C) \,, \label{e:ws_timereversal}\\ \Big[ \Gamma_\text{P}(P, k, \Delta, N; \eta) \Big]^* &=& (-i\gamma_5 C) \, \Gamma_\text{P}(\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; -\eta) \, (-i\gamma_5 C) \,, \label{e:wp_timereversal}\\ \Big[ \Gamma^\mu_\text{V}(P, k, \Delta, N; \eta) \Big]^* &=& (-i\gamma_5 C) \, \Gamma^{\bar{\mu}}_\text{V}(\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; -\eta) \, (-i\gamma_5 C) \,, \label{e:wv_timereversal}\\ \Big[ \Gamma^\mu_\text{A}(P, k, \Delta, N; \eta) \Big]^* &=& (-i\gamma_5 C) \, \Gamma^{\bar{\mu}}_\text{A}(\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; -\eta) \, (-i\gamma_5 C) \,, \label{e:wa_timereversal}\\ \Big[ \Gamma^{\mu\nu}_\text{T}(P, k, \Delta, N; \eta) \Big]^* &=& (-i\gamma_5 C) \, \Gamma^{\bar{\mu}\bar{\nu}}_\text{T}(\bar{P}, \bar{k}, \bar{\Delta}, \bar{N}; -\eta) \, (-i\gamma_5 C) \,. 
\label{e:wt_timereversal} \end{eqnarray} The time-reversal constraints~(\ref{e:ws_timereversal})--(\ref{e:wt_timereversal}) provide \begin{equation} X^*(P, k, \Delta, N; \eta) = X(P, k, \Delta, N; -\eta) \label{e:gpcf_timereversal} \end{equation} for all GPCFs, relating those defined with future-pointing Wilson lines to those defined with past-pointing lines. From these considerations it follows that in general GPCFs, unlike GPDs or TMDs, are complex-valued functions. Keeping in mind that $\eta \in \{-1, +1\}$ and using eq.~(\ref{e:gpcf_timereversal}) one finds immediately that only the imaginary part of the GPCFs depends on $\eta$. This allows one to write \begin{equation} X(P, k, \Delta, N; \eta) = X^{e}(P, k, \Delta, N) + i \, X^{o}(P, k, \Delta, N; \eta) \,, \label{e:gpcf_decomp} \end{equation} with \begin{equation} X^{o}(P, k, \Delta, N; \eta) = - X^{o}(P, k, \Delta, N; -\eta) \,, \label{e:gpcf_sign} \end{equation} where we call $X^{e}$ the T-even and $X^{o}$ the T-odd part of the generic GPCF $X$. The sign reversal of $X^{o}$ in eq.~(\ref{e:gpcf_sign}) when going from future-pointing to past-pointing Wilson lines corresponds to the sign reversal discussed in ref.~\cite{Collins:2002kn} for T-odd TMDs. \subsection{Limits} Now we would like to give a first account of the relation between GPCFs on the one hand and GPDs as well as TMDs on the other. To this end we consider the quark-quark correlator $F$ defining GPDs for a spin-1/2 target, which can be obtained from the correlator $W$ in eq.~(\ref{e:corr_gpcf}) by means of the projection \begin{align} & F_{\lambda \lambda'}^{[\Gamma]}(P, x, \Delta, N) = \int dk^- \, d^2\vec{k}_T \, W_{\lambda \lambda'}^{[\Gamma]}(P, k, \Delta, N; \eta) \nonumber \\ & \quad = \frac{1}{2}\int \frac{dz^-}{2 \pi} \, e^{i k \cdot z} \, \langle p', \lambda' | \, \bar{\psi}(-\tfrac{1}{2}z) \, \Gamma \, {\cal W}(-\tfrac{1}{2}z,\tfrac{1}{2}z\,|\,n) \, \psi(\tfrac{1}{2}z) \, | p, \lambda \rangle \, \Big|_{z^+ = \vec{z}_T = 0} \,.
\label{e:corr_gpd} \end{align} In this formula we use light-cone components that are specified through $a^{\pm}=(a^0\pm a^3)/\sqrt{2}$ and $\vec{a}_T = (a^1,a^2)$ for a generic 4-vector $a = (a^0,a^1,a^2,a^3)$, where, in particular, we choose $k^+ = x P^+$. Note that after integrating upon $k^-$ and $\vec{k}_T$ the dependence on the parameter $\eta$ drops out. It is well-known that in this case we are dealing with a light-cone correlator and the two quark fields are just connected by a straight line. This means that the choice of the contour in~(\ref{e:path}) leads, after projection, to the appropriate Wilson line for the GPD-correlator. The correlator $\Phi$ defining TMDs can be extracted from $W$ by putting $\Delta = 0$ and integrating out one light-cone component of the quark momentum (which we choose to be $k^-$), \begin{align} & \Phi_{\lambda \lambda'}^{[\Gamma]}(P, x, \vec{k}_T, N; \eta) = \int dk^- \, W_{\lambda \lambda'}^{[\Gamma]}(P, k, 0, N; \eta) \nonumber \\ & \quad = \frac{1}{2}\int \frac{dz^- \, d^2 \vec{z}_T}{(2\pi)^3} \, e^{i k \cdot z} \, \langle P, \lambda' | \, \bar{\psi}(-\tfrac{1}{2}z) \, \Gamma \, {\cal W}(-\tfrac{1}{2}z,\tfrac{1}{2}z\,|\,n) \, \psi(\tfrac{1}{2}z) \, | P, \lambda \rangle \, \Big|_{z^+ = 0} \,. \label{e:corr_tmd} \end{align} Note that for $\Delta = 0$ one has $p = p^{\prime} = P$. We point out that the path specified in~(\ref{e:path}) also leads to a proper Wilson line after taking the TMD-limit~\cite{Collins:1981uw,Collins:1999dz,Collins:2000gd,Collins:2002kn,Ji:2002aa,Belitsky:2002sm,Collins:2004nx,Cherednikov:2007tw,Cherednikov:2008ua}. Since $\Phi$ in eq.~(\ref{e:corr_tmd}) is not a light-cone correlator the dependence on the parameter $\eta$ remains. The case $\eta = +1$ is appropriate for defining TMDs in processes with final state interactions of the struck quark like SIDIS, while $\eta = -1$ can be used for TMDs in DY~\cite{Collins:2002kn}. 
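In these light-cone components the scalar product of two 4-vectors reads \begin{equation} a \cdot b = a^+ b^- + a^- b^+ - \vec{a}_T \cdot \vec{b}_T \,, \end{equation} so that in~(\ref{e:corr_gpd}), where $z^+ = \vec{z}_T = 0$, the exponent reduces to $k \cdot z = k^+ z^- = x P^+ z^-$, while in~(\ref{e:corr_tmd}), where only $z^+ = 0$, one has $k \cdot z = k^+ z^- - \vec{k}_T \cdot \vec{z}_T$. This makes explicit that integrating over $k^-$ localizes the quark fields at $z^+ = 0$, and that the additional integration over $\vec{k}_T$ in~(\ref{e:corr_gpd}) restricts them to the light cone.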
It has been emphasized in refs.~\cite{Collins:1981uw,Collins:2003fm,Hautmann:2007uw,Collins:2008ht} that, in general, light-like Wilson lines as used in the unintegrated correlators in (\ref{e:corr_gpcf}) and (\ref{e:corr_tmd}) lead to divergences. Such divergences can be avoided, however, by adopting a near light-cone direction. For the purpose of the present work it is sufficient to note that our general reasoning remains valid once a near light-cone direction is used instead of $n$. It is evident that not only the correlators $F$ and $\Phi$ appear as projections of the most general two-parton correlator $W$ as outlined above, but also the GPDs and the TMDs are projections of certain GPCFs. Therefore, GPCFs can be considered as {\it mother distributions}, which actually contain the maximum amount of information on the two-parton structure of hadrons~\cite{Ji:2003ak,Belitsky:2003nz,Belitsky:2005qn}. Despite this fact a classification of the GPCFs as given in~(\ref{e:ws_res})--(\ref{e:wt_res}) has never been worked out. \section{Generalized transverse momentum dependent parton distributions}\label{c:sec3} \subsection{Definition} The projections in~(\ref{e:corr_gpd}) and~(\ref{e:corr_tmd}) contain the integration upon the minus-component of the quark momentum. Therefore, it is useful to consider in more detail the correlator \begin{align} & W_{\lambda \lambda'}^{[\Gamma]}(P, x, \vec{k}_T, \Delta, N; \eta) = \int dk^- \, W_{\lambda \lambda'}^{[\Gamma]}(P, k, \Delta, N; \eta) \nonumber \\ & \quad = \frac{1}{2}\int \frac{dz^- \, d^2 \vec{z}_T}{(2\pi)^3} \, e^{i k \cdot z} \, \langle p', \lambda' | \, \bar{\psi}(-\tfrac{1}{2}z) \, \Gamma \, {\cal W}(-\tfrac{1}{2}z,\tfrac{1}{2}z\,|\,n) \, \psi(\tfrac{1}{2}z) \, | p, \lambda \rangle \, \Big|_{z^+ = 0} \,. \label{e:corr_gtmd} \end{align} Below the parameterization of this object is given in terms of what we call generalized transverse momentum dependent parton distributions (GTMDs). 
Of course, this result can now be obtained in a straightforward manner on the basis of the decomposition in eqs.~(\ref{e:ws_res})--(\ref{e:wt_res}). On the basis of the above discussion it is obvious that the GTMDs, like the GPCFs, can also be considered as {\it mother distributions} of GPDs and TMDs. It is the correlator in~(\ref{e:corr_gtmd}) which for instance can enter the description of hard exclusive meson production~\cite{Goloskokov:2007nt}, while the corresponding correlator for gluons appears when considering diffractive processes in lepton-hadron as well as hadron-hadron collisions~\cite{Martin:1999wb,Khoze:2000cy,Albrow:2008pn,Martin:2009ku}. The question of whether or not it appears with a Wilson line as defined in~(\ref{e:path}) has, to our knowledge, never been addressed in the literature and requires further investigation that goes beyond the scope of the present work. For our analysis we choose an infinite momentum frame such that $P$ has a large plus-momentum and no transverse momentum. The plus-component of $\Delta$ is expressed through the commonly used variable $\xi$. To be precise, the 4-momenta in~(\ref{e:ws_res})--(\ref{e:wt_res}) are specified according to \begin{eqnarray} P & = & \bigg[\, P^+ \, , \, \frac{\vec{\Delta}_T^2 + 4M^2}{8(1-\xi^2)P^+} \, , \, \vec{0}_T \, \bigg] \,, \\ k & = & \bigg[\, x P^+ \, , \, k^- \, , \, \vec{k}_T \, \bigg] \,, \\ \Delta & = & \bigg[\, -2 \xi P^+ \, , \, \frac{\xi\vec{\Delta}_T^2 + 4\xi M^2}{4(1-\xi^2)P^+} \, , \, \vec{\Delta}_T \, \bigg] \,, \\ n & = & \bigg[\, 0\, , \, \pm 1 \, , \, \vec{0}_T \, \bigg] \,. \label{e:lcvec} \end{eqnarray} The vector $n$ in eq.~(\ref{e:lcvec}) is of course not the most general light-cone vector. In particular, it has no transverse component and points opposite to the direction of $P$ as already mentioned earlier.
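As a simple consistency check one may verify that this parameterization respects the on-shell conditions for the hadron momenta. Writing $p = P - \tfrac{1}{2}\Delta$ and $p' = P + \tfrac{1}{2}\Delta$, one finds $p^+ = (1+\xi) P^+$, $p^- = (1-\xi) P^-$, $\vec{p}_T = -\tfrac{1}{2} \vec{\Delta}_T$, and therefore \begin{equation} p^2 = 2 p^+ p^- - \vec{p}_T^{\,2} = \frac{\vec{\Delta}_T^2 + 4M^2}{4} - \frac{\vec{\Delta}_T^2}{4} = M^2 \,, \end{equation} and likewise $p'^2 = M^2$. In particular, the skewness is given by $\xi = -\Delta^+/(2P^+)$.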
However, if one wants to arrive at an appropriate definition of TMDs for SIDIS and DY, there is no freedom left for this vector because it is fixed by the external momenta of the respective processes. \subsection{Parameterization} Now we have all the ingredients which are needed for writing down the final result for the generalized $k_T$-dependent correlator~(\ref{e:corr_gtmd}) in terms of GTMDs. We start with the twist-2 case for which one gets \begin{eqnarray} W_{\lambda \lambda'}^{[\gamma^+]} &=& \frac{1}{2M} \, \bar{u}(p', \lambda') \, \bigg[ F_{1,1} + \frac{i\sigma^{i+} k_T^i}{P^+} \, F_{1,2} + \frac{i\sigma^{i+} \Delta_T^i}{P^+} \, F_{1,3} \nonumber\\* & & + \frac{i\sigma^{ij} k_T^i \Delta_T^j}{M^2} \, F_{1,4} \bigg] \, u(p, \lambda) \,, \label{e:gtmd_1}\\ W_{\lambda \lambda'}^{[\gamma^+\gamma_5]} &=& \frac{1}{2M} \, \bar{u}(p', \lambda') \, \bigg[ - \frac{i\varepsilon_T^{ij} k_T^i \Delta_T^j}{M^2} \, G_{1,1} + \frac{i\sigma^{i+}\gamma_5 k_T^i}{P^+} \, G_{1,2} + \frac{i\sigma^{i+}\gamma_5 \Delta_T^i}{P^+} \, G_{1,3} \nonumber\\* & & + i\sigma^{+-}\gamma_5 \, G_{1,4} \bigg] \, u(p, \lambda) \,, \label{e:gtmd_2}\\ W_{\lambda \lambda'}^{[i\sigma^{j+}\gamma_5]} &=& \frac{1}{2M} \, \bar{u}(p', \lambda') \, \bigg[ - \frac{i\varepsilon_T^{ij} k_T^i}{M} \, H_{1,1} - \frac{i\varepsilon_T^{ij} \Delta_T^i}{M} \, H_{1,2} + \frac{M \, i\sigma^{j+}\gamma_5}{P^+} \, H_{1,3} \nonumber\\* & & + \frac{k_T^j \, i\sigma^{k+}\gamma_5 k_T^k}{M \, P^+} \, H_{1,4} + \frac{\Delta_T^j \, i\sigma^{k+}\gamma_5 k_T^k}{M \, P^+} \, H_{1,5} + \frac{\Delta_T^j \, i\sigma^{k+}\gamma_5 \Delta_T^k}{M \, P^+} \, H_{1,6} \nonumber\\* & & + \frac{k_T^j \, i\sigma^{+-}\gamma_5}{M} \, H_{1,7} + \frac{\Delta_T^j \, i\sigma^{+-}\gamma_5}{M} \, H_{1,8} \bigg] \, u(p, \lambda) \,. \label{e:gtmd_3} \end{eqnarray} Here the definitions $\varepsilon^{0123} = 1$ and $\varepsilon_T^{ij} = \varepsilon^{-+ij}$ are used. 
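With these conventions one has in particular \begin{equation} \varepsilon_T^{12} = - \varepsilon_T^{21} = 1 \,, \qquad \varepsilon_T^{11} = \varepsilon_T^{22} = 0 \,, \end{equation} which fixes the overall signs of the structures containing $\varepsilon_T^{ij}$ in eqs.~(\ref{e:gtmd_1})--(\ref{e:gtmd_3}).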
The 16 complex-valued twist-2 GTMDs $F_{1,i}$, $G_{1,i}$, and $H_{1,i}$ are given by $k^{-}$-integrals of certain linear combinations of the GPCFs in~(\ref{e:wv_res})--(\ref{e:wt_res}), where the explicit relations are listed in appendix~\ref{c:app_gtmd_gpcf}. To shorten the notation the arguments on both sides of the eqs.~(\ref{e:gtmd_1})--(\ref{e:gtmd_3}) are omitted. All GTMDs depend on the set of variables $(x,\xi,\vec{k}_T^2,\vec{k}_T \cdot \vec{\Delta}_T,\vec{\Delta}_T^2;\eta)$. In the twist-3 case, characterized through a suppression by one power in $P^{+}$, we find \begin{eqnarray} W_{\lambda \lambda'}^{[1]} &=& \frac{1}{2P^+} \, \bar{u}(p', \lambda') \, \bigg[ E_{2,1} + \frac{i\sigma^{i+} k_T^i}{P^+} \, E_{2,2} + \frac{i\sigma^{i+} \Delta_T^i}{P^+} \, E_{2,3} \nonumber\\* & & + \frac{i\sigma^{ij} k_T^i \Delta_T^j}{M^2} \, E_{2,4} \bigg] \, u(p, \lambda) \,, \label{e:gtmd_4}\\ W_{\lambda \lambda'}^{[\gamma_5]} &=& \frac{1}{2P^+} \, \bar{u}(p', \lambda') \, \bigg[ - \frac{i\varepsilon_T^{ij} k_T^i \Delta_T^j}{M^2} \, E_{2,5} + \frac{i\sigma^{i+}\gamma_5 k_T^i}{P^+} \, E_{2,6} + \frac{i\sigma^{i+}\gamma_5 \Delta_T^i}{P^+} \, E_{2,7} \nonumber\\* & & + i\sigma^{+-}\gamma_5 \, E_{2,8} \bigg] \, u(p, \lambda) \,, \label{e:gtmd_5}\\ W_{\lambda \lambda'}^{[\gamma^j]} &=& \frac{1}{2P^+} \, \bar{u}(p', \lambda') \, \bigg[ \frac{k_T^j}{M} \, F_{2,1} + \frac{\Delta_T^j}{M} \, F_{2,2} + \frac{M \, i\sigma^{j+}}{P^+} \, F_{2,3} \nonumber\\* & & + \frac{k_T^j \, i\sigma^{k+} k_T^k}{M \, P^+} \, F_{2,4} + \frac{\Delta_T^j \, i\sigma^{k+} k_T^k}{M \, P^+} \, F_{2,5} + \frac{\Delta_T^j \, i\sigma^{k+} \Delta_T^k}{M \, P^+} \, F_{2,6} \nonumber\\* & & + \frac{i\sigma^{ij} k_T^i}{M} \, F_{2,7} + \frac{i\sigma^{ij} \Delta_T^i}{M} \, F_{2,8} \bigg] \, u(p, \lambda) \,, \label{e:gtmd_6}\\ W_{\lambda \lambda'}^{[\gamma^j\gamma_5]} &=& \frac{1}{2P^+} \, \bar{u}(p', \lambda') \, \bigg[ - \frac{i\varepsilon_T^{ij} k_T^i}{M} \, G_{2,1} - \frac{i\varepsilon_T^{ij} \Delta_T^i}{M} \, 
G_{2,2} + \frac{M \, i\sigma^{j+}\gamma_5}{P^+} \, G_{2,3} \nonumber\\* & & + \frac{k_T^j \, i\sigma^{k+}\gamma_5 k_T^k}{M \, P^+} \, G_{2,4} + \frac{\Delta_T^j \, i\sigma^{k+}\gamma_5 k_T^k}{M \, P^+} \, G_{2,5} + \frac{\Delta_T^j \, i\sigma^{k+}\gamma_5 \Delta_T^k}{M \, P^+} \, G_{2,6} \nonumber\\* & & + \frac{k_T^j \, i\sigma^{+-}\gamma_5}{M} \, G_{2,7} + \frac{\Delta_T^j \, i\sigma^{+-}\gamma_5}{M} \, G_{2,8} \bigg] \, u(p, \lambda) \,, \label{e:gtmd_7}\\ W_{\lambda \lambda'}^{[i\sigma^{ij}\gamma_5]} &=& - \frac{i\varepsilon_T^{ij}}{2P^+} \, \bar{u}(p', \lambda') \, \bigg[ H_{2,1} + \frac{i\sigma^{k+} k_T^k}{P^+} \, H_{2,2} + \frac{i\sigma^{k+} \Delta_T^k}{P^+} \, H_{2,3} \nonumber\\* & & + \frac{i\sigma^{kl} k_T^k \Delta_T^l}{M^2} \, H_{2,4} \bigg] \, u(p, \lambda) \,, \label{e:gtmd_8}\\ W_{\lambda \lambda'}^{[i\sigma^{+-}\gamma_5]} &=& \frac{1}{2P^+} \, \bar{u}(p', \lambda') \, \bigg[ - \frac{i\varepsilon_T^{ij} k_T^i \Delta_T^j}{M^2} \, H_{2,5} + \frac{i\sigma^{i+}\gamma_5 k_T^i}{P^+} \, H_{2,6} + \frac{i\sigma^{i+}\gamma_5 \Delta_T^i}{P^+} \, H_{2,7} \nonumber\\* & & + i\sigma^{+-}\gamma_5 \, H_{2,8} \bigg] \, u(p, \lambda) \,. 
\label{e:gtmd_9} \end{eqnarray} The twist-4 result, which is basically a copy of the twist-2 case, reads \begin{eqnarray} W_{\lambda \lambda'}^{[\gamma^-]} &=& \frac{M}{2(P^+)^2} \, \bar{u}(p', \lambda') \, \bigg[ F_{3,1} + \frac{i\sigma^{i+} k_T^i}{P^+} \, F_{3,2} + \frac{i\sigma^{i+} \Delta_T^i}{P^+} \, F_{3,3} \nonumber\\* & & + \frac{i\sigma^{ij} k_T^i \Delta_T^j}{M^2} \, F_{3,4} \bigg] \, u(p, \lambda) \,, \label{e:gtmd_10}\\ W_{\lambda \lambda'}^{[\gamma^-\gamma_5]} &=& \frac{M}{2(P^+)^2} \, \bar{u}(p', \lambda') \, \bigg[ - \frac{i\varepsilon_T^{ij} k_T^i \Delta_T^j}{M^2} \, G_{3,1} + \frac{i\sigma^{i+}\gamma_5 k_T^i}{P^+} \, G_{3,2} + \frac{i\sigma^{i+}\gamma_5 \Delta_T^i}{P^+} \, G_{3,3} \nonumber\\* & & + i\sigma^{+-}\gamma_5 \, G_{3,4} \bigg] \, u(p, \lambda) \,, \label{e:gtmd_11}\\ W_{\lambda \lambda'}^{[i\sigma^{j-}\gamma_5]} &=& \frac{M}{2(P^+)^2} \, \bar{u}(p', \lambda') \, \bigg[ - \frac{i\varepsilon_T^{ij} k_T^i}{M} \, H_{3,1} - \frac{i\varepsilon_T^{ij} \Delta_T^i}{M} \, H_{3,2} + \frac{M \, i\sigma^{j+}\gamma_5}{P^+} \, H_{3,3} \nonumber\\* & & + \frac{k_T^j \, i\sigma^{k+}\gamma_5 k_T^k}{M \, P^+} \, H_{3,4} + \frac{\Delta_T^j \, i\sigma^{k+}\gamma_5 k_T^k}{M \, P^+} \, H_{3,5} + \frac{\Delta_T^j \, i\sigma^{k+}\gamma_5 \Delta_T^k}{M \, P^+} \, H_{3,6} \nonumber\\* & & + \frac{k_T^j \, i\sigma^{+-}\gamma_5}{M} \, H_{3,7} + \frac{\Delta_T^j \, i\sigma^{+-}\gamma_5}{M} \, H_{3,8} \bigg] \, u(p, \lambda) \,. \label{e:gtmd_12} \end{eqnarray} The twist-4 case is of course at most of academic interest but is included for completeness. \subsection{Properties} As in the case of the GPCFs, we consider the implications of hermiticity and time reversal on the GTMDs.
Hermiticity leads to \begin{equation} X^*(x,\xi,\vec{k}_T^2,\vec{k}_T \cdot \vec{\Delta}_T,\vec{\Delta}_T^2;\eta) =\pm X(x,-\xi,\vec{k}_T^2,-\vec{k}_T \cdot \vec{\Delta}_T,\vec{\Delta}_T^2;\eta) \,, \label{e:gtmd_hermiticity} \end{equation} with a plus sign for $X = E_{2,1}$, $E_{2,3}$, $E_{2,4}$, $E_{2,7}$, $F_{1,1}$, $F_{1,3}$, $F_{1,4}$, $F_{2,1}$, $F_{2,5}$, $F_{2,8}$, $F_{3,1}$, $F_{3,3}$, $F_{3,4}$, $G_{1,1}$, $G_{1,2}$, $G_{1,4}$, $G_{2,2}$, $G_{2,3}$, $G_{2,4}$, $G_{2,6}$, $G_{2,7}$, $G_{3,1}$, $G_{3,2}$, $G_{3,4}$, $H_{1,2}$, $H_{1,3}$, $H_{1,4}$, $H_{1,6}$, $H_{1,7}$, $H_{2,2}$, $H_{2,5}$, $H_{2,6}$, $H_{2,8}$, $H_{3,2}$, $H_{3,3}$, $H_{3,4}$, $H_{3,6}$, $H_{3,7}$ and a minus sign for all the other GTMDs. These results are a direct consequence of~(\ref{e:gpcf_hermiticity}) and the relations between GTMDs and GPCFs (see appendix~\ref{c:app_gtmd_gpcf} for the explicit formulas for twist-2). On the basis of~(\ref{e:gpcf_timereversal}) one obtains from time reversal \begin{equation} X^*(x,\xi,\vec{k}_T^2,\vec{k}_T \cdot \vec{\Delta}_T,\vec{\Delta}_T^2;\eta) = X(x,\xi,\vec{k}_T^2,\vec{k}_T \cdot \vec{\Delta}_T,\vec{\Delta}_T^2;-\eta) \label{linear} \end{equation} for all GTMDs $X$. This means, in particular, that we can carry over eqs.~(\ref{e:gpcf_decomp}) and (\ref{e:gpcf_sign}) to the GTMD case and write \begin{equation} X(x,\xi,\vec{k}_T^2,\vec{k}_T \cdot \vec{\Delta}_T,\vec{\Delta}_T^2;\eta) = X^{e}(x,\xi,\vec{k}_T^2,\vec{k}_T \cdot \vec{\Delta}_T,\vec{\Delta}_T^2) + i \, X^{o}(x,\xi,\vec{k}_T^2,\vec{k}_T \cdot \vec{\Delta}_T,\vec{\Delta}_T^2;\eta) \,, \end{equation} with the real valued functions $X^{e}$ and $X^{o}$ respectively representing the real and imaginary part of the GTMD $X$. 
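This decomposition is an immediate consequence of eq.~(\ref{linear}): since $\eta$ only takes the values $\pm 1$, splitting~(\ref{linear}) into real and imaginary parts yields \begin{equation} \mathrm{Re} \, X(\eta) = \mathrm{Re} \, X(-\eta) \,, \qquad \mathrm{Im} \, X(\eta) = - \, \mathrm{Im} \, X(-\eta) \,, \end{equation} with the arguments $(x,\xi,\vec{k}_T^2,\vec{k}_T \cdot \vec{\Delta}_T,\vec{\Delta}_T^2)$ suppressed, so that the real part is the same for both directions of the Wilson lines.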
Only the T-odd part $X^{o}$ depends on the sign of $\eta$ according to \begin{equation} X^{o}(x,\xi,\vec{k}_T^2,\vec{k}_T \cdot \vec{\Delta}_T,\vec{\Delta}_T^2;\eta) = - X^{o}(x,\xi,\vec{k}_T^2,\vec{k}_T \cdot \vec{\Delta}_T,\vec{\Delta}_T^2;-\eta) \,, \end{equation} i.e., the imaginary parts of GTMDs defined with future-pointing and past-pointing Wilson lines have a reversed sign. In order to give an estimate we have calculated the leading twist GTMDs in the scalar diquark model of the nucleon. The results are presented in appendix~\ref{c:app_gtmd_model}. Our treatment is restricted to lowest order in perturbation theory. To this order all T-odd parts of the GTMDs vanish --- a feature which is also well-known from spectator model calculations of T-odd TMDs. All the results listed in eqs.~(\ref{e:gtmd_model_1})--(\ref{e:gtmd_model_16}) are in accordance with the hermiticity constraint~(\ref{e:gtmd_hermiticity}). \section{Projection of GTMDs onto TMDs and GPDs}\label{c:sec4} In this section we consider the generalized $k_T$-dependent correlator in eq.~(\ref{e:corr_gtmd}) for the specific TMD-kinematics and the GPD-kinematics. This procedure provides the relations between the {\it mother distributions} (GTMDs) on the one hand and the TMDs as well as the GPDs on the other. On the basis of these results one can check whether there exists model-independent support for possible nontrivial relations between GPDs and TMDs. \subsection{TMD-limit} We start with the TMD-limit corresponding to a vanishing momentum transfer $\Delta = 0$. 
In this limit exactly half of the real-valued distributions vanish because they are odd as function of $\Delta$ due to the hermiticity constraint~(\ref{e:gtmd_hermiticity}): $E_{2,1}^o$, $E_{2,2}^e$, $E_{2,3}^o$, $E_{2,4}^o$, $E_{2,5}^e$, $E_{2,6}^e$, $E_{2,7}^o$, $E_{2,8}^e$, $F_{1,1}^o$, $F_{1,2}^e$, $F_{1,3}^o$, $F_{1,4}^o$, $F_{2,1}^o$, $F_{2,2}^e$, $F_{2,3}^e$, $F_{2,4}^e$, $F_{2,5}^o$, $F_{2,6}^e$, $F_{2,7}^e$, $F_{2,8}^o$, $F_{3,1}^o$, $F_{3,2}^e$, $F_{3,3}^o$, $F_{3,4}^o$, $G_{1,1}^o$, $G_{1,2}^o$, $G_{1,3}^e$, $G_{1,4}^o$, $G_{2,1}^e$, $G_{2,2}^o$, $G_{2,3}^o$, $G_{2,4}^o$, $G_{2,5}^e$, $G_{2,6}^o$, $G_{2,7}^o$, $G_{2,8}^e$, $G_{3,1}^o$, $G_{3,2}^o$, $G_{3,3}^e$, $G_{3,4}^o$, $H_{1,1}^e$, $H_{1,2}^o$, $H_{1,3}^o$, $H_{1,4}^o$, $H_{1,5}^e$, $H_{1,6}^o$, $H_{1,7}^o$, $H_{1,8}^e$, $H_{2,1}^e$, $H_{2,2}^o$, $H_{2,3}^e$, $H_{2,4}^e$, $H_{2,5}^o$, $H_{2,6}^o$, $H_{2,7}^e$, $H_{2,8}^o$, $H_{3,1}^e$, $H_{3,2}^o$, $H_{3,3}^o$, $H_{3,4}^o$, $H_{3,5}^e$, $H_{3,6}^o$, $H_{3,7}^o$, $H_{3,8}^e$. In addition, the distributions $E_{2,3}^e$, $E_{2,4}^e$, $E_{2,5}^o$, $E_{2,7}^e$, $F_{1,3}^e$, $F_{1,4}^e$, $F_{2,2}^o$, $F_{2,5}^e$, $F_{2,6}^o$, $F_{2,8}^e$, $F_{3,3}^e$, $F_{3,4}^e$, $G_{1,1}^e$, $G_{1,3}^o$, $G_{2,2}^e$, $G_{2,5}^o$, $G_{2,6}^e$, $G_{2,8}^o$, $G_{3,1}^e$, $G_{3,3}^o$, $H_{1,2}^e$, $H_{1,5}^o$, $H_{1,6}^e$, $H_{1,8}^o$, $H_{2,3}^o$, $H_{2,4}^o$, $H_{2,5}^e$, $H_{2,7}^o$, $H_{3,2}^e$, $H_{3,5}^o$, $H_{3,6}^e$, $H_{3,8}^o$ do not appear in the correlator any more, because they are multiplied by a coefficient which is linear in $\Delta$. 
Therefore, in the TMD-limit only the following 32 (20 T-even and 12 T-odd) distributions survive: $E_{2,1}^e$, $E_{2,2}^o$, $E_{2,6}^o$, $E_{2,8}^o$, $F_{1,1}^e$, $F_{1,2}^o$, $F_{2,1}^e$, $F_{2,3}^o$, $F_{2,4}^o$, $F_{2,7}^o$, $F_{3,1}^e$, $F_{3,2}^o$, $G_{1,2}^e$, $G_{1,4}^e$, $G_{2,1}^o$, $G_{2,3}^e$, $G_{2,4}^e$, $G_{2,7}^e$, $G_{3,2}^e$, $G_{3,4}^e$, $H_{1,1}^o$, $H_{1,3}^e$, $H_{1,4}^e$, $H_{1,7}^e$, $H_{2,1}^o$, $H_{2,2}^e$, $H_{2,6}^e$, $H_{2,8}^e$, $H_{3,1}^o$, $H_{3,3}^e$, $H_{3,4}^e$, $H_{3,7}^e$. The complete list of TMDs for a spin-1/2 hadron has been given in ref.~\cite{Goeke:2005hb} (see also the review article~\cite{Bacchetta:2006tn}). Here the spin vector \begin{equation} S = \bigg[\, \lambda \frac{P^+}{M} \, , \, - \lambda \frac{M}{2P^+} \, , \, \vec{S}_T \, \bigg] \end{equation} of the nucleon was introduced leading to the linear combination~\cite{Meissner:2007rx} \begin{eqnarray} \Phi^{[\Gamma]}(P, x, \vec{k}_T, N; S; \eta) &=& \tfrac{1 + \lambda}{2} \, \Phi_{++}^{[\Gamma]}(P, x, \vec{k}_T, N; \eta) + \tfrac{1 - \lambda}{2} \, \Phi_{--}^{[\Gamma]}(P, x, \vec{k}_T, N; \eta) \nonumber\\* & & + \tfrac{S_T^1 - i S_T^2}{2} \, \Phi_{+-}^{[\Gamma]}(P, x, \vec{k}_T, N; \eta) + \tfrac{S_T^1 + i S_T^2}{2} \, \Phi_{-+}^{[\Gamma]}(P, x, \vec{k}_T, N; \eta) \,. 
\qquad \end{eqnarray} Now using the conventions of~\cite{Bacchetta:2006tn} for the TMDs one finds the following explicit relations between the TMDs and the GTMDs: \begin{eqnarray} f_1(x,\vec{k}_T^2) & = & F_{1,1}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_1} \\ f_{1T}^\bot(x,\vec{k}_T^2;\eta) & = & - F_{1,2}^o(x,0,\vec{k}_T^2,0,0;\eta) \,, \label{e:tmd_gtmd_2} \\ g_{1L}(x,\vec{k}_T^2) & = & G_{1,4}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_3} \\ g_{1T}(x,\vec{k}_T^2) & = & G_{1,2}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_4} \\ h_1^\bot(x,\vec{k}_T^2;\eta) & = & - H_{1,1}^o(x,0,\vec{k}_T^2,0,0;\eta) \,, \label{e:tmd_gtmd_5} \\ h_{1L}^\bot(x,\vec{k}_T^2) & = & H_{1,7}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_6} \\ h_{1T}(x,\vec{k}_T^2) & = & H_{1,3}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_7} \\ h_{1T}^\bot(x,\vec{k}_T^2) & = & H_{1,4}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_8} \\ e(x,\vec{k}_T^2) & = & E_{2,1}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_9} \\ e_L(x,\vec{k}_T^2;\eta) & = & - E_{2,8}^o(x,0,\vec{k}_T^2,0,0;\eta) \,, \label{e:tmd_gtmd_10} \\ e_T(x,\vec{k}_T^2;\eta) & = & - E_{2,6}^o(x,0,\vec{k}_T^2,0,0;\eta) \,, \label{e:tmd_gtmd_11} \\ e_T^\bot(x,\vec{k}_T^2;\eta) & = & - E_{2,2}^o(x,0,\vec{k}_T^2,0,0;\eta) \,, \label{e:tmd_gtmd_12} \\ f^\bot(x,\vec{k}_T^2) & = & F_{2,1}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_13} \\ f_L^\bot(x,\vec{k}_T^2;\eta) & = & F_{2,7}^o(x,0,\vec{k}_T^2,0,0;\eta) \,, \label{e:tmd_gtmd_14} \\ f_T'(x,\vec{k}_T^2;\eta) & = & F_{2,3}^o(x,0,\vec{k}_T^2,0,0;\eta) \,, \label{e:tmd_gtmd_15} \\ f_T^\bot(x,\vec{k}_T^2;\eta) & = & F_{2,4}^o(x,0,\vec{k}_T^2,0,0;\eta) \,, \label{e:tmd_gtmd_16} \\ g^\bot(x,\vec{k}_T^2;\eta) & = & - G_{2,1}^o(x,0,\vec{k}_T^2,0,0;\eta) \,, \label{e:tmd_gtmd_17} \\ g_L^\bot(x,\vec{k}_T^2) & = & G_{2,7}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_18} \\ g_T'(x,\vec{k}_T^2) & = & G_{2,3}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_19} \\ g_T^\bot(x,\vec{k}_T^2) & = & 
G_{2,4}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_20} \\ h(x,\vec{k}_T^2;\eta) & = & - H_{2,1}^o(x,0,\vec{k}_T^2,0,0;\eta) \,, \label{e:tmd_gtmd_21}\\ h_L(x,\vec{k}_T^2) & = & H_{2,8}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_22} \\ h_T(x,\vec{k}_T^2) & = & H_{2,6}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_23} \\ h_T^\bot(x,\vec{k}_T^2) & = & H_{2,2}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_24} \\ f_3(x,\vec{k}_T^2) & = & F_{3,1}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_25} \\ f_{3T}^\bot(x,\vec{k}_T^2;\eta) & = & - F_{3,2}^o(x,0,\vec{k}_T^2,0,0;\eta) \,, \label{e:tmd_gtmd_26} \\ g_{3L}(x,\vec{k}_T^2) & = & G_{3,4}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_27} \\ g_{3T}(x,\vec{k}_T^2) & = & G_{3,2}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_28} \\ h_3^\bot(x,\vec{k}_T^2;\eta) & = & - H_{3,1}^o(x,0,\vec{k}_T^2,0,0;\eta) \,, \label{e:tmd_gtmd_29} \\ h_{3L}^\bot(x,\vec{k}_T^2) & = & H_{3,7}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_30} \\ h_{3T}(x,\vec{k}_T^2) & = & H_{3,3}^e(x,0,\vec{k}_T^2,0,0) \,, \label{e:tmd_gtmd_31} \\ h_{3T}^\bot(x,\vec{k}_T^2) & = & H_{3,4}^e(x,0,\vec{k}_T^2,0,0) \,. \label{e:tmd_gtmd_32} \end{eqnarray} These results are obtained by means of eqs.~(\ref{e:corr_tmd}) and~(\ref{e:gtmd_1})--(\ref{e:gtmd_12}). The 12 TMDs $f_{1T}^\bot$, $h_1^\bot$, $e_L$, $e_T$, $e_T^\bot$, $f_L^\bot$, $f_T$, $f_T^\bot$, $g^\bot$, $h$, $f_{3T}^\bot$, $h_3^\bot$ are T-odd and are related to T-odd parts of GTMDs. \subsection{GPD-limit} In a second step we focus on the GPD-limit which appears when integrating upon the transverse parton momentum $\vec{k}_T$. As already discussed after eq.~(\ref{e:corr_gpd}) the dependence on $\eta$ drops out in this case which implies, in particular, that all effects of T-odd parts of GTMDs disappear. In the literature only the twist-2 and the chiral-even twist-3 GPDs have been introduced~\cite{Diehl:2001pm,Kiptily:2002nx}. Therefore, we give here for the first time a complete list of GPDs for all twists. 
The GPDs parameterize the correlator in~(\ref{e:corr_gpd}). One finds 8 GPDs for twist-2, 16 GPDs for twist-3, and 8 GPDs for twist-4. To be explicit the GPDs can be defined according to \begin{eqnarray} F_{\lambda \lambda'}^{[\gamma^+]} &=& \frac{1}{2P^+} \, \bar{u}(p', \lambda') \, \bigg[ \gamma^+ \, H(x, \xi, t) + \frac{i\sigma^{+\Delta}}{2M} \, E(x, \xi, t) \bigg] \, u(p, \lambda) \,, \label{e:gpd_1}\\ F_{\lambda \lambda'}^{[\gamma^+\gamma_5]} &=& \frac{1}{2P^+} \, \bar{u}(p', \lambda') \, \bigg[ \gamma^+\gamma_5 \, \tilde{H}(x, \xi, t) + \frac{\Delta^+ \gamma_5}{2M} \, \tilde{E}(x, \xi, t) \bigg] \, u(p, \lambda) \,, \label{e:gpd_2}\\ F_{\lambda \lambda'}^{[i\sigma^{j+}\gamma_5]} &=& - \frac{i\varepsilon_T^{ij}}{2P^+} \, \bar{u}(p', \lambda') \, \bigg[ i\sigma^{+i} \, H_T(x, \xi, t) + \frac{\gamma^+ \Delta_T^i - \Delta^+ \gamma^i}{2M} \, E_T(x, \xi, t) \nonumber\\* & & + \frac{P^+ \Delta_T^i - \Delta^+ P_T^i}{M^2} \, \tilde{H}_T(x, \xi, t) + \frac{\gamma^+ P_T^i - P^+ \gamma^i}{M} \, \tilde{E}_T(x, \xi, t) \bigg] \, u(p, \lambda) \,, \label{e:gpd_3}\\ F_{\lambda \lambda'}^{[1]} &=& \frac{M}{2(P^+)^2} \, \bar{u}(p', \lambda') \, \bigg[ \gamma^+ \, H_2(x, \xi, t) + \frac{i\sigma^{+\Delta}}{2M} \, E_2(x, \xi, t) \bigg] \, u(p, \lambda) \,, \label{e:gpd_4}\\ F_{\lambda \lambda'}^{[\gamma_5]} &=& \frac{M}{2(P^+)^2} \, \bar{u}(p', \lambda') \, \bigg[ \gamma^+\gamma_5 \, \tilde{H}_2(x, \xi, t) + \frac{P^+ \gamma_5}{M} \, \tilde{E}_2(x, \xi, t) \bigg] \, u(p, \lambda) \,, \label{e:gpd_5}\\ F_{\lambda \lambda'}^{[\gamma^j]} &=& \frac{M}{2(P^+)^2} \, \bar{u}(p', \lambda') \, \bigg[ i\sigma^{+j} \, H_{2T}(x, \xi, t) + \frac{\gamma^+ \Delta_T^j - \Delta^+ \gamma^j}{2M} \, E_{2T}(x, \xi, t) \nonumber\\* & & + \frac{P^+ \Delta_T^j - \Delta^+ P_T^j}{M^2} \, \tilde{H}_{2T}(x, \xi, t) + \frac{\gamma^+ P_T^j - P^+ \gamma^j}{M} \, \tilde{E}_{2T}(x, \xi, t) \bigg] \, u(p, \lambda) \,, \label{e:gpd_6}\\ F_{\lambda \lambda'}^{[\gamma^j\gamma_5]} &=& - \frac{i\varepsilon_T^{ij} 
M}{2(P^+)^2} \, \bar{u}(p', \lambda') \, \bigg[ i\sigma^{+i} \, H'_{2T}(x, \xi, t) + \frac{\gamma^+ \Delta_T^i - \Delta^+ \gamma^i}{2M} \, E'_{2T}(x, \xi, t) \nonumber\\* & & + \frac{P^+ \Delta_T^i - \Delta^+ P_T^i}{M^2} \, \tilde{H}'_{2T}(x, \xi, t) + \frac{\gamma^+ P_T^i - P^+ \gamma^i}{M} \, \tilde{E}'_{2T}(x, \xi, t) \bigg] \, u(p, \lambda) \,, \label{e:gpd_7}\\ F_{\lambda \lambda'}^{[i\sigma^{ij}\gamma_5]} &=& - \frac{i\varepsilon_T^{ij} M}{2(P^+)^2} \, \bar{u}(p', \lambda') \, \bigg[ \gamma^+ \, H'_2(x, \xi, t) + \frac{i\sigma^{+\Delta}}{2M} \, E'_2(x, \xi, t) \bigg] \, u(p, \lambda) \,, \label{e:gpd_8}\\ F_{\lambda \lambda'}^{[i\sigma^{+-}\gamma_5]} &=& \frac{M}{2(P^+)^2} \, \bar{u}(p', \lambda') \, \bigg[ \gamma^+\gamma_5 \, \tilde{H}'_2(x, \xi, t) + \frac{P^+ \gamma_5}{M} \, \tilde{E}'_2(x, \xi, t) \bigg] \, u(p, \lambda) \,, \label{e:gpd_9}\\ F_{\lambda \lambda'}^{[\gamma^-]} &=& \frac{M^2}{2(P^+)^3} \, \bar{u}(p', \lambda') \, \bigg[ \gamma^+ \, H_3(x, \xi, t) + \frac{i\sigma^{+\Delta}}{2M} \, E_3(x, \xi, t) \bigg] \, u(p, \lambda) \,, \label{e:gpd_10}\\ F_{\lambda \lambda'}^{[\gamma^-\gamma_5]} &=& \frac{M^2}{2(P^+)^3} \, \bar{u}(p', \lambda') \, \bigg[ \gamma^+\gamma_5 \, \tilde{H}_3(x, \xi, t) + \frac{\Delta^+ \gamma_5}{2M} \, \tilde{E}_3(x, \xi, t) \bigg] \, u(p, \lambda) \,, \label{e:gpd_11}\\ F_{\lambda \lambda'}^{[i\sigma^{j-}\gamma_5]} &=& - \frac{i\varepsilon_T^{ij} M^2}{2(P^+)^3} \, \bar{u}(p', \lambda') \, \bigg[ i\sigma^{+i} \, H_{3T}(x, \xi, t) + \frac{\gamma^+ \Delta_T^i - \Delta^+ \gamma^i}{2M} \, E_{3T}(x, \xi, t) \nonumber\\* & & + \frac{P^+ \Delta_T^i - \Delta^+ P_T^i}{M^2} \, \tilde{H}_{3T}(x, \xi, t) + \frac{\gamma^+ P_T^i - P^+ \gamma^i}{M} \, \tilde{E}_{3T}(x, \xi, t) \bigg] \, u(p, \lambda) \,, \quad \label{e:gpd_12} \end{eqnarray} where $t = \Delta^2$. 
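Note that in the frame adopted in section~\ref{c:sec3} the invariant momentum transfer is fixed by $\xi$ and $\vec{\Delta}_T$, \begin{equation} t = 2 \Delta^+ \Delta^- - \vec{\Delta}_T^2 = - \frac{4 \xi^2 M^2 + \vec{\Delta}_T^2}{1 - \xi^2} \,, \end{equation} which implies the well-known bound $-t \ge 4 \xi^2 M^2 / (1 - \xi^2)$ at fixed $\xi$.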
The structure of the traces in~(\ref{e:gpd_1})--(\ref{e:gpd_12}) follows readily from eqs.~(\ref{e:gtmd_1})--(\ref{e:gtmd_12}) if one keeps in mind that after integrating over $\vec{k}_T$ the only transverse vector left is $\vec{\Delta}_T$. Altogether there exist 32 GPDs, corresponding to the number of TMDs. The 16 GPDs $H$, $E$, $\tilde{H}$, $\tilde{E}$, $H_{2T}$, $E_{2T}$, $\tilde{H}_{2T}$, $\tilde{E}_{2T}$, $H'_{2T}$, $E'_{2T}$, $\tilde{H}'_{2T}$, $\tilde{E}'_{2T}$, $H_3$, $E_3$, $\tilde{H}_3$, $\tilde{E}_3$ are chiral-even, while the remaining ones are chiral-odd. The definition of the twist-2 GPDs follows the common definition~\cite{Diehl:2001pm}. The chiral-even twist-3 GPDs $H_{2T}$, $E_{2T}$, $\tilde{H}_{2T}$, $\tilde{E}_{2T}$, $H'_{2T}$, $E'_{2T}$, $\tilde{H}'_{2T}$, $\tilde{E}'_{2T}$ are related to the functions $G_1$, $G_2$, $G_3$, $G_4$, $\tilde{G}_1$, $\tilde{G}_2$, $\tilde{G}_3$, $\tilde{G}_4$ that were introduced in ref.~\cite{Kiptily:2002nx}. It is now straightforward to write down the following expressions for the GPDs in terms of $k_T$-integrals of GTMDs: \begin{eqnarray} H(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ F_{1,1}^e + 2 \xi^2 \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, F_{1,2}^e + F_{1,3}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_1} \\ E(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ - F_{1,1}^e + 2 (1 - \xi^2) \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, F_{1,2}^e + F_{1,3}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_2} \\ \tilde{H}(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ 2 \xi \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, G_{1,2}^e + G_{1,3}^e \bigg) + G_{1,4}^e \bigg] \,, \label{e:gpd_gtmd_3} \\ \tilde{E}(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ \frac{2(1 - \xi^2)}{\xi} \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, G_{1,2}^e + G_{1,3}^e \bigg) - G_{1,4}^e \bigg] \,, \label{e:gpd_gtmd_4} \\ H_T(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ H_{1,3}^e +
\frac{\vec{\Delta}_T^2}{M^2} \bigg( \frac{(\vec{k}_T \cdot \vec{\Delta}_T)^2}{(\vec{\Delta}_T^2)^2} \, H_{1,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{1,5}^e + H_{1,6}^e \bigg) \nonumber\\* && \qquad\qquad - \frac{\xi(\vec{\Delta}_T^2 + 4M^2)}{2(1 - \xi^2)M^2} \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{1,7}^e + H_{1,8}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_5} \\ E_T(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ 4 \bigg( \frac{2 (\vec{k}_T \cdot \vec{\Delta}_T)^2 - \vec{k}_T^2 \vec{\Delta}_T^2}{(\vec{\Delta}_T^2)^2} \, H_{1,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{1,5}^e + H_{1,6}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_6} \\ \tilde{H}_T(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{1,1}^e + H_{1,2}^e \bigg) \nonumber\\* && \qquad\qquad - 2(1 - \xi^2) \bigg( \frac{2 (\vec{k}_T \cdot \vec{\Delta}_T)^2 - \vec{k}_T^2 \vec{\Delta}_T^2}{(\vec{\Delta}_T^2)^2} \, H_{1,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{1,5}^e + H_{1,6}^e \bigg) \nonumber\\* && \qquad\qquad + \xi \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{1,7}^e + H_{1,8}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_7} \\ \tilde{E}_T(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ 4\xi \bigg( \frac{2 (\vec{k}_T \cdot \vec{\Delta}_T)^2 - \vec{k}_T^2 \vec{\Delta}_T^2}{(\vec{\Delta}_T^2)^2} \, H_{1,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{1,5}^e + H_{1,6}^e \bigg) \nonumber\\* && \qquad\qquad + 2 \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{1,7}^e + H_{1,8}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_8} \\ H_2(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ E_{2,1}^e + 2 \xi^2 \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, E_{2,2}^e + E_{2,3}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_9} \\ E_2(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ - E_{2,1}^e + 2 (1 - \xi^2) \bigg( \frac{\vec{k}_T 
\cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, E_{2,2}^e + E_{2,3}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_10} \\ \tilde{H}_2(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ 2 \xi \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, E_{2,6}^e + E_{2,7}^e \bigg) + E_{2,8}^e \bigg] \,, \label{e:gpd_gtmd_11} \\ \tilde{E}_2(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ - 2(1 - \xi^2) \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, E_{2,6}^e + E_{2,7}^e \bigg) + \xi E_{2,8}^e \bigg] \,, \label{e:gpd_gtmd_12} \\ H'_2(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ H_{2,1}^e + 2 \xi^2 \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{2,2}^e + H_{2,3}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_13} \\ E'_2(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ - H_{2,1}^e + 2 (1 - \xi^2) \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{2,2}^e + H_{2,3}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_14} \\ \tilde{H}'_2(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ 2 \xi \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{2,6}^e + H_{2,7}^e \bigg) + H_{2,8}^e \bigg] \,, \label{e:gpd_gtmd_15} \\ \tilde{E}'_2(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ - 2(1 - \xi^2) \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{2,6}^e + H_{2,7}^e \bigg) + \xi H_{2,8}^e \bigg] \,, \label{e:gpd_gtmd_16} \\ H_{2T}(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ -F_{2,3}^e + \frac{(\vec{k}_T \cdot \vec{\Delta}_T)^2 - \vec{k}_T^2 \vec{\Delta}_T^2}{M^2 \, \vec{\Delta}_T^2} \, F_{2,4}^e \nonumber\\* && \qquad\qquad + \frac{\xi(\vec{\Delta}_T^2 + 4M^2)}{2(1 - \xi^2)M^2} \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, F_{2,7}^e + F_{2,8}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_17} \\ E_{2T}(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ 4 \bigg( \frac{2 (\vec{k}_T \cdot \vec{\Delta}_T)^2 - \vec{k}_T^2 \vec{\Delta}_T^2}{(\vec{\Delta}_T^2)^2} \, F_{2,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, F_{2,5}^e + F_{2,6}^e 
\bigg) \bigg] \,, \label{e:gpd_gtmd_18} \\ \tilde{H}_{2T}(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, F_{2,1}^e + F_{2,2}^e \bigg) \nonumber\\* && \qquad\qquad - 2(1 - \xi^2) \bigg( \frac{2 (\vec{k}_T \cdot \vec{\Delta}_T)^2 - \vec{k}_T^2 \vec{\Delta}_T^2}{(\vec{\Delta}_T^2)^2} \, F_{2,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, F_{2,5}^e + F_{2,6}^e \bigg) \nonumber\\* && \qquad\qquad - \xi \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, F_{2,7}^e + F_{2,8}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_19} \\ \tilde{E}_{2T}(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ 4\xi \bigg( \frac{2 (\vec{k}_T \cdot \vec{\Delta}_T)^2 - \vec{k}_T^2 \vec{\Delta}_T^2}{(\vec{\Delta}_T^2)^2} \, F_{2,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, F_{2,5}^e + F_{2,6}^e \bigg) \nonumber\\* && \qquad\qquad - 2 \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, F_{2,7}^e + F_{2,8}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_20}\\ H'_{2T}(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ G_{2,3}^e + \frac{\vec{\Delta}_T^2}{M^2} \bigg( \frac{(\vec{k}_T \cdot \vec{\Delta}_T)^2}{(\vec{\Delta}_T^2)^2} \, G_{2,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, G_{2,5}^e + G_{2,6}^e \bigg) \nonumber\\* && \qquad\qquad - \frac{\xi(\vec{\Delta}_T^2 + 4M^2)}{2(1 - \xi^2)M^2} \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, G_{2,7}^e + G_{2,8}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_21} \\ E'_{2T}(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ 4 \bigg( \frac{2 (\vec{k}_T \cdot \vec{\Delta}_T)^2 - \vec{k}_T^2 \vec{\Delta}_T^2}{(\vec{\Delta}_T^2)^2} \, G_{2,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, G_{2,5}^e + G_{2,6}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_22} \\ \tilde{H}'_{2T}(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, G_{2,1}^e + G_{2,2}^e \bigg) \nonumber\\* && 
\qquad\qquad - 2(1 - \xi^2) \bigg( \frac{2 (\vec{k}_T \cdot \vec{\Delta}_T)^2 - \vec{k}_T^2 \vec{\Delta}_T^2}{(\vec{\Delta}_T^2)^2} \, G_{2,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, G_{2,5}^e + G_{2,6}^e \bigg) \nonumber\\* && \qquad\qquad + \xi \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, G_{2,7}^e + G_{2,8}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_23} \\ \tilde{E}'_{2T}(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ 4\xi \bigg( \frac{2 (\vec{k}_T \cdot \vec{\Delta}_T)^2 - \vec{k}_T^2 \vec{\Delta}_T^2}{(\vec{\Delta}_T^2)^2} \, G_{2,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, G_{2,5}^e + G_{2,6}^e \bigg) \nonumber\\* && \qquad\qquad + 2 \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, G_{2,7}^e + G_{2,8}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_24}\\ H_3(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ F_{3,1}^e + 2 \xi^2 \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, F_{3,2}^e + F_{3,3}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_25} \\ E_3(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ - F_{3,1}^e + 2 (1 - \xi^2) \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, F_{3,2}^e + F_{3,3}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_26} \\ \tilde{H}_3(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ 2 \xi \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, G_{3,2}^e + G_{3,3}^e \bigg) + G_{3,4}^e \bigg] \,, \label{e:gpd_gtmd_27} \\ \tilde{E}_3(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ \frac{2(1 - \xi^2)}{\xi} \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, G_{3,2}^e + G_{3,3}^e \bigg) - G_{3,4}^e \bigg] \,, \label{e:gpd_gtmd_28} \\ H_{3T}(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ H_{3,3}^e + \frac{\vec{\Delta}_T^2}{M^2} \bigg( \frac{(\vec{k}_T \cdot \vec{\Delta}_T)^2}{(\vec{\Delta}_T^2)^2} \, H_{3,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{3,5}^e + H_{3,6}^e \bigg) \nonumber\\* && \qquad\qquad - \frac{\xi(\vec{\Delta}_T^2 + 
4M^2)}{2(1 - \xi^2)M^2} \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{3,7}^e + H_{3,8}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_29} \\ E_{3T}(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ 4 \bigg( \frac{2 (\vec{k}_T \cdot \vec{\Delta}_T)^2 - \vec{k}_T^2 \vec{\Delta}_T^2}{(\vec{\Delta}_T^2)^2} \, H_{3,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{3,5}^e + H_{3,6}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_30} \\ \tilde{H}_{3T}(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{3,1}^e + H_{3,2}^e \bigg) \nonumber\\* && \qquad\qquad - 2(1 - \xi^2) \bigg( \frac{2 (\vec{k}_T \cdot \vec{\Delta}_T)^2 - \vec{k}_T^2 \vec{\Delta}_T^2}{(\vec{\Delta}_T^2)^2} \, H_{3,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{3,5}^e + H_{3,6}^e \bigg) \nonumber\\* && \qquad\qquad + \xi \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{3,7}^e + H_{3,8}^e \bigg) \bigg] \,, \label{e:gpd_gtmd_31} \\ \tilde{E}_{3T}(x,\xi,t) & = & \int d^2\vec{k}_T \, \bigg[ 4\xi \bigg( \frac{2 (\vec{k}_T \cdot \vec{\Delta}_T)^2 - \vec{k}_T^2 \vec{\Delta}_T^2}{(\vec{\Delta}_T^2)^2} \, H_{3,4}^e + \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{3,5}^e + H_{3,6}^e \bigg) \nonumber\\* && \qquad\qquad + 2 \bigg( \frac{\vec{k}_T \cdot \vec{\Delta}_T}{\vec{\Delta}_T^2} \, H_{3,7}^e + H_{3,8}^e \bigg) \bigg] \,. \label{e:gpd_gtmd_32} \end{eqnarray} The hermiticity constraint~(\ref{e:gtmd_hermiticity}) for the GTMDs, in combination with the relations~(\ref{e:gpd_gtmd_1})--(\ref{e:gpd_gtmd_32}), determines the symmetry behavior of the GPDs under the transformation $\xi \to - \xi$. One finds that the 10 GPDs $\tilde{E}_T$, $\tilde{H}_2$, $H'_2$, $E'_2$, $\tilde{E}'_2$, $H_{2T}$, $E_{2T}$, $\tilde{H}_{2T}$, $\tilde{E}'_{2T}$, $\tilde{E}_{3T}$ are odd functions in $\xi$, while all the other GPDs are even in $\xi$. 
This implies that the limit $\xi \to 0$ can be performed in eqs.~(\ref{e:gpd_gtmd_4}) and~(\ref{e:gpd_gtmd_28}) without encountering a singularity, as the GPDs $\tilde{E}$ and $\tilde{E}_3$ are even functions of $\xi$. In addition, note that no problem arises when performing the limit $\vec{\Delta}_T \to 0$ in eqs.~(\ref{e:gpd_gtmd_1})--(\ref{e:gpd_gtmd_32}) because of \begin{eqnarray} \int d^2\vec{k}_T \, k_T^i \, X(x,\xi,\vec{k}_T^2,\vec{k}_T \cdot \vec{\Delta}_T,\vec{\Delta}_T^2;\eta) &\propto& \Delta_T^i \,, \\ \int d^2\vec{k}_T \, (2 k_T^i k_T^j - \delta_T^{ij} \vec{k}_T^2) \, X(x,\xi,\vec{k}_T^2,\vec{k}_T \cdot \vec{\Delta}_T,\vec{\Delta}_T^2;\eta) &\propto& (2 \Delta_T^i \Delta_T^j - \delta_T^{ij} \vec{\Delta}_T^2) \,, \end{eqnarray} which holds for any GTMD $X$. \subsection{Relations between GPDs and TMDs} Having established the precise connection of the GPDs and TMDs with their respective {\it mother distributions}, we are now in a position to search for possible model-independent relations between GPDs and TMDs. From~(\ref{e:tmd_gtmd_1}) and~(\ref{e:gpd_gtmd_1}) it is obvious that the GPD $H$ and the TMD $f_1$ can be related since both functions are projections of the GTMD $F_{1,1}^e$.
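The transverse-moment identities quoted just before this subsection rest on rotational invariance in the transverse plane: for any function depending on $\vec{k}_T$ only through $\vec{k}_T^2$ and $\vec{k}_T\cdot\vec{\Delta}_T$, odd moments must point along $\vec{\Delta}_T$. As a purely illustrative check (the Gaussian profile below is not a model result), this can be verified symbolically with sympy:

```python
import sympy as sp

kx, ky, D1, D2 = sp.symbols('k_x k_y Delta_1 Delta_2', real=True)

# Sample profile depending on k_T only through k_T^2 and k_T . Delta_T
X = sp.exp(-(kx**2 + ky**2)) * (kx * D1 + ky * D2)

def moment(integrand):
    # integrate over the full transverse plane
    return sp.integrate(integrand, (kx, -sp.oo, sp.oo), (ky, -sp.oo, sp.oo))

# First transverse moment: must be parallel to (Delta_1, Delta_2)
Ix, Iy = moment(kx * X), moment(ky * X)
assert sp.simplify(Ix * D2 - Iy * D1) == 0   # vanishing cross product

# Traceless second moment, with a profile quadratic in k_T . Delta_T
X2 = sp.exp(-(kx**2 + ky**2)) * (kx * D1 + ky * D2)**2
off = moment(2 * kx * ky * X2)               # i != j component
diag = moment((kx**2 - ky**2) * X2)          # 2 k_x^2 - k_T^2 component
assert sp.simplify(off * (D1**2 - D2**2) - diag * 2 * D1 * D2) == 0
```

For this particular profile both proportionality constants come out as $\pi/2$; only the tensor structure is general.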
By analogous reasoning, two additional relations can be obtained for twist-2, three for twist-3, and three for twist-4, leading altogether to \begin{eqnarray} H(x,0,0) &=& \int d^2\vec{k}_T \, F_{1,1}^e(x,0,\vec{k}^2_T,0,0) = \int d^2 \vec{k}_T \, f_1(x,\vec{k}^2_T) \,, \label{e:trivial_1} \\ \tilde{H}(x,0,0) &=& \int d^2\vec{k}_T \, G_{1,4}^e(x,0,\vec{k}^2_T,0,0) = \int d^2 \vec{k}_T \, g_{1L}(x,\vec{k}^2_T) \,, \label{e:trivial_2} \\ H_T(x,0,0) &=& \int d^2\vec{k}_T \, \bigg[ H_{1,3}^e(x,0,\vec{k}^2_T,0,0) + \frac{\vec{k}_T^2}{2 M^2} \, H_{1,4}^e(x,0,\vec{k}^2_T,0,0) \bigg] \nonumber\\* &=& \int d^2 \vec{k}_T \, \bigg[ h_{1T}(x,\vec{k}^2_T) + \frac{\vec{k}_T^2}{2 M^2} \, h_{1T}^\bot(x,\vec{k}^2_T) \bigg]\,, \label{e:trivial_3}\\ H_2(x,0,0) &=& \int d^2\vec{k}_T \, E_{2,1}^e(x,0,\vec{k}^2_T,0,0) = \int d^2 \vec{k}_T \, e(x,\vec{k}^2_T) \,, \\ \tilde{H}'_2(x,0,0) &=& \int d^2\vec{k}_T \, H_{2,8}^e(x,0,\vec{k}^2_T,0,0) = \int d^2 \vec{k}_T \, h_L(x,\vec{k}^2_T) \,, \\ H'_{2T}(x,0,0) &=& \int d^2\vec{k}_T \, \bigg[ G_{2,3}^e(x,0,\vec{k}^2_T,0,0) + \frac{\vec{k}_T^2}{2 M^2} \, G_{2,4}^e(x,0,\vec{k}^2_T,0,0) \bigg] \nonumber\\* &=& \int d^2 \vec{k}_T \, \bigg[ g'_T(x,\vec{k}^2_T) + \frac{\vec{k}_T^2}{2 M^2} \, g_{T}^\bot(x,\vec{k}^2_T) \bigg]\,, \\ H_3(x,0,0) &=& \int d^2\vec{k}_T \, F_{3,1}^e(x,0,\vec{k}^2_T,0,0) = \int d^2 \vec{k}_T \, f_3(x,\vec{k}^2_T) \,, \\ \tilde{H}_3(x,0,0) &=& \int d^2\vec{k}_T \, G_{3,4}^e(x,0,\vec{k}^2_T,0,0) = \int d^2 \vec{k}_T \, g_{3L}(x,\vec{k}^2_T) \,, \\ H_{3T}(x,0,0) &=& \int d^2\vec{k}_T \, \bigg[ H_{3,3}^e(x,0,\vec{k}^2_T,0,0) + \frac{\vec{k}_T^2}{2 M^2} \, H_{3,4}^e(x,0,\vec{k}^2_T,0,0) \bigg] \nonumber\\* &=& \int d^2 \vec{k}_T \, \bigg[ h_{3T}(x,\vec{k}^2_T) + \frac{\vec{k}_T^2}{2 M^2} \, h_{3T}^\bot(x,\vec{k}^2_T) \bigg]\,. \end{eqnarray} These formulas can be considered trivial model-independent relations between GPDs and TMDs (called relations of first type in ref.~\cite{Meissner:2007rx}).
Of course, the twist-2 relations~(\ref{e:trivial_1})--(\ref{e:trivial_3}) were already known before. Here, we are mainly interested in nontrivial relations between GPDs and TMDs that have been suggested in the literature~\cite{Burkardt:2002ks,Burkardt:2003uw,Burkardt:2003je,Diehl:2005jf,Burkardt:2005hp,Lu:2006kt,Meissner:2007rx,Pasquini:2008ax}. So far, explicit relations have only been established in low-order calculations in the framework of simple spectator models~\cite{Burkardt:2003je,Burkardt:2005hp,Lu:2006kt,Meissner:2007rx}, and in one case in a light-cone constituent quark model~\cite{Pasquini:2008ax}. Our GTMD analysis can now shed light on the question of whether model-independent nontrivial relations exist. \FIGURE[t]{% \includegraphics{Fig2_Model_TMDs.eps} \caption{Lowest nontrivial order diagram for T-odd TMDs in the scalar diquark spectator model. The Hermitian conjugate diagram (h.c.) is not shown. The eikonal propagator arising from the Wilson line in the operator definition of TMDs is indicated by a double line.} \label{f:todd}} A complete classification of the nontrivial relations between GPDs and TMDs in leading twist has been performed in~\cite{Meissner:2007rx}, where explicit formulae have been obtained in the same diquark spectator model as discussed in appendix~\ref{c:app_gtmd_model}. In that work two distinct types of nontrivial relations between quark distributions have been considered --- one connecting certain GPDs with the T-odd\footnote{Note that in order to generate T-odd TMDs one has to take into account rescattering effects between the active parton and the spectator system.
Therefore, in the diquark spectator model, the lowest order contribution to T-odd TMDs comes from the diagram shown in figure~\ref{f:todd}.} Sivers function $f_{1T}^\bot$~\cite{Sivers:1989cc,Sivers:1990fh} and the Boer-Mulders function $h_1^\bot$~\cite{Boer:1997nt} (called relations of second type in ref.~\cite{Meissner:2007rx}), \begin{eqnarray} E(x,0,-\vec{\Delta}_T^2) &\leftrightarrow& -f_{1T}^\bot(x,\vec{k}_T^2;\eta) \,, \label{e:model_rel_1} \\ E_T(x,0,-\vec{\Delta}_T^2) + 2\tilde{H}_T(x,0,-\vec{\Delta}_T^2) &\leftrightarrow& -h_1^\bot(x,\vec{k}_T^2;\eta) \,, \label{e:model_rel_2} \end{eqnarray} and one connecting a GPD and the T-even pretzelosity TMD $h_{1T}^\bot$ (called relation of third type in ref.~\cite{Meissner:2007rx}), \begin{eqnarray} \tilde{H}_T(x,0,-\vec{\Delta}_T^2) &\leftrightarrow& \tfrac{1}{2} \, h_{1T}^\bot(x,\vec{k}_T^2) \,. \label{e:model_rel_3} \end{eqnarray} As we discuss in the following, however, our GTMD analysis does not support a model-independent status of any such relations. For the relations of second type in eqs.~(\ref{e:model_rel_1}) and~(\ref{e:model_rel_2}) this is obvious because, according to eqs.~(\ref{e:tmd_gtmd_2}), (\ref{e:tmd_gtmd_5}), (\ref{e:gpd_gtmd_2}), (\ref{e:gpd_gtmd_6}), and~(\ref{e:gpd_gtmd_7}), the involved GPDs and TMDs have different, independent {\it mother distributions}. In particular, the GPDs are connected to T-even parts of GTMDs while the TMDs are connected to T-odd parts of GTMDs. Unless, for some reason, the GTMDs are subject to further constraints, one has to conclude that there cannot exist a model-independent relation between the GPDs and TMDs given in eqs.~(\ref{e:model_rel_1}) and~(\ref{e:model_rel_2}). This conclusion is in accordance with the observation made in~\cite{Meissner:2007rx} that nontrivial relations of second type are likely to even break down in spectator models once higher order contributions are taken into account.
Therefore, one has to attribute the relations to the simplicity of the model used. Nevertheless, it may well be that numerically the model-dependent nontrivial relations work reasonably well when compared to experimental data. In fact, such a case is already known for distributions of the nucleon, namely the relation between the Sivers function and the GPD $E$~\cite{Burkardt:2002ks,Burkardt:2003uw,Burkardt:2003je,Meissner:2007rx}. For the relation of third type in eq.~(\ref{e:model_rel_3}) the GPD as well as the TMD are, according to eqs.~(\ref{e:tmd_gtmd_8}) and~(\ref{e:gpd_gtmd_7}), related to T-even parts of GTMDs. But the linear combinations of GTMDs differ in the two cases, such that no model-independent nontrivial relation of the type~(\ref{e:model_rel_3}) can exist. In the context of the diquark spectator model the explicit relation \begin{equation} \label{e:rel_expl} \frac{3}{(1-x)^2} \, \tilde{H}_T(x,0,0) = \int d^2 \vec{k}_T \, h_{1T}^\bot(x,\vec{k}^2_T)\,, \end{equation} was established~\cite{Meissner:2007rx}. One may wonder whether, in general, the specific kinematical point $\vec{\Delta}_T^2 = \xi = 0$ and the $k_T$-integration used in~(\ref{e:rel_expl}) might spoil the above argument about different linear combinations of GTMDs. However, by taking all known symmetry properties of the GTMDs into account, one is still left with such different linear combinations. Even in the simple diquark spectator model this is the case, and the relation~(\ref{e:rel_expl}) holds only due to the simplicity of the model. In order to illustrate this point, we calculate the involved GPD $\tilde{H}_T$ and TMD $h_{1T}^\bot$ in the scalar diquark model and try to preserve their respective GTMD structure as far as possible.
By inserting the model results for the GTMDs in appendix~\ref{c:app_gtmd_model} into eq.~(\ref{e:gpd_gtmd_7}) one finds for the GPD $\tilde{H}_T$ in the case $\xi=0$ \begin{align} &\tilde{H}_T(x,0,-\vec{\Delta}_T^2) \nonumber\\* &\quad= \int d^2\vec{k}_T \, \tilde{C} \, \bigg[ \tilde{H}_{1,2}^e(x) - 2 \bigg( \frac{2 (\vec{k}_T \cdot \vec{\Delta}_T)^2 - \vec{k}_T^2 \vec{\Delta}_T^2}{(\vec{\Delta}_T^2)^2} \, \tilde{H}_{1,4}^e(x) + \tilde{H}_{1,6}^e(x) \bigg) \bigg] \,. \end{align} Here we have extracted all dependence on the vectors $\vec{k}_T$ and $\vec{\Delta}_T$ from the GTMDs and put it either into their coefficients or into the overall factor \begin{equation} \tilde{C} = \frac{g^2 \, (1-x)}{2(2\pi)^3} \, \frac{1}{ [(\vec{k}_T + \tfrac{1}{2}(1-x) \, \vec{\Delta}_T)^2 + \tilde{M}^2(x)] \, [(\vec{k}_T - \tfrac{1}{2}(1-x) \, \vec{\Delta}_T)^2 + \tilde{M}^2(x)]} \,, \end{equation} with \begin{equation} \tilde{M}^2(x) = x \, m_s^2 + (1-x) \, m_q^2 - x (1-x) \, M^2 \,. \end{equation} Therefore, the remnants of the GTMDs \begin{eqnarray} \tilde{H}_{1,2}^e(x) & = & (1-x) \, (m_q + x M) M\,, \\ \tilde{H}_{1,4}^e(x) & = & -2 M^2\,, \\ \tilde{H}_{1,6}^e(x) & = & \tfrac{1}{2}(1-x) \, (m_q + M) M \end{eqnarray} only depend on the momentum fraction $x$. This allows one to perform the $\vec{k}_T$ integration, which yields \begin{align} &\tilde{H}_T(x,0,-\vec{\Delta}_T^2) \nonumber\\* &\quad= \frac{g^2 \, (1-x)}{8(2\pi)^2} \, \int_0^1 d\alpha \, \frac{ 2 \tilde{H}_{1,2}^e(x) - (1 - 2\alpha)^2 \, (1-x)^2 \, \tilde{H}_{1,4}^e(x) - 4 \tilde{H}_{1,6}^e(x) }{\alpha(1-\alpha) \, (1-x)^2 \, \vec{\Delta}_T^2 + \tilde{M}^2(x)} \,. \end{align} In the forward limit this leads to \begin{equation} \tilde{H}_T(x,0,0) = \frac{g^2 \, (1-x)}{8(2\pi)^2} \, \frac{ 2 \tilde{H}_{1,2}^e(x) - \tfrac{1}{3} (1-x)^2 \, \tilde{H}_{1,4}^e(x) - 4 \tilde{H}_{1,6}^e(x) }{\tilde{M}^2(x)} \,. 
\end{equation} On the other hand, by inserting the model results for the GTMDs in appendix~\ref{c:app_gtmd_model} into eq.~(\ref{e:tmd_gtmd_8}), one finds for the zeroth moment of the TMD $h_{1T}^\bot$ \begin{equation} \int d^2 \vec{k}_T \, h_{1T}^\bot(x,\vec{k}^2_T) = \frac{g^2 \, (1-x)}{4(2\pi)^2} \, \frac{\tilde{H}_{1,4}^e(x)}{\tilde{M}^2(x)} \,. \end{equation} This shows explicitly that the GPD $\tilde{H}_T$ and the TMD $h_{1T}^\bot$ are connected to different remnants of GTMDs even in the scalar diquark model. However, due to the simplicity of the scalar diquark model, the remnants of the GTMDs are related according to \begin{equation} 2\tilde{H}_{1,2}^e(x) - 4\tilde{H}_{1,6}^e(x) = -2 (1-x)^2 M^2 = (1-x)^2 \, \tilde{H}_{1,4}^e(x) \,. \label{e:gtmd_rel} \end{equation} This immediately implies the relation \begin{equation} \frac{3}{(1-x)^2} \, \tilde{H}_T(x,0,0) = \frac{g^2 \, (1-x)}{4(2\pi)^2} \, \frac{\tilde{H}_{1,4}^e(x)}{\tilde{M}^2(x)} = \int d^2 \vec{k}_T \, h_{1T}^\bot(x,\vec{k}^2_T)\,, \end{equation} which we already quoted above in~(\ref{e:rel_expl}). It should be stressed once again that this relation only holds due to the simplicity of the scalar diquark model. In general, no relation like eq.~(\ref{e:gtmd_rel}) will exist between the different, independent GTMDs. We note that a relation like~(\ref{e:rel_expl}) was also obtained in a specific light-cone quark model~\cite{Pasquini:2008ax}, but in that model a factor different from 3 on the {\it l.h.s.}~of~(\ref{e:rel_expl}) shows up\footnote{Actually, in ref.~\cite{Pasquini:2008ax} the factor 3 appeared, but later on an error in the calculation was found~\cite{Pasquini:2009}.}. The fact that a formula corresponding to~(\ref{e:rel_expl}) emerges in the framework of another model does not contradict our general argument that in full QCD a relation of the type~(\ref{e:model_rel_3}) cannot hold.
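The algebra behind these diquark-model statements is short enough to check mechanically. The sympy sketch below (variable names are ours; $\tilde{M}^2(x)$ is kept as a free positive symbol) verifies the relation between the GTMD remnants, the factor $1/3$ from the Feynman-parameter integral in the forward limit, and the resulting model relation:

```python
import sympy as sp

x, M, m_q, g, Mt2 = sp.symbols('x M m_q g Mtilde2', positive=True)

# GTMD remnants of the scalar diquark model quoted above
H12 = (1 - x) * (m_q + x * M) * M                   # \tilde{H}^e_{1,2}(x)
H14 = -2 * M**2                                     # \tilde{H}^e_{1,4}(x)
H16 = sp.Rational(1, 2) * (1 - x) * (m_q + M) * M   # \tilde{H}^e_{1,6}(x)

# 2 H12 - 4 H16 = -2 (1-x)^2 M^2 = (1-x)^2 H14
assert sp.simplify(2 * H12 - 4 * H16 + 2 * (1 - x)**2 * M**2) == 0
assert sp.simplify(2 * H12 - 4 * H16 - (1 - x)**2 * H14) == 0

# factor 1/3 from the forward-limit Feynman-parameter integral
a = sp.symbols('alpha')
assert sp.integrate((1 - 2 * a)**2, (a, 0, 1)) == sp.Rational(1, 3)

# resulting relation: 3/(1-x)^2 * Htilde_T(x,0,0) equals the h_{1T}^perp moment
pref = g**2 * (1 - x) / (8 * (2 * sp.pi)**2)
HT_fwd = pref * (2 * H12 - sp.Rational(1, 3) * (1 - x)**2 * H14 - 4 * H16) / Mt2
h1Tperp_moment = g**2 * (1 - x) / (4 * (2 * sp.pi)**2) * H14 / Mt2
assert sp.simplify(3 / (1 - x)**2 * HT_fwd - h1Tperp_moment) == 0
```

All three assertions pass, which is exactly the statement that the relation hinges on the model-specific dependence between the remnants and not on any general property of the GTMDs.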
Extending our GTMD analysis, we find that no model-independent nontrivial relations between GPDs and TMDs exist for twist-3 and twist-4 either. On the other hand, such relations may well emerge in the framework of simple models. \section{Conclusions}\label{c:sec5} In summary, we have derived the structure of the fully unintegrated, off-diagonal quark-quark correlator for a spin-1/2 hadron, and thus extended our previous study of the spin-0 case~\cite{Meissner:2008ay}. This object, which contains the most general information on the two-parton structure of a hadron, has been parameterized in terms of so-called generalized parton correlation functions (GPCFs). The major challenge in this derivation was to eliminate all redundant terms without missing any relevant term at the same time. Integrating the GPCFs over a light-cone component of the quark momentum, one ends up with entities which we called generalized transverse momentum dependent parton distributions (GTMDs). In general, GTMDs can be of direct relevance for the phenomenology of various hard (diffractive) processes (see, e.g., refs.~\cite{Martin:1999wb,Khoze:2000cy,Goloskokov:2007nt,Albrow:2008pn}). Our analysis shows that both the GPCFs and the GTMDs are in general complex-valued functions. This is different from the (simpler) forward parton distributions, GPDs, and TMDs, all of which are real. Suitable projections of GTMDs lead to GPDs on the one hand and to TMDs on the other. Therefore, GTMDs can be considered as {\it mother distributions} of GPDs and TMDs~\cite{Ji:2003ak,Belitsky:2003nz,Belitsky:2005qn}. Studying these two limiting cases of GTMDs was the main motivation of the present work. One outcome was the first complete classification of GPDs for a spin-1/2 hadron beyond leading twist. Most importantly, we were able to determine which of the GPDs and TMDs have the same {\it mother distributions}, allowing us to explore whether model-independent relations between GPDs and TMDs can be established.
One ends up with nine such model-independent relations. Actually, these cases can be considered trivial because the respective GPDs and TMDs also have a relation to the same forward parton distributions (see also ref.~\cite{Meissner:2007rx}). Our main interest was to investigate nontrivial relations between GPDs and TMDs which have been obtained in models and extensively discussed in the recent literature~\cite{Burkardt:2002ks,Burkardt:2003uw,Burkardt:2003je,Diehl:2005jf,Burkardt:2005hp,Lu:2006kt,Meissner:2007rx,Pasquini:2008ax}. We have restricted this study to leading twist, where three nontrivial relations have been found (see~\cite{Meissner:2007rx} for an overview) --- two involving the T-odd Sivers TMD $f_{1T}^\bot$ and the Boer-Mulders TMD $h_1^\bot$, and one in which the T-even pretzelosity TMD $h_{1T}^\bot$ shows up. It turns out that none of these relations can be promoted to a model-independent status as the respective functions are related to different (linear combinations of) GTMDs. For the relations containing T-odd TMDs this finding agrees with ref.~\cite{Meissner:2007rx}, where it has been argued that these nontrivial relations between GPDs and TMDs are likely to break down even in spectator models if the parton distributions are evaluated to higher order in perturbation theory. Moreover, our model-independent study for the Boer-Mulders function of a spin-0 hadron came to the same conclusion~\cite{Meissner:2008ay}. We emphasize that our finding says nothing about the numerical violation of (model-dependent) nontrivial relations between GPDs and TMDs. On the other hand, such relations have hardly any predictive power, and only after all the involved distributions have been measured can one really judge their quality. \acknowledgments The work has partially been supported by the Verbundforschung ``Hadronen und Kerne'' of the BMBF and by the Deutsche Forschungsgemeinschaft (DFG).
\\[0.3cm] \noindent \textbf{Notice:} Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
\section{Introduction}\label{sect:introduction} In this paper we consider the nonlinear scalar field equation with an $L^2$ constraint: \begin{linenomath*} \begin{equation*}\tag{$P_m$}\label{problem} \left\{ \begin{aligned} -\Delta u&=f(u)-\mu u\quad\text{in}~\mathbb{R}^N,\\ \|u\|^2_{L^2(\mathbb{R}^N)}&=m,\\ u&\in H^1(\mathbb{R}^N). \end{aligned} \right. \end{equation*} \end{linenomath*} Here $N\geq1$, $f\in C(\mathbb{R},\mathbb{R})$, $m>0$ is a given constant and $\mu\in\mathbb{R}$ will arise as a Lagrange multiplier. In particular, $\mu\in\mathbb{R}$ does depend on the solution $u \in H^1(\mathbb{R}^N)$ and is not a priori given. The main feature of \eqref{problem} is that the desired solutions have an a priori prescribed $L^2$-norm. Solutions of this type are often referred to as \emph{normalized solutions}. A strong motivation to study problem \eqref{problem} is that it naturally arises in the search for standing waves of Schr\"{o}dinger type equations of the form \begin{linenomath*} \begin{equation}\label{eq:equation-evolution} i \psi_t + \Delta \psi + g(|\psi|^2)\psi =0, \qquad \psi : \mathbb{R}_+ \times \mathbb{R}^N \to \mathbb{C}. \end{equation} \end{linenomath*} Here, by standing waves, we mean solutions of \eqref{eq:equation-evolution} of the special form $\psi(t,x) = e^{i\mu t} u(x)$ with $\mu \in \mathbb{R}$ and $u \in H^1(\mathbb{R}^N)$. The study of such equations, which was already strongly motivated thirty years ago (see \cite{Be83-1,Lions84-1,Lions84-2}), now lies at the root of several models directly linked with current applications (such as nonlinear optics, the theory of water waves, ...). For these equations, finding solutions with a prescribed $L^2$-norm is particularly relevant since this quantity is preserved along the time evolution. In that direction we refer, in particular, to \cite{C03,CL82,HaSt04,Sh14}. See also the very recent work \cite{St19}.
Under mild assumptions on $f$, it is possible to define the $C^1$ functional $I: H^1(\mathbb{R}^N)\to\mathbb{R}$ by \begin{linenomath*} \begin{equation*}\label{eq:functional} I(u):=\frac{1}{2}\int_{\mathbb{R}^N}|\nabla u|^2dx-\int_{\mathbb{R}^N}F(u)dx, \end{equation*} \end{linenomath*} where $F(t):=\int^t_0f(\tau)d\tau$ for $t\in\mathbb{R}$. Clearly then, solutions of \eqref{problem} can be characterized as critical points of $I$ restricted to the constraint \begin{linenomath*} \begin{equation*}\label{eq:constraint} S_m:=\left\{u\in H^1(\mathbb{R}^N)~|~\|u\|^2_{L^2(\mathbb{R}^N)}=m\right\}. \end{equation*} \end{linenomath*} For future reference, the value $I(u)$ is called the \emph{energy} of $u$. It is well known that the study of \eqref{problem}, and the type of results one can expect, depend on the behavior of the nonlinearity $f$ at infinity. In particular, this behavior determines whether $I$ is bounded from below on $S_m$. One speaks of a mass subcritical case if $I$ is bounded from below on $S_m$ for any $m>0$, and of a mass supercritical case if $I$ is unbounded from below on $S_m$ for any $m>0$. One also refers to a mass critical case when the boundedness from below depends on the value of $m>0$. In this paper we focus on mass subcritical cases and we refer to the papers \cite{BDV13,BS17, Je97} for results in the mass supercritical cases. The study of the constrained problem \eqref{problem}, or of related ones, in the mass subcritical case, which started with the work of C.A. Stuart in the eighties \cite{St82}, saw a major advance with the work of P.L. Lions \cite{Lions84-1,Lions84-2} on the concentration-compactness principle. It is still the object of intense activity today. We refer, in particular, to \cite{CL82, CJS10,HaSt04,JS11,Sh14,St19}.
In these works the authors are mainly interested in the existence of \emph{ground states}, namely of solutions to \eqref{problem} which can be characterized as minimizers of $I$ among all the solutions. Emphasis is also placed on the issue of stability of these solutions, as standing waves of \eqref{eq:equation-evolution}. This is done either, following the strategy laid down in \cite{CL82}, by showing that any minimizing sequence of $I$ on $S_m$ is precompact up to translations \cite{HaSt04,Sh14} or by using more analytic approaches \cite{St19}. Arguably, as far as the existence of ground states and their orbital stability is concerned, the most general result is contained in \cite{Sh14}. Concerning the existence of more than one solution, the particular case $f(u) = |u|^{\sigma}u$ with $0 < \sigma < 4/N$ and $N \geq 2$ was considered in \cite{Je92} where infinitely many \emph{radial solutions} (with negative energies) were obtained. For the general result we refer to the recent paper \cite{HT18} by Hirata and Tanaka which still concerns radial solutions. At the end of this paper we shall present the multiplicity result of \cite{HT18} in some detail and show that the method we develop in this paper can be used to give an alternative, shorter proof of it in a slightly more general setting. Our aim in the present work is to make further progress in the understanding of the set of solutions to \eqref{problem}. Roughly speaking, when $N\geq4$, we derive existence and multiplicity results for \emph{nonradial solutions} to \eqref{problem}. We assume that the nonlinearity $f$ satisfies \begin{itemize} \item[$(f1)$] $f\in C(\mathbb{R},\mathbb{R})$, \item[$(f2)$] $\lim_{t\rightarrow0}f(t)/t=0$, \item[$(f3)$] $\lim_{t\rightarrow\infty}f(t)/|t|^{q-1}=0$ for some $q < 2^*$ and $\limsup_{t\rightarrow \pm\infty}f(t)t/|t|^{2+4/N}\leq 0$, \item[$(f4)$] there exists $\zeta>0$ such that $F(\zeta)>0$, \item[$(f5)$] $f(-t)=-f(t)$ for all $t\in\mathbb{R}$.
\end{itemize} We shall also make use of the following condition \begin{linenomath*} \begin{equation}\label{eq:f_key1} \lim_{t\to0}\frac{F(t)}{|t|^{2+\frac{4}{N}}}=+\infty, \end{equation} \end{linenomath*} which was originally introduced in \cite{Lions84-2}; see also \cite{Sh14}. As a simple example of a nonlinearity satisfying $(f1)-(f5)$ (and also \eqref{eq:f_key1}) we have \begin{linenomath} \begin{equation*} f(t)=|t|^{p-2}t-|t|^{q-2}t\qquad\text{with}~ 2< p <2+\frac{4}{N} < q <2^*. \end{equation*} \end{linenomath} To state our results, we introduce some notation. Assume that $N\geq4$ and $2\leq M\leq N/2$. Let us fix $\tau\in \mathcal{O}(N)$ such that $\tau(x_1,x_2,x_3)=(x_2,x_1,x_3)$ for $x_1,x_2\in\mathbb{R}^M$ and $x_3\in\mathbb{R}^{N-2M}$, where $x=(x_1,x_2,x_3)\in\mathbb{R}^N=\mathbb{R}^M\times\mathbb{R}^M\times\mathbb{R}^{N-2M}$. We define \begin{linenomath*} \begin{equation*} X_\tau:=\left\{u\in H^1(\mathbb{R}^N)~|~u(\tau x)=-u(x)~\text{for all}~x\in\mathbb{R}^N\right\}. \end{equation*} \end{linenomath*} It is clear that $X_\tau$ does not contain nontrivial radial functions. Let $H^1_{\mathcal{O}_1}(\mathbb{R}^N)$ denote the subspace of invariant functions with respect to $\mathcal{O}_1$, where $\mathcal{O}_1:=\mathcal{O}(M)\times\mathcal{O}(M)\times \text{id}\subset \mathcal{O}(N)$ acts isometrically on $H^1(\mathbb{R}^N)$. We also consider $\mathcal{O}_2:=\mathcal{O}(M)\times\mathcal{O}(M)\times\mathcal{O}(N-2M)\subset \mathcal{O}(N)$ acting isometrically on $H^1(\mathbb{R}^N)$ with the subspace of invariant functions denoted by $H^1_{\mathcal{O}_2}(\mathbb{R}^N)$. Here we agree that the components corresponding to $N-2M$ do not exist when $N=2M$. Clearly, $H^1_{\mathcal{O}_2}(\mathbb{R}^N)$ is in general a subspace of $H^1_{\mathcal{O}_1}(\mathbb{R}^N)$, but $H^1_{\mathcal{O}_2}(\mathbb{R}^N)= H^1_{\mathcal{O}_1}(\mathbb{R}^N)$ when $N=2M$.
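Returning for a moment to the model nonlinearity above, let us sketch why $(f3)$ and \eqref{eq:f_key1} hold for it. Since $F(t)=|t|^p/p-|t|^q/q$ and $p<2+4/N<q$, we have \begin{linenomath*} \begin{equation*} \frac{F(t)}{|t|^{2+\frac{4}{N}}}=\frac{|t|^{p-2-\frac{4}{N}}}{p}-\frac{|t|^{q-2-\frac{4}{N}}}{q}\to+\infty\qquad\text{as}~t\to0, \end{equation*} \end{linenomath*} because the first exponent is negative and the second is positive; this gives \eqref{eq:f_key1}. Similarly, $f(t)t/|t|^{2+4/N}=|t|^{p-2-4/N}-|t|^{q-2-4/N}\to-\infty$ as $|t|\to\infty$, and $f(t)/|t|^{q'-1}\to0$ as $t\to\infty$ for any fixed $q'\in(q,2^*)$, so $(f3)$ holds with $q'$ in place of $q$. The remaining conditions $(f1)$, $(f2)$, $(f4)$ and $(f5)$ are immediate.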
\smallskip For notational convenience, we set $X_1:= H^1_{\mathcal{O}_1}(\mathbb{R}^N)\cap X_\tau$ and $X_2:= H^1_{\mathcal{O}_2}(\mathbb{R}^N)\cap X_\tau$. Our first main result concerns the existence of one nonradial solution to \eqref{problem}. \begin{theorem}\label{theorem:nonradialsolution} Assume that $N\geq4$ and $f$ satisfies $(f1)-(f5)$. Define \begin{linenomath*} \begin{equation*}\label{eq:infimum1} E_m:=\inf_{u\in S_m\cap X_1}I(u). \end{equation*} \end{linenomath*} Then $E_m > - \infty$ and the mapping $m \mapsto E_m$ is nonincreasing and continuous. Moreover \begin{itemize} \item[$(i)$] there exists a uniquely determined number $m^*\in[0,\infty)$ such that \begin{linenomath*} \begin{equation*} E_m=0\quad\text{if}~0<m\leq m^*,\qquad E_m<0\quad\text{when}~m>m^*; \end{equation*} \end{linenomath*} \item[$(ii)$] when $m>m^*$, the infimum $E_m$ is reached and thus \eqref{problem} has one nonradial solution $w\in X_1$ such that $I(w)=E_m$; \item[$(iii)$] when $0 < m <m^*$, $E_m$ is not reached; \item[$(iv)$] $m^*=0$ if in addition \eqref{eq:f_key1} holds. \end{itemize} \end{theorem} Our second main result concerns the multiplicity of nonradial solutions to \eqref{problem}. Let $\Sigma(S_m\cap X_2)$ be the family of closed symmetric subsets of $S_m\cap X_2$, that is \begin{linenomath*} \begin{equation*} \Sigma(S_m\cap X_2):= \big\{ A \subset S_m\cap X_2 ~|~A~\text{is closed},~-A= A\big\}, \end{equation*} \end{linenomath*} and denote by $\mathcal{G}(A)$ the genus of $A\in \Sigma(S_m\cap X_2)$. For the definition of the genus and its basic properties, one may refer to Section \ref{sect:minimaxtheorem}. \begin{theorem}\label{theorem:nonradialsolutions} Assume that $N\geq4$, $N-2M\neq1$ and $f$ satisfies $(f1)-(f5)$. Define the minimax values \begin{linenomath*} \begin{equation*} E_{m,k}:=\inf_{A\in \Gamma_{m,k}}\sup_{u\in A}I(u), \end{equation*} \end{linenomath*} where $\Gamma_{m,k}:=\{A\in \Sigma(S_m\cap X_2)~|~\mathcal{G}(A)\geq k\}$. 
Then the following statements hold. \begin{itemize} \item[$(i)$] $-\infty<E_{m,k}\leq E_{m,k+1}\leq0$ for all $m>0$ and $k\in\mathbb{N}$. \item[$(ii)$] For any $k\in\mathbb{N}$, the mapping $m\mapsto E_{m,k}$ is nonincreasing and continuous. \item[$(iii)$] For each $k\in\mathbb{N}$, there exists a uniquely determined $m_k\in[0,\infty)$ such that \begin{linenomath*} \begin{equation*} E_{m,k}=0\quad\text{if}~0<m\leq m_k,\qquad E_{m,k}<0\quad\text{when}~m>m_k. \end{equation*} \end{linenomath*} When $m>m_k$, \eqref{problem} has $k$ distinct nonradial solutions belonging to $X_2$ and associated to the levels $E_{m,j}$ ($j=1,2, \cdots, k$). \item[$(iv)$] If in addition \eqref{eq:f_key1} holds, then $m_k =0$ for any $k \in \mathbb{N}$ and thus \eqref{problem} has infinitely many nonradial solutions $\{w_k\}^\infty_{k=1}\subset X_2$ for all $m>0$. In particular, $I(w_k)=E_{m,k}<0$ for each $k\in\mathbb{N}$ and $I(w_k)\to0$ as $k\to\infty$. \end{itemize} \end{theorem} The question of the existence of nonradial solutions to the free equation \begin{linenomath*} \begin{equation}\label{eq:equation-free} - \Delta u = f(u) - \mu u, \qquad u \in H^1(\mathbb{R}^N) \end{equation} \end{linenomath*} was raised in \cite[Section 10.8]{Be83-2} and remained open for a long time. Partial results, namely for specific nonlinearities $f$, were first obtained by Bartsch and Willem \cite{Ba93} (we refer to \cite{Lions86} if nonradial complex solutions are of interest to the reader). The authors worked in dimensions $N=4$ and $N \geq 6$ assuming an Ambrosetti-Rabinowitz type condition. Actually the idea of considering subspaces of $H^1(\mathbb{R}^N)$ such as $H^1_{\mathcal{O}_2}(\mathbb{R}^N)$ originates from \cite{Ba93}. Note also the work \cite{Lo04} in which the problem is solved when $N=5$ by introducing the $\mathcal{O}_1$ action on $H^1(\mathbb{R}^N)$.
Finally it was only very recently that, under general assumptions on $f$, a positive answer to the existence and multiplicity of nonradial solutions was given \cite{Me17}. Note also that in \cite{JL18} the authors gave an alternative proof of the results of \cite{Me17} with more elementary arguments. All these results however consider the equation \eqref{eq:equation-free} without prescribing the $L^2$-norm of the solutions. The present paper is, to our knowledge, the first to consider the existence of nonradial normalized solutions. We also observe that the nonradial solutions given by Theorems \ref{theorem:nonradialsolution} and \ref{theorem:nonradialsolutions} change sign. In sharp contrast to the unconstrained case \eqref{eq:equation-free}, where numerous results have been established, see for example \cite{Ba93_1,LW08,MPW12}, the existence of \emph{sign-changing solutions} has not yet been studied for $L^2$-constrained problems. Let us now give some ideas of the proofs of Theorems \ref{theorem:nonradialsolution} and \ref{theorem:nonradialsolutions}. To prove the multiplicity result stated in Theorem \ref{theorem:nonradialsolutions}, we work in the space $X_2:= H^1_{\mathcal{O}_2}(\mathbb{R}^N)\cap X_\tau$ and make use of classical minimax arguments (see Theorem \ref{theorem:minimax} below). Since $N \geq 4$ and $N-2M \neq 1$, we can benefit from the compact inclusion $X_2\hookrightarrow L^p(\mathbb{R}^N)$ for all $2<p<2N/(N-2)$. This result, which is due to P. L. Lions \cite{Lions82}, allows us to show that $I_{|S_m\cap X_2}$ satisfies the Palais-Smale condition at any level $c<0$, see Lemma \ref{lemma:PS}. We observe that the proof of Lemma \ref{lemma:PS}, and likely its conclusion, fail at levels $c\geq 0$. Thus another key point is to verify that the minimax levels $E_{m,k}$ are indeed negative for some or any $m>0$ and $k\in\mathbb{N}$.
Relying on the construction of some special mappings done in \cite{Be83-2,JL18, Me17}, we manage to do this in Lemmas \ref{lemma:geo2} and \ref{lemma:Emk}. Note also that to derive the existence of $m_k$ in Theorem \ref{theorem:nonradialsolutions} $(iii)$, we need Theorem \ref{theorem:nonradialsolutions} $(ii)$ which is proved in Lemma \ref{lemma:Emk}. For the proof of Theorem \ref{theorem:nonradialsolution}, we work in the space $X_1:= H^1_{\mathcal{O}_1}(\mathbb{R}^N)\cap X_\tau$. In the case where $N-2M=0$, we have $X_1=X_2$ (and automatically $N-2M\neq1$). Since then $E_m= E_{m,1}$ and $m^* = m_1$, the existence of a minimizer for $E_m$ when $m>m^*$ follows directly from Theorem \ref{theorem:nonradialsolutions}. When $N-2M\neq0$, the inclusion $X_1\hookrightarrow L^p(\mathbb{R}^N)$ is not compact for any $2<p<2N/(N-2)$ and the Palais-Smale condition no longer holds. To derive our existence result in this case, using concentration-compactness type arguments, we carefully study the behavior of the minimizing sequences of $E_m$. Here again it is essential to know in advance that the suspected critical level is negative. \begin{remark}\label{remark:stability} While the stability of the ground states, as studied for example in \cite{HaSt04, Sh14, St19}, is by now relatively well understood, the issue of the orbital stability, or more likely orbital instability, of the other critical points of $I$ restricted to $S_m$ is still completely open. We believe an interesting but challenging question would be to prove that the solution obtained in Theorem \ref{theorem:nonradialsolution}, which enjoys a well defined variational characterization, is orbitally unstable. \end{remark} \begin{remark}\label{remark:extension} Taking advantage of an idea first introduced in \cite{Be83-1}, it is possible to find solutions to \eqref{problem} under $(f1)-(f5)$ when $(f3)$ is replaced by the more general condition \begin{itemize} \item[$(f3)'$] $\limsup_{t\rightarrow +\infty}f(t)/t^{1+4/N} \leq 0$.
\end{itemize} Indeed, assume that $f$ satisfies $(f1)$, $(f2)$, $(f3)'$, $(f4)$ and $(f5)$. If $f(t)\geq0$ for all $t\geq\zeta$, then $f$ satisfies $(f1)-(f5)$. Otherwise, we set \begin{linenomath*} \begin{equation*} \zeta_1:=\inf\{t\geq\zeta~|~f(t)=0\}\qquad\text{and}\qquad \widetilde{f}(t):=\left\{ \begin{aligned} &f(t),~&\text{for}~|t|\leq \zeta_1,\\ &0,&\text{for}~|t|>\zeta_1.\\ \end{aligned} \right. \end{equation*} \end{linenomath*} Clearly $\widetilde{f}$ satisfies $(f1)-(f5)$. Also, for any couple $(u, \mu)\in S_m \times \mathbb{R}_+$ satisfying \begin{linenomath*} \begin{equation*} -\Delta{u}= \widetilde{f}(u)-\mu u\quad\text{in}~\mathbb{R}^N, \end{equation*} \end{linenomath*} the strong maximum principle tells us that $|u(x)|\leq \zeta_1$ for all $x\in\mathbb{R}^N$ and so $u \in S_m$ actually satisfies $- \Delta u = f(u) - \mu u$ in $\mathbb{R}^N$. Applying Theorems \ref{theorem:nonradialsolution} and \ref{theorem:nonradialsolutions} with $\widetilde{f}$ and noting that the Lagrange multipliers associated to the solutions obtained by these theorems belong to $\mathbb{R}_+$ (see the proof of Lemma \ref{lemma:PS}), we thus obtain existence and multiplicity results for \eqref{problem}. Note however that under $(f3)'$ the functional $I$ is in general no longer well defined and so there is no direct connection between our solutions and the evolution equation \eqref{eq:equation-evolution}. \end{remark} \begin{remark}\label{remark:partially_radial} Since we work in the spaces $X_{1}$ and $X_{2}$, the nonradial solutions we obtain are still partially radial, that is, radial with respect to certain groups of directions. Actually, in order to keep a minimal amount of compactness, we find it necessary to impose that the subspaces in which radial symmetry is preserved are at least two dimensional. This implies the condition $M \geq 2$ and in turn that $N \geq 4$.
In addition, in the definition of $X_1$ and $X_2$, we impose through $X_{\tau}$ a parity property with respect to a certain ``diagonal''. It is this oddness which ensures that the solutions are not globally radial. As an open problem, it would be interesting to ask whether there exist nonradial solutions of \eqref{problem} having less symmetry or directly living in $\mathbb{R}^2$ or $\mathbb{R}^3$. \end{remark} The paper is organized as follows. In Section \ref{sect:minimaxtheorem} we present the version of the minimax theorem that will be used in the proof of Theorem \ref{theorem:nonradialsolutions}. Section \ref{sect:preliminaries} establishes some key technical points to be used in the proofs of the main results. In Section \ref{sect:proofs} we prove Theorems \ref{theorem:nonradialsolution} and \ref{theorem:nonradialsolutions}. Finally, with the approach used to prove Theorem \ref{theorem:nonradialsolutions}, we prove in Section \ref{sect:theoremB} a slight extension of the multiplicity result due to Hirata and Tanaka \cite[Theorem 0.2]{HT18}. \section{A minimax theorem}\label{sect:minimaxtheorem} In this section, we present a minimax theorem for a class of constrained even functionals. Let us point out that closely related results do exist in the literature; see, in particular, \cite[Section 8]{Be83-2}, \cite{Ra86}, \cite{Sz88} and \cite[Chapter 5]{Wi96}. The present version is well suited to deal with the nonlinear scalar field equations considered in this paper. To formulate the minimax theorem, we need some notation. Let $\mathcal{E}$ be a real Banach space with norm $\|\cdot\|_\mathcal{E}$ and $\mathcal{H}$ be a real Hilbert space with inner product $(\cdot,\cdot)_\mathcal{H}$. We identify $\mathcal{H}$ with its dual space and assume that $\mathcal{E}$ is embedded continuously in $\mathcal{H}$.
For any $m>0$, define the manifold \begin{linenomath*} \begin{equation*} \mathcal{M}:=\{u\in \mathcal{E}~|~(u,u)_\mathcal{H}=m\}, \end{equation*} \end{linenomath*} which is endowed with the topology inherited from $\mathcal{E}$. Clearly, the tangent space of $\mathcal{M}$ at a point $u\in\mathcal{M}$ is defined by \begin{linenomath*} \begin{equation*} T_u\mathcal{M}:=\{v\in \mathcal{E}~|~(u,v)_\mathcal{H}=0\}. \end{equation*} \end{linenomath*} If $I\in C^1(\mathcal{E},\mathbb{R})$, then $I_{|\mathcal{M}}$ is a functional of class $C^1$ on $\mathcal{M}$. The norm of the derivative of $I_{|\mathcal{M}}$ at any point $u\in\mathcal{M}$ is defined by \begin{linenomath*} \begin{equation*} \|I_{|\mathcal{M}}'(u)\|:=\sup_{\|v\|_\mathcal{E}\leq 1,~v\in T_u\mathcal{M}}|\langle I'(u),v\rangle|. \end{equation*} \end{linenomath*} A point $u\in\mathcal{M}$ is said to be a critical point of $I_{|\mathcal{M}}$ if $I'_{|\mathcal{M}}(u)=0$ (or, equivalently, $\|I_{|\mathcal{M}}'(u)\|=0$). A number $c\in\mathbb{R}$ is called a critical value of $I_{|\mathcal{M}}$ if $I_{|\mathcal{M}}$ has a critical point $u\in\mathcal{M}$ such that $c=I(u)$. We say that $I_{|\mathcal{M}}$ satisfies the Palais-Smale condition at a level $c\in\mathbb{R}$, $(PS)_c$ for short, if any sequence $\{u_n\}\subset \mathcal{M}$ with $I(u_n)\to c$ and $\|I'_{|\mathcal{M}}(u_n)\|\to 0$ contains a convergent subsequence. Noting that $\mathcal{M}$ is symmetric with respect to $0\in\mathcal{E}$ and $0\notin\mathcal{M}$, we introduce the notion of genus. Let $\Sigma(\mathcal{M})$ be the family of closed symmetric subsets of $\mathcal{M}$. For any nonempty set $A\in \Sigma(\mathcal{M})$, the genus $\mathcal{G}(A)$ of $A$ is defined as the least integer $k\geq1$ for which there exists an odd continuous mapping $\varphi:A\to\mathbb{R}^k\setminus\{0\}$. We set $\mathcal{G}(A)=\infty$ if such an integer does not exist, and set $\mathcal{G}(A)=0$ if $A=\emptyset$.
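To illustrate the definition with a standard example: if $A\in\Sigma(\mathcal{M})$ is the image of the unit sphere $\mathbb{S}^{k-1}\subset\mathbb{R}^k$ under an odd homeomorphism $h:\mathbb{S}^{k-1}\to A$, then $\mathcal{G}(A)=k$. Indeed, the inverse $h^{-1}:A\to\mathbb{S}^{k-1}\subset\mathbb{R}^k\setminus\{0\}$ is odd and continuous, whence $\mathcal{G}(A)\leq k$; conversely, an odd continuous mapping $\varphi:A\to\mathbb{R}^j\setminus\{0\}$ with $j<k$ would produce the odd continuous mapping $\varphi\circ h:\mathbb{S}^{k-1}\to\mathbb{R}^j\setminus\{0\}$, which is impossible by the Borsuk-Ulam theorem, whence $\mathcal{G}(A)\geq k$.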
For each $k\in\mathbb{N}$, let $\Gamma_k:=\{A\in \Sigma(\mathcal{M})~|~\mathcal{G}(A)\geq k\}$. We now state the minimax theorem and then give a detailed proof for completeness. \begin{theorem}[Minimax theorem]\label{theorem:minimax} Let $I:\mathcal{E}\to\mathbb{R}$ be an even functional of class $C^1$. Assume that $I_{|\mathcal{M}}$ is bounded from below and satisfies the $(PS)_c$ condition for all $c<0$, and that $\Gamma_k \neq \emptyset $ for each $k \in \mathbb{N}$. Then a sequence of minimax values $-\infty<c_1\leq c_2\leq\cdots\leq c_k\leq \cdots$ can be defined as follows: \begin{linenomath*} \begin{equation*} c_k:=\inf_{A\in\Gamma_k}\sup_{u\in A}I(u),\qquad k\geq1, \end{equation*} \end{linenomath*} and the following statements hold. \begin{itemize} \item[$(i)$] $c_k$ is a critical value of $I_{|\mathcal{M}}$ provided $c_k<0$. \item[$(ii)$] Denote by $K^c$ the set of critical points of $I_{|\mathcal{M}}$ at a level $c\in\mathbb{R}$. If \begin{linenomath*} \begin{equation*} c_k=c_{k+1}=\cdots=c_{k+l-1}=:c<0\qquad\text{for some}~k,l\geq1, \end{equation*} \end{linenomath*} then $\mathcal{G}(K^c)\geq l$. In particular, $I_{|\mathcal{M}}$ has infinitely many critical points at the level $c$ if $l\geq2$. \item[$(iii)$] If $c_k<0$ for all $k\geq1$, then $c_k\to0^-$ as $k\to\infty$. \end{itemize} \end{theorem} To prove Theorem \ref{theorem:minimax}, we shall need some basic properties of the genus. For $A\subset\mathcal{M}$ and $\delta>0$, denote by $A_\delta$ the uniform $\delta$-neighborhood of $A$ in $\mathcal{M}$, that is, \begin{linenomath*} \begin{equation*} A_\delta:=\{u\in\mathcal{M}~|~\inf_{v\in A}\|u-v\|_\mathcal{E}\leq\delta\}. \end{equation*} \end{linenomath*} Since $\mathcal{M}$ is a closed symmetric subset of $\mathcal{E}$, repeating the arguments in \cite[Section 7]{Ra86}, one can get Proposition \ref{proposition:genus} below which is sufficient for our use. \begin{proposition}\label{proposition:genus} Let $A,B\in\Sigma(\mathcal{M})$. 
Then the following statements hold. \begin{itemize} \item[$(i)$] If $\mathcal{G}(A)\geq2$, then $A$ contains infinitely many distinct points. \item[$(ii)$] $\mathcal{G}(\overline{A\setminus B})\geq \mathcal{G}(A)-\mathcal{G}(B)$ if $\mathcal{G}(B)<\infty$. \item[$(iii)$] If there exists an odd continuous mapping $\psi:\mathbb{S}^{k-1}\to A$, then $\mathcal{G}(A)\geq k$. \item[$(iv)$] If $A$ is compact, then $\mathcal{G}(A)<\infty$ and there exists $\delta>0$ such that $A_\delta\in\Sigma(\mathcal{M})$ and $\mathcal{G}(A_\delta)=\mathcal{G}(A)$. \end{itemize} \end{proposition} We shall also need the following quantitative deformation lemma whose proof is similar to that of \cite[Lemma 2.3]{Wi96}. For $c<d$, set $I^c_{|\mathcal{M}}:=\{u\in\mathcal{M}~|~I(u)\leq c\}$ and $I^{-1}_{|\mathcal{M}}([c,d]):=\{u\in\mathcal{M}~|~c\leq I(u)\leq d\}$. \begin{lemma}\label{lemma:QDL} Assume $I_{|\mathcal{M}}\in C^1(\mathcal{M},\mathbb{R})$. Let $S\subset\mathcal{M}$, $c\in\mathbb{R}$, $\varepsilon>0$ and $\delta>0$ such that \begin{linenomath*} \begin{equation}\label{eq:QDL} \|I'_{|\mathcal{M}}(u)\|\geq \frac{8\varepsilon}{\delta}\qquad\text{for all}~u\in I^{-1}_{|\mathcal{M}}([c-2\varepsilon,c+2\varepsilon])\cap S_{2\delta}. \end{equation} \end{linenomath*} Then there exists a mapping $\eta\in C([0,1]\times \mathcal{M},\mathcal{M})$ such that \begin{itemize} \item[$(i)$] $\eta(t,u)=u$ if $t=0$ or if $u\not\in I^{-1}_{|\mathcal{M}}([c-2\varepsilon,c+2\varepsilon])\cap S_{2\delta}$, \item[$(ii)$] $\eta(1,I^{c+\varepsilon}_{|\mathcal{M}}\cap S)\subset I^{c-\varepsilon}_{|\mathcal{M}}$, \item[$(iii)$] $I(\eta(t,u))$ is nonincreasing in $t\in[0,1]$ for any $u\in\mathcal{M}$, \item[$(iv)$] $\eta(t,u)$ is odd in $u\in\mathcal{M}$ for any $t\in[0,1]$ if $I_{|\mathcal{M}}$ is even, \item[$(v)$] $\eta(t, \cdot)$ is a homeomorphism of $\mathcal{M}$ for each $t \in [0,1]$. 
\end{itemize} \end{lemma} With Proposition \ref{proposition:genus} and Lemma \ref{lemma:QDL} in hand, we can now prove Theorem \ref{theorem:minimax}. \medskip \noindent \textbf{Proof of Theorem \ref{theorem:minimax}.} Item $(i)$ is a special case of Item $(ii)$ when $l=1$, so we go straight to the proof of Item $(ii)$. Obviously, $K^c\in \Sigma(\mathcal{M})$ and $K^c$ is compact by the $(PS)_c$ condition. If $\mathcal{G}(K^c)\leq l-1$, by Proposition \ref{proposition:genus} $(iv)$, there exists $\delta>0$ such that \begin{linenomath*} \begin{equation*} \mathcal{G}(K^c_{3\delta})=\mathcal{G}(K^c)\leq l-1. \end{equation*} \end{linenomath*} We remark here that $K^c_{3\delta}=\emptyset$ if $K^c=\emptyset$. Let $S:=\overline{\mathcal{M}\setminus K^c_{3\delta}}\subset \mathcal{M}$. Clearly, there exists $\varepsilon>0$ small enough such that \eqref{eq:QDL} holds (if not, one will get a Palais-Smale sequence $\{u_n\}\subset S_{2\delta}$ of $I_{|\mathcal{M}}$ at the level $c<0$ and thus an element $v\in S_{2\delta}\cap K^c$ by the $(PS)_c$ condition, which leads a contradiction since $S_{2\delta}\cap K^c=\emptyset$). Therefore, Lemma \ref{lemma:QDL} yields a mapping $\eta\in C([0,1]\times\mathcal{M},\mathcal{M})$ such that \begin{linenomath*} \begin{equation*} \eta(1,I^{c+\varepsilon}_{|\mathcal{M}}\cap S)\subset I^{c-\varepsilon}_{|\mathcal{M}}\qquad\text{and}\qquad \eta(t,\cdot)~\text{is odd for all}~t\in[0,1]. \end{equation*} \end{linenomath*} Choose $A\in\Gamma_{k+l-1}$ such that $\sup_{u\in A}I(u)\leq c+\varepsilon$. It is clear that $\overline{A\setminus K^c_{3\delta}}\subset I^{c+\varepsilon}_{|\mathcal{M}}\cap S$ and thus \begin{linenomath*} \begin{equation}\label{eq:minimax} \eta(1,\overline{A\setminus K^c_{3\delta}})\subset\eta(1,I^{c+\varepsilon}_{|\mathcal{M}}\cap S)\subset I^{c-\varepsilon}_{|\mathcal{M}}. 
\end{equation} \end{linenomath*} On the other hand, since $\mathcal{G}(\overline{A\setminus K^c_{3\delta}})\geq \mathcal{G}(A)-\mathcal{G}(K^c_{3\delta})\geq k$ by Proposition \ref{proposition:genus} $(ii)$, we have $\overline{A\setminus K^c_{3\delta}}\in\Gamma_k$ and then $\eta(1,\overline{A\setminus K^c_{3\delta}})\in\Gamma_k$. Now, by the definition of $c_k$ and \eqref{eq:minimax}, we get a contradiction: \begin{linenomath*} \begin{equation*} c=c_k\leq \sup_{u\in\eta(1,\overline{A\setminus K^c_{3\delta}})}I(u)\leq c-\varepsilon. \end{equation*} \end{linenomath*} Thus $\mathcal{G}(K^c)\geq l$. In view of Proposition \ref{proposition:genus} $(i)$, we complete the proof of Item $(ii)$. To prove Item $(iii)$, we assume by contradiction that there exists $c<0$ such that $c_k\leq c$ for all $k\geq 1$ and $c_k\to c$ as $k\to\infty$. By the $(PS)_c$ condition, $K^c$ is a (symmetric) compact set. Thus, by Proposition \ref{proposition:genus} $(iv)$, there exists $\delta>0$ such that \begin{linenomath*} \begin{equation*} \mathcal{G}(K^c_{3\delta})=\mathcal{G}(K^c)=:q<\infty. \end{equation*} \end{linenomath*} Let $S:=\overline{\mathcal{M}\setminus K^c_{3\delta}}\subset\mathcal{M}$. Since \eqref{eq:QDL} holds for small enough $\varepsilon>0$, we know from Lemma \ref{lemma:QDL} that a mapping $\eta\in C([0,1]\times\mathcal{M},\mathcal{M})$ exists such that $\eta(1,I^{c+\varepsilon}_{|\mathcal{M}}\cap S)\subset I^{c-\varepsilon}_{|\mathcal{M}}$ and $\eta(t,\cdot)$ is odd for any $t\in[0,1]$. Choose $k\geq1$ large enough such that $c_k>c-\varepsilon$ and take $A\in\Gamma_{k+q}$ such that $\sup_{u\in A}I(u)\leq c_{k+q}+\varepsilon$. Noting that $c_{k+q}\leq c$, we have $\overline{A\setminus K^c_{3\delta}}\subset I^{c+\varepsilon}_{|\mathcal{M}}\cap S$ and thus \begin{linenomath*} \begin{equation*} \eta(1,\overline{A\setminus K^c_{3\delta}})\subset\eta(1,I^{c+\varepsilon}_{|\mathcal{M}}\cap S)\subset I^{c-\varepsilon}_{|\mathcal{M}}. 
\end{equation*} \end{linenomath*} On the other hand, since $\mathcal{G}(\overline{A\setminus K^c_{3\delta}})\geq \mathcal{G}(A)-\mathcal{G}(K^c_{3\delta})\geq k$, we have $\overline{A\setminus K^c_{3\delta}}\in\Gamma_k$ and then $\eta(1,\overline{A\setminus K^c_{3\delta}})\in\Gamma_k$. We now reach a contradiction: \begin{linenomath*} \begin{equation*} c_k\leq \sup_{u\in\eta(1,\overline{A\setminus K^c_{3\delta}})}I(u)\leq c-\varepsilon, \end{equation*} \end{linenomath*} since $k$ was chosen large enough that $c_k>c-\varepsilon$. Thus $c_k\to0^-$ as $k\to\infty$.~~$\square$ To end this section, we recall a characterization result in \cite{Be83-2} which allows one to check the $(PS)_c$ condition in a convenient way. \begin{lemma}[{\cite[Lemma 3]{Be83-2}}]\label{lemma:characterization} Assume that $I:\mathcal{E}\to\mathbb{R}$ is of class $C^1$. Let $\{u_n\}$ be a sequence in $\mathcal{M}$ which is bounded in $\mathcal{E}$. Then the following are equivalent: \begin{itemize} \item[$(i)$] $\|I'_{|\mathcal{M}}(u_n)\|\to0$ as $n\to\infty$. \item[$(ii)$] $I'(u_n)-m^{-1}\langle I'(u_n),u_n\rangle u_n\to0$ in $\mathcal{E}^{-1}$ (the dual space of $\mathcal{E}$) as $n\to\infty$. \end{itemize} Here the last $u_n$ in $(ii)$ is an element of $\mathcal{E}^{-1}$ such that $\langle u_n,v\rangle:=(u_n,v)_\mathcal{H}$ for all $v\in\mathcal{E}$. \end{lemma} \section{Preliminary results}\label{sect:preliminaries} In this section we present some preliminary results. For later convenience but without loss of generality, the exponent $q$ appearing in $(f3)$ will be denoted by $q_*$ and always understood as $2<q_*<2^*$. The first technical result is Lemma \ref{lemma:geo1} below, which is a slightly modified version of \cite[Lemma 2.2]{Sh14}. \begin{lemma}\label{lemma:geo1} Assume that $N\geq1$ and $f$ satisfies $(f1)-(f3)$. Then the following statements hold. \begin{itemize} \item[$(i)$] Let $\{u_n\}$ be a bounded sequence in $H^1(\mathbb{R}^N)$.
We have \begin{linenomath*} \begin{equation*} \underset{n\to\infty}{\lim}\int_{\mathbb{R}^N}F(u_n)dx=0 \end{equation*} \end{linenomath*} if either $\lim_{n\to\infty}\|u_n\|_{L^2(\mathbb{R}^N)}=0$ or $\lim_{n\to\infty}\|u_n\|_{L^{q_*}(\mathbb{R}^N)}=0$. \item[$(ii)$] There exists $C=C(f,N,m)>0$ depending on $f$, $N$ and $m>0$ such that \begin{linenomath*} \begin{equation}\label{eq:geo1_1} I(u)\geq\frac{1}{4}\int_{\mathbb{R}^N}|\nabla u|^2dx-C(f,N,m) \end{equation} \end{linenomath*} for all $u\in H^1(\mathbb{R}^N)$ satisfying $\|u\|^2_{L^2(\mathbb{R}^N)}\leq m$. \end{itemize} \end{lemma} \begin{lemma}\label{lemma:BL} Assume that $N\geq2$, $\{u_n\}\subset H^1(\mathbb{R}^N)$ is a bounded sequence, and $u_n\to u$ almost everywhere in $\mathbb{R}^N$ for some $u\in H^1(\mathbb{R}^N)$. Let $F:\mathbb{R}\to\mathbb{R}$ be a function of class $C^1$ with $F(0)=0$. If \begin{itemize} \item[$(i)$] when $N=2$, for any $\alpha>0$, there exists $C_\alpha>0$ such that \begin{linenomath*} \begin{equation*} |F'(t)|\leq C_\alpha \left[|t|+\left(e^{\alpha t^2}-1\right)\right]\qquad\text{for all}~t\in\mathbb{R}. \end{equation*} \end{linenomath*} \item[$(ii)$] when $N\geq3$, there exists $C>0$ such that $|F'(t)|\leq C\left(|t|+|t|^{2^*-1}\right)$ for all $t\in\mathbb{R}$, \end{itemize} then \begin{linenomath*} \begin{equation}\label{eq:BL1} \lim_{n\to\infty}\int_{\mathbb{R}^N}\big|F(u_n)-F(u_n-u)-F(u)\big|dx=0. \end{equation} \end{linenomath*} \end{lemma} \proof We prove this lemma by applying the Brezis-Lieb Lemma (\cite[Theorem 2]{BL83}). Clearly, \begin{linenomath*} \begin{equation}\label{eq:BL2} |F(a+b)-F(a)|=\left|\int^1_0\frac{d}{d\tau}F(a+\tau b)d\tau\right|=\left|\int^1_0F'(a+\tau b)bd\tau\right|\qquad\text{for all}~a,b\in\mathbb{R}. \end{equation} \end{linenomath*} Let $\varepsilon>0$ be arbitrary. 
When $N=2$, by \eqref{eq:BL2}, $(i)$ and Young's inequality, one has \begin{linenomath*} \begin{equation*} \begin{split} |F(a+b)-F(a)| &\leq C_\alpha \int^1_0\left\{|a+\tau b|+\left[e^{\alpha \left(a+\tau b\right)^2}-1\right]\right\}|b|d\tau\\ &\leq C_\alpha\left[|a|+|b|+\left(e^{4\alpha a^2}-1\right)+\left(e^{4\alpha b^2}-1\right)\right]|b|\\ &\leq C_\alpha\left[\varepsilon a^2+\varepsilon^{-1}b^2+b^2+\varepsilon\left(e^{4\alpha a^2}-1\right)^2+\varepsilon^{-1} b^2+\left(e^{4\alpha b^2}-1\right)^2+b^2\right]\\ &\leq \varepsilon C_\alpha\left[a^2+\left(e^{8\alpha a^2}-1\right)\right]+C_\alpha\left[2\left(1+\varepsilon^{-1}\right)b^2+\left(e^{8\alpha b^2}-1\right)\right]\\ &=:\varepsilon\varphi(a)+\psi_\varepsilon(b). \end{split} \end{equation*} \end{linenomath*} In particular, $|F(b)|\leq \psi_\varepsilon(b)$ for all $b\in\mathbb{R}$. Choose $M\geq1$ sufficiently large and $\alpha>0$ small enough such that \begin{linenomath*} \begin{equation*} \|u_n\|_{H^1(\mathbb{R}^2)},~\|u_n-u\|_{H^1(\mathbb{R}^2)},~\|u\|_{H^1(\mathbb{R}^2)}\leq M \end{equation*} \end{linenomath*} and \begin{linenomath*} \begin{equation*} 8\alpha\leq \frac{\beta}{M^2}\qquad\text{for some}~\beta\in(0,4\pi). \end{equation*} \end{linenomath*} By the Moser-Trudinger inequality, we know that $\int_{\mathbb{R}^2}\varphi(u_n-u)dx$ is bounded uniformly in $\varepsilon$ and $n$, $\int_{\mathbb{R}^2}\psi_\varepsilon(u)dx<\infty$ for any $\varepsilon>0$, and $F(u)\in L^1(\mathbb{R}^2)$. In view of \cite[Theorem 2]{BL83}, we obtain \eqref{eq:BL1}. When $N\geq3$, by \eqref{eq:BL2}, $(ii)$ and Young's inequality, one has \begin{linenomath*} \begin{equation*} \begin{split} |F(a+b)-F(a)| &\leq C\int^1_0\left(|a+\tau b|+|a+\tau b|^{2^*-1}\right)|b|d\tau\\ &\leq C\left(|a|+2^{2^*}|a|^{2^*-1}+|b|+ 2^{2^*}|b|^{2^*-1}\right)|b|\\ &\leq \varepsilon C\left(a^2+|2a|^{2^*}\right)+C\left[\left(1+\varepsilon^{-1}\right)b^2+\left(1+\varepsilon^{1-2^*}\right)|2b|^{2^*}\right]\\ &=:\varepsilon\varphi(a)+\psi_\varepsilon(b). 
\end{split} \end{equation*} \end{linenomath*} Applying the Sobolev inequality and \cite[Theorem 2]{BL83}, one easily concludes that \eqref{eq:BL1} holds.~~$\square$ \begin{lemma}[{\cite[Corollary 3.2]{Me17}}]\label{lemma:lions} Assume that $N\geq4$ and $N-2M\neq0$. Let $\{u_n\}$ be a bounded sequence in $H^1_{\mathcal{O}_1}(\mathbb{R}^N)$ which satisfies \begin{linenomath*} \begin{equation}\label{eq:lions1} \lim_{r\to\infty}\left(\underset{n\to\infty}{\lim}\underset{y\in\{0\}\times\{0\}\times\mathbb{R}^{N-2M}}{\sup}\int_{B(y,r)}|u_n|^2dx\right)=0. \end{equation} \end{linenomath*} Then $u_n\to0$ in $L^p(\mathbb{R}^N)$ for any $2<p<2^*$. \end{lemma} \proof We give here a complete proof for the reader's convenience. By \cite[Lemma 1.21]{Wi96}, the proof will be complete if we can show that \begin{linenomath*} \begin{equation}\label{eq:lions2} \underset{n\to\infty}{\lim}\underset{y\in\mathbb{R}^N}{\sup}\int_{B(y,1)}|u_n|^2dx=0. \end{equation} \end{linenomath*} We assume by contradiction that \eqref{eq:lions2} does not hold. Thus, up to a subsequence, there exist $\delta>0$ and $\{y_n\}\subset \mathbb{R}^N$ such that \begin{linenomath*} \begin{equation}\label{eq:lions3} \int_{B(y_n,1)}|u_n|^2dx\geq \delta>0\qquad\text{for}~n\geq1~\text{large enough}. \end{equation} \end{linenomath*} Since $\{u_n\}$ is bounded in $L^2(\mathbb{R}^N)$ and invariant with respect to $\mathcal{O}_1$, in view of \eqref{eq:lions3}, we deduce that $\{|(y^1_n,y^2_n)|\}$ must be bounded. Indeed, if $|(y^1_n,y^2_n)|\to\infty$, then one would derive the existence of an arbitrarily large number of disjoint unit balls in the family $\{B(g^{-1}y_n,1)\}_{g\in\mathcal{O}_1}$. Thus, for sufficiently large $r$, we have \begin{linenomath*} \begin{equation*} \int_{B((0,0,y^3_n),r)}|u_n|^2dx\geq\int_{B(y_n,1)}|u_n|^2dx\geq \delta>0, \end{equation*} \end{linenomath*} which contradicts \eqref{eq:lions1}.
Therefore, \eqref{eq:lions2} is satisfied and the desired conclusion follows.~~$\square$ For any $k\in\mathbb{N}$, let $\mathbb{S}^{k-1}$ be the unit sphere in $\mathbb{R}^k$, i.e., \begin{linenomath*} \begin{equation*} \mathbb{S}^{k-1}:=\{\sigma\in\mathbb{R}^{k}~|~|\sigma|=1\}. \end{equation*} \end{linenomath*} Recall that $X_2:=H^1_{\mathcal{O}_2}(\mathbb{R}^N)\cap X_\tau$. To proceed further, we need \begin{lemma}\label{lemma:keymapping} Assume that $N\geq 4$ and $f$ is an odd continuous function satisfying $(f4)$. Then, for any $k\in\mathbb{N}$, there exists an odd continuous mapping $\pi_k:\mathbb{S}^{k-1}\to X_2\setminus\{0\}$ such that \begin{linenomath*} \begin{equation*} \inf_{\sigma\in\mathbb{S}^{k-1}}\int_{\mathbb{R}^N}F(\pi_k[\sigma])dx\geq1\qquad\text{and}\qquad \sup_{\sigma\in\mathbb{S}^{k-1}}\|\pi_k[\sigma]\|_{L^\infty(\mathbb{R}^N)}\leq 2\zeta. \end{equation*} \end{linenomath*} \end{lemma} \proof The first construction of such a mapping was done in \cite{JL18}, see \cite[Lemmas 4.2 and 4.3]{JL18}. For completeness we include here a shorter construction which borrows elements from \cite{Me17}. Fix $k\in\mathbb{N}$. In view of \cite[Theorem 10]{Be83-2} and \cite[Proof of Lemma 8]{Be83-2}, there exist constants $R(k)>2(k+1)$ and $c_k>0$ such that, for any $R\geq R(k)$, there exists an odd continuous mapping $\tau_{k,R}:\mathbb{S}^{k-1} \to H^1(\mathbb{R}^N)$ having the properties that $\tau_{k,R}[\sigma]$ is a radial function, $\text{supp}\big(\tau_{k,R}[\sigma]\big)\subset \overline{B}(0,R)$ for any $\sigma \in \mathbb{S}^{k-1}$, $\sup_{\sigma\in\mathbb{S}^{k-1}}\|\tau_{k,R}[\sigma]\|_{L^\infty(\mathbb{R}^N)} = \zeta$ and \begin{linenomath*} \begin{equation}\label{eq:tau} \inf_{\sigma\in \mathbb{S}^{k-1}}\int_{\mathbb{R}^N}F\big(\tau_{k,R}[\sigma]\big)dx \geq c_k R^N. \end{equation} \end{linenomath*} Let $\chi: \mathbb{R} \to [0,1]$ be an odd smooth function such that $\chi(t)=1$ for any $t \geq 1$. 
Following \cite[Remark 4.2]{Me17}, we define \begin{linenomath*} \begin{equation*} \pi_{k,R}[\sigma](x):=\tau_{k,R}[\sigma](x)\cdot \chi\big(|x_1|-|x_2|\big) \end{equation*} \end{linenomath*} where $\sigma \in \mathbb{S}^{k-1}$ and $x=(x_1,x_2,x_3) \in \mathbb{R}^M \times \mathbb{R}^M \times \mathbb{R}^{N-2M}$. Here, we agree that the component $x_3$ does not exist when $N=2M$. Clearly, $\pi_{k,R}$ is an odd continuous mapping from $\mathbb{S}^{k-1}$ to $X_2$, \begin{linenomath*} \begin{equation*} \sup_{\sigma \in \mathbb{S}^{k-1}}\|\pi_{k,R}[\sigma]\|_{L^\infty(\mathbb{R}^N)} \leq \zeta \qquad \text{and}\qquad \text{supp}\big(\pi_{k,R}[\sigma]\big)\subset \overline{B}(0,R) \quad\text{for any}~\sigma \in \mathbb{S}^{k-1}. \end{equation*} \end{linenomath*} Denote by $\omega_l$ the surface area of $\mathbb{S}^l$. Set $A := \max_{t \in [0,\zeta]}|F(t)|$, $\omega:=\omega_{N-2M-1}\omega^2_{M-1}$ and $r_i:=|x_i|$ for $i=1,2,3$. For any $\sigma\in \mathbb{S}^{k-1}$, it is not difficult to see that \begin{linenomath} \begin{equation}\label{eq:pi} \begin{split} \int_{\mathbb{R}^N}F\big(\pi_{k,R}[\sigma]\big) dx &= \omega \int_0^{\infty} \int_0^{\infty} \int_0^{\infty} F\big(\pi_{k,R}[\sigma]\big) r_1^{M-1} r_2^{M-1} r_3^{N-2M-1} dr_1 dr_2 dr_3\\ &= 2 \omega\int_0^{R} \int_0^{R} \int_{r_2}^{R} F\big(\pi_{k,R}[\sigma]\big) r_1^{M-1} r_2^{M-1} r_3^{N-2M-1} dr_1 dr_2 dr_3\\ &= 2 \omega\int_0^{R} \int_0^{R} \int_{r_2}^{R} F\big(\tau_{k,R}[\sigma]\big) r_1^{M-1} r_2^{M-1} r_3^{N-2M-1} dr_1 dr_2 dr_3\\ &\qquad -2 \omega\int_0^{R} \int_0^{R} \int_{r_2}^{r_2+1} F\big(\tau_{k,R}[\sigma]\big) r_1^{M-1} r_2^{M-1} r_3^{N-2M-1} dr_1 dr_2 dr_3\\ &\qquad + 2 \omega\int_0^{R} \int_0^{R} \int_{r_2}^{r_2+1} F\big(\pi_{k,R}[\sigma]\big) r_1^{M-1} r_2^{M-1} r_3^{N-2M-1} dr_1 dr_2 dr_3\\ &\geq \int_{\mathbb{R}^N} F\big(\tau_{k,R}[\sigma]\big) dx - 2\omega A C_N R^{N-1}, \end{split} \end{equation} \end{linenomath} where $C_N >0$ is a constant depending only on $N$.
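For the reader's convenience, let us indicate how the last inequality in \eqref{eq:pi} is obtained. Both correction terms are supported in the thin slab $r_2\leq r_1\leq r_2+1$, where the integrand is bounded by $A$ since $\|\tau_{k,R}[\sigma]\|_{L^\infty(\mathbb{R}^N)},\|\pi_{k,R}[\sigma]\|_{L^\infty(\mathbb{R}^N)}\leq\zeta$. Hence, for $R\geq1$, \begin{linenomath*} \begin{equation*} 2 \omega\int_0^{R} \int_0^{R} \int_{r_2}^{r_2+1} \big|F\big(\tau_{k,R}[\sigma]\big)\big| r_1^{M-1} r_2^{M-1} r_3^{N-2M-1} dr_1 dr_2 dr_3 \leq 2\omega A\int_0^{R}\int_0^{R} (r_2+1)^{M-1} r_2^{M-1} r_3^{N-2M-1} dr_2 dr_3\leq \omega A C_N R^{N-1}, \end{equation*} \end{linenomath*} since the remaining double integral is of order $R^{2M-1}\cdot R^{N-2M}=R^{N-1}$. The same bound holds with $\pi_{k,R}[\sigma]$ in place of $\tau_{k,R}[\sigma]$, and summing the two contributions yields the term $2\omega A C_N R^{N-1}$.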
Thus, in view of \eqref{eq:tau} and \eqref{eq:pi}, we obtain the desired mapping $\pi_k := \pi_{k,R}$ by taking $R> R(k)$ large enough.~~$\square$ \begin{lemma}\label{lemma:geo2} Assume that $N\geq4$ and $f$ satisfies $(f1)-(f5)$. Then, for any $m>0$ and $k\in\mathbb{N}$, there exists an odd continuous mapping $\gamma_{m,k}:\mathbb{S}^{k-1}\to S_m\cap X_2$. Moreover, the following statements hold. \begin{itemize} \item[$(i)$] For any $k\in\mathbb{N}$, there exists $m(k)> 0$ large enough such that \begin{linenomath*} \begin{equation}\label{eq:geo2_1} \sup_{\sigma\in\mathbb{S}^{k-1}}I(\gamma_{m,k}[\sigma])<0\qquad\text{for all}~m > m(k). \end{equation} \end{linenomath*} \item[$(ii)$] For any $s>0$, define $\gamma^s_{m,k}:\mathbb{S}^{k-1}\to S_m\cap X_2$ as follows: \begin{linenomath*} \begin{equation*} \gamma^s_{m,k}[\sigma](x):=s^{N/2}\gamma_{m,k}[\sigma](sx),\qquad x\in\mathbb{R}^N~\text{and}~\sigma\in \mathbb{S}^{k-1}. \end{equation*} \end{linenomath*} Then \begin{linenomath*} \begin{equation}\label{eq:geo2_2} \limsup_{s\to0^+}\left(\sup_{\sigma\in\mathbb{S}^{k-1}}I(\gamma^s_{m,k}[\sigma])\right)\leq 0. \end{equation} \end{linenomath*} If in addition \eqref{eq:f_key1} holds, then there exists $s_*>0$ small enough such that \begin{linenomath*} \begin{equation}\label{eq:geo2_3} \sup_{\sigma\in\mathbb{S}^{k-1}}I(\gamma^s_{m,k}[\sigma])< 0\qquad\text{for any}~s\in(0,s_*). \end{equation} \end{linenomath*} \end{itemize} \end{lemma} \proof Fix $m>0$ and $k\in\mathbb{N}$. Using the mapping $\pi_k$ obtained in Lemma \ref{lemma:keymapping}, we can define an odd continuous mapping $\gamma_{m,k}:\mathbb{S}^{k-1}\to S_m\cap X_2$ as follows: \begin{linenomath*} \begin{equation*} \gamma_{m,k}[\sigma](x):=\pi_k[\sigma]\left(m^{-1/N}\cdot \|\pi_k[\sigma]\|^{2/N}_{L^2(\mathbb{R}^N)}\cdot x\right),\qquad x\in\mathbb{R}^N~\text{and}~\sigma\in\mathbb{S}^{k-1}. 
\end{equation*} \end{linenomath*} Clearly, \begin{linenomath*} \begin{equation*} \sup_{\sigma\in \mathbb{S}^{k-1}}\|\gamma_{m,k}[\sigma]\|_{L^\infty(\mathbb{R}^N)}\leq 2\zeta. \end{equation*} \end{linenomath*} We next show that this mapping $\gamma_{m,k}$ satisfies Items $(i)$ and $(ii)$. $(i)$ Since $\mathbb{S}^{k-1}$ is compact and $0\notin \pi_k[\mathbb{S}^{k-1}]$, one can find $\alpha_k,\beta_k,\beta'_k>0$ independent of $\sigma\in\mathbb{S}^{k-1}$ such that \begin{linenomath*} \begin{equation*} \int_{\mathbb{R}^N}\bigl|\nabla\pi_k[\sigma]\bigr|^2dx\leq \alpha_k\qquad\text{and}\qquad \beta_k\leq\|\pi_k[\sigma]\|^2_{L^2(\mathbb{R}^N)}\leq \beta'_k. \end{equation*} \end{linenomath*} Thus, \begin{linenomath*} \begin{equation*} \begin{split} I(\gamma_{m,k}[\sigma])&=\frac{1}{2}\int_{\mathbb{R}^N}\bigl|\nabla \gamma_{m,k}[\sigma]\bigr|^2dx-\int_{\mathbb{R}^N}F(\gamma_{m,k}[\sigma])dx\\ &=\frac{m^{\frac{N-2}{N}}}{2\|\pi_k[\sigma]\|^{2(N-2)/N}_{L^2(\mathbb{R}^N)}}\int_{\mathbb{R}^N}\bigl|\nabla \pi_k[\sigma]\bigr|^2dx-\frac{m}{\|\pi_k[\sigma]\|^2_{L^2(\mathbb{R}^N)}}\int_{\mathbb{R}^N}F(\pi_k[\sigma])dx\\ &\leq \frac{1}{2} \alpha_k \beta^{(2-N)/N}_k\cdot m^{\frac{N-2}{N}}-\big(\beta'_k\big)^{-1}\cdot m =:g_k(m). \end{split} \end{equation*} \end{linenomath*} Clearly, $g_k(m)<0$ for sufficiently large $m>0$ and thus \eqref{eq:geo2_1} holds. $(ii)$ We first prove \eqref{eq:geo2_2}. Let $\varepsilon>0$ be arbitrary. By $(f2)$, there exists $\delta>0$ such that \begin{linenomath*} \begin{equation*} |F(t)|\leq\varepsilon t^2\qquad\text{for all}~|t|\leq\delta.
\end{equation*} \end{linenomath*} Noting that \begin{linenomath*} \begin{equation}\label{eq:geo2_4} \|\gamma^s_{m,k}[\sigma]\|_{L^\infty(\mathbb{R}^N)}\leq 2s^{N/2}\zeta\qquad\text{for all}~\sigma\in\mathbb{S}^{k-1}, \end{equation} \end{linenomath*} one can find $s(\varepsilon)>0$ small enough such that \begin{linenomath*} \begin{equation*} \sup_{\sigma\in\mathbb{S}^{k-1}}\|\gamma^s_{m,k}[\sigma]\|_{L^\infty(\mathbb{R}^N)}\leq \delta\qquad\text{for all}~0<s<s(\varepsilon). \end{equation*} \end{linenomath*} Therefore, for any $\sigma\in\mathbb{S}^{k-1}$ and $0<s<s(\varepsilon)$, we have \begin{linenomath*} \begin{equation*} \begin{split} I(\gamma^s_{m,k}[\sigma])&\leq \frac{1}{2}\int_{\mathbb{R}^N}\bigl|\nabla \gamma^s_{m,k}[\sigma]\bigr|^2dx+\int_{\mathbb{R}^N}\bigl|F(\gamma^s_{m,k}[\sigma])\bigr|dx\\ &\leq\frac{1}{2}s^2\int_{\mathbb{R}^N}\bigl|\nabla \gamma_{m,k}[\sigma]\bigr|^2dx+\varepsilon \int_{\mathbb{R}^N} \bigl|\gamma_{m,k}[\sigma]\bigr|^2dx\\ &=\frac{1}{2}s^2\int_{\mathbb{R}^N}\bigl|\nabla \gamma_{m,k}[\sigma]\bigr|^2dx+m\varepsilon. \end{split} \end{equation*} \end{linenomath*} Since $\mathbb{S}^{k-1}$ is compact, there exists $C>0$ (independent of $\varepsilon>0$ and $s>0$) such that \begin{linenomath*} \begin{equation*} \sup_{\sigma\in\mathbb{S}^{k-1}}\int_{\mathbb{R}^N}\bigl|\nabla \gamma_{m,k}[\sigma]\bigr|^2dx\leq C. \end{equation*} \end{linenomath*} Thus, for any $0<s<\min\bigl\{s(\varepsilon),(2\varepsilon/C)^{1/2}\bigr\}$, we obtain \begin{linenomath*} \begin{equation*} \sup_{\sigma\in\mathbb{S}^{k-1}}I(\gamma^s_{m,k}[\sigma])\leq \frac{1}{2}Cs^2+m\varepsilon\leq(m+1)\varepsilon. \end{equation*} \end{linenomath*} Clearly, it follows that \eqref{eq:geo2_2} holds. We now assume \eqref{eq:f_key1} and prove \eqref{eq:geo2_3}. 
For \begin{linenomath*} \begin{equation*} D:=\sup_{\sigma\in\mathbb{S}^{k-1}}\int_{\mathbb{R}^N}\bigl|\nabla \gamma_{m,k}[\sigma]\bigr|^2dx\biggm/\inf_{\sigma\in\mathbb{S}^{k-1}}\int_{\mathbb{R}^N}\bigl|\gamma_{m,k}[\sigma]\bigr|^{2+\frac{4}{N}}dx>0, \end{equation*} \end{linenomath*} by \eqref{eq:f_key1}, there exists a $\delta>0$ such that \begin{linenomath*} \begin{equation*} F(t)\geq D|t|^{2+\frac{4}{N}}\qquad\text{for all}~|t|<\delta. \end{equation*} \end{linenomath*} Also, in view of \eqref{eq:geo2_4}, one can find $s_*>0$ small enough such that \begin{linenomath*} \begin{equation*} \sup_{\sigma\in\mathbb{S}^{k-1}}\|\gamma^s_{m,k}[\sigma]\|_{L^\infty(\mathbb{R}^N)}\leq \delta\qquad\text{for all}~0<s<s_*. \end{equation*} \end{linenomath*} Thus, for any $\sigma\in\mathbb{S}^{k-1}$ and $0<s<s_*$, we have \begin{linenomath*} \begin{equation*} \begin{split} I(\gamma^s_{m,k}[\sigma])&\leq \frac{1}{2}\int_{\mathbb{R}^N}\bigl|\nabla \gamma^s_{m,k}[\sigma]\bigr|^2dx-D\int_{\mathbb{R}^N}\bigl|\gamma^s_{m,k}[\sigma]\bigr|^{2+\frac{4}{N}}dx\\ &=\frac{1}{2}s^2\int_{\mathbb{R}^N}\bigl|\nabla \gamma_{m,k}[\sigma]\bigr|^2dx- Ds^2\int_{\mathbb{R}^N}\bigl|\gamma_{m,k}[\sigma]\bigr|^{2+\frac{4}{N}}dx\\ &\leq-\frac{1}{2}s^2\sup_{\sigma\in\mathbb{S}^{k-1}}\int_{\mathbb{R}^N}\bigl|\nabla \gamma_{m,k}[\sigma]\bigr|^2dx<0, \end{split} \end{equation*} \end{linenomath*} which implies \eqref{eq:geo2_3}.~~$\square$ \begin{lemma}\label{lemma:PS} Assume that $N\geq4$, $N-2M\neq1$, and $f$ satisfies $(f1)-(f3)$ and $(f5)$. Then $I_{|S_m\cap X_2}$ satisfies the $(PS)_c$ condition for all $c<0$. \end{lemma} \proof For given $c<0$, let $\{u_n\}\subset S_m\cap X_2$ be any sequence such that \begin{linenomath*} \begin{equation}\label{eq:PS1} I(u_n)\to c<0 \end{equation} \end{linenomath*} and \begin{linenomath*} \begin{equation}\label{eq:PS2} I'_{|S_m\cap X_2}(u_n)\to0. \end{equation} \end{linenomath*} By \eqref{eq:PS1} and Lemma \ref{lemma:geo1} $(ii)$, we see that $\{u_n\}$ is bounded in $X_2$. 
Thus, up to a subsequence, we may assume that $u_n\rightharpoonup u$ in $X_2$ and $u_n\to u$ almost everywhere in $\mathbb{R}^N$ for some $u\in X_2$. In addition, thanks to \cite[Corollary 1.25]{Wi96}, $u_n\to u$ in $L^p(\mathbb{R}^N)$ for any $p\in(2, 2^*)$. Also, we know from \eqref{eq:PS2} and Lemma \ref{lemma:characterization} that \begin{linenomath*} \begin{equation}\label{eq:PS3} -\Delta u_n+\mu_n u_n-f(u_n)\to0\qquad\text{in}~(X_2)^{-1}, \end{equation} \end{linenomath*} where \begin{linenomath*} \begin{equation*} \mu_n:=\frac{1}{m}\left(\int_{\mathbb{R}^N}f(u_n)u_ndx-\int_{\mathbb{R}^N}|\nabla u_n|^2dx\right). \end{equation*} \end{linenomath*} Since $\{\mu_n\}$ is bounded by $(f1)-(f3)$, we may assume that $\mu_n\to \mu$ for some $\mu\in\mathbb{R}$ and thus \begin{linenomath*} \begin{equation} -\Delta u+\mu u=f(u)\qquad\text{in}~(X_2)^{-1}. \end{equation} \end{linenomath*} To show that $u_n\to u$ in $X_2$, the following two claims are needed. \smallskip \textbf{Claim 1.} \begin{linenomath*} \begin{equation}\label{eq:PS4} \lim_{n\to\infty}\int_{\mathbb{R}^N}f(u_n)u_ndx=\int_{\mathbb{R}^N}f(u)udx. \end{equation} \end{linenomath*} Let $v_n:=u_n-u$. Clearly, \begin{linenomath*} \begin{equation*} \int_{\mathbb{R}^N}\left[f(u_n)u_n-f(u)u\right]dx=\int_{\mathbb{R}^N}f(u_n)v_ndx+\int_{\mathbb{R}^N}\left[f(u_n)-f(u)\right]udx. \end{equation*} \end{linenomath*} Since $u_n\rightharpoonup u$ in $X_2$, one can show in a standard way that $\int_{\mathbb{R}^N}\big[f(u_n)-f(u)\big]udx\to0$. We estimate the remaining term $\int_{\mathbb{R}^N}f(u_n)v_ndx$. For any $\varepsilon>0$, there exists $C_\varepsilon>0$ such that \begin{linenomath*} \begin{equation*} |f(t)|\leq \varepsilon |t|+C_\varepsilon|t|^{q_*-1}\qquad\text{for all}~t\in\mathbb{R}. 
\end{equation*} \end{linenomath*} Therefore, by the H\"{o}lder inequality, we have \begin{linenomath*} \begin{equation*} \left|\int_{\mathbb{R}^N}f(u_n)v_ndx\right|\leq \varepsilon \|u_n\|_{L^2(\mathbb{R}^N)}\|v_n\|_{L^2(\mathbb{R}^N)}+C_\varepsilon\|u_n\|^{q_*-1}_{L^{q_*}(\mathbb{R}^N)}\|v_n\|_{L^{q_*}(\mathbb{R}^N)}. \end{equation*} \end{linenomath*} Since $\|v_n\|_{L^{q_*}(\mathbb{R}^N)}\to0$ and $\varepsilon>0$ is arbitrary, we see that \begin{linenomath*} \begin{equation*} \lim_{n\to\infty}\int_{\mathbb{R}^N}f(u_n)v_ndx=0 \end{equation*} \end{linenomath*} and thus \eqref{eq:PS4} holds. \smallskip \textbf{Claim 2.} $\mu>0$. Since $u_n-u\to0$ in $L^{q_*}(\mathbb{R}^N)$, we have $\int_{\mathbb{R}^N}F(u_n-u)dx\to0$ via Lemma \ref{lemma:geo1} $(i)$ and then $\int_{\mathbb{R}^N}F(u_n)dx\to\int_{\mathbb{R}^N}F(u)dx$ by Lemma \ref{lemma:BL}. In view of \eqref{eq:PS1}, we deduce that \begin{linenomath*} \begin{equation*} \begin{split} I(u) &=\frac{1}{2}\int_{\mathbb{R}^N}|\nabla u|^2dx-\int_{\mathbb{R}^N}F(u)dx\\ &\leq \frac{1}{2}\lim_{n\to\infty}\int_{\mathbb{R}^N}|\nabla u_n|^2dx-\lim_{n\to\infty}\int_{\mathbb{R}^N}F(u_n)dx\\ &=\lim_{n\to\infty}I(u_n)= c<0. \end{split} \end{equation*} \end{linenomath*} Since, by the Palais principle of symmetric criticality \cite{Pa79} and the Poho\u{z}aev identity, \begin{linenomath*} \begin{equation*} P(u):=\frac{N-2}{2N}\int_{\mathbb{R}^N}|\nabla u|^2dx+\frac{1}{2}\mu\int_{\mathbb{R}^N}u^2dx-\int_{\mathbb{R}^N}F(u)dx=0, \end{equation*} \end{linenomath*} we conclude that \begin{linenomath*} \begin{equation*} 0>I(u)=I(u)-P(u)=\frac{1}{N}\int_{\mathbb{R}^N}|\nabla u|^2dx-\frac{1}{2}\mu\int_{\mathbb{R}^N}u^2dx. \end{equation*} \end{linenomath*} Clearly, this implies that $\mu>0$. With Claims 1 and 2 in hand, we can now show the strong convergence.
By \eqref{eq:PS3}-\eqref{eq:PS4} and the fact that $\mu_n\to\mu>0$, we have \begin{linenomath*} \begin{equation*} \begin{split} \int_{\mathbb{R}^N}|\nabla u|^2dx+\mu\int_{\mathbb{R}^N}u^2dx &=\int_{\mathbb{R}^N}f(u)udx\\ &=\lim_{n\to\infty}\int_{\mathbb{R}^N}f(u_n)u_ndx=\lim_{n\to\infty}\int_{\mathbb{R}^N}|\nabla u_n|^2dx+\mu m\\ &\geq \int_{\mathbb{R}^N}|\nabla u|^2dx+\mu\int_{\mathbb{R}^N}u^2dx. \end{split} \end{equation*} \end{linenomath*} Clearly, $\lim_{n\to\infty}\int_{\mathbb{R}^N}|\nabla u_n|^2dx=\int_{\mathbb{R}^N}|\nabla u|^2dx$, $\int_{\mathbb{R}^N}u^2dx=m$, and thus $u_n\to u$ in $X_2$.~~$\square$ \section{Proofs of the main results}\label{sect:proofs} \subsection{Proof of Theorem \ref{theorem:nonradialsolutions}}\label{subsect:theorem1.2} In this subsection, we shall use Theorem \ref{theorem:minimax} to prove Theorem \ref{theorem:nonradialsolutions}. Recall that $N\geq4$, $N-2M\neq1$ and $X_2:=H^1_{\mathcal{O}_2}(\mathbb{R}^N)\cap X_\tau$. For any $m>0$ and $k\in\mathbb{N}$, we define \begin{linenomath*} \begin{equation*} A_{m,k}:=\gamma_{m,k}[\mathbb{S}^{k-1}], \end{equation*} \end{linenomath*} where $\gamma_{m,k}$ is the odd continuous mapping given by Lemma \ref{lemma:geo2}. Clearly, $A_{m,k}$ is a closed symmetric set and $\mathcal{G}(A_{m,k})\geq k$ by Proposition \ref{proposition:genus} $(iii)$. Therefore, the class \begin{linenomath*} \begin{equation*} \Gamma_{m,k}:=\{A\in \Sigma(S_m\cap X_2)~|~\mathcal{G}(A)\geq k\} \end{equation*} \end{linenomath*} is nonempty and the minimax value \begin{linenomath*} \begin{equation*} E_{m,k}:=\inf_{A\in \Gamma_{m,k}}\sup_{u\in A}I(u) \end{equation*} \end{linenomath*} is well defined. \begin{lemma}\label{lemma:Emk} \begin{itemize} \item[$(i)$] $-\infty<E_{m,k}\leq E_{m,k+1}\leq0$ for all $m>0$ and $k\in\mathbb{N}$. \item[$(ii)$] For any $k\in\mathbb{N}$, there exists $m(k) > 0$ large enough such that $E_{m,k}<0$ for all $m > m(k)$. 
\item[$(iii)$] If in addition \eqref{eq:f_key1} holds, then $E_{m,k}<0$ for all $m>0$ and $k\in\mathbb{N}$. \item[$(iv)$] For any $k\in\mathbb{N}$, the mapping $m\mapsto E_{m,k}$ is nonincreasing and continuous. \end{itemize} \end{lemma} \proof $(i)$ Since $\Gamma_{m,k+1}\subset\Gamma_{m,k}$ and $I$ is bounded from below on $S_m\cap X_2$ by Lemma \ref{lemma:geo1} $(ii)$, we have \begin{linenomath*} \begin{equation*} E_{m,k+1}\geq E_{m,k}>-\infty. \end{equation*} \end{linenomath*} Let $\gamma^s_{m,k}$ be the odd continuous mapping given by Lemma \ref{lemma:geo2} $(ii)$. Clearly, $\gamma^s_{m,k}[\mathbb{S}^{k-1}]\in \Gamma_{m,k}$ and thus \begin{linenomath*} \begin{equation}\label{eq:Emk1} E_{m,k}\leq \sup_{\sigma\in\mathbb{S}^{k-1}}I(\gamma^s_{m,k}[\sigma])\qquad\text{for any}~s>0. \end{equation} \end{linenomath*} In view of \eqref{eq:geo2_2}, we conclude that $E_{m,k}\leq 0$. The proof of Item $(i)$ is complete. $(ii)$ This item follows from the fact that \begin{linenomath*} \begin{equation*} E_{m,k}\leq \sup_{\sigma\in\mathbb{S}^{k-1}}I(\gamma_{m,k}[\sigma]) \end{equation*} \end{linenomath*} and Lemma \ref{lemma:geo2} $(i)$. $(iii)$ This item is a direct consequence of \eqref{eq:Emk1} and \eqref{eq:geo2_3}. $(iv)$ Fix $k\in\mathbb{N}$. To prove the claim that the mapping $m\mapsto E_{m,k}$ is nonincreasing, we only need to show that, when $s>m>0$, \begin{linenomath*} \begin{equation}\label{eq:Emk2} E_{s,k}\leq E_{m,k}+\varepsilon \qquad\text{for any}~\varepsilon>0~\text{sufficiently small}. \end{equation} \end{linenomath*} Clearly, \eqref{eq:Emk2} follows from Item $(i)$ if $E_{m,k}=0$. Thus, without loss of generality, we may assume that $E_{m,k}<0$. Let $\varepsilon\in(0,-E_{m,k}/2)$ be arbitrary. By the definition of $E_{m,k}$, there exists $A_{m,k}(\varepsilon)\in\Gamma_{m,k}$ such that \begin{linenomath*} \begin{equation}\label{eq:Emk3} \sup_{u\in A_{m,k}(\varepsilon)}I(u)\leq E_{m,k}+\varepsilon<0. 
\end{equation} \end{linenomath*} Let \begin{linenomath*} \begin{equation*} B_{s,k}:=\{v(\cdot)=u(\cdot/t)~|~u\in A_{m,k}(\varepsilon)\}, \end{equation*} \end{linenomath*} where $t:=(s/m)^{1/N}>1$. It is clear that $B_{s,k}\in\Gamma_{s,k}$ and thus \begin{linenomath*} \begin{equation}\label{eq:Emk4} E_{s,k}\leq \sup_{v\in B_{s,k}}I(v)=\sup_{u\in A_{m,k}(\varepsilon)}I(u(\cdot/t)). \end{equation} \end{linenomath*} For any $u\in A_{m,k}(\varepsilon)$, by \eqref{eq:Emk3} and the fact that $t>1$, we have \begin{linenomath*} \begin{equation*} I(u(\cdot/t))=t^NI(u)+\frac{1}{2}t^{N-2}(1-t^2)\int_{\mathbb{R}^N}|\nabla u|^2dx\leq I(u)\leq E_{m,k}+\varepsilon. \end{equation*} \end{linenomath*} In view of \eqref{eq:Emk4}, we get the desired conclusion \eqref{eq:Emk2} and thus $E_{m,k}$ is nonincreasing in $m>0$. We next show that the mapping $m\mapsto E_{m,k}$ is continuous. Let $s\in(0,m/2)$. Since $E_{m,k}$ is nonincreasing in $m>0$, we see that $E_{m-s,k}$ and $E_{m+s,k}$ are monotonic and bounded as $s\to0^+$. Therefore, $E_{m-s,k}$ and $E_{m+s,k}$ have limits as $s\to0^+$. Noting that $E_{m-s,k}\geq E_{m,k}\geq E_{m+s,k}$ for all $s\in(0,m/2)$, we have \begin{linenomath*} \begin{equation*} \lim_{s\to0^+}E_{m-s,k}\geq E_{m,k}\geq\lim_{s\to0^+}E_{m+s,k}. \end{equation*} \end{linenomath*} To complete the proof, we only need to prove the reverse inequality, that is, \begin{linenomath*} \begin{equation*} \lim_{s\to0^+}E_{m-s,k}\leq E_{m,k}\leq\lim_{s\to0^+}E_{m+s,k}. \end{equation*} \end{linenomath*} \smallskip \textbf{Claim 1.} $\lim_{s\to0^+}E_{m-s,k}\leq E_{m,k}$. Let $\varepsilon\in(0,1)$ be arbitrary. By the definition of $E_{m,k}$ and Item $(i)$, there exists $A_{m,k}(\varepsilon)\in\Gamma_{m,k}$ such that \begin{linenomath*} \begin{equation*} \sup_{u\in A_{m,k}(\varepsilon)}I(u)\leq E_{m,k}+\varepsilon\leq 1. 
\end{equation*} \end{linenomath*} Since $A:=\cup_{0<\varepsilon<1}A_{m,k}(\varepsilon)\subset S_m\cap X_2$, we know from Lemma \ref{lemma:geo1} $(ii)$ that $A$ is a bounded set in $X_2$. We define \begin{linenomath*} \begin{equation*} B_{m-s,k}:=\{v(\cdot)=u(\cdot/t_s)~|~u\in A_{m,k}(\varepsilon)\}, \end{equation*} \end{linenomath*} where $t_s:=[(m-s)/m]^{1/N}>0$. Clearly, $B_{m-s,k}\in\Gamma_{m-s,k}$ and thus \begin{linenomath*} \begin{equation*} \begin{aligned} E_{m-s,k}\leq \sup_{v\in B_{m-s,k}}I(v)&=\sup_{u\in A_{m,k}(\varepsilon)}I(u(\cdot/t_s))\\ &\leq \sup_{u\in A_{m,k}(\varepsilon)}I(u)+\sup_{u\in A_{m,k}(\varepsilon)}\left|I(u(\cdot/t_s))-I(u)\right|\\ &\leq E_{m,k}+\varepsilon+\sup_{u\in A}\left|I(u(\cdot/t_s))-I(u)\right|. \end{aligned} \end{equation*} \end{linenomath*} Since $\varepsilon\in(0,1)$ is arbitrary and $\sup_{u\in A}\left|I(u(\cdot/t_s))-I(u)\right|$ is independent of $\varepsilon\in(0,1)$, one will obtain Claim 1 if \begin{linenomath*} \begin{equation}\label{eq:Emk5} \lim_{s\to0^+}\left(\sup_{u\in A}\left|I(u(\cdot/t_s))-I(u)\right|\right)=0. \end{equation} \end{linenomath*} We now prove \eqref{eq:Emk5}. Noting that $t_s$ is only dependent on $s$, we have \begin{linenomath*} \begin{equation*} \sup_{u\in A}\left|I(u(\cdot/t_s))-I(u)\right|\leq \frac{1}{2}\left|t^{N-2}_s-1\right|\sup_{u\in A}\int_{\mathbb{R}^N}|\nabla u|^2dx+\left|t^N_s-1\right|\sup_{u\in A}\int_{\mathbb{R}^N}|F(u)|dx. \end{equation*} \end{linenomath*} Since $A$ is bounded in $X_2$, by $(f1)-(f3)$, we see that $\sup_{u\in A}\int_{\mathbb{R}^N}|F(u)|dx$ is bounded uniformly in $s\in(0,m/2)$. In view of the fact that $\lim_{s\to0^+}t_s=1$, we obtain \eqref{eq:Emk5}. \smallskip \textbf{Claim 2.} $\lim_{s\to0^+}E_{m+s,k}\geq E_{m,k}$. For any $s\in(0,m/2)$, by the definition of $E_{m+s,k}$, there exists $A_{m+s,k}\in\Gamma_{m+s,k}$ such that \begin{linenomath*} \begin{equation*} \sup_{u\in A_{m+s,k}}I(u)\leq E_{m+s,k}+s. 
\end{equation*} \end{linenomath*} Let $A:=\cup_{0<s<m/2}A_{m+s,k}$. Since $\sup_{u\in A}\|u\|^2_{L^2(\mathbb{R}^N)}\leq 3m/2$ and $\sup_{u\in A}I(u)\leq m/2$ by Item $(i)$, we know from Lemma \ref{lemma:geo1} $(ii)$ that $A$ is a bounded set in $X_2$. Define \begin{linenomath*} \begin{equation*} B_{m,k}(s):=\{v(\cdot)=u(\cdot/t_s)~|~u\in A_{m+s,k}\}, \end{equation*} \end{linenomath*} where $t_s:=[m/(m+s)]^{1/N}>0$. Clearly, $B_{m,k}(s)\in\Gamma_{m,k}$ and thus \begin{linenomath*} \begin{equation*} \begin{aligned} E_{m,k}\leq \sup_{v\in B_{m,k}(s)}I(v)&=\sup_{u\in A_{m+s,k}}I(u(\cdot/t_s))\\ &\leq \sup_{u\in A_{m+s,k}}I(u)+\sup_{u\in A_{m+s,k}}\left|I(u(\cdot/t_s))-I(u)\right|\\ &\leq E_{m+s,k}+s+\sup_{u\in A}\left|I(u(\cdot/t_s))-I(u)\right|. \end{aligned} \end{equation*} \end{linenomath*} Arguing as in the proof of \eqref{eq:Emk5}, we have \begin{linenomath*} \begin{equation*} \lim_{s\to0^+}\left(\sup_{u\in A}\left|I(u(\cdot/t_s))-I(u)\right|\right)=0. \end{equation*} \end{linenomath*} Therefore, \begin{linenomath*} \begin{equation*} E_{m,k}\leq \lim_{s\to0^+}\left(E_{m+s,k}+s+\sup_{u\in A}\left|I(u(\cdot/t_s))-I(u)\right|\right)=\lim_{s\to0^+}E_{m+s,k}. \end{equation*} \end{linenomath*} The proof of Claim 2 is complete. ~~$\square$ \medskip \noindent \textbf{Proof of Theorem \ref{theorem:nonradialsolutions}.} Clearly, Theorem \ref{theorem:nonradialsolutions} $(i)$ and $(ii)$ are Lemma \ref{lemma:Emk} $(i)$ and $(iv)$ respectively. Let $\mathcal{E}:=X_2$ and $\mathcal{H}:=L^2(\mathbb{R}^N)$. For any $k\in\mathbb{N}$, we define \begin{linenomath*} \begin{equation*} m_k:=\inf\{m>0~|~E_{m,k}<0\}. \end{equation*} \end{linenomath*} By Lemma \ref{lemma:Emk} $(i)$, $(ii)$ and $(iv)$, it follows that $m_k \in [0,\infty)$, \begin{linenomath*} \begin{equation*} E_{m,k}=0\quad\text{if}~0<m\leq m_k,\qquad E_{m,k}<0\quad\text{when}~m>m_k.
\end{equation*} \end{linenomath*} Fixing $k\in\mathbb{N}$, when $m>m_k$, we have \begin{linenomath*} \begin{equation*} -\infty<E_{m,1}\leq E_{m,2}\leq \cdots\leq E_{m,k}<0. \end{equation*} \end{linenomath*} In view of Lemma \ref{lemma:geo1} $(ii)$, Lemma \ref{lemma:PS}, and Theorem \ref{theorem:minimax} $(i)$ and $(ii)$, we know that $I_{|S_m\cap X_2}$ has at least $k$ distinct critical points associated to the levels $E_{m,j}$ ($j= 1, 2, \cdots, k$). Thus, by Palais principle of symmetric criticality, we obtain Theorem \ref{theorem:nonradialsolutions} $(iii)$. If \eqref{eq:f_key1} holds, by Lemma \ref{lemma:Emk} $(iii)$, we see that $E_{m,k}<0$ for any $m>0$ and $k \in \mathbb{N}$ (and thus $m_k=0$ for any $k\in\mathbb{N}$). Applying Theorem \ref{theorem:minimax} $(i)$ and $(iii)$ to $I_{|S_m\cap X_2}$, we get Theorem \ref{theorem:nonradialsolutions} $(iv)$. ~~$\square$ \subsection{Proof of Theorem \ref{theorem:nonradialsolution}}\label{subsect:theorem1.1} This subsection is devoted to the proof of Theorem \ref{theorem:nonradialsolution}. Recall that $N\geq4$, $X_1:=H^1_{\mathcal{O}_1}(\mathbb{R}^N)\cap X_\tau$ and \begin{linenomath*} \begin{equation*} E_m:=\inf_{u\in S_m\cap X_1}I(u). \end{equation*} \end{linenomath*} Clearly, $E_m>-\infty$ by Lemma \ref{lemma:geo1} $(ii)$. Since $X_2\subset X_1$, using Lemma \ref{lemma:geo2} and arguing as the proof of Lemma \ref{lemma:Emk}, we also have \begin{lemma}\label{lemma:E1m} \begin{itemize} \item[$(i)$] $-\infty<E_m\leq0$ for all $m>0$. \item[$(ii)$] There exists $m_0>0$ large enough such that $E_m<0$ for all $m > m_0$. \item[$(iii)$] If in addition \eqref{eq:f_key1} holds, then $E_m<0$ for all $m>0$. \item[$(iv)$] The mapping $m\mapsto E_m$ is nonincreasing and continuous. \end{itemize} \end{lemma} \begin{lemma}\label{lemma:E1m_1} For any $m>s>0$, one has \begin{linenomath*} \begin{equation}\label{eq:E1m3} E_m\leq \frac{m}{s}E_s. \end{equation} \end{linenomath*} If $E_s$ is reached, then the inequality is strict. 
\end{lemma} \proof Let $t:=m/s>1$. For any $\varepsilon>0$, there exists $u\in S_s\cap X_1$ such that \begin{linenomath*} \begin{equation*} I(u)\leq E_s+\varepsilon. \end{equation*} \end{linenomath*} Clearly, $w:=u(t^{-1/N}\cdot)\in S_m\cap X_1$ and then \begin{linenomath*} \begin{equation}\label{eq:E1m4} E_m\leq I(w)=tI(u)+\frac{1}{2}t^{\frac{N-2}{N}}\left(1-t^{\frac{2}{N}}\right)\int_{\mathbb{R}^N}|\nabla u|^2dx< tI(u)\leq\frac{m}{s}(E_s+\varepsilon). \end{equation} \end{linenomath*} Since $\varepsilon>0$ is arbitrary, we see that \eqref{eq:E1m3} holds. If $E_s$ is reached, for example, at some $u\in S_s\cap X_1$, then we can let $\varepsilon=0$ in \eqref{eq:E1m4} and thus the strict inequality follows.~~$\square$ \medskip \noindent \textbf{Proof of Theorem \ref{theorem:nonradialsolution}.} We define \begin{linenomath*} \begin{equation}\label{eq:m^*} m^*:=\inf\{m>0~|~E_m<0\}. \end{equation} \end{linenomath*} By Lemma \ref{lemma:E1m}, it is clear that $m^*\in[0,\infty)$, \begin{linenomath*} \begin{equation}\label{eq:E1m1} E_m=0\quad\text{if}~0<m\leq m^*,\qquad E_m<0\quad\text{when}~m>m^*; \end{equation} \end{linenomath*} in particular, $m^*=0$ if \eqref{eq:f_key1} holds. Let us show that if $0<m< m^*$, then $E_m=0$ is not reached. Indeed, assuming by contradiction that $E_m$ is reached for some $m\in(0,m^*)$, in view of Lemma \ref{lemma:E1m_1}, we have \begin{linenomath*} \begin{equation*} E_{m^*}<\frac{m^*}{m}E_m=0 \end{equation*} \end{linenomath*} which leads to a contradiction since $E_{m^*}=0$. To complete the proof of Theorem \ref{theorem:nonradialsolution}, the only remaining task is to show that the infimum $E_m$ is reached when $m>m^*$. When $N-2M=0$, we have $X_1=X_2$ (with $N-2M \neq 1$). Since in that case $E_m = E_{m,1}$ and $m^*= m_1$, the result follows directly from the property, established in Theorem \ref{theorem:nonradialsolutions}, that $E_{m,1}$ is a critical value. 
The rest of the proof is devoted to the delicate case $N-2M \neq 0$. Fix $m>m^*$ and let $\{u_n\}\subset S_m\cap X_1$ be a minimizing sequence with respect to $E_m$. Clearly, $\{u_n\}$ is bounded in $X_1$ by Lemma \ref{lemma:geo1} $(ii)$. Up to a subsequence, we may assume that $\lim_{n\to\infty}\int_{\mathbb{R}^N}|\nabla u_n|^2dx$ and $\lim_{n\to\infty}\int_{\mathbb{R}^N}F(u_n)dx$ exist. Since $E_m<0$ by \eqref{eq:E1m1}, we have \begin{linenomath*} \begin{equation}\label{eq:nonvanishing_1} \lim_{r\to\infty}\left(\underset{n\to\infty}{\lim}\underset{y\in\{0\}\times\{0\}\times\mathbb{R}^{N-2M}}{\sup}\int_{B(y,r)}|u_n|^2dx\right)>0. \end{equation} \end{linenomath*} Indeed, if \eqref{eq:nonvanishing_1} does not hold, then $u_n\to0$ in $L^{q_*}(\mathbb{R}^N)$ by Lemma \ref{lemma:lions} and thus \begin{linenomath*} \begin{equation*} \lim_{n\to\infty}\int_{\mathbb{R}^N}F(u_n)dx=0 \end{equation*} \end{linenomath*} via Lemma \ref{lemma:geo1} $(i)$; since $I(u_n)\geq -\int_{\mathbb{R}^N}F(u_n)dx$, a contradiction is obtained as follows: \begin{linenomath*} \begin{equation*} 0>E_m=\lim_{n\to\infty}I(u_n)\geq -\lim_{n\to\infty}\int_{\mathbb{R}^N}F(u_n)dx=0. \end{equation*} \end{linenomath*} With \eqref{eq:nonvanishing_1} in hand, we see that there exist $r_0>0$ and $\{y_n\}\subset\{0\}\times\{0\}\times\mathbb{R}^{N-2M}$ such that \begin{linenomath*} \begin{equation}\label{eq:nonvanishing_2} \underset{n\to\infty}{\lim}\int_{B(y_n,r_0)}|u_n|^2dx>0. \end{equation} \end{linenomath*} Since $\{u_n(\cdot-y_n)\}\subset S_m\cap X_1$ is bounded, up to a subsequence, we may assume that $u_n(\cdot-y_n)\rightharpoonup u$ in $X_1$ for some $u\in X_1$, $u_n(\cdot-y_n)\to u$ in $L^2_{\text{loc}}(\mathbb{R}^N)$ and $u_n(\cdot-y_n)\to u$ almost everywhere in $\mathbb{R}^N$. Clearly, $u\neq0$ by \eqref{eq:nonvanishing_2}. Let $v_n:=u_n(\cdot-y_n)-u$.
Noting that $v_n\rightharpoonup0$ in $X_1$, we have \begin{linenomath*} \begin{equation*} \int_{\mathbb{R}^N}|u+v_n|^2dx=\int_{\mathbb{R}^N}|u|^2dx+\int_{\mathbb{R}^N}|v_n|^2dx+o_n(1) \end{equation*} \end{linenomath*} and \begin{linenomath*} \begin{equation*} \int_{\mathbb{R}^N}|\nabla (u+v_n)|^2dx=\int_{\mathbb{R}^N}|\nabla u|^2dx+\int_{\mathbb{R}^N}|\nabla v_n|^2dx+o_n(1), \end{equation*} \end{linenomath*} where $o_n(1)\to0$ as $n\to\infty$. With the aid of Lemma \ref{lemma:BL}, we also have \begin{linenomath*} \begin{equation*} \lim_{n\to\infty}\int_{\mathbb{R}^N}F(u+v_n)dx=\int_{\mathbb{R}^N}F(u)dx+\lim_{n\to\infty}\int_{\mathbb{R}^N}F(v_n)dx. \end{equation*} \end{linenomath*} Since $I(u_n)=I(u_n(\cdot-y_n))=I(u+v_n)$, it follows that \begin{linenomath*} \begin{equation}\label{eq:key_1} m=\|u\|^2_{L^2(\mathbb{R}^N)}+\lim_{n\to\infty}\|v_n\|^2_{L^2(\mathbb{R}^N)} \end{equation} \end{linenomath*} and \begin{linenomath*} \begin{equation}\label{eq:key_2} E_m=\lim_{n\to\infty}I(u+v_n)=I(u)+\lim_{n\to\infty}I(v_n). \end{equation} \end{linenomath*} We prove below a claim and then conclude the proof. \smallskip \textbf{Claim.} $\lim_{n\to\infty}\|v_n\|^2_{L^2(\mathbb{R}^N)}=0$. In particular, by \eqref{eq:key_1}, $\|u\|^2_{L^2(\mathbb{R}^N)}=m$. Let $t_n:=\|v_n\|^2_{L^2(\mathbb{R}^N)}$ for every $n\in\mathbb{N}$. If we assume that $\lim_{n\to\infty}t_n>0$, then \eqref{eq:key_1} implies that $s:=\|u\|^2_{L^2(\mathbb{R}^N)}\in (0, m)$. By the definition of $E_{t_n}$ and Lemma \ref{lemma:E1m} $(iv)$, we have \begin{linenomath*} \begin{equation*} \lim_{n\to\infty}I(v_n)\geq \lim_{n\to\infty}E_{t_n}= E_{m-s}. \end{equation*} \end{linenomath*} From \eqref{eq:key_2} and Lemma \ref{lemma:E1m_1}, it follows that \begin{linenomath*} \begin{equation*} E_m\geq I(u) + E_{m-s} \geq E_s + E_{m-s} \geq \frac{s}{m}E_m + \frac{m-s}{m}E_m= E_m. \end{equation*} \end{linenomath*} Thus necessarily $I(u) = E_s$ and this shows that $E_s$ is reached at $u$. 
But then still from \eqref{eq:key_2} and Lemma \ref{lemma:E1m_1}, one has \begin{linenomath*} \begin{equation*} E_m \geq E_s + E_{m-s} > \frac{s}{m}E_m + \frac{m-s}{m}E_m = E_m \end{equation*} \end{linenomath*} which is a contradiction and thus proves the Claim. \smallskip \textbf{Conclusion.} Clearly, $u\in S_m\cap X_1$ by the Claim and thus $I(u)\geq E_m$. Now since the Claim and Lemma \ref{lemma:geo1} $(i)$ imply that \begin{linenomath*} \begin{equation*} \lim_{n\to\infty}\int_{\mathbb{R}^N}F(v_n)dx=0, \end{equation*} \end{linenomath*} we also have $\lim_{n \to \infty} I(v_n) \geq 0$. Thus, by \eqref{eq:key_2}, we get $E_m \geq I(u)$, hence $E_m$ is reached at $u\in S_{m}\cap X_1$. This completes the proof of Theorem \ref{theorem:nonradialsolution}.~~$\square$ \begin{remark}\label{remark:positive} In the proof of Theorem \ref{theorem:nonradialsolution} we define the number $m^*$ via \eqref{eq:m^*}. When we do not have \eqref{eq:f_key1}, this number can be positive. To see this, following closely \cite{Sh14}, we assume, in addition to $(f1)-(f5)$, that \begin{linenomath*} \begin{equation}\label{eq:f_key2} \limsup_{t\to0}\frac{F(t)}{|t|^{2+\frac{4}{N}}}<+\infty. \end{equation} \end{linenomath*} Since there exists $C(f)>0$ such that $F(t)\leq C(f)|t|^{2+4/N}$ for any $t\in\mathbb{R}$, by Gagliardo-Nirenberg inequality, it follows that \begin{linenomath*} \begin{equation*} \int_{\mathbb{R}^N}F(u)dx\leq C(f)\int_{\mathbb{R}^N}|u|^{2+4/N}dx\leq C(f)C(N)m^{2/N}\int_{\mathbb{R}^N}|\nabla u|^2dx \end{equation*} \end{linenomath*} for all $u\in S_m$. Then, for any $m>0$ small enough such that $C(f)C(N)m^{2/N}\leq 1/4$, we have \begin{linenomath*} \begin{equation*} I(u)\geq\frac{1}{2}\int_{\mathbb{R}^N}|\nabla u|^2dx-\frac{1}{4}\int_{\mathbb{R}^N}|\nabla u|^2dx=\frac{1}{4}\int_{\mathbb{R}^N}|\nabla u|^2dx>0. \end{equation*} \end{linenomath*} Clearly, this implies that $E_m\geq0$ when $m>0$ is small. 
\end{remark} \section{Multiple radial solutions}\label{sect:theoremB} Based on the approach developed to prove Theorem \ref{theorem:nonradialsolution} and under weaker conditions, we give in this section a new proof of the result due to Hirata and Tanaka \cite{HT18} on multiple radial solutions. \begin{theorem}\label{theorem:radialsolutions} Assume that $N\geq2$ and $f$ satisfies $(f1),(f2),(f3)',(f4)$ and $(f5)$. Then the following statements hold. \begin{itemize} \item[$(i)$] For each $k\in\mathbb{N}$ there exists $\overline{m}_k\in[0,\infty)$ such that, when $m>\overline{m}_k$, \eqref{problem} has at least $k$ radial solutions (with negative energies). \item[$(ii)$] If in addition \eqref{eq:f_key1} holds, then \eqref{problem} has infinitely many radial solutions $\{v_n\}^\infty_{n=1}$ for all $m>0$. In particular, $I(v_n)<0$ for each $n\in\mathbb{N}$ and $I(v_n)\to0$ as $n\to\infty$. \end{itemize} \end{theorem} Note that in \cite[Theorem 0.2]{HT18}, instead of $(f3)'$, the following stronger condition is required: \begin{itemize} \item[$(f3)''$] $\lim_{t\rightarrow\infty}f(t)/|t|^{1+4/N}=0$. \end{itemize} To derive their result, Hirata and Tanaka apply a version of the \emph{symmetric mountain pass argument} to $I(\lambda,u):\mathbb{R}\times H^1_r(\mathbb{R}^N)\to\mathbb{R}$, a Lagrange formulation of \eqref{problem} defined as \begin{linenomath*} \begin{equation*} I(\lambda,u)=\frac{1}{2}\int_{\mathbb{R}^N}|\nabla u|^2dx-\int_{\mathbb{R}^N}F(u)dx+\frac{1}{2}e^\lambda\left(\int_{\mathbb{R}^N}u^2dx-m\right). \end{equation*} \end{linenomath*} Here $H^1_r(\mathbb{R}^N)$ stands for the space of radially symmetric functions in $H^1(\mathbb{R}^N)$. Note also that, assuming only $(f1),(f2),(f3)''$ and $(f4)$, they established the existence of one radial solution via a \emph{mountain pass argument} applied to $I(\lambda,u)$. As a consequence, they derived a minimax characterization of the global infimum $E_m$; see \cite[Theorem 0.1]{HT18} for more details. 
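Let us note in passing how this Lagrange formulation encodes the constraint. At a critical point of $I(\lambda,u)$ one has \begin{linenomath*} \begin{equation*} \partial_\lambda I(\lambda,u)=\frac{1}{2}e^\lambda\left(\int_{\mathbb{R}^N}u^2dx-m\right)=0, \end{equation*} \end{linenomath*} so that $u\in S_m$, while the equation in $u$ reads $-\Delta u+e^\lambda u=f(u)$. In particular, critical points of $I(\lambda,u)$ give rise to solutions of \eqref{problem} with a positive Lagrange multiplier $e^\lambda$.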
To prove Theorem \ref{theorem:radialsolutions}, in view of Remark \ref{remark:extension}, we can assume without loss of generality that $(f3)$ holds. Since, when $N\geq2$, the embedding $H^1_r(\mathbb{R}^N)\hookrightarrow L^p(\mathbb{R}^N)$ is compact for any $2<p <2^*$, modifying the proof of Lemma \ref{lemma:PS} accordingly we obtain the following compactness result. \begin{lemma}\label{lemma:PS1} The constrained functional $I_{|S_m\cap H^1_r(\mathbb{R}^N)}$ satisfies the $(PS)_c$ condition for all $c<0$. \end{lemma} Since the remaining arguments are similar to those for Theorem \ref{theorem:nonradialsolution}, we only outline the proof. Fix $m>0$ and $k\in\mathbb{N}$. By \cite[Theorem 10]{Be83-2}, there exists an odd continuous mapping $\overline{\pi}_k:\mathbb{S}^{k-1}\to H^1_r(\mathbb{R}^N)\setminus\{0\}$ such that \begin{linenomath*} \begin{equation*} \inf_{\sigma\in\mathbb{S}^{k-1}}\int_{\mathbb{R}^N}F(\overline{\pi}_k[\sigma])dx\geq1\qquad\text{and}\qquad \sup_{\sigma\in\mathbb{S}^{k-1}}\|\overline{\pi}_k[\sigma]\|_{L^\infty(\mathbb{R}^N)}\leq \zeta. \end{equation*} \end{linenomath*} Thus, an odd continuous mapping $\overline{\gamma}_{m,k}:\mathbb{S}^{k-1}\to S_m\cap H^1_r(\mathbb{R}^N)$ can be introduced as follows: \begin{linenomath*} \begin{equation*} \overline{\gamma}_{m,k}[\sigma](x):=\overline{\pi}_k[\sigma]\left(m^{-1/N}\cdot \|\overline{\pi}_k[\sigma]\|^{2/N}_{L^2(\mathbb{R}^N)}\cdot x\right),\qquad x\in\mathbb{R}^N~\text{and}~\sigma\in\mathbb{S}^{k-1}. \end{equation*} \end{linenomath*} Indeed, a change of variables shows that $\|\overline{\gamma}_{m,k}[\sigma]\|^2_{L^2(\mathbb{R}^N)}=m$, so that $\overline{\gamma}_{m,k}[\sigma]\in S_m$. For any $s>0$, we then define $\overline{\gamma}^s_{m,k}[\sigma](x):=s^{N/2}\overline{\gamma}_{m,k}[\sigma](sx)$. Arguing as in the proof of Lemma \ref{lemma:geo2}, we see that $\overline{\gamma}_{m,k}$ and $\overline{\gamma}^s_{m,k}$ satisfy the following lemma. 
\begin{lemma}\label{lemma:geo3} \begin{itemize} \item[$(i)$] For any $k\in\mathbb{N}$, there exists $\overline{m}(k) > 0$ large enough such that \begin{linenomath*} \begin{equation*} \sup_{\sigma\in\mathbb{S}^{k-1}}I(\overline{\gamma}_{m,k}[\sigma])<0\qquad\text{for all}~m > \overline{m}(k). \end{equation*} \end{linenomath*} \item[$(ii)$] For any $m>0$ and $k\in\mathbb{N}$, we have \begin{linenomath*} \begin{equation*} \limsup_{s\to0^+}\left(\sup_{\sigma\in\mathbb{S}^{k-1}}I(\overline{\gamma}^s_{m,k}[\sigma])\right)\leq 0. \end{equation*} \end{linenomath*} If in addition \eqref{eq:f_key1} holds, then there exists $s^*>0$ small enough such that \begin{linenomath*} \begin{equation*} \sup_{\sigma\in\mathbb{S}^{k-1}}I(\overline{\gamma}^s_{m,k}[\sigma])< 0\qquad\text{for any}~s\in(0,s^*). \end{equation*} \end{linenomath*} \end{itemize} \end{lemma} Since $\mathcal{G}(\overline{\gamma}_{m,k}[\mathbb{S}^{k-1}])\geq k$, the class \begin{linenomath*} \begin{equation*} \overline{\Gamma}_{m,k}:=\{A\in \Sigma(S_m\cap H^1_r(\mathbb{R}^N))~|~\mathcal{G}(A)\geq k\} \end{equation*} \end{linenomath*} is nonempty and the minimax value \begin{linenomath*} \begin{equation*} \overline{E}_{m,k}:=\inf_{A\in \overline{\Gamma}_{m,k}}\sup_{u\in A}I(u) \end{equation*} \end{linenomath*} is well defined. With the aid of Lemma \ref{lemma:geo3}, repeating the argument of Lemma \ref{lemma:Emk}, we obtain \begin{lemma}\label{lemma:Ebarmk} \begin{itemize} \item[$(i)$] $-\infty<\overline{E}_{m,k}\leq \overline{E}_{m,k+1}\leq0$ for all $m>0$ and $k\in\mathbb{N}$. \item[$(ii)$] For any $k\in\mathbb{N}$, there exists $\overline{m}(k) > 0$ large enough such that $\overline{E}_{m,k}<0$ for all $m > \overline{m}(k)$. \item[$(iii)$] If in addition \eqref{eq:f_key1} holds, then $\overline{E}_{m,k}<0$ for all $m>0$ and $k\in\mathbb{N}$. \item[$(iv)$] For any $k\in\mathbb{N}$, the mapping $m\mapsto \overline{E}_{m,k}$ is nonincreasing and continuous. 
\end{itemize} \end{lemma} \medskip \noindent \textbf{Conclusion.} Let $\mathcal{E}:=H^1_r(\mathbb{R}^N)$ and $\mathcal{H}:=L^2(\mathbb{R}^N)$. For any $k\in\mathbb{N}$, define \begin{linenomath*} \begin{equation*} \overline{m}_k:=\inf\{m>0~|~\overline{E}_{m,k}<0\}. \end{equation*} \end{linenomath*} In view of Lemma \ref{lemma:geo1} $(ii)$, Lemma \ref{lemma:PS1}, Lemma \ref{lemma:Ebarmk} and Theorem \ref{theorem:minimax}, we obtain Theorem \ref{theorem:radialsolutions}.~~$\square$ { \small
\section{Introduction} It has long been known that particles with non-zero spin can have ``toroidal'' electromagnetic form factors that are odd under charge conjugation $C$, which implies that they violate either parity ($P$) or time reversal ($T$), but not both symmetries simultaneously~\cite{Khr97}. The toroidal dipole form factor (TDFF), also called the anapole \cite{dianapole}, requires spin 1/2 or higher, violates $P$ and conserves $T$. The toroidal quadrupole form factor (TQFF), which requires spin 1 or higher, violates $T$ and conserves $P$, and so on \cite{quadruanapole}. Toroidal form factors produce no physical effects when the photon is on-shell, and correspond in a classical picture to fields within the charge distribution~\cite{Gra10}. These features contrast with the more familiar $C$-even electric and magnetic form factors, which respect or violate both $P$ and $T$ simultaneously, and produce effects for on-shell photons. The only form factors allowed for massive particles that are their own antiparticles are toroidal \cite{boudjema}. The toroidal form factors do contribute to the short-range interaction with a charged particle. For nucleons and nuclei, in particular, they are in principle accessible via lepton scattering. While there exist calculations of the TDFFs of the nucleon and nuclei, there is apparently no calculation of a nuclear TQFF. The TQFF of positronium was calculated in Ref. \cite{atomic}. The aim of this paper is to provide the first controlled calculation of the TQFF of the simplest nucleus, the deuteron, at low momentum. 
The Lorentz-covariant electromagnetic current of a particle with spin $1$ is described by seven electromagnetic form factors: charge, magnetic dipole, and electric quadrupole, which are $P$- and $T$-conserving ($PT$); electric dipole and magnetic quadrupole, which are $P$- and $T$-violating $(\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T)$; TDFF, which is $P$-violating and $T$-conserving ($\slash\hspace{-0.6em}P T$); and, finally, TQFF, which is $P$-conserving and $T$-violating $(P\slash\hspace{-0.4em}T)$. We can write the spatial, $P \slash\hspace{-0.4em}T$ component of the electromagnetic current as \cite{quadruanapole} \begin{equation} \langle \vec p^{\,\prime}, j | J^{k}_{P\slash\hspace{-0.4em}T} | \vec p, i \rangle = i \left[ q^i q^j q^k - \frac{\vec q^{\, 2}}{2} \left(\delta^{i k} q^j + \delta^{j k} q^i \right)\right] F_{P \slash\hspace{-0.4em}T}(\vec q^{\, 2}), \label{PCTVFF} \end{equation} where $| \vec p, i \rangle$ is a deuteron state with momentum $\vec p$ and polarization $\delta^\mu_i$ in the rest frame, normalized so that $\langle \vec p^{\,\prime}, j | \vec p, i \rangle = \sqrt{1+{\vec p}^{\,2}/m_d^2}\, (2\pi)^3 \delta^{(3)}(\vec q)\delta_{ij}$, $\vec q= \vec p - \vec p^{\,\prime}$ is the (outgoing) momentum of the photon, $m_d=2m_N -\gamma^2/m_N +\ldots$ is the deuteron mass in terms of the nucleon mass $m_N\simeq 940$ MeV and the binding momentum $\gamma \simeq 45$ MeV. $F_{P \slash\hspace{-0.4em}T}(\vec q^{\, 2})$ is the TQFF, which is proportional to the proton charge $e=\sqrt{4\pi \alpha_{\mathrm{em}}}$ and has dimensions of mass$^{-3}$. We express it in units of $e$ fm$^3$. We denote the corresponding toroidal quadrupole moment (TQM) by $\mathcal T_d = F_{P \slash\hspace{-0.4em}T}(0)$. 
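Note that the tensor structure in Eq. \eqref{PCTVFF} is transverse: contracting it with $q^k$ gives \begin{equation} q^k \left[ q^i q^j q^k - \frac{\vec q^{\, 2}}{2} \left(\delta^{i k} q^j + \delta^{j k} q^i \right)\right] = \vec q^{\, 2}\, q^i q^j - \frac{\vec q^{\, 2}}{2}\left(q^i q^j + q^j q^i\right) = 0, \end{equation} consistent with current conservation.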
It can be viewed as an interaction of the deuteron $d$ with the second derivative of the magnetic field $\vec{B}$ of the form \begin{equation} \mathcal L = \frac{\mathcal T_d }{2} \, d^{\dagger} \left\{ S_{i}, S_{j} \right\}d \ \nabla_i \left(\vec{\nabla} \times \vec B\right)_j, \end{equation} where $S$ denotes the deuteron spin, and $\{ .\, , \, .\}$ the anticommutator. Using Maxwell's equations to replace the curl of the magnetic field with a current, we can trade the $P \slash\hspace{-0.4em}T$ moment for a contact interaction. For example, the $P\slash\hspace{-0.4em}T$ interaction of a non-relativistic lepton of mass $m_l$ with the deuteron becomes a dimension-eight contact interaction, \begin{equation} V = \frac{e \mathcal T_d }{2 m_l} \left\{ S_{i}, S_{j} \right\} \left[\left(\nabla_i \delta^{(3)}(\vec x)\right) \hat{p}_j + \epsilon_{i k m} \sigma_{k} \, \nabla_{m} \nabla_{j} \delta^{(3)}(\vec x)\right]. \end{equation} The first term is due to the lepton kinetic term and gives rise to a non-local interaction involving $\hat{p}=-i{\vec\nabla}$. The second one comes from the interaction of the lepton spin $\vec\sigma$ with the deuteron $P\slash\hspace{-0.4em}T$ form factor. Effects of a TQFF on polarization observables in lepton-deuteron scattering have been investigated \cite{edscatt}. There should be similar effects in proton-deuteron scattering such as in the planned TRIC experiment at COSY \cite{COSY}, but there they are likely swamped by non-electromagnetic interactions. We work in the framework of chiral effective field theory (EFT) and take into account the dominant parity and time-reversal violation in and beyond the Standard Model (SM) of particle physics. $P$ violation is commonplace in the weak interaction of the SM. $T$ violation, on the other hand, is small in the SM, which opens up the possibility that operators involving the SM fields but having dimension larger than four could be noticeable. 
$T$ violation from the CKM quark-mixing matrix is suppressed with respect to other aspects of weak interactions by a small combination of matrix elements \cite{Jarlskog}, $J_{C\!P}\simeq 3 \cdot 10^{-5}$. Moreover, it is loop suppressed in flavor-conserving quantities, such as $T$-violating form factors of the nucleon and nuclei. This leaves the QCD vacuum angle $\bar\theta$ \cite{'tHooft} as the potentially largest dimension-four source of such form factors. However, the stringent experimental limit on the neutron electric dipole moment, $|d_n| < 2.9\cdot 10^{-13}\, e$ fm \cite{dnbound}, constrains it to $\bar\theta \lesssim 10^{-10}$. Therefore, we also consider $T$ violation originating beyond the SM at a high energy scale $M_{\slash\hspace{-0.4em}T}$. The dominant such higher-dimensional $T$-violating operators are of effective dimension six. The TQFF is in principle sensitive to $P\slash\hspace{-0.4em}T$ physics beyond the SM. However, the lowest dimension where we find $P\slash\hspace{-0.4em}T$ operators is eight, which means that, in the simplest scenarios, they would be highly suppressed by the presumably high scale of physics beyond the SM. Discussions and references on $P\slash\hspace{-0.4em}T$ interactions at low energies, including situations where they could be relatively enhanced, can be found in Ref. \cite{PslashT}. We focus here on what is likely to be the largest ``background'' in the deuteron TQFF: the combination of $\slash\hspace{-0.6em}P T$ from the ordinary weak interactions with $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ from the $\bar\theta$ term and from the dimension-six operators. Not surprisingly, we find a very small background value for the deuteron TQFF, so that any experimental evidence for a nonzero TQFF likely results from new $P\slash\hspace{-0.4em}T$ physics. Our discussion is organized as follows. 
In Section \ref{interactions} we construct the effective chiral Lagrangian for the relevant $PT$, $\slash\hspace{-0.6em}P T$, and $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ interactions and currents involving nucleons, pions, and photons. In Section \ref{calculation} we calculate the long-range contributions of these interactions to the deuteron TQFF. In Section \ref{discussion} we discuss our results and compare the deuteron TQM to its $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ electric dipole moment (EDM) and magnetic quadrupole moment (MQM). Three Appendices are devoted to details of our calculations. In Appendix \ref{app:dimsix} the various $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ operators are presented in more detail, and the orders of magnitude of their contributions are given in Appendix \ref{app:ordmag}. In Appendix \ref{app:ffs} we give the expansion of loop diagrams that define the deuteron TQFF. \section{The effective chiral Lagrangian} \label{interactions} At a momentum $Q$ much below the characteristic QCD scale, $M_{\mathrm{QCD}} \sim 1$ GeV, electromagnetic form factors can be calculated with low-energy effective field theories (EFTs). The most predictive such EFT is chiral EFT (for a review, see Ref. \cite{paulo}), a generalization to an arbitrary number of nucleons of chiral perturbation theory (ChPT) (for a review, see Ref. \cite{veronique}), where $Q\sim m_\pi$, with $m_\pi\simeq 140$ MeV the pion mass. In this EFT pion propagation is included explicitly, and the properties and interactions of the pions are strongly constrained by the approximate chiral symmetry of QCD. For the nucleon, form factors can be calculated in perturbation theory as a systematic expansion in $Q/M_{\mathrm{QCD}}$ \cite{veronique}. The $\slash\hspace{-0.6em}P T$ anapole and the $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ electric dipole form factor of the nucleon have been calculated to next-to-leading order (NLO) in Refs. 
\cite{maekawa} and \cite{BiraHockings}, respectively. In nuclei, pions can still be treated in perturbation theory \cite{ksw}, but then the expansion is in powers of $Q/M_{N\!N}$, where $M_{N\!N}\equiv 4\pi F_\pi^2/m_N\sim F_\pi$ in terms of the pion decay constant $F_\pi\simeq 186$ MeV. For observables involving momenta above $M_{N\!N}$, one-pion exchange needs to be iterated to all orders \cite{fms}, which complicates renormalization \cite{nogga}. However, light nuclei are dilute systems and, unless one is interested in form factors at high momentum, one can use a chiral EFT with perturbative pions. Indeed, the $C$-even electromagnetic form factors of the deuteron, both $PT$ (charge, electric quadrupole, and magnetic dipole) \cite{deutEMFF} and $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ (electric dipole and magnetic quadrupole) \cite{Vri11b} have been successfully derived in this EFT. The TDFF of the deuteron has been calculated at LO in Ref. \cite{springer}. Similar calculations could be performed for other light nuclei. The relevant low-energy EFT can be written in terms of nucleon, pion, and photon fields. The nucleon field $N=(p \; n)^T$ is an isospinor bi-spinor, with isospin $\mbox{\boldmath $\tau$}/2$ and spin $S^\mu=(0, {\vec \sigma}/2)$ in the rest frame, where the velocity is $v^\mu=(1, {\vec 0})$. The pion field $\mbox{\boldmath $\pi$}$ is an isovector pseudoscalar, for which we choose a stereographic parametrization (see, {\it e.g.}, Ref. \cite {Weinberg}) of the coset space $SO(4)$/$SO(3)$, where $SU(2)\times SU(2)\sim SO(4)$ is the spontaneously broken, approximate chiral symmetry of QCD, and $SU(2)\sim SO(3)$ its unbroken isospin subgroup. We define $D\equiv 1+\mbox{\boldmath $\pi$}^2/F_\pi^2$. 
The photon field $A_\mu$ ensures electromagnetic $U(1)$ gauge invariance, appearing in the gauge and chiral covariant derivatives $D_\mu \pi_a =D^{-1} (\delta_{ab}\partial_\mu +e\epsilon_{3ab}A_\mu)\pi_b$ and ${\cal D}_\mu N=[\partial_\mu +ieA_\mu (1+\tau_3)/2 + i \mbox{\boldmath $\tau$}\cdot (\mbox{\boldmath $\pi$} \times D_\mu\mbox{\boldmath $\pi$})/F_\pi^2]N$, and in the field strength $F_{\mu\nu}=\partial_\mu A_\nu - \partial_\nu A_\mu$. We use the notation $\mathcal D_{\perp\,\pm}^\mu \equiv \mathcal D_\perp^\mu\pm \mathcal D_\perp^{\dagger\mu}$, where $\mathcal D^{\mu}_{\perp} =\mathcal D^{\mu} - v^{\mu} v \cdot \mathcal D$ and $\bar{N} {\cal D}^{\dagger}_\mu =\overline{{\cal D}_\mu N}$. The coefficients of interactions constructed with up to two nucleon fields are estimated, in the absence of other information from QCD, by naive dimensional analysis (NDA) \cite{NDA}. For multi-nucleon couplings the scaling of a coefficient on the various scales depends also on the number of $S$ waves the operator connects \cite{ksw,paulo}. In the following we will need only a few terms in the leading pion-nucleon-photon $PT$ chiral Lagrangians, {\it viz.} \begin{eqnarray} \mathcal L^{(0)}_{PT} &=& \frac{1}{2}D_\mu \mbox{\boldmath $\pi$} \cdot D^\mu \mbox{\boldmath $\pi$} -\frac{m_\pi^2}{2D}\mbox{\boldmath $\pi$}^2 +i\bar N v \cdot {\cal D} N -\frac{2g_A}{F_{\pi}} (D_\mu\mbox{\boldmath $\pi$}) \cdot \bar N S^\mu \mbox{\boldmath $\tau$} N \nonumber\\ &&- \frac{1}{2} C_0 \left( \bar N\!N \, \bar N \! 
N - 4 \bar N S^\mu N \cdot \bar N S_\mu N \right) +\ldots, \label{gALag} \end{eqnarray} where $g_A\simeq 1.27$ is the nucleon axial coupling and $C_0$ a contact two-nucleon parameter, and \begin{eqnarray} \mathcal L^{(1)}_{PT} &=& -\frac{1}{2m_N} \bar N {\cal D}_{\perp}^2N \nonumber\\ && +\frac{e}{4m_N}\epsilon_{\rho\sigma\mu\nu}F^{\rho\sigma} v^\mu \bar N \left\{1+\kappa_0 + (1+\kappa_1) \left[\tau_3 -\frac{2}{F_\pi^2D} \left(\mbox{\boldmath $\pi$}^2\tau_3-\pi_3\mbox{\boldmath $\pi$}\cdot\mbox{\boldmath $\tau$}\right)\right]\right\} S^\nu N +\ldots, \nonumber\\ \label{subPTLag} \end{eqnarray} where $\kappa_0\simeq -0.12$ and $\kappa_1\simeq 3.7$ are, respectively, the isoscalar and isovector anomalous magnetic moments of the nucleon, and $\epsilon^{0123}=1$. The $P\slash\hspace{-0.4em}T$ TQFF vanishes unless there is, in the EFT, either a $P\slash\hspace{-0.4em}T$ interaction or a combination of $\slash\hspace{-0.6em}P T$ and $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ interactions between the two nucleons. $P\slash\hspace{-0.4em}T$ operators in the EFT Lagrangian arise in two ways. First, they represent dimension-seven $P\slash\hspace{-0.4em}T$ operators in the quark-gluon Lagrangian just above $M_{\mathrm{QCD}}$. These dimension-seven operators in turn can have two origins above the electroweak scale $v$. On one hand, they can be generated by possible gauge-invariant dimension-eight $P\slash\hspace{-0.4em}T$ operators, in which case they would be expected to be suppressed by four powers of the high, new-physics scale $M_{\slash\hspace{-0.4em}T}$, that is, they would scale as $v/M^4_{\slash\hspace{-0.4em}T}$. On the other hand, they can arise from the interplay of $\slash\hspace{-0.6em}P T$ in the SM and possible dimension-six $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ operators, when one would expect the suppression scale to be $v^2 M^2_{\slash\hspace{-0.4em}T}$ rather than $M^4_{\slash\hspace{-0.4em}T}$. 
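To illustrate the difference between the two suppression patterns, with an assumed (purely illustrative) new-physics scale: for $M_{\slash\hspace{-0.4em}T} = 1$ TeV and $v \simeq 246$ GeV, operators suppressed by $v^2 M^2_{\slash\hspace{-0.4em}T}$ are larger than those suppressed by $M^4_{\slash\hspace{-0.4em}T}$ by a factor \begin{equation} \frac{M^4_{\slash\hspace{-0.4em}T}}{v^2 M^2_{\slash\hspace{-0.4em}T}} = \left(\frac{M_{\slash\hspace{-0.4em}T}}{v}\right)^{2} \approx 17, \end{equation} an enhancement that grows quadratically with $M_{\slash\hspace{-0.4em}T}$.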
A second way to generate $P\slash\hspace{-0.4em}T$ operators in the EFT Lagrangian is from $\slash\hspace{-0.6em}P T$ and $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ interactions in the quark-gluon Lagrangian at low energy, when we integrate out non-perturbative dynamics at scales of the order of the typical hadronic scale $M_{\mathrm{QCD}}$. Here again we expect a suppression by $v^2 M^2_{\slash\hspace{-0.4em}T}$ rather than $M^4_{\slash\hspace{-0.4em}T}$. If the new-physics scale is much higher than the electroweak scale, the contributions from $\slash\hspace{-0.6em}P T$ and $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ interactions are likely to dominate the $P\slash\hspace{-0.4em}T$ interactions in the EFT. Interesting scenarios in which this is not the case are discussed in Ref. \cite{PslashT}. Here we are interested in the background to genuine $P\slash\hspace{-0.4em}T$ interactions at the high energy scale. In this case, as discussed in App. \ref{app:ordmag}, the contributions from $P\slash\hspace{-0.4em}T$ interactions in the EFT are likely smaller than the long-range components from $\slash\hspace{-0.6em}P T$ and $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ interactions, which we can, and will, calculate. $\slash\hspace{-0.6em}P T$ interactions in chiral EFT have been discussed, for example, in Refs. \cite{Kaplan:1992vj,Zhu:2004vw,maekawa}. They originate at the QCD scale from four-quark interactions proportional to the Fermi constant $G_F\simeq 1.2 \cdot 10^{-5}$ GeV$^{-2}$. A dimensionless measure of the relative strength of the weak interactions at low energies is $G_FF_\pi^2\sim 4\cdot 10^{-7}$. The most important interaction is the $\slash\hspace{-0.6em}P T$ pion-nucleon interaction \begin{equation} \mathcal L^{(-1)}_{\slash\hspace{-0.6em}P T} = \frac{h_1}{F_{\pi}} \bar N \left(\mbox{\boldmath $\pi$} \times \mbox{\boldmath $\tau$}\right)_3 N +\ldots, \label{Parity1} \end{equation} with $h_1 = {\mathcal O}(G_F F^2_{\pi} M_{\mathrm{QCD}}) $. 
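Making this NDA estimate explicit with round numbers (an order-of-magnitude exercise only, using the values of $G_F$ and $F_\pi$ quoted above), \begin{equation} \frac{|h_1|}{F_\pi} = {\mathcal O}\left(G_F F_\pi M_{\mathrm{QCD}}\right) \sim 1.2\cdot 10^{-5}~\mathrm{GeV}^{-2} \times 0.19~\mathrm{GeV} \times 1~\mathrm{GeV} \approx 2\cdot 10^{-6}. \end{equation}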
The $\slash\hspace{-0.6em}P T$ pion-nucleon coupling $h_1$ is not well known. In LO of the EFT with perturbative pions, which we are employing, the $\slash\hspace{-0.6em}P T$ asymmetry in $n+p\to d+\gamma$ is $A_\gamma=0.24 h_1/F_\pi$ \cite{npdgammath}, so the recent experimental result $A_\gamma=[-1.2 \pm 2.1 (\mathrm{stat}) \pm 0.2 (\mathrm{sys})]\cdot 10^{-7}$ \cite{npdgammaexp} gives a bound $| h_1|/F_{\pi} \lesssim 10^{-6}$, which is the order of magnitude expected by NDA. A first lattice QCD calculation at a pion mass $m_\pi\simeq 389$ MeV gives, in our convention for $h_1$, $\sqrt{2} h_1/F_{\pi} = [1.099\pm 0.505 ^{+0.058}_{-0.064}]\cdot 10^{-7}$ \cite{hpilatt}. $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ interactions are expected to be due, mostly, to the dimension-four QCD $\bar\theta$ term, parameterized by $\bar\theta \ll 1$, and the dimension-six operators that result from integrating out physics at the scale $M_{\slash\hspace{-0.4em}T}$ and the heavy degrees of freedom in the SM. The complete set of $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ dimension-six operators at the electroweak scale has been given in Ref. \cite{Buchmuller:1985jz}, and the relevant operators at the hadronic scale have been summarized in Ref. \cite{deVries:2012ab}. They are the isoscalar and isovector quark EDM (qEDM) and quark chromo-EDM (qCEDM), the Weinberg operator, which gives rise to a gluon chromo-EDM (gCEDM), and four $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ four-quark operators. Two of these four-quark operators are invariant under the SM gauge group and can be generated directly at the electroweak scale. Their effect in the chiral EFT at low energy cannot be separated from the gCEDM, and we refer to these collectively as chiral-invariant sources ($\chi$ISs). The other two four-quark operators break isospin and result from integrating out the weak gauge bosons and running to low energy. 
Because they mix left- and right-handed quarks we denote these as FQLR. The various $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ sources are further discussed in App. \ref{app:dimsix}. The dimension-four and six $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ operators have different transformation properties under the chiral group $SU_L(2) \times SU_R(2)$, which has consequences for the $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ couplings in chiral EFT \cite{BiraEmanuele,deVries:2012ab}. The interactions relevant to the rest of the paper are \begin{eqnarray} \mathcal L_{\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T} &=& -\frac{\bar g_0}{F_{\pi}} \bar N \mbox{\boldmath $\pi$}\cdot\mbox{\boldmath $\tau$} N -\frac{\bar g_1}{F_{\pi}} \pi_3 \bar N N -2 \bar N \left(\bar d_0 + \bar d_1 \tau_3 \right) S^{\mu} \left( v^{\nu}+ \frac{i \mathcal D^{\nu}_{\perp\, -}}{2 m_N}\right) N F_{\mu \nu} \nonumber \\ & &+ \frac{1}{4} \bar C_0 \left[ \bar N\!N \, \partial_{\mu} (\bar N S^{\mu} N ) - \bar N \mbox{\boldmath $\tau$} N \cdot \mathcal D_{\mu} \left( \bar N S^{\mu} \mbox{\boldmath $\tau$} N \right) \right] , \label{g0Lag} \end{eqnarray} where $\bar g_0$ ($\bar g_1$) is the isoscalar (isovector) $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ pion-nucleon coupling, $\bar d_0$ ($\bar d_1$) a short-range contribution to the isoscalar (isovector) nucleon EDM, and $\bar C_0 $ a short-range $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ two-nucleon interaction. The term proportional to $1/m_N$ is a recoil correction and depends on the sum of the incoming and outgoing nucleon momenta. Other $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ interactions, some expected to be of comparable size, will not be needed below because of the quantum numbers of the deuteron. The relative importance of the operators in Eq. \eqref{g0Lag} depends on the chiral properties of the $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ source at the quark-gluon level. As described in App. 
\ref {app:ordmag}, the dimensionless one-nucleon couplings $\bar g_{0,1}/M_{\mathrm{QCD}}$ and $M_{\mathrm{QCD}} \bar d_{0,1}/e$ are given by the dimensionless strengths of the underlying $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ interactions, times factors of $(m_\pi/M_{\mathrm{QCD}})^2$ that depend on the chiral transformation properties of the source. For the QCD $\bar\theta$ term, the qCEDM, and the FQLR, which violate chiral symmetry, non-derivative pion-nucleon couplings like $\bar g_0$ can appear in the chiral Lagrangian at LO. In this case $\bar g_0 /M_{\mathrm{QCD}} = {\cal O}(M_{\mathrm{QCD}}\bar d_1/e)$ and pion effects tend to dominate because of the low mass. In contrast, $\chi$ISs can generate pion-nucleon non-derivative couplings only through insertion of the quark mass, which costs two powers of $m_\pi/M_{\mathrm{QCD}}$, so that, for example, $\bar g_0 /M_{\mathrm{QCD}} = {\cal O}( (m_\pi/ M_{\mathrm{QCD}})^2 M_{\mathrm{QCD}}\bar d_1/e)$. The $\bar g_0$ term still appears in the LO Lagrangian, but it is accompanied by the equally important two-nucleon and electromagnetic operators, whose construction does not require any insertion of the quark mass. Finally, the presence of a photon field causes the qEDM to contribute mainly to the photon-nucleon sector, purely hadronic operators being suppressed by powers of $\alpha_{\textrm{em}}/4\pi$. The interactions in Eqs. \eqref{Parity1} and \eqref{g0Lag} can be used to compute the $\slash\hspace{-0.6em}P T$, $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ and $P \slash\hspace{-0.4em}T$ form factors of nuclei. The nucleon does not possess a $P\slash\hspace{-0.4em}T$ form factor. We summarize here the results for the nucleon TDFF and electric dipole form factor (EDFF), which are needed for the calculation of the deuteron TQFF in Sec. \ref{calculation}. 
The $\slash\hspace{-0.6em}P T$ and $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ currents are written as, respectively, \begin{equation} J^{\mu}_{\slash\hspace{-0.5em}P T}(q) = \frac{2}{m_N^2} \left( F_{\slash\hspace{-0.5em}P T,\, 0}(-q^2) + F_{\slash\hspace{-0.5em}P T, 1}(-q^2) \tau_3\right) \left[S^{\mu} q^2 - S \cdot q q^{\mu} +\ldots\right] \label{currentTDFF} \end{equation} and \begin{equation} J^{\mu}_{\slash\hspace{-0.45em}P\slash\hspace{-0.4em}T}\left(q,K\right) = 2 i \left( F_{\slash\hspace{-0.45em}P\slash\hspace{-0.4em}T,\, 0}(-q^2) + F_{\slash\hspace{-0.45em}P\slash\hspace{-0.4em}T,\, 1}(-q^2) \tau_3\right) \left[S^{\mu}\left( v\cdot q +\frac{K\cdot q}{m_N}\right) - S\cdot q \left(v^{\mu} + \frac{K^{\mu}}{m_N} \right) +\ldots\right], \label{currentEDFF} \end{equation} where $q$ denotes the four-momentum of the photon and $2 K$ is the sum of the nucleon momenta. We write \begin{equation} F_{\slash\hspace{-0.5em}P T,\, i}(-q^2) = a_i \ f_{ i}\left(-q^2/4m_\pi^2\right), \end{equation} and \begin{equation} F_{\slash\hspace{-0.45em}P\slash\hspace{-0.4em}T,\, i}(-q^2) = d_i -q^{2} \ S^{\prime}_{i}\left(-q^2/4m_\pi^2\right) , \end{equation} where $a_0$ and $a_1$ ($d_0$ and $d_1$) are the nucleon isoscalar and isovector anapole (electric dipole) moments, $f_{i}(0)=1$, and $S^{\prime}_{i}(0)$ is finite. At LO, the nucleon TDFFs come entirely from pion loops, in which one vertex is the $\slash\hspace{-0.6em}P T$ pion-nucleon coupling $h_1$. By NDA one expects $a_i/m_N^2= {\cal O} (e h_1/m_\pi M_{\mathrm{QCD}}^2)$. The calculation of Ref. \cite{maekawa} shows that the nucleon anapole form factor is, at LO, isoscalar and finite, \begin{equation} a_0^{({\rm LO})} = \frac{e g_A h_1 m_N^2}{24\pi F^2_{\pi} m_{\pi}}, \qquad f_{0}^{({\rm LO})}\left(x^2\right)= \frac{3}{2x^2} \left[\frac{1+x^2}{x}\arctan x-1\right] , \qquad a_1^{({\rm LO})} = 0. 
\label{LeadingAnapolescalar} \end{equation} The isovector anapole form factor appears only at NLO, where short-range contributions to the moments also are present. Neglecting ${\cal O}(1)$ numbers, the result \eqref{LeadingAnapolescalar} for $a_0^{({\rm LO})}$ is a factor of $4\pi$ larger than the NDA estimate, as often happens in baryon ChPT. The nucleon EDFF was computed in Ref. \cite{BiraHockings} to NLO for all $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ sources of dimension up to six. For the QCD $\bar\theta$ term, the qCEDM, and the FQLR, the isovector nucleon EDM receives a one-loop contribution from $\bar g_0$ at LO. At the same order there are also short-range isoscalar ($\bar d_0$) and isovector ($\bar d_1$) contributions, the latter being required by renormalization-group invariance. The isoscalar and isovector nucleon EDMs are given by \cite{CDVW79} \begin{equation} d_0^{({\rm LO})} = \bar d_0, \qquad d_1^{({\rm LO})} = \bar d_1 (\mu) + \frac{e g_A \bar g_0}{\left(2\pi F_{\pi}\right)^2} \left( L - \ln \frac{m^2_{\pi}}{\mu^2}\right), \label{LeadingEDM} \end{equation} where we used dimensional regularization in $d$ spacetime dimensions, with $L=2/(4-d)-\gamma_E+\ln 4\pi$, and $\mu$ the renormalization scale. In this case there is no $4\pi$ enhancement, and the nucleon EDM is suppressed by the loop factor $(2\pi F_{\pi})^2 \sim M_{\mathrm{QCD}}^2$ with respect to the pion nucleon coupling $\bar g_0$. The momentum dependence of the EDFF is purely isovector in LO and governed by the scale $m_{\pi}$, as is the case for the isoscalar TDFF \eqref{LeadingAnapolescalar}, but it is not needed in the following. For the qEDM and the $\chi$ISs, $e\bar g_0/m_\pi^2$ is at most as large as the short-range coupling $\bar d_{1}$, and the loop suppression makes its contribution negligible. 
The EDFF is then momentum independent at LO and completely determined by the low-energy constants $\bar d_{0,1}$, \begin{equation} d_{0,1}^{({\rm LO})} = \bar d_{0,1}, \qquad S^{\prime\, \textrm{(LO)}}_{0,1}\left(x^2\right)= 0. \label{LeadingEDMprime} \end{equation} In this case the momentum dependence appears at higher order and is determined by short-range physics. The $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ couplings $\bar g_0$, $\bar d_1$, and $\bar C_0$ are not known and, in order to estimate the magnitude of the TQFF they induce, we will need to make some reasonable assumptions. First, we assume that there are no cancellations between $d_0^{({\rm LO})}$ and $d_1^{({\rm LO})}$, so that, for $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ violation from the qEDM and $\chi$ISs, the bound on the neutron EDM $|d_n|$ can be directly translated into the bound $| \bar d_1 |< 2.9 \cdot 10^{-13}\, e$ fm. Second, as pointed out in Ref. \cite{CDVW79}, we should not expect any cancellation in Eq. \eqref{LeadingEDM} between pieces that are non-analytic and analytic in $m_\pi^2$. With the reasonable value $\mu = m_N$, the same bound applies for $| \bar d_1 (m_N)|$ in the case of the $\bar\theta$ term, the qCEDM, and the FQLR. Moreover, since the long-range contributions give the estimate $|d_1|\sim 0.13 (|\bar g_0|/F_{\pi}) \, e$ fm, the existing experimental bound on the neutron EDM yields an approximate bound on the $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ pion-nucleon coupling, $|\bar g_0|/F_{\pi} \lesssim 2\cdot 10^{-12}$. \section{TQFF of the deuteron} \label{calculation} With the interactions described in Sec. \ref{interactions}, we can calculate the long-range contributions to the deuteron TQFF, using the techniques of Refs. \cite{deutEMFF,springer,Vri11b}.
As usual in such a calculation, the orders of magnitude of the various contributions can be found by combining the power counting rules of ChPT based on NDA with the rules for two-nucleon states as summarized, for example, in Ref. \cite{paulo}. A pion propagator scales as $1/Q^2$. A loop involving a single nucleon contributes a factor $Q^4/(4\pi)^2$ from the integration and a factor $1/Q$ from the nucleon propagator. The infrared enhancement of a loop involving two nucleons gives a factor $Q^5/4\pi m_N$ from the integration and a factor $m_N/Q^2$ from each nucleon propagator. The deuteron wavefunction contributes an overall normalization factor $4\pi Q/m_N^2$. The deuteron itself is built out of the two-nucleon contact interaction with coefficient $C_0={\mathcal O}(4\pi/m_N \gamma)$ and the nucleon kinetic terms in Eqs. \eqref{gALag} and \eqref{subPTLag}. Pion exchange originating from the pion kinetic terms and pion-nucleon coupling in Eq. \eqref{gALag} contributes to the deuteron structure at relative ${\cal O}(Q/M_{N\!N})$, together with a two-derivative contact interaction that accounts for short-range energy dependence in the on-shell two-nucleon amplitude \cite{ksw}. Since we calculate the TQFF to LO only, $\gamma$ is the sole $PT$ two-nucleon input needed. The $P\slash\hspace{-0.4em}T$ TQFF is an intrinsically two-nucleon observable, which requires at least one symmetry-violating interaction between the two nucleons. We argue in App. \ref{app:ordmag} that $P \slash\hspace{-0.4em}T$ interactions are much smaller than contributions from separate $\slash\hspace{-0.6em}P T$ and $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ interactions. The lowest-order diagrams involving the $\slash\hspace{-0.6em}P T$ vertex $h_1$ and one of the $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ couplings are shown in Figs. \ref{Fig1}--\ref{Fig3}. In these figures, only one possible ordering is shown. 
Circles, triangles, and squares denote the leading $PT$, $\slash\hspace{-0.6em}P T$, and $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ interactions in Eqs. \eqref{gALag}, \eqref{Parity1}, and \eqref{g0Lag}, respectively; a circled circle, the $PT$ magnetic photon-nucleon interactions in Eq. \eqref{subPTLag}; a twice circled triangle, the $\slash\hspace{-0.6em}P T$ anapole moment of the nucleon in Eq. \eqref{LeadingAnapolescalar}. The hatched circles denote deuteron states obtained from the iteration of the leading two-nucleon interaction, which brings in dependence on the binding momentum $\gamma$. The natural scale for momentum dependence of the TQFF is $4\gamma$, so we express our results in terms of $\vec x = \vec q/4\gamma$. We also define the ratio $\xi = \gamma/m_{\pi}$ of low-momentum scales. \begin{figure}[t] \center \includegraphics[width=15cm]{Fig1.pdf} \caption{Two-pion-exchange (TPE) contributions to the deuteron TQFF, $F_{P\slash\hspace{-0.4em}T}(\vec q^{\, 2})$. Nucleons, pions, and photons are represented by solid, dashed, and wavy lines, respectively. LO $PT$, $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$, and $\slash\hspace{-0.6em}P T$ interactions are denoted by circles, squares, and triangles, respectively. An NLO $PT$ interaction is denoted by a circled circle. Deuteron states obtained from the iteration of the leading $PT$ two-nucleon interaction are represented by hatched circles.} \label{Fig1} \end{figure} \begin{figure}[t] \center \includegraphics[width=15cm]{Fig2.pdf} \caption{Short-range two-nucleon ($4N$) contributions to the deuteron TQFF, $F_{P\slash\hspace{-0.4em}T}(\vec q^{\, 2})$. The notation is as in Fig. \ref{Fig1}.} \label{Fig2} \end{figure} \begin{figure} \center \includegraphics[width=15cm]{Fig3.pdf} \caption{Nucleon anapole form factor (TDFF) and electric dipole moment (EDM) contributions to the deuteron TQFF, $F_{P\slash\hspace{-0.4em}T}(\vec q^{\, 2})$. The twice-circled triangle stands for the anapole form factor.
The other notation is as in Fig. \ref{Fig1}.} \label{Fig3} \end{figure} Let us first consider a photon which interacts without breaking $P$ and $T$. In this case the photon couples to the nucleon via the magnetic couplings in Eq. \eqref{subPTLag} or to a pion via interactions obtained by gauging the derivatives in the pion kinetic energy and pion-nucleon axial coupling in Eq. \eqref{gALag}. Diagrams with only one pion exchange and $\bar g_0$ and $h_1$ vertices on each end vanish. This can be understood from the fact that such diagrams do not have enough powers of momentum in the vertices to generate a form factor of the form of Eq. \eqref{PCTVFF}, and it agrees with the more general analysis of the $P\slash\hspace{-0.4em}T$ two-nucleon interaction \cite{simonius}. This leaves three-loop diagrams, containing either two pion exchanges (TPE) or one pion exchange and a short-range $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ two-nucleon interaction (4N), Figs. \ref{Fig1} and \ref{Fig2} respectively. Using the power-counting rules outlined above, the sizes of the diagrams in Figs. \ref{Fig1} and \ref{Fig2} are \begin{eqnarray} \textrm{Fig. \ref{Fig1}} &=& \mathcal O\left(\frac{e h_1}{Q^2 M_{N\!N}^2} \frac{\bar g_0}{M_{\mathrm{QCD}}}\right), \label{Fig1scaleprime}\\ \textrm{Fig. \ref{Fig2}} &=& \mathcal O\left(\frac{e h_1}{Q^2 M_{N\!N}^2} \frac{ m_N \gamma \bar C_0}{4\pi}\frac{Q M_{N\!N}}{M_{\mathrm{QCD}}}\right). \label{Fig2scaleprime} \end{eqnarray} Whether the diagrams in Figs. \ref{Fig1} or \ref{Fig2} are more important depends on the $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ source. For the $\bar \theta$ term, the qEDM, the qCEDM, and the FQLR operator the contributions from the short-range interaction $\bar C_0$ are always suppressed, in this case by $Q/M_{N\!N}$, with respect to TPE, because for these sources $\bar g_0 =\mathcal O(M_{N\!N}^2 m_N \gamma \bar C_0/4\pi)$, see App. \ref{app:ordmag}. 
For $\chi$ISs, the opposite is true because of the extra $(Q/M_{\mathrm{QCD}})^2$ suppression of $\bar g_0/M_{\mathrm{QCD}}$, which makes the short-range contributions larger by a factor of $\mathcal O(M_{N\!N}/Q)$. The diagrams in Fig. \ref{Fig1} are formally the leading contributions for the QCD $\bar \theta$ term, the qCEDM, and the FQLR, while those in Fig. \ref{Fig2} are leading for the $\chi$ISs. Note that for the isovector qCEDM and the FQLR, one should consider not only the isoscalar pion-nucleon coupling $\bar g_0$ but also the isovector pion-nucleon coupling $\bar g_1$, but such diagrams vanish. Alternatively, the photon can interact with the nucleon with a $\slash\hspace{-0.6em}P T$ or $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ interaction, in which case a single two-nucleon interaction, $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ or $\slash\hspace{-0.6em}P T$ respectively, is sufficient to produce a TQFF --- see Fig. \ref{Fig3}. In diagrams \ref{Fig3}(a,b) one of the nucleons couples to the magnetic field via its anapole moment, with $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ coming either from pion exchange or from a two-nucleon interaction. Here the anapole ``vertex'' stands for a one-loop diagram, which produces the result \eqref{LeadingAnapolescalar}. In diagram \ref{Fig3}(c) the photon couples to the nucleon through a recoil correction to the $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ EDM, with $\slash\hspace{-0.6em}P T$ coming from pion exchange. By power counting, the contributions of diagrams \ref{Fig3}(a,b) to the TQFF are \begin{eqnarray} \textrm{Fig. \ref{Fig3}(a)} &=& \mathcal O\left( \frac{a_i}{m_N^2} \frac{\bar g_0}{Q M_{N\!N}} \right) = \mathcal O\left(\frac{e h_1}{Q^2 M_{N\!N}^2} \frac{\bar g_0 M_{N\!N}}{M_{\mathrm{QCD}}^2}\right), \label{Fig3ascaleprime}\\ \textrm{Fig. 
\ref{Fig3}(b)} &=& \mathcal O\left( \frac{a_i}{m_N^2} \frac{m_N \gamma \bar C_0 }{4\pi }\right) = \mathcal O\left( \frac{eh_1}{Q^2 M_{N\!N}^2} \frac{m_N \gamma\bar C_0 }{4\pi } \frac{QM_{N\!N}^2}{M_{\mathrm{QCD}}^2} \right) \label{Fig3bscaleprime}, \end{eqnarray} where we used the NDA expectation for the anapole moment. Diagrams \ref{Fig3}(a,b) are thus suppressed by one power of $M_{N\!N}/M_{\mathrm{QCD}}\sim 1/4\pi$ compared to the contributions of Figs. \ref{Fig1} and \ref{Fig2}. However, $a_0$ in Eq.~\eqref{LeadingAnapolescalar} is a factor of $4\pi$ larger than the NDA estimate, making the corresponding contributions to the TQFF competitive with LO. Again, of other possible $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ couplings only $\bar g_0$ and $\bar C_0$ contribute. Diagram \ref{Fig3}(c) represents contributions to the TQFF coming from the nucleon EDFF. It scales as \begin{equation}\label{Fig3cscaleprime} \textrm{Fig. \ref{Fig3}(c)} = \mathcal O\left( \frac{eh_1}{Q^2M_{N\!N}^2} \frac{ \bar d_1}{e} \frac{QM_{N\!N}}{M_{\mathrm{QCD}}} \right), \end{equation} and there is no contribution from $\bar d_0$ at this order. For the $\bar\theta$ term, the qCEDM, and the FQLR, $\bar d_1 /e=\mathcal O(\bar g_0/M_{\mathrm{QCD}}^2)$ and this contribution is suppressed by $Q/M_{\mathrm{QCD}}$ (a factor coming from the recoil) compared to the analogous anapole diagram \ref{Fig3}(a). For $\chi$ISs, this contribution is comparable to Fig. \ref{Fig2}, while for the qEDM it is the sole leading contribution, since numerically $\alpha_{\mathrm{em}}/4\pi\sim (Q/M_{\mathrm{QCD}})^3$. To summarize these power-counting arguments, we expect the TQFF induced by the $\bar\theta$ term, the qCEDM, and the FQLR to be dominated by the TPE diagrams in Fig. \ref{Fig1}, with possibly large corrections from the nucleon TDFF, diagram \ref{Fig3}(a).
For $\chi$ISs, the dominant contribution should come from the diagrams involving the $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ short-range two-nucleon interaction in Fig. \ref{Fig2} and from the nucleon EDFF, diagram \ref{Fig3}(c), with a sizable correction from the nucleon TDFF, diagram \ref{Fig3}(b). For the qEDM only the nucleon EDFF contribution \ref{Fig3}(c) should be important. We now proceed to the evaluation of diagrams in Figs. \ref{Fig1}, \ref{Fig2}, and \ref{Fig3}. We find that only the isovector magnetic moment gives a non-vanishing contribution to diagram \ref{Fig1}(a) and \ref{Fig2}(a). Diagrams with other photon-nucleon interactions (the isoscalar magnetic moment and the minimal coupling through the covariant derivative in the nucleon kinetic term) vanish. Similar diagrams where pion exchanges and contact interaction occur both before or after the insertion of the photon coupling also vanish. Diagrams \ref{Fig1}(b,c) and \ref{Fig1}(d,e) differ only by an isospin factor. The diagrams in Fig. \ref{Fig1} are finite in four (and three) dimensions. We express the result for their contribution to the TQFF as \begin{equation}\label{I3ab} F_{P \slash\hspace{-0.4em}T}^{(\textrm{TPE})}(\vec q^{\, 2}) = -\frac{e g_A^2 \bar g_0 h_1}{m^2_{\pi}} \frac{m_N}{\left(4\pi F^2_{\pi}\right)^2} \left[ (1+\kappa_1) \, I^{(3)}_a\left( \frac{\gamma}{m_{\pi}}, \frac{\vec q}{4\gamma}\right) + I^{(3)}_b\left(\frac{\gamma}{m_{\pi}}, \frac{\vec q}{4\gamma}\right) \right], \end{equation} in terms of two three-loop integrals $I^{(3)}_{a,b}(\xi, \vec x)$. Likewise, the results for the TQFF in Fig. 
\ref{Fig2} are expressed in terms of two two-loop functions $I_{a,b}^{(2)}(\xi, \vec x)$ as \begin{equation} F_{P \slash\hspace{-0.4em}T}^{(\textrm{4N})}(\vec q^{\, 2}) = \frac{eg_A h_1}{m_{\pi}} \frac{m_N}{4\pi F_{\pi}^2}\frac{\mu - \gamma}{4\pi} \bar C_0 \left[ (1+\kappa_1) \, I^{(2)}_a\left( \frac{\gamma}{m_{\pi}}, \frac{\vec q}{4\gamma}\right) + I^{(2)}_b\left( \frac{\gamma}{m_{\pi}}, \frac{\vec q}{4\gamma}\right) \right], \label{I2ab} \end{equation} where we have used power-divergence subtraction \cite{ksw}. The $\mu$ dependence is absorbed in $\bar C_0$ itself, since here it appears in the same combination as in the magnetic quadrupole form factor \cite{Vri11b}. The expansions to order $\vec{q}\,^2$ of the integrals $I^{(2,3)}_{a,b}$ are given in App. \ref{app:ffs}. The resulting contributions to the TQM are \begin{equation} \mathcal T_d^{({\rm TPE})}\simeq \left[ 0.8\, (1+\kappa_1) - 0.9 \right] \cdot 10^{-2} \; \frac{\bar g_0 h_1}{F_{\pi}^2} \; e \, \textrm{fm}^3 \label{valueLO} \end{equation} and \begin{equation}\label{LOvalue4N} \mathcal T^{(\textrm{4N})}_d \simeq \left[ 1.0\, (1+\kappa_1) - 0.7 \right] \cdot 10^{-2} \, m_N \frac{\mu - \gamma}{4\pi} \bar C_0 \, h_1 \, e\, \textrm{fm}^3. \end{equation} Finally, we consider diagrams \ref{Fig3}(a,b) and (c). For isoscalar $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$, only the isoscalar TDFF $F_{\slash\hspace{-0.5em}P T,\, 0}$ contributes in diagrams \ref{Fig3}(a,b). Isospin-breaking $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$, for example from insertions of $\bar g_1$, would contribute to diagram \ref{Fig3}(a) together with the isovector TDFF $F_{\slash\hspace{-0.5em}P T,\, 1}$. However, the isovector TDFF is suppressed by $Q/M_{\mathrm{QCD}}$ (and by a factor of $4\pi$) with respect to the isoscalar TDFF. Therefore, even for sources that generate $\bar g_0$ and $\bar g_1$ at the same level, $\bar g_1$ contributions to the TQFF are subleading.
Diagram \ref{Fig3}(c) is leading only for the qEDM and $\chi$ISs, for which the isovector EDFF is momentum independent and coincides with the EDM. Diagrams \ref{Fig3}(a,b) and (c) result in contributions to the TQFF given by \begin{eqnarray}\label{I2c} F_{P \slash\hspace{-0.4em}T}^{({\rm TDFF})}(\vec q^{\, 2}) = \frac{F_{\slash\hspace{-0.5em}P T,\, 0}(\vec q^{\, 2})}{4\pi m_N} \left[ \frac{g_A \bar g_0}{m_{\pi}F^2_{\pi}} \, I_c^{(2)}\left(\frac{\gamma}{m_{\pi}},\frac{\vec q}{4\gamma}\right) + (\mu-\gamma) \bar C_0 \; I^{(1)}\left(\frac{\vec q}{4\gamma}\right) \right] \label{FF1} \end{eqnarray} and \begin{equation}\label{nEDM} F_{P\slash\hspace{-0.4em}T}^{({\rm EDM})}(\vec q^{\, 2}) = \frac{g_A}{4\pi F^2_{\pi} m_{\pi}} h_{1} \bar d_1 \; I^{(2)}_d \left(\frac{\gamma}{m_{\pi}}, \frac{\vec q}{4\gamma}\right), \end{equation} respectively. The expression for the one-loop integral $I^{(1)}$ along with the expansions to order $\vec{q}\,^2$ of two-loop integrals $I^{(2)}_{c,d}$ can be found in App. \ref{app:ffs}. Numerically this gives \begin{equation} \mathcal T_d^{({\rm TDFF})} \simeq \left[ 3.5 \frac{\bar g_0}{F^2_{\pi}} + 2.7 \, m_N \frac{\mu - \gamma}{4\pi } \bar C_0\right] \cdot 10^{-2 } \; h_1 \; e \, \textrm{fm}^3 \label{valueLO'} \end{equation} and \begin{equation} \label{TQMEDM} \mathcal T_d^{({\rm EDM})} \simeq 1.3 \cdot 10^{-2} \; \bar d_1 h_1 \, \textrm{fm}^3. \end{equation} The result \eqref{valueLO'} shows that the TDFF contribution, though expected to be suppressed by the factor $M_{N\!N}/M_{\mathrm{QCD}}$, is comparable to the LO values in Eqs.~\eqref{valueLO} and \eqref{LOvalue4N}, in line with the $4\pi$ enhancement in the TDM. For the QCD $\bar\theta$ term, the qCEDM, and the FQLR, we can assess the importance of the EDM contribution to the TQFF by substituting in Eq. \eqref{TQMEDM} the estimate for the nucleon EDM in terms of $\bar g_0$, $|d_1| \sim 0.13 (|\bar g_0|/F_{\pi}) \, e$ fm. 
We find $\mathcal T_d^{({\rm EDM})}\sim 0.2\cdot 10^{-2} (\bar g_0 h_1/F^2_{\pi})\, e$ fm$^3$, which is numerically small compared to Eqs. \eqref{valueLO} and \eqref{valueLO'}, as expected by power counting. We can now combine the results found so far. For the QCD $\bar\theta$ term, the qCEDM, and the FQLR, the TPE contributions from the diagrams in Fig. \ref{Fig1} and the TDFF contributions in Fig. \ref{Fig3}(a) have comparable size, giving \begin{equation}\label{Tdtheta} (\mathcal T_d)_{\bar\theta,\, \textrm{qCEDM},\, \textrm{FQLR}}\simeq 6.3 \cdot 10^{-2} \; \frac{\bar g_0 h_1}{F^2_{\pi}} \, e \,\textrm{fm}^3. \end{equation} This number is within a factor $\simeq 2$ of the power counting estimate in Eq. \eqref{Fig3ascaleprime}, indicating that the power counting works well (apart from the $4\pi$ in the anapole moment). For $\chi$ISs, by power counting the leading contributions are expected to come from the diagrams in Fig. \ref{Fig2}, with insertions of the four-nucleon coupling $\bar C_0$, and from the EDM in diagram \ref{Fig3}(c). Also in this case, the contribution of the TDFF in diagram \ref{Fig3}(b) is numerically important. We find \begin{equation} (\mathcal T_d)_{\textrm{$\chi$ISs}} \simeq \left[ 6.7 \, m_N \frac{\mu - \gamma}{4\pi} \bar C_0 + 1.3\, \frac{\bar d_1}{e} \right]\cdot 10^{-2 } \; h_1 \;e \,\textrm{fm}^3 . \label{TdchiISs} \end{equation} If $\bar C_0$ and $\bar d_1$ have their NDA values, their respective contributions are numerically comparable. In the case of the qEDM, the TQM is dominated by the contribution from the nucleon EDM, and we find \begin{equation}\label{TdqEDM} (\mathcal T_d)_{\textrm{qEDM}} \simeq 1.3 \cdot 10^{-2} \; \bar d_1 h_1 \,\textrm{fm}^3. \end{equation} Because of dimensionless numerical factors this value is about an order of magnitude smaller than expected by the power-counting estimates based on NDA. 
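As a numerical consistency check of Eq.~\eqref{Tdtheta} (a Python sketch added for illustration; the input $\kappa_1 \simeq 3.7$ for the isovector anomalous magnetic moment is an assumption supplied here, not quoted in the text), the coefficient follows from adding the TPE value \eqref{valueLO} and the $\bar g_0$ piece of the TDFF value \eqref{valueLO'}:

```python
kappa_1 = 3.7  # isovector anomalous magnetic moment (assumed input value)

# Eq. (valueLO): TPE coefficient of (g0bar h1 / Fpi^2) e fm^3
tpe = (0.8 * (1 + kappa_1) - 0.9) * 1e-2
# Eq. (valueLO'): g0bar piece of the TDFF contribution, same units
tdff = 3.5e-2

total = tpe + tdff
print(tpe, tdff, total)  # total is close to the 6.3e-2 of Eq. (Tdtheta)
```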
\section{Discussion and conclusion} \label{discussion} It is interesting to compare our results for the deuteron TQM in Eqs. \eqref{Tdtheta}, \eqref{TdchiISs}, and \eqref{TdqEDM} with the largest $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ moment of the deuteron, EDM or MQM, for the respective $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ sources. In Ref. \cite{Vri11b} power-counting estimates and LO results were given for the deuteron EDM, $d_d$, and MQM, ${\mathcal M}_d$, in chiral EFT with perturbative pion exchange. For sources that break chiral symmetry and generate non-derivative $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ pion-nucleon couplings in LO, $d_d$ and ${\mathcal M}_d$ are expected to be dominated by two-body effects and be enhanced with respect to the nucleon EDM. In the case of the QCD $\bar\theta$ term, ${\mathcal M}_d$ is expected to be the largest moment (in natural units), because at LO $\bar g_0$ does not contribute to $d_d$ (except through the nucleon EDM, Eq. \eqref{LeadingEDM}). On the other hand, the qCEDM and the FQLR, which generate also the isovector coupling $\bar g_1$ in LO, induce $d_d$ and ${\mathcal M}_d$ of the same size. For $\chi$ISs, $d_d$ and ${\mathcal M}_d$ are also expected to be of the same size, and of similar size as the nucleon EDM. The deuteron EDM is in fact expected to be well approximated by (twice) the isoscalar nucleon EDM, while the deuteron MQM, in the perturbative-pion counting, receives the largest contribution from the four-nucleon coupling $\bar C_0$. For all these sources, we compare the deuteron TQM to its MQM, given by \cite{Vri11b} \begin{equation}\label{MQMscale} \mathcal M_d =\frac{ e g_A \bar g_0}{m_{\pi}} \frac{1}{2\pi F^2_{\pi}} \left[ (1+\kappa_0) + \frac{\bar g_1}{3 \bar g_0} (1+\kappa_1)\right] \, \frac{1+\xi}{(1+2\xi)^2} + e (1+\kappa_0) \frac{\mu-\gamma}{2\pi}\bar C_0. 
\end{equation} We consider the dimensionless ratio $F_{\pi} \mathcal T_d/\mathcal M_d$, which, by power counting, is expected to be of order $h_1/F_{\pi}$. For the $\bar\theta$ term, qCEDM, and FQLR, $\bar C_0$ is subleading in Eq. \eqref{MQMscale}, so that from Eq. \eqref{Tdtheta} \begin{equation}\label{ratiog0} F_{\pi} \left| \frac{ \mathcal T_d }{ \mathcal M_d} \right|_{\bar\theta,\, \textrm{qCEDM},\, \textrm{FQLR}} \simeq 0.4 \left|\frac{\bar g_0}{ \bar g_0 + 1.8 \bar g_1}\right| \frac{|h_1|}{F_{\pi}}. \end{equation} For the $\bar\theta$ term, one can neglect $\bar g_1$ and the $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ couplings drop out of the ratio, which is approximately $|h_1|/F_{\pi}$, as expected by power counting. For the qCEDM the ratio in Eq. \eqref{ratiog0} depends on $|\bar g_0/\bar g_1|$, which by NDA is expected to be order one. For the FQLR, as discussed in Ref. \cite{deVries:2012ab}, $\bar g_0$ is somewhat suppressed with respect to $\bar g_1$, further suppressing the deuteron TQM with respect to the MQM. In the case of $\chi$ISs, $\bar C_0$ is expected to be the leading term in Eq. \eqref{MQMscale}; if we neglect the contribution of the nucleon EDM in Eq. \eqref{TdchiISs}, we get \begin{equation}\label{ratioC0} F_{\pi}\left| \frac{ \mathcal T_d}{ \mathcal M_d} \right|_{\textrm{$\chi$IS}} \simeq 0.2 \frac{|h_1|}{F_{\pi}}, \end{equation} which is also in good agreement with the NDA expectation. For the remaining dimension-six source, the qEDM, $d_d$ is also well approximated by the isoscalar nucleon EDM, \begin{equation} d_d = 2 \bar d_0, \end{equation} while ${\mathcal M}_d$ is suppressed by one power of $Q/M_{N\!N}$ with respect to the EDM \cite{Vri11b}. Therefore, for the qEDM we compare the deuteron TQM with its EDM using the dimensionless ratio $m_N F_{\pi} \mathcal T_d/d_d$. {}From Eq. 
\eqref{TdqEDM}, \begin{equation}\label{ratioEDM} m_N F_{\pi}\left| \frac{ \mathcal T_d}{d_d} \right|_{\textrm{qEDM}} \simeq 0.03 \left|\frac{\bar d_1}{\bar d_0}\right| \frac{|h_1|}{F_{\pi}} , \end{equation} which is a bit smaller than naively expected. Equations \eqref{ratiog0}, \eqref{ratioC0}, and \eqref{ratioEDM} make it explicit that the deuteron TQFF, in natural units, is suppressed roughly by a factor of $h_1/F_{\pi} \sim G_F M_{\mathrm{QCD}}^2/4\pi \sim 10^{-6}$ with respect to the largest $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ moment. The lack of any significant numerical enhancement thus leads to a very small TQFF. The bounds on $\bar g_0$, $\bar d_1$, and $h_1$ inferred in Sec. \ref{interactions} allow us to estimate the size of the TQM. For the QCD $\bar\theta$ term, the qCEDM, and the FQLR we find \begin{equation} | \mathcal T_d |_{\bar\theta,\, \textrm{qCEDM},\, \textrm{FQLR}} \lesssim 1.2 \cdot 10^{-19}\, e \, \textrm{fm}^3, \end{equation} while for the qEDM we find the even smaller value \begin{equation} |\mathcal T_d |_{\textrm{qEDM}} \lesssim 3.5 \cdot 10^{-21}\, e \, \textrm{fm}^3. \end{equation} For $\chi$ISs, one expects a similar value, but to be more precise a bound on $\bar C_0$ is needed. These estimates have been obtained in chiral EFT with perturbative pions. Iterating pions one can extend the regime of validity of the theory beyond $M_{N\!N}$ at the cost of much more complicated renormalization \cite{nogga}. Because the binding momentum of nucleons in the deuteron is $\gamma\ll M_{N\!N}$, we do not expect drastic changes in the quantities calculated here. In the case of the $\slash\hspace{-0.6em}P\slash\hspace{-0.4em}T$ moments used for comparison, this expectation has been checked \cite{liu} and shown to be reasonable.
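The size of these bounds can be reproduced with a short numerical estimate (a Python sketch added for illustration; the order-of-magnitude input $h_1/F_\pi \sim 10^{-6}$ is the one quoted above, so the check is only good to the quoted rounding):

```python
g0_over_Fpi = 2e-12     # bound on |g0bar|/Fpi from the neutron EDM
d1_bar = 2.9e-13        # bound on |d1bar|, in units of e fm
h1_over_Fpi = 1e-6      # h1/Fpi ~ G_F M_QCD^2 / (4 pi), order of magnitude

# Eq. (Tdtheta): |T_d| <~ 6.3e-2 |g0bar h1| / Fpi^2   (in e fm^3)
Td_theta = 6.3e-2 * g0_over_Fpi * h1_over_Fpi
# Eq. (TdqEDM): |T_d| <~ 1.3e-2 |d1bar h1|   (d1bar in e fm)
Td_qEDM = 1.3e-2 * d1_bar * h1_over_Fpi

print(Td_theta, Td_qEDM)  # ~1.3e-19 and ~3.8e-21 e fm^3, matching the
                          # quoted 1.2e-19 and 3.5e-21 up to rounding
```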
We conclude that the value of the deuteron TQM from parity violation in the SM and parity- and time-reversal violation due to the SM $\bar{\theta}$ term or dimension-six operators originating beyond the SM is, not surprisingly, tiny. Evidence for a nonzero value for the deuteron TQM that is larger than the ``background'' value $\sim 10^{-19}\, e \, \textrm{fm}^3$ would likely be due to new $P\slash\hspace{-0.4em}T$ interactions. \section*{Acknowledgements} U. van Kolck acknowledges the hospitality of the KVI Groningen on many occasions. This research was supported by the Dutch Stichting FOM under programs 104 and 114 (JdV, RGET) and in part by the DFG and the NSFC through funds provided to the Sino-German CRC 110 ``Symmetries and the Emergence of Structure in QCD'' (JdV), by the US DOE under contract DE-AC02-05CH11231 with the Director, Office of Science, Office of High Energy Physics (EM), and under grant DE-FG02-04ER41338 (UvK), and by the Universit\'e Paris Sud under the program Attractivit\'e 2013 (UvK). \section*{Appendices}
\section{Sequential Fully Dynamic Triangle Counting of~\cite{KNNOZ19}}\label{app:triangle} Here, we present the sequential fully dynamic triangle counting algorithm of Kara et al.~\cite{KNNOZ19} that operates in $O(m)$ space, $O(\sqrt{m})$ amortized work per edge update, and $O(m^{3/2})$ work for preprocessing. This algorithm returns the exact count of the number of triangles in an undirected graph under both edge insertions and deletions. Kara et al.~\cite{KNNOZ19} present their algorithm for directed $3$-cycles using relational database terminology (where each edge in the triangle may be drawn from a different relation), but we simplify their algorithm for the case of undirected graphs. Kara et al.~\cite{KNNOZ19} prove the following theorem. \begin{theorem}[Fully Dynamic Triangle Counting~\cite{KNNOZ19}]\label{thm:knnoz19} There exists a sequential algorithm to count the number of triangles in an undirected graph $G = (V, E)$ using $O(m^{3/2})$ preprocessing work that can handle an edge update in $O(\sqrt{m})$ amortized work and $O(m)$ space. \end{theorem} We now explain the fully dynamic triangle counting algorithm of~\cite{KNNOZ19} in greater detail. Given a graph $G= (V, E)$ with $n = |V|$ vertices and $m = |E|$ edges, we initialize the following variables: $M = 2m+1$, $t_1 = \sqrt{M}/2$, and $t_2 = 3\sqrt{M}/2$. We define a vertex to be \defn{low-degree} if its degree is at most $t_1$ and \defn{high-degree} if its degree is at least $t_2$. Vertices with degree in between $t_1$ and $t_2$ can be classified either way. Let $C$ be the current count of the number of triangles in the graph. We compute the initial count of the number of triangles in the input graph $G$ using a static triangle counting algorithm~\cite{IR77} in $O(m^{3/2})$ work and $O(m)$ space. Thus, we immediately have a preprocessing work of $O(m^{3/2})$. We create four data structures $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$. 
$\mathcal{HH}$ stores all of the edges $(u, v)$ where both $u$ and $v$ are high-degree, $\mathcal{HL}$ stores edges $(u,v)$, where $u$ is high-degree and $v$ is low-degree, $\mathcal{LH}$ stores the edges $(u, v)$ where $u$ is low-degree and $v$ is high-degree, and $\mathcal{LL}$ stores edges where both $u$ and $v$ are low-degree. With our data structures, the following operations are supported: \begin{enumerate} \item Given a vertex $v$, determine whether it is low-degree or high-degree in $O(1)$ work. \item Given an edge $(u, v)$, check if it is in $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, or $\mathcal{LL}$ in $O(1)$ work. \item Given a vertex $v$, return all neighbors of $v$ in $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ in $O(\deg(v))$ work. \item Given an edge $(v, w)$ to insert or delete, update $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, or $\mathcal{LL}$ in $O(1)$ work. \end{enumerate} We can implement $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ to support these operations by using a two-level hash table for each of these structures and an additional array $\mathcal{D}$. $\mathcal{D}$ is a dynamic hash table containing a key for each vertex that has non-zero degree and stores the degree of the vertex as the value. The data structures support insertions and deletions in $O(1)$ work. $\mathcal{D}$ can be initialized in $O(m)$ work by scanning over all vertices and computing their degree. $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ can be initialized in $O(m)$ work by scanning over all edges and inserting them into the right table based on the degrees of their endpoints. We maintain one additional data structure $\mathcal{T}$ that counts the number of wedges $(u, w, v)$, where $u$ and $v$ are high-degree vertices and $w$ is a low-degree vertex. 
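The threshold computation and the choice of table for a directed edge copy can be sketched as follows (a minimal Python sketch with hypothetical names; recall that vertices with degree between $t_1$ and $t_2$ may be classified either way, and this sketch treats anything above $t_1$ as high-degree, which is one valid choice):

```python
import math

def thresholds(m):
    # M = 2m + 1, t1 = sqrt(M)/2, t2 = 3*sqrt(M)/2, as in preprocessing
    M = 2 * m + 1
    return M, math.sqrt(M) / 2, 3 * math.sqrt(M) / 2

def table_for(u, v, deg, t1):
    # which of HH/HL/LH/LL stores the directed copy (u, v)
    side = lambda x: 'H' if deg.get(x, 0) > t1 else 'L'
    return side(u) + side(v)

M, t1, t2 = thresholds(8)            # a graph with m = 8 edges
deg = {'a': 5, 'b': 1, 'c': 3}       # a hypothetical degree table D
print(table_for('a', 'b', deg, t1))  # deg(a) > t1 >= deg(b), so 'HL'
```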
$\mathcal{T}$ has the property that, given an edge insertion or deletion $(u, v)$ where both $u$ and $v$ are high-degree vertices, it returns in $O(1)$ work the number of wedges $(u, w, v)$, with $w$ low-degree, that $u$ and $v$ are part of. We can implement this via a hash table indexed by pairs of high-degree vertices that stores the number of wedges for each pair. $\mathcal{T}$ can be initialized in $O(m^{3/2})$ work by iterating over all edges $(u, w)$ in $\mathcal{HL}$ and then, for each $w$, iterating over all edges $(w, v)$ in $\mathcal{LH}$ to determine whether $v$ is high-degree, and, if so, incrementing $\mathcal{T}(u, v)$ by $1$. There are $O(m)$ edges $(u,w)$ in $\mathcal{HL}$, and for each $w$ there are at most $O(\sqrt{m})$ edges $(w,v)$ in $\mathcal{LH}$ since $w$ is low-degree. Each lookup and increment takes $O(1)$ work, giving an overall work of $O(m^{3/2})$. \ifCameraReady \subsection{Update Procedure~\cite{KNNOZ19}}\label{app:triangle-updates}~ \fi \ifFull \subsection{Update Procedure~\cite{KNNOZ19}}\label{app:triangle-updates} \fi The procedure for handling single edge updates in the sequential setting given by~\cite{KNNOZ19} is as follows: For an edge insertion (resp.\ deletion) $(u, v)$, we first find the degree of $u$ and $v$ in $\mathcal{D}$ and then look up the edge in their respective tables $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, or $\mathcal{LL}$. If the edge already exists (resp.\ does not exist) in the table, nothing else is done. Otherwise, we need to find all tuples $(u, w, v)$ such that $(u, w)$ and $(w, v)$ already exist in the graph, because for each such tuple, a new triangle will be formed (resp.\ an existing triangle will be deleted). We first update the triangle count, and then we update the data structures.
For updating the triangle count $C$, there are $4$ different cases for such tuples, and so we check each of the following cases: \begin{enumerate} \item \textbf{$(u, w)$ is in $\mathcal{HH}$ and $(w, v)$ is in $\mathcal{H} y$ where $y \in \{\mathcal{H}, \mathcal{L}\}$}: We extract all high-degree neighbors of $u$ in $\mathcal{HH}$. Given that the degree of all high-degree vertices is $\Omega(\sqrt{m})$, there are at most $O(\sqrt{m})$ such vertices. For each of these neighbors $w$, we can check in $O(1)$ work whether $(w, v)$ exists in $\mathcal{H} y$. This takes $O(\sqrt{m})$ work. \item \textbf{$(u, w)$ is in $\mathcal{HL}$ and $(w,v)$ is in $\mathcal{L} y$ where $y \in \{\mathcal{H}, \mathcal{L}\}$}: If $v$ is high-degree ($y = \mathcal{H}$), we perform an $O(1)$ work lookup in $\mathcal{T}$ for the count of the number of wedges $(u, w, v)$. If $v$ is low-degree ($y = \mathcal{L}$), we instead scan through the neighbors of $v$ in $\mathcal{LL}$ and check for each such neighbor $w$ whether $(u, w)$ exists in $\mathcal{HL}$; this takes $O(\sqrt{m})$ work since $v$ is low-degree. \item \textbf{$(u, w)$ is in $\mathcal{LH}$ and $(w, v)$ is in $\mathcal{H} y$ where $y \in \{\mathcal{H}, \mathcal{L}\}$}: Scan through the neighbors of $u$ in $\mathcal{LH}$. For each neighbor $w$ of $u$, check whether $(w, v)$ exists in $\mathcal{H} y$. This takes $O(\sqrt{m})$ work since $u$ is low-degree. \item \textbf{$(u, w)$ is in $\mathcal{LL}$ and $(w, v)$ is in $\mathcal{L} y$ where $y \in \{\mathcal{L}, \mathcal{H}\}$}: Again, scan through the neighbors of $u$, this time in $\mathcal{LL}$. For each neighbor $w$ of $u$, check whether $(w, v)$ exists in $\mathcal{L} y$. This takes $O(\sqrt{m})$ work since $u$ is low-degree. \end{enumerate} After updating the triangle count, we proceed with updating the data structures with the edge insertion (resp.\ deletion). We first update $\mathcal{T}$ given an edge insertion (resp.\ deletion) $(u, v)$ as follows: \begin{enumerate} \item If $u$ is high-degree and $v$ is low-degree, then we find all of $v$'s neighbors in $\mathcal{LH}$ and for each such neighbor $x$, we increment (resp.\ decrement) the entry $\mathcal{T}(u, x)$ by $1$.
It takes $O(\sqrt{m})$ work to perform this update since $v$ is low-degree. \item If $u$ is low-degree and $v$ is high-degree, then we scan through all vertices in $\mathcal{HL}$ and for each vertex $x$ in $\mathcal{HL}$ that has $u$ as a neighbor, we increment (resp.\ decrement) $\mathcal{T}(x, v)$ by $1$. This takes $O(\sqrt{m})$ work since there are at most $O(\sqrt{m})$ high-degree vertices. \end{enumerate} In addition to the updates to $\mathcal{T}$, we also insert (resp.\ delete) $(u, v)$ into $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, or $\mathcal{LL}$ depending on the degrees of $u$ and $v$, and update $\mathcal{D}$. For a given edge insertion (resp.\ deletion) $(u, v)$, we first determine whether $u$ and $v$ are low-degree or high-degree by looking up $u$ and $v$ in $\mathcal{D}$ in $O(1)$ work. $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ are constructed as two-level hash tables: the first level is keyed by the first vertex of the edge tuple and stores a pointer to a second-level hash table containing the neighbors of that particular vertex. If $u$ is high-degree, then the edge is inserted (resp.\ deleted) into $\mathcal{HH}$ or $\mathcal{HL}$ (depending on whether $v$ is high or low-degree) using $u$ as the key and adding $v$ to the second-level hash table. Similarly, if $u$ is low-degree, $(u, v)$ is inserted (resp.\ deleted) into $\mathcal{LH}$ or $\mathcal{LL}$. Furthermore, $(v, u)$ is also inserted into its respective table depending on whether $v$ is low or high-degree. The entries for $u$ and $v$ in $\mathcal{D}$ are then incremented (resp.\ decremented). The updates to these data structures take $O(1)$ work. We also have to handle the cases where the degree classification of a vertex has changed, or where the number of edges has changed so much that the values of $M$, $t_1$, and $t_2$ need to be updated. This is described in the next section. 
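The case analysis for the count update and the wedge-table maintenance above reduce to the following sketch for a single edge insertion (deletions are symmetric, decrementing instead). The set \texttt{high} of high-degree vertices, the neighbor map \texttt{adj}, and the dictionary representation of $\mathcal{T}$ are assumptions of this sketch.

```python
def delta_triangles(u, v, high, adj, T):
    # Number of triangles completed by inserting edge (u, v).
    # high: set of high-degree vertices; adj: vertex -> neighbor set;
    # T[(x, y)]: number of wedges (x, w, y) with a low-degree center w.
    if u in high and v in high:
        # Cases 1-2: scan u's O(sqrt(m)) high-degree neighbors, then add
        # the low-degree-center wedge count from T in O(1).
        count = sum(1 for w in adj[u] if w in high and v in adj[w])
        return count + T.get((u, v), 0)
    # Cases 3-4: at least one endpoint is low-degree; scanning its
    # O(sqrt(m)) neighbors finds every triangle through the new edge.
    x, y = (u, v) if u not in high else (v, u)
    return sum(1 for w in adj[x] if y in adj[w])

def update_T_on_insert(u, v, high, adj, T):
    # Wedge-table maintenance: only mixed-degree edges change T. With u
    # low-degree and v high-degree, the new edge creates one wedge
    # (v, u, w) for every high-degree neighbor w of u; both ordered
    # pairs are stored so that lookups are symmetric.
    if (u in high) == (v in high):
        return                 # both low or both high: no entry changes
    if u in high:              # normalize so u is the low-degree endpoint
        u, v = v, u
    for w in adj[u]:
        if w != v and w in high:
            T[(v, w)] = T.get((v, w), 0) + 1
            T[(w, v)] = T.get((w, v), 0) + 1
```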
\ifCameraReady \subsection{Rebalancing~\cite{KNNOZ19}}\label{sec:rebalancing}~ \fi \ifFull \subsection{Rebalancing~\cite{KNNOZ19}}\label{sec:rebalancing} \fi We now describe the rebalancing procedure given in~\cite{KNNOZ19} when a low-degree vertex becomes a high-degree vertex (or vice versa) and when too many updates have been applied (and all the data structures must be changed according to the new values of $M$, $t_1$, and $t_2$). \myparagraph{Minor rebalancing} This type of rebalancing occurs if a vertex which was previously high-degree has its degree fall below $t_1$ or if a vertex that was previously low-degree has its degree increase above $t_2$. In the first case, we move the vertex and all its edges from $\mathcal{HH}$ to $\mathcal{HL}$, and from $\mathcal{LH}$ to $\mathcal{LL}$. In the second case, we move the vertex and all its edges from $\mathcal{HL}$ to $\mathcal{HH}$, and from $\mathcal{LL}$ to $\mathcal{LH}$. Since our data structures support additions and deletions of an edge in $O(1)$ work, and since the degree of the vertex is $\Theta(\sqrt{m})$ at this point, we perform $\Theta(\sqrt{m})$ edge updates. We showed in Section~\ref{app:triangle-updates} that each update takes $O(\sqrt{m})$ work, so a minor rebalancing takes $O(m)$ work overall. However, $\Omega(\sqrt{m})$ updates must have occurred on this vertex before we have to perform minor rebalancing since $t_2-t_1 = \Theta(\sqrt{m})$, and so we can amortize this cost over the $\Omega(\sqrt{m})$ updates, resulting in $O(\sqrt{m})$ amortized work per update. \myparagraph{Major rebalancing} A major rebalancing occurs when $m$, the number of edges in the graph, falls outside the range $[M/4, M]$. We simply reinitialize the data structures as in the original algorithm. Major rebalancing can only occur after $\Omega(M)$ updates, and so we can afford to re-initialize our data structure and recompute the triangle count from scratch using an $O(m^{3/2})$ work triangle counting algorithm. 
The amortized work of major rebalancing over $\Omega(m)$ updates is then $O(\sqrt{m})$. \section{Dynamic $k$-Clique Counting via Fast Static Parallel Algorithms} \label{sec:arboricityclique} In this section, we present a very simple algorithm for dynamically maintaining the number of $k$-cliques for $k > 3$ based on statically enumerating a number of smaller cliques in the graph, and intersecting the enumerated cliques with the edge updates in the input batch. Importantly, the algorithm is space-efficient, and only relies on simple primitives such as enumeration of cliques of size smaller than $k$, for which there are highly efficient algorithms both in theory and practice. \myparagraph{Fast Static Parallel $k$-Clique Enumeration} The main tool used by our algorithm is the following theorem, which is presented in concurrent and independent work~\cite{shi2020parallel}: \begin{theorem}[Theorem 4.2 of~\cite{shi2020parallel}]\label{thm:static_parallel_enumerate} There is a parallel algorithm that, given a graph $G$, can enumerate all $k$-cliques in $G$ in $O(m\alpha^{k-2})$ expected work and $O(\log^{k-2} n)$ depth w.h.p., using $O(m)$ space. \end{theorem} Theorem~\ref{thm:static_parallel_enumerate} is proven by modifying the Chiba-Nishizeki (CN) algorithm in the parallel setting, and combining the CN algorithm with parallel low-outdegree orientation algorithms~\cite{barenboim2010, Goodrich11}. \myparagraph{A Dynamic $k$-Clique Counting Algorithm} Given Theorem~\ref{thm:static_parallel_enumerate}, one approach to maintain the number of $k$-cliques in $G$ upon receiving a batch of insertions or deletions $\mathcal{B}$ is to have each edge $e$ in the batch simply enumerate all $(k-2)$-cliques, check whether $e$ forms a $k$-clique with any of these $(k-2)$-cliques, and update the clique counts based on the newly discovered (or deleted) cliques. Algorithm~\ref{alg:arboricity_dynamic_count} presents a formalized version of this idea. 
The algorithm first removes all nullifying updates from $\mathcal{B}$. It then checks whether the batch is large ($\Delta \geq m$), and if so simply recomputes the overall $k$-clique count by re-running the static enumeration algorithm. Otherwise, the algorithm inserts the edge insertions in the batch into $G$, and stores the batch in a static parallel hash table $\mathcal{H}$ that maps each edge in the batch to a value indicating whether the edge is an insertion or deletion in $\mathcal{B}$. \begin{mdframedalg}{Dynamic $k$-Clique Counting}\label{alg:arboricity_dynamic_count} \begin{algorithmic}[1] \Function{$k$-Clique-Count}{$G=(V,E), \mathcal{B}$} \State Let $N$ be the current count of cliques before processing the current batch. \State Remove nullifying updates from $\mathcal{B}$. \If{$\Delta \geq m$} \State Rerun the static $k$-clique counting algorithm. \Else \State Insert all updates that are edge insertions in $\mathcal{B}$ into $G$. \State Let $\mathcal{H}$ be a static parallel hash table representing $\mathcal{B}$. \ParFor {$e=\{u,v\} \in \mathcal{B}$} \State Enumerate all $(k-2)$-cliques in $G$ in parallel using the algorithm from Theorem~\ref{thm:static_parallel_enumerate}. \ParFor{each enumerated $(k-2)$-clique, $C$} \If {$C$ forms a newly inserted or newly deleted $k$-clique with $e$\label{line:checknewclique}} \If{$e=(u,v)$ is the lexicographically-first edge in $C$ in the batch} \State Atomically update the $k$-clique count for $C \cup \{u,v\}$: $N \leftarrow N \pm 1$ (increment for a newly inserted clique, decrement for a newly deleted one). \EndIf \EndIf \EndParFor \EndParFor \State Delete all updates that are edge deletions in $\mathcal{B}$ from $G$. \EndIf \EndFunction \end{algorithmic} \end{mdframedalg} Then, in parallel, for each edge $e=(u,v)$ in the batch, it enumerates all $(k-2)$-cliques in the graph. For each $(k-2)$-clique, $C$, the algorithm checks whether this clique forms a newly inserted or newly deleted $k$-clique with $e$. 
A newly inserted $k$-clique is one where at least one edge is an edge insertion in $\mathcal{B}$ and all other edges are not deleted in $\mathcal{B}$. Similarly, a newly deleted $k$-clique is one where at least one edge is an edge deletion in $\mathcal{B}$ and all other edges are not edge insertions in $\mathcal{B}$. This step is done by querying the static parallel hash table $\mathcal{H}$ for each edge in the clique to check whether it is an insertion or deletion in $\mathcal{B}$. Cliques consisting of a mix of edge insertions and deletions are cliques that were not present before the batch and will not be present after the batch, and are thus ignored. For a newly inserted or deleted clique, the algorithm then checks whether $e$ is the {\em lexicographically-first edge in the batch} inside the clique formed by $C \cup \{u,v\}$ (otherwise, a different edge update from the batch will find and handle the processing of this clique).\footnote{An edge $e=(u,v)$ is the lexicographically first edge in the batch in a clique $C$ if for every edge $e' = (u',v') \in C$ with $e' \neq e$ and $(u',v') \in \mathcal{B}$, $e$ is lexicographically smaller than $e'$. Note that we are working over an undirected graph without self-loops. By convention, when discussing lexicographic comparison, we have that for any $e=(u,v)$ that $u < v$; in other words, the order in the tuple representing the edge is based on the lexicographical order of the two endpoints.} Checking whether $e$ is the lexicographically-first edge in a clique $C$ is done by querying the static parallel hash table $\mathcal{H}$. For each clique where $e$ is the lexicographically-first edge in the batch in the clique, we atomically either increment or decrement the count, based on whether this clique is newly inserted or newly deleted. After the clique count has been updated, the algorithm updates $G$ by performing the edge deletions from $\mathcal{B}$. 
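The per-clique checks just described can be sketched as follows. The encoding of $\mathcal{H}$ as a dictionary mapping a normalized edge to an insert/delete flag, and the returned \texttt{(sign, first)} pair, are assumptions of this sketch.

```python
from itertools import combinations

INSERT, DELETE = 1, 2

def classify_clique(vertices, H):
    # Classify the k-clique on `vertices` against the batch hash table H,
    # which maps a normalized edge (x, y) with x < y to INSERT or DELETE.
    # Returns (sign, first): sign is +1 for a newly inserted clique, -1 for
    # a newly deleted one, and 0 for a mixed or unaffected clique; `first`
    # is the lexicographically-first batch edge in the clique (or None).
    batch_edges = [e for e in combinations(sorted(vertices), 2) if e in H]
    kinds = {H[e] for e in batch_edges}
    if kinds == {INSERT}:
        return +1, min(batch_edges)
    if kinds == {DELETE}:
        return -1, min(batch_edges)
    return 0, None   # no batch edges, or a mix of insertions and deletions
```

An update $e$ then adjusts the count for an enumerated clique only when the sign is nonzero and $e$ equals the lexicographically-first batch edge, so each new or deleted clique is counted exactly once.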
We note that we could just as well enumerate all of the $(k-2)$-cliques a single time, and then for each $(k-2)$-clique we discover, check whether it forms a $k$-clique with each edge in the batch. A practical optimization of this idea may index the batch edges by their endpoints, so that a discovered $(k-2)$-clique only needs to check updates incident to its vertices. The asymptotic complexity of both ideas---joining cliques with edges, instead of edges with cliques, and pruning edges from the batch to consider---is the same in the worst case. \myparagraph{Correctness and Bounds} If a $k$-clique in the graph is not incident to any edges in the batch, then its count is unaffected (since we only perform modifications to the count for cliques containing edges in $\mathcal{B}$). For cliques incident to edges in $\mathcal{B}$, we consider two cases. If the clique $C$ is deleted after applying $\mathcal{B}$, observe that by decomposing $C$ into a $(k-2)$-clique and the lexicographically-first batch edge $e$ in $C$, $C$ will be found and counted by $e$. The argument that a newly inserted clique, $C$, will be found is similar. Lastly, cliques consisting of both edge insertions and deletions in $\mathcal{B}$ will be correctly ignored by the check on Line~\ref{line:checknewclique}. In other words, we check in parallel whether any enumerated $k$-clique $C \cup \left\{u, v\right\}$ contains both an edge deletion and an edge insertion (by checking in the hash table representing $\mathcal{B}$); if so, the $k$-clique composed of $C \cup \left\{u, v\right\}$ is not counted. This argument proves the following theorem: \begin{theorem}\label{thm:arboricity_dynamic_count_correct} Algorithm~\ref{alg:arboricity_dynamic_count} correctly maintains the number of $k$-cliques in the graph. 
\end{theorem} \begin{theorem}\label{thm:arboricity_dynamic_count_bound} Given a collection of $\Delta$ updates, there is a batch-dynamic $k$-clique counting algorithm that updates the $k$-clique counts running in $O(\Delta(m+\Delta)\alpha^{k-4})$ expected work and $O(\log^{k-2} n)$ depth w.h.p., using $O(m + \Delta)$ space. \end{theorem} \begin{proof} We analyze Algorithm~\ref{alg:arboricity_dynamic_count}. First, updating the graph, assuming that the edges incident to each vertex are represented sparsely using a parallel hash table, requires $O(\Delta)$ work and $O(\log^{*} n)$ depth w.h.p. If $\Delta \geq m$, the algorithm calls the static $k$-clique counting algorithm, which takes $O((m + \Delta)\alpha^{k-2})$ expected work. Since $m = O(\Delta)$ and $\alpha^{2} = O(m + \Delta)$, the work of calling the static algorithm is upper-bounded by $O(\Delta(m+\Delta) \alpha^{k-4})$ as required. Finally, the depth bound is $O(\log^{k-2} n)$ w.h.p.\ as required. Otherwise, $\Delta < m$. Then, the algorithm first inserts and marks the batch in the graph. It also stores the edges in the batch in a parallel hash table. Creating the parallel hash table takes $O(\Delta)$ work and $O(\log^{*} n)$ depth w.h.p., which are both subsumed by the overall work and depth for the relevant setting of $k > 2$. For each update, we list all $(k-2)$-cliques using the algorithm from Theorem~\ref{thm:static_parallel_enumerate}. This step can be done in $O((m+\Delta)\alpha^{k-4})$ expected work and $O(\log^{k-4} n)$ depth w.h.p. If the $(k-2)$-clique $C$ forms a $k$-clique with $e$, then the cost of checking whether the clique is newly inserted or newly deleted using $\mathcal{H}$ costs $O(k)$ work, which is a constant, and $O(1)$ depth. The cost of checking whether $e$ is the lexicographically first edge in $\mathcal{B}$ is also constant. Multiplying the cost of enumeration by the number of edges in the batch completes the proof. 
\end{proof} Our batch-dynamic algorithm outperforms re-computation using the static parallel $k$-clique counting algorithm for $\Delta = o(\alpha^{2})$. It is an interesting open question whether our dependence on $m$ could be entirely removed from the update bound. Existing work has provided efficient sequential dynamic algorithms maintaining the $k$-clique count in $\tilde{O}(\alpha^{k-2})$ work per update using dynamic low out-degree orientations~\cite{Dvorak2013}. It would be interesting to understand whether such an algorithm can be work-efficiently parallelized in the parallel batch-dynamic setting, which would allow the dynamic algorithm to match the work of static parallel recomputation up to logarithmic factors. \section{Parallel Batch-Dynamic Triangle Counting}\label{sec:batch-triangle} \ifCameraReady In this section, we present our \fi \ifFull We now present our \fi parallel \batchdynamic{} triangle counting algorithm, which is based on the sequential dynamic algorithm of Kara et al.~\cite{KNNOZ19} that uses $O(m)$ space and $O(\sqrt{m})$ amortized work per update. Theorem~\ref{thm:linear-space-update} summarizes the guarantees of our algorithm. \begin{theorem}\label{thm:linear-space-update} There exists a parallel \batchdynamic{} triangle counting algorithm that requires $O(\Delta\sqrt{\Delta+m})$ amortized work and $O(\log^*(\batch + m))$ depth with high probability, and $O(\Delta+m)$ space for a batch of $\Delta$ edge updates. \end{theorem} Our algorithm is work-efficient and achieves a significantly lower depth for a batch of updates than applying the updates one at a time using the sequential algorithm of~\cite{KNNOZ19}. 
We provide a detailed description of the fully dynamic sequential algorithm of~\cite{KNNOZ19} \ifFull in Appendix~\ref{app:triangle} \fi \ifCameraReady in the full version of our paper~\cite{fullversion} \fi for reference,\footnote{Kara et al.~\cite{KNNOZ19} described their algorithm for counting directed 3-cycles in relational databases, where each triangle edge is drawn from a different relation, and we simplified it for the case of undirected graphs.} and a brief high-level overview of that algorithm in this section. \ifCameraReady \subsection{Sequential Algorithm Overview}\label{sec:seq}~ \fi \ifFull \subsection{Sequential Algorithm Overview}\label{sec:seq} \fi Given a graph $G= (V, E)$ with $n = |V|$ vertices and $m = |E|$ edges, let $M = 2m+1$, $t_1 = \sqrt{M}/2$, and $t_2 = 3\sqrt{M}/2$. We classify a vertex as \defn{low-degree} if its degree is at most $t_1$ and \defn{high-degree} if its degree is at least $t_2$. Vertices with degree in between $t_1$ and $t_2$ can be classified either way. \myparagraph{Data Structures} The algorithm partitions the edges into four edge-stores $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ based on a degree-based partitioning of the vertices. $\mathcal{HH}$ stores all of the edges $(u, v)$, where both $u$ and $v$ are high-degree. $\mathcal{HL}$ stores edges $(u,v)$, where $u$ is high-degree and $v$ is low-degree. $\mathcal{LH}$ stores the edges $(u, v)$, where $u$ is low-degree and $v$ is high-degree. Finally, $\mathcal{LL}$ stores edges $(u,v)$, where both $u$ and $v$ are low-degree. The algorithm also maintains a wedge-store $\mathcal{T}$ (a wedge is a triple of distinct vertices $(x,y,z)$ where both $(x, y)$ and $(y, z)$ are edges in $E$). For each pair of high-degree vertices $u$ and $v$, the wedge-store $\mathcal{T}$ stores the number of wedges $(u, w, v)$, where $w$ is a low-degree vertex. 
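The degree thresholds and the edge-store selection above can be made concrete with the following sketch. The classification map (here called \texttt{is\_high}) is an assumption of this sketch; because a vertex with degree between $t_1$ and $t_2$ may be classified either way, the label is sticky until the degree crosses a threshold.

```python
import math

def thresholds(m):
    # M = 2m + 1, t1 = sqrt(M)/2, t2 = 3*sqrt(M)/2, as defined above.
    M = 2 * m + 1
    return M, math.sqrt(M) / 2, 3 * math.sqrt(M) / 2

def store_for(u, v, is_high):
    # Choose the edge-store for the directed copy (u, v) of an edge, given
    # the current (sticky) classification of each vertex. Both directed
    # copies of an undirected edge are stored, in possibly different tables.
    return ('H' if is_high[u] else 'L') + ('H' if is_high[v] else 'L')
```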
$\mathcal{T}$ has the property that given an edge insertion (resp.\ deletion) $(u, v)$ where both $u$ and $v$ are high-degree vertices, it returns the number of wedges $(u, w, v)$, where $w$ is low-degree, that $u$ and $v$ are part of in $O(1)$ expected time. $\mathcal{T}$ is implemented via a hash table indexed by pairs of high-degree vertices that stores the number of wedges for each pair. Finally, we have an array containing the degrees of each vertex, $\mathcal{D}$. \myparagraph{Initialization} Given a graph with $m$ edges, the algorithm first initializes the triangle count $C$ using a static triangle counting algorithm in $O(\alpha m)=O(m^{3/2})$ work and $O(m)$ space~\cite{Latapy2008}. The $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ tables are created by scanning all edges in the input graph and inserting them into the appropriate hash tables. $\mathcal{T}$ can be initialized by iterating over edges $(u,w)$ in $\mathcal{HL}$ and for each $w$, iterating over all edges $(w,v)$ in $\mathcal{LH}$ to find pairs of high-degree vertices $u$ and $v$, and then incrementing $\mathcal{T}(u,v)$. \myparagraph{The Kara et al. Algorithm~\cite{KNNOZ19}} Given an edge insertion $(u, v)$ (deletions are handled similarly, and for simplicity assume that the edge does not already exist in $G$), the update algorithm must identify all tuples $(u,w,v)$ where $(u,w)$ and $(v,w)$ already exist in $G$, since such triples correspond to new triangles formed by the edge insertion. The algorithm proceeds by considering how a triangle's edges can reside in the data structures. For example, if all of $u$, $v$, and $w$ are high-degree, then the algorithm will enumerate these triangles by checking $\mathcal{HH}$ and finding all neighbors $w$ of $u$ that are also high-degree (there are at most $O(\sqrt{m})$ such neighbors), checking if the $(v,w)$ edge exists in constant time. 
On the other hand, if $u$ is low-degree, then checking its $O(\sqrt{m})$ many neighbors suffices to enumerate all new triangles. The interesting case is if both $u$ and $v$ are high-degree, but $w$ is low-degree, since there can be many more than $O(\sqrt{m})$ such $w$'s. This case is handled using $\mathcal{T}$, which stores, for a given pair of high-degree vertices $u$ and $v$, the number of vertices $w$ such that $(w,u)$ and $(w,v)$ both exist in $\mathcal{LH}$. Finally, the algorithm updates the data structures, first inserting the new edge into the appropriate edge-store. The algorithm updates $\mathcal{T}$ as follows. If $u$ and $v$ are both low-degree or both high-degree, then no update is needed to $\mathcal{T}$. Otherwise, without loss of generality suppose $u$ is low-degree and $v$ is high-degree. Then, the algorithm enumerates all high-degree vertices $w$ that are neighbors of $u$ and increments the entry $\mathcal{T}(v,w)$. \ifCameraReady \subsection{Parallel Batch-Dynamic Update Algorithm}\label{sec:update-alg}~ \fi \ifFull \subsection{Parallel Batch-Dynamic Update Algorithm}\label{sec:update-alg} \fi We present a high-level overview of our parallel algorithm in this section, and a more detailed description in Section~\ref{sec:triangle-full-alg}. We consider batches of $\Delta$ edge insertions and/or deletions. Let $\mathtt{insert}(u, v)$ represent the update corresponding to inserting an edge between vertices $u$ and $v$, and $\mathtt{delete}(u, v)$ represent deleting the edge between $u$ and $v$. We first preprocess the batch to account for updates that \emph{nullify} each other. For example, an $\mathtt{insert}(u, v)$ update followed chronologically by a $\mathtt{delete}(u, v)$ update nullify each other because the $(u, v)$ edge that is inserted is immediately deleted, resulting in no change to the graph. 
To process a batch containing nullifying updates, we observe that for any pair of vertices, the only update that is not nullified is the chronologically last update in the batch on that edge. Since all updates contain a timestamp, we first group all updates on the same edge by hashing each update by the edge that it is performed on. Then, we run the parallel maximum-finding algorithm given in~\cite{Vishkin08} on the set of updates for each edge in parallel. The maximum-finding algorithm returns the update with the largest timestamp (the most recent update) from the set of updates for each edge. The set of returned updates then forms a batch of non-nullifying updates. Before we go into the details of our parallel batch-dynamic triangle counting algorithm, we first describe some challenges that must be solved to adapt the algorithm of Kara et al.~\cite{KNNOZ19} to the parallel batch-dynamic setting. \myparagraph{Challenges} Because Kara et al.~\cite{KNNOZ19} only consider one update at a time in their algorithm, they do not deal with cases where a set of two or more updates creates a new triangle. Since, in our setting, we must account for batches of multiple updates, we encounter the following set of challenges: \begin{enumerate}[label=(\textbf{\arabic*}),topsep=1pt,itemsep=0pt,parsep=0pt,leftmargin=15pt] \item We must be able to efficiently find new triangles that are created via two or more edge insertions. \item We must be able to handle insertions and deletions simultaneously, meaning that a triangle with one inserted edge and one deleted edge should not be counted as a new triangle. \item We must account for over-counting of triangles due to multiple updates occurring simultaneously. 
\end{enumerate} For the rest of this section, we assume that $\Delta \leq m$, as otherwise we can re-initialize our data structure using the static parallel triangle-counting algorithm~\cite{ShunT2015}\footnote{The hashing-based version of the algorithm given in~\cite{ShunT2015} can be modified to obtain the stated bounds by omitting the ranking step, using the $O(\log^* n)$ depth w.h.p.\ parallel hash table, and using atomic-add.} to get the count in $O(\Delta^{3/2})$ work, $O(\log^* \Delta)$ depth, and $O(\Delta)$ space (assuming atomic-add), which is within the bounds of Theorem~\ref{thm:linear-space-update}. \myparagraph{Parallel Initialization} Given a graph with $m$ edges, we initialize the triangle count $C$ using a static parallel triangle counting algorithm in $O(\alpha m)=O(m^{3/2})$ work, $O(\log^* m)$ depth, and $O(m)$ space~\cite{ShunT2015}, using atomic-add. We initialize $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ by scanning the edges in parallel and inserting them into the appropriate parallel hash tables. We initialize the degree array $\mathcal{D}$ by scanning the vertices. Both steps take $O(m)$ work and $O(\log^*m)$ depth w.h.p. $\mathcal{T}$ can be initialized by iterating over edges $(u,w)$ in $\mathcal{HL}$ in parallel and for each $w$, iterating over all edges $(w,v)$ in $\mathcal{LH}$ in parallel to find pairs of high-degree vertices $u$ and $v$, and then incrementing $\mathcal{T}(u,v)$. The number of entries in $\mathcal{HL}$ is $O(m)$ and each $w$ has $O(\sqrt{m})$ neighbors in $\mathcal{LH}$, giving a total of $O(m^{3/2})$ work and $O(\log^* m)$ depth w.h.p.\ for the hash table insertions. The amortized work per edge update is $O(\sqrt{m})$. 
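The batch preprocessing described earlier, which keeps only the chronologically last update on each edge, can be sketched sequentially as follows. The paper computes each per-edge maximum with a parallel maximum-finding algorithm; the \texttt{(timestamp, op, u, v)} tuple layout is an assumption of this sketch.

```python
def remove_nullifying(batch):
    # Keep, for each undirected edge, only the update with the largest
    # timestamp; all earlier updates on that edge are nullified.
    last = {}
    for ts, op, u, v in batch:
        key = (min(u, v), max(u, v))     # normalize the undirected edge
        if key not in last or ts > last[key][0]:
            last[key] = (ts, op, u, v)
    return sorted(last.values())
```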
\myparagraph{Data Structure Modifications} We now describe additional information that is stored in $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, $\mathcal{LL}$, and $\mathcal{T}$, which is used by the \batchdynamic{} update algorithm: \begin{enumerate}[label=(\textbf{\arabic*}),topsep=0pt,itemsep=0pt,parsep=0pt,leftmargin=15pt] \item Every edge stored in $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ stores an associated state, indicating whether it is an \defn{old edge}, a \defn{new insertion} or a \defn{new deletion}, which correspond to the values of 0, 1, and 2, respectively. \item $\mathcal{T}(u, v)$ stores a tuple with 5 values instead of a single value for each index $(u, v)$. Specifically, a $5$-tuple entry of $\mathcal{T}(u, v) = (\tup{1}, \tup{2}, \tup{3}, \tup{4}, \tup{5})$ represents the following: \begin{itemize}[topsep=0pt,itemsep=0pt,parsep=0pt,leftmargin=15pt] \item $\tup{1}$ represents the number of wedges with endpoints $u$ and $v$ that include only old edges. \item $\tup{2}$ and $\tup{3}$ represent the number of wedges with endpoints $u$ and $v$ containing one or two newly inserted edges, respectively. \item $\tup{4}$ and $\tup{5}$ represent the number of wedges with endpoints $u$ and $v$ containing one or two newly deleted edges, respectively. In other words, they are wedges that do not exist anymore due to one or two edge deletions. \end{itemize} \end{enumerate} \myparagraph{Algorithm Overview} We first remove updates in the batch that either insert edges already in the graph or delete edges not in the graph by using approximate compaction to filter. Next, we update the tables $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ with the new edge insertions. Recall that we must update the tables with both $(u, v)$ and $(v, u)$ (and similarly when we update these tables with edge deletions). We also mark these edges as newly inserted. 
Next, we update $\mathcal{D}$ with the new degrees of all vertices due to edge insertions. Since the degrees of some vertices have now increased, we promote, in parallel, every low-degree vertex whose degree exceeds $t_2$ to a high-degree vertex, which involves updating the tables $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, $\mathcal{LL}$, and $\mathcal{T}$. Next, we update the tables $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ with new edge deletions, and mark these edges as newly deleted. We then call the procedures $\mathtt{update\_table\_insertions}$ and $\mathtt{update\_table\_deletions}$, which update the wedge-table $\mathcal{T}$ based on all new insertions and all new deletions, respectively. At this point, our auxiliary data structures contain both new triangles formed by edge insertions, and triangles deleted due to edge deletions. For each update in the batch, we then determine the number of new triangles that are created by counting different types of triangles that the edge appears in (based on the number of other updates forming the triangle). We then aggregate these per-update counts to update the overall triangle count. Now that the count is updated, the remaining steps of the algorithm handle unmarking the edges and restoring the data structures so that they can be used by the next batch. We unmark all newly inserted edges in the tables, and delete all edges marked as deleted in this batch. Finally, we update the wedge-table $\mathcal{T}$ for all insertions and deletions of edges incident to low-degree vertices. The last steps in our algorithm are to update the degrees in response to the newly inserted edges and the now truly deleted edges. Then, since the degree of a high-degree vertex may drop below $t_1$ (or the degree of a low-degree vertex may rise above $t_2$), we convert such vertices to the opposite classification and update the tables $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, $\mathcal{LL}$, and $\mathcal{T}$ accordingly. 
This step is called \defn{minor rebalancing}. Finally, if the number of edges in the graph becomes less than $M/4$ or greater than $M$ we reset the values of $M$, $t_1$, and $t_2$, and re-initialize all of the data structures. This step is called \defn{major rebalancing}. \myparagraph{Algorithm Description} A simplified version of our algorithm is shown below. The following $\textsc{Count-Triangle}$ procedure takes as input a batch of $\Delta$ updates $\mathcal{B}$ and returns the count of the updated number of triangles in the graph (assuming the initialization process has already been run on the input graph and all associated data structures are up-to-date). \begin{mdframedalg}{Simplified parallel batch-dynamic triangle counting algorithm.} \label{alg:count-cliques} \begin{algorithmic}[1] \Function{Count-Triangles}{$\mathcal{B}$} \ParFor{$\mathtt{insert}(u, v) \in \mathcal{B}$} \State Update and label edges $(u, v)$ and $(v, u)$ in $\mathcal{HH}$, \Statex \ \ \ \ \ \ \ \ \ \ \ \ $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ as inserted edges. \EndParFor \ParFor{$\mathtt{delete}(u, v) \in \mathcal{B}$} \State Update and label edges $(u, v)$ and $(v, u)$ in $\mathcal{HH}$, \Statex \ \ \ \ \ \ \ \ \ \ \ \ $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ as deleted edges. \EndParFor \ParFor{$\mathtt{insert}(u, v) \in \mathcal{B}$ or $\mathtt{delete}(u, v) \in \mathcal{B}$} \State Update $\mathcal{T}$ with $(u, v)$. $\mathcal{T}$ records the number of \Statex \ \ \ \ \ \ \ \ \ \ \ \ wedges that have $0$, $1$, or $2$ edge updates. \EndParFor \ParFor{$\mathtt{insert}(u, v) \in \mathcal{B}$ or $\mathtt{delete}(u, v) \in \mathcal{B}$} \State Count the number of new triangles and deleted \Statex \ \ \ \ \ \ \ \ \ \ \ \ triangles incident to edge $(u, v)$, and account for \Statex \ \ \ \ \ \ \ \ \ \ \ \ duplicates. \EndParFor \State Rebalance data structures if necessary. 
\EndFunction \end{algorithmic} \end{mdframedalg} \myparagraph{Small Example Batch Updates} Here we provide a small example of processing a batch of updates. We assume that no rebalancing occurs. Suppose we have a batch of updates containing an edge insertion $(u, v)$ with timestamp $3$, an edge deletion $(w, x)$ with timestamp $1$, and an edge deletion $(u, v)$ with timestamp $2$. Since the edge insertion $(u, v)$ has the later of the two timestamps on $(u, v)$, it is the update that remains. After removing nullifying updates, the two updates that remain are the insertion of $(u, v)$ and the deletion of $(w, x)$. The algorithm first looks in $\mathcal{D}$ to find the degrees of $u$, $v$, $w$, and $x$ in parallel. Suppose $u$, $v$, and $w$ are high-degree and $x$ is low-degree. We first need to update our data structures with the new edge updates. To do so, we first update the edge table $\mathcal{HH}$ with $(u, v)$ marked as an edge insertion. Then, we update the edge tables $\mathcal{HL}$ and $\mathcal{LH}$ with $(w, x)$ as an edge deletion. Finally, we update the counts of wedges in $\mathcal{T}$ with $(w, x)$'s deletion. Specifically, for each of $x$'s neighbors $y$ in $\mathcal{LH}$, we update $\mathcal{T}(w, y)$ by incrementing $t^{(w, y)}_4$ (since $(x, y)$ is not a new update). After updating the data structures, we can count the changes to the total number of triangles in the graph. All of the following actions can be performed in parallel. Suppose that $u$ comes lexicographically before $v$. We count the number of high-degree neighbors $z$ of $u$ in $\mathcal{HH}$ such that $(z, v)$ is also present in $\mathcal{HH}$; this is the number of new triangles containing three high-degree vertices. To avoid overcounting, we do not repeat this count from $v$'s side. Since we are counting the number of triangles containing updates, we also do not count the high-degree neighbors of $w$, since $(w, x)$ cannot be part of any new triangle containing three high-degree vertices. 
Then, in parallel, we count the number of neighbors of $x$ in $\mathcal{LL}$ and $\mathcal{LH}$; this is the number of deleted triangles containing one and two high-degree vertices, respectively. We use $\mathcal{T}$ to count the number of triangles containing one low-degree vertex and $(u, v)$. To count the number of inserted triangles containing $(u, v)$ and a low-degree vertex, we look up $t_1^{(u, v)}$ in $\mathcal{T}$ and add it to our final triangle count; all other stored count values for $(u, v)$ in $\mathcal{T}$ are $0$ since there are no other new updates incident to $u$ or $v$. \ifCameraReady \subsection{Parallel Batch-Dynamic Triangle Counting Detailed Algorithm}\label{sec:triangle-full-alg}~ \fi \ifFull \subsection{Parallel Batch-Dynamic Triangle Counting Detailed Algorithm}\label{sec:triangle-full-alg} \fi The detailed pseudocode of our parallel \batchdynamic{} triangle counting algorithm is shown below. Recall that the update procedure for a set of $\Delta\leq m$ non-nullifying updates is as follows (the subroutines used in the following steps are described afterward). \begin{breakablealgorithm}{}\label{alg:batchdynamictriangle} \caption{Detailed parallel batch-dynamic triangle counting procedure.} \begin{enumerate}[label=(\textbf{\arabic*}),topsep=0pt,itemsep=0pt,parsep=0pt,leftmargin=20pt] \item Remove updates that insert edges already in the graph or delete edges not in the graph, as well as nullifying updates, using approximate compaction.\label{bt:removebadupdates} \item Update tables $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ with the new edge insertions using $\mathtt{insert}(u, v)$ and $\mathtt{insert}(v, u)$. Mark these edges as newly inserted by running $\markins{\mathcal{B}}$ on the batch of updates $\mathcal{B}$.\label{bt:updateedgetablesinsert} \item Update tables $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ with new edge deletions using $\mathtt{delete}(u,v)$ and $\mathtt{delete}(v, u)$. 
Mark these edges as newly deleted using $\markdels{\mathcal{B}}$ on $\mathcal{B}$.\label{bt:updateedgetablesdelete} \item Call $\tins{\mathcal{B}}$ for the set $\mathcal{B}$ of all edge insertions $\mathtt{insert}(u, w)$, where either $u$ or $w$ is low-degree and the other is high-degree. \label{bt:updatetableinsertions} \item Call $\tdel{\mathcal{B}}$ for the set $\mathcal{B}$ of all edge deletions $\mathtt{delete}(u, w)$ where either $u$ or $w$ is low-degree and the other is high-degree. \label{bt:updatetabledeletions} \item For each update in the batch, determine the change in the number of triangles by counting 6 values. Count the values using a 6-tuple $(c_1, c_2, c_3, c_4, c_5, c_6)$, based on the number of other updates contained in a triangle:\label{bt:countnewtrianglesperupdate} \begin{enumerate}[topsep=0pt,itemsep=0pt,parsep=0pt,leftmargin=20pt] \item For each edge insertion $\mathtt{insert}(u, v)$ resulting in a triangle containing only one newly inserted edge (and no deleted edges), increment $c_1$ by $\ctriang{1}{0}{\mathtt{insert}(u, v)}$. \item For each edge insertion $\mathtt{insert}(u, v)$ resulting in a triangle containing two newly inserted edges (and no deleted edges), increment $c_2$ by $\ctriang{2}{0}{\mathtt{insert}(u, v)}$. \item For each edge insertion $\mathtt{insert}(u, v)$ resulting in a triangle containing three newly inserted edges, increment $c_3$ by $\ctriang{3}{0}{\mathtt{insert}(u, v)}$. \item For each edge deletion $\mathtt{delete}(u, v)$ resulting in a deleted triangle with one newly deleted edge, increment $c_4$ by $\ctriang{0}{1}{\mathtt{delete}(u, v)}$. \item For each edge deletion $\mathtt{delete}(u, v)$ resulting in a deleted triangle with two newly deleted edges, increment $c_5$ by $\ctriang{0}{2}{\mathtt{delete}(u, v)}$. \item For each edge deletion $\mathtt{delete}(u, v)$ resulting in a deleted triangle with three newly deleted edges, increment $c_6$ by $\ctriang{0}{3}{\mathtt{delete}(u, v)}$. 
\end{enumerate} Let $C$ be the previous count of the number of triangles. Update $C$ to be $C + c_1 + (1/2)c_2 + (1/3)c_3 - c_4 - (1/2)c_5 - (1/3)c_6$, which becomes the new count. \item Scan through updates again. For each update, if the value stored in $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and/or $\mathcal{LL}$ is $2$ (a deleted edge), remove this edge. If the stored value is $1$ (an inserted edge), change the value to $0$. For all updates where the endpoints are both high-degree or both low-degree, we are done. For each update $(u, w)$ where either $u$ or $w$ is low-degree (assume without loss of generality that $w$ is) and the other is high-degree, look for all high-degree neighbors $v$ of $w$ and update $\mathcal{T}(u, v)$ by summing all $c_1$, $c_2$, and $c_3$ of the tuple and subtracting $c_4$ and $c_5$. \label{bt:scanthroughupdatesagain} \item Update $\mathcal{D}$ with the new degrees.\label{bt:updatedegree} \item Perform minor rebalancing for all vertices $v$ that exceed $t_2$ in degree or fall under $t_1$ in parallel using $\minreb{v}$. This makes a formerly low-degree vertex high-degree (and vice versa) and updates relevant structures.\label{bt:performminorrebalancing} \item Perform major rebalancing if necessary (i.e., the total number of edges in the graph is less than $M/4$ or greater than $M$). Major rebalancing re-initializes all structures.\label{bt:majorrebalancing} \end{enumerate} \end{breakablealgorithm} \mysubsection{Procedure $\markins{\mathcal{B}}$} We scan through each of the $\mathtt{insert}(u, v)$ updates in $\mathcal{B}$ and mark $(u, v)$ and $(v, u)$ as newly inserted edges in $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and/or $\mathcal{LL}$ by storing a value of $1$ associated with the edge. \mysubsection{Procedure $\markdels{\mathcal{B}}$} Because we removed all nullifying updates before $\mathcal{B}$ is passed into the procedure, none of the deletion updates in $\mathcal{B}$ should delete newly inserted edges. 
For all edge deletions $\mathtt{delete}(u, v)$, we change the values stored under $(u, v)$ and $(v, u)$ from $0$ to $2$ in the tables $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and/or $\mathcal{LL}$. \mysubsection{Procedure $\tins{\mathcal{B}}$} For each $(u,w) \in \mathcal{B}$, assume without loss of generality that $w$ is the low-degree vertex and do the following. We first find all of $w$'s neighbors, $v$, in $\mathcal{LH}$ in parallel. Then, we determine for each neighbor $v$ if $(w, v)$ is new (marked as $1$). If the edge $(w, v)$ is not new, then increment the second value stored in the tuple with index $\mathcal{T}(u, v)$. If $(w, v)$ is newly inserted, then increment the third value stored in $\mathcal{T}(u, v)$. The first, fourth, and fifth values stored in $\mathcal{T}(u, v)$ do not change in this step. The first, second, and third values count the number of edge insertions contained in the wedges keyed by $(u, v)$. The first value counts all wedges with endpoints $u$ and $v$ that do not contain any edge update, the second counts the number of wedges containing one edge insertion, and the third counts the number of wedges containing two edge insertions. Then, intuitively, the first, second, and third values will tell us later for edge insertion $(u, v)$ between two high-degree vertices whether newly created triangles containing $(u, v)$ have one (the only update being $(u, v)$), two, or three, respectively, new edge insertions from the batch update. We do not update the edge insertion counts of wedges that contain a mix of edge insertion updates and edge deletion updates. \mysubsection{Procedure $\tdel{\mathcal{B}}$} For each $(u,w) \in \mathcal{B}$, assume without loss of generality that $w$ is the low-degree vertex and do the following. We first find all of $w$'s neighbors, $v$, in $\mathcal{LH}$ in parallel. Then, we determine for each neighbor $v$ if $(w, v)$ is a newly deleted edge (marked as $2$). 
If $(w, v)$ is not a newly deleted edge, increment the fourth value in the tuple stored in $\mathcal{T}(u, v)$ and decrement the first value. Otherwise, if $(w, v)$ is a newly deleted edge, increment the fifth value of $\mathcal{T}(u, v)$ and decrement the first value. The second and third values in $\mathcal{T}(u, v)$ do not change in this step. For any key $(u, v)$, the first, fourth, and fifth values give the number of wedges with endpoints $u$ and $v$ that contain zero, one, or two edge deletions, respectively. Intuitively, the first, fourth, and fifth values tell us later whether newly deleted triangles have one (where the only edge deletion is $(u, v)$), two, or three, respectively, new edge deletions from the batch update. \mysubsection{Procedure $\ctriang{i}{d}{update}$} This procedure returns the number of triangles containing the update $\mathtt{insert}(u, v)$ or $\mathtt{delete}(u, v)$ and exactly $i$ newly inserted edges or exactly $d$ newly deleted edges (the update itself counts as one newly inserted edge or one newly deleted edge). If at least one of $u$ or $v$ is low-degree, we search in the tables $\mathcal{LH}$ and $\mathcal{LL}$ for neighbors of the low-degree vertex and the number of marked edges per triangle: edges marked as $1$ for insertion updates and edges marked as $2$ for deletion updates. If both $u$ and $v$ are high-degree, we first look through all high-degree vertices using $\mathcal{HH}$ to see if any form a triangle with both high-degree endpoints $u$ and $v$ of the update. This allows us to find all newly updated triangles containing only high-degree vertices. Then, we confirm the existence of a triangle for each neighbor found in the tables by checking for the third edge in $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, or $\mathcal{LL}$. We return only the counts containing the correct number of updates of the correct type. To avoid double counting for each update, we do the following. 
Suppose all vertices are ordered lexicographically. For any edge that contains two high-degree or two low-degree vertices, we search in $\mathcal{LL}$, $\mathcal{HH}$, and $\mathcal{LH}$ for exactly one of the two endpoints, the one that is lexicographically smaller. Then, we return a value from the tuple stored at $\mathcal{T}(u, v)$, based on the values of $i$ and $d$, to determine the count of triangles containing $u$ and $v$ and one low-degree vertex: \begin{itemize}[topsep=0pt,itemsep=0pt,parsep=0pt,leftmargin=20pt] \item Return the first value $\tup{1}$ if either $i = 1$ or $d = 1$. \item Return the second value $\tup{2}$ if $i = 2$. \item Return the third value $\tup{3}$ if $i = 3$. \item Return the fourth value $\tup{4}$ if $d = 2$. \item Return the fifth value $\tup{5}$ if $d = 3$. \end{itemize} Note that we ignore all triangles that include more than one insertion update \emph{and} more than one deletion update. \mysubsection{Procedure $\minreb{u}$} This procedure performs a minor rebalance when either the degree of $u$ decreases below $t_1$ or increases above $t_2$. We move all edges in $\mathcal{HH}$ and $\mathcal{HL}$ to $\mathcal{LH}$ and $\mathcal{LL}$ and vice versa. We also update $\mathcal{T}$ with new pairs of vertices that became high-degree and delete pairs that are no longer both high-degree. \ifFull \subsection{Analysis} \fi \ifCameraReady \subsection{Analysis}~ \fi We prove the correctness of our algorithm in the following theorem. The proof is based on accounting for the contributions of an edge to each triangle that it participates in based on the number of other updated edges found in the triangle. \begin{restatable}{theorem}{batchtriangcorrect}\label{thm:batch-triang-correct} Our parallel \batchdynamic{} algorithm maintains the number of triangles in the graph. \end{restatable} \begin{proof} All triangles containing at least one low-degree vertex can be found either in $\mathcal{T}$ or by searching through $\mathcal{LH}$ and $\mathcal{LL}$. 
All triangles containing all high-degree vertices can be found by searching $\mathcal{HH}$. Suppose that an edge update $\mathtt{insert}(u, v)$ (resp.\ $\mathtt{delete}(u,v)$) is part of $I_{(u,v)}$ (resp.\ $D_{(u,v)}$) triangles. We need to add or subtract from the total count of triangles $I_{(u,v)}$ or $D_{(u,v)}$, respectively. However, some of the triangles will be counted twice or three times if they contain more than one edge update. By dividing each triangle count by the number of updated edges they contain, each triangle is counted exactly once for the total count $C$. \end{proof} \myparagraph{Overall Bound} We now prove that our parallel \batchdynamic{} algorithm runs in $O(\Delta \sqrt{\Delta+m})$ work, $O(\log^*(\Delta + m))$ depth, and uses $O(\Delta+m)$ space. Henceforth, we assume that our algorithm uses the atomic-add instruction (see Section~\ref{sec:prelims}). Removing nullifying updates takes $O(\Delta)$ total work, $O(\log^*{\Delta})$ depth w.h.p., and $O(\Delta)$ space for hashing and the find-maximum procedure outlined in Section~\ref{sec:update-alg}. In step~\ref{bt:removebadupdates}, we perform table lookups for the updates into $\mathcal{D}$ and in $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, or $\mathcal{LL}$, followed by approximate compaction to filter. The hash table lookups take $O(\Delta)$ work and $O(\log^*m)$ depth with high probability and $O(m)$ space. Approximate compaction~\cite{Gil91a} takes $O(\Delta)$ work, $O(\log^*\Delta)$ depth, and $O(\Delta)$ space. Steps~\ref{bt:updateedgetablesinsert},~\ref{bt:updateedgetablesdelete}, and~\ref{bt:updatedegree} perform hash table insertions and updates on the batch of $O(\Delta)$ edges, which takes $O(\Delta)$ amortized work and $O(\log^*m)$ depth with high probability. 
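To make the preprocessing in step (1) concrete, the following is a sequential Python sketch of removing nullifying and redundant updates (the names \texttt{remove\_nullifying} and the tuple encoding of updates are ours; the actual algorithm performs these steps in parallel with hashing, a find-maximum procedure, and approximate compaction):

```python
def remove_nullifying(batch, graph_edges):
    """Keep only the latest-timestamped update per undirected edge,
    then drop updates that have no effect on the current graph.
    batch: list of (op, u, v, timestamp) with op 'insert' or 'delete'.
    graph_edges: set of edges currently in the graph, keyed (min, max)."""
    latest = {}
    for op, u, v, ts in batch:
        key = (min(u, v), max(u, v))        # undirected edge key
        if key not in latest or ts > latest[key][3]:
            latest[key] = (op, u, v, ts)    # later timestamp wins
    kept = []
    for op, u, v, ts in latest.values():
        present = (min(u, v), max(u, v)) in graph_edges
        # drop inserts of edges already present and deletes of absent edges
        if (op == 'insert' and not present) or (op == 'delete' and present):
            kept.append((op, u, v, ts))
    return kept

# The small example from the text: insert (u,v) at time 3, delete (w,x)
# at time 1, delete (u,v) at time 2; only the insert and (w,x)'s delete survive.
g = {(2, 3)}  # edge (w=2, x=3) is present
b = [('insert', 0, 1, 3), ('delete', 2, 3, 1), ('delete', 0, 1, 2)]
assert sorted(remove_nullifying(b, g)) == [('delete', 2, 3, 1),
                                           ('insert', 0, 1, 3)]
```

This sketch is $O(\Delta)$ sequential work, matching the $O(\Delta)$ total work of the parallel version (which additionally achieves $O(\log^*\Delta)$ depth w.h.p.).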
The next lemma shows that updating the tables based on the edges in the update (steps~\ref{bt:updatetableinsertions} and \ref{bt:updatetabledeletions}) can be done in $O(\Delta\sqrt{m})$ work and $O(\log^*m)$ depth w.h.p., and $O(m)$ space. \begin{restatable}{lemma}{updatetableins}\label{lem:update-table-ins} $\tins{\mathcal{B}}$ and $\tdel{\mathcal{B}}$ on a batch $\mathcal{B}$ of size $\Delta$ takes $O(\Delta\sqrt{m})$ work and $O(\log^* (\Delta + m))$ depth w.h.p., and $O(\Delta+m)$ space. \end{restatable} \begin{proof} For each $w$, we find all of its high-degree neighbors in $\mathcal{LH}$ and perform the increment or decrement in the corresponding entry in $\mathcal{T}$ in parallel (at this point, the vertices are still classified based on their original degrees). The total number of new neighbors gained across all vertices is $O(\Delta)$ since there are $\Delta$ updates. Therefore, across all updates, this takes $O(\Delta\sqrt{m}+\Delta)$ work and $O(\log^*{(\Delta+m)})$ depth w.h.p. due to hash table lookup and updates. Then, for all high-degree neighbors found, we perform the increments or decrements on the corresponding entries in $\mathcal{T}$ in parallel, taking the same bounds. All vertices can be processed in parallel, giving a total of $O(\Delta\sqrt{m}+\Delta)$ work and $O(\log^*(\Delta+m))$ depth w.h.p. \end{proof} The next lemma bounds the complexity of updating the triangle count in step~\ref{bt:countnewtrianglesperupdate}. \begin{restatable}{lemma}{updatingtriangles}\label{lem:updating-triangles} Updating the triangle count takes $O(\Delta\sqrt{m})$ work and $O(\log^* (\Delta+m))$ depth w.h.p., and $O(\Delta+m)$ space. \end{restatable} \begin{proof} We initialize $c_1,\ldots,c_6$ to $0$. For each edge update in $\mathcal{B}$ where both endpoints are high-degree, we perform lookups in $\mathcal{T}$ and $\mathcal{HH}$ for the relevant values in parallel and increment the appropriate $c_i$. 
Finding all triangles containing the edge update and containing only high-degree vertices takes $O(\Delta \sqrt{m})$ work and $O(\log^* (\Delta+m))$ depth w.h.p. This is because there are $O(\sqrt{m})$ high-degree vertices in total, and for each we check whether it appears in the $\mathcal{HH}$ table for both endpoints of each update. Performing lookups in $\mathcal{T}$ takes $O(\Delta)$ work and $O(\log^* (\Delta+m))$ depth w.h.p. For each update containing at least one endpoint with low-degree, we perform lookups in the tables $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ to find all triangles containing the update and increment the appropriate $c_i$. This takes $O(\Delta \sqrt{m}+\Delta)$ work and $O(\log^*(\Delta+m))$ depth w.h.p. Incrementing all $c_i$'s for all newly updated triangles takes $O(\Delta)$ work and $O(1)$ depth. We then apply the equation in step~\ref{bt:countnewtrianglesperupdate} to update $C$, which takes $O(1)$ work and depth. \end{proof} The following lemma bounds the cost for minor rebalancing, where a low-degree vertex becomes high-degree or vice versa (step~\ref{bt:performminorrebalancing}). \begin{restatable}{lemma}{minorrebalance}\label{lem:minor-rebalancing} Minor rebalancing for edge updates takes $O(\Delta \sqrt{m})$ amortized work and $O(\log^*(\Delta+ m))$ depth w.h.p., and $O(\Delta+m)$ space. \end{restatable} \begin{proof} We describe the case of edge insertions, and the case for edge deletions is similar. Using approximate compaction to perform the filtering, we first find the set $S$ of low-degree vertices exceeding $t_2$ in degree. This step takes $O(\Delta)$ work and $O(\log^* \Delta)$ depth w.h.p. For vertices in $S$, we then delete the edges from their old hash tables and move the edges to their new hash tables. The work for each vertex is proportional to its current degree, giving a total work of $O(\sum_{v\in S}\deg(v)) = O(\Delta\sqrt{m}+\Delta)$ w.h.p. 
since the original degree of low-degree vertices is $O(\sqrt{m})$ and each edge in the batch could have caused at most 2 such vertices to have their degree increase by 1 (the w.h.p.\ is for parallel hash table operations). In addition to moving the edges into new hash tables, we also have to update $\mathcal{T}$ with new pairs of vertices that became high-degree and delete pairs of vertices that are no longer both high-degree. To update these tables, we need to find all new pairs of high-degree vertices. There are at most $O(\Delta \sqrt{m+\Delta})$ such new pairs, which can be found by filtering neighbors using approximate compaction of vertices in $S$ in $O(\Delta\sqrt{m+\Delta})$ work and $O(\log^*(\Delta + m))$ depth w.h.p. For each pair $(u,v)$, we check all neighbors of an endpoint that just became high-degree and increment the entry $\mathcal{T}(u, v)$ for each low-degree neighbor $w$ found that has edges $(u, w)$ and $(w, v)$. Low-degree neighbors have degree $O(\sqrt{m+\Delta})$, and so the total work is $O(\Delta (m+\Delta))$ and depth is $O(\log^*(\Delta + m))$ w.h.p. using atomic-add. There must have been $\Omega(\sqrt{m})$ updates on a vertex before minor rebalancing is triggered, and so the amortized work per update is $O(\Delta \sqrt{m})$ and the depth is $O(\log^* m)$ w.h.p. The space for filtering is $O(m+\Delta)$. \end{proof} We now finish showing Theorem~\ref{thm:linear-space-update}. Theorem~\ref{thm:batch-triang-correct} shows that our algorithm maintains the correct count of triangles. Lemmas~\ref{lem:update-table-ins},~\ref{lem:updating-triangles}, and~\ref{lem:minor-rebalancing} show that the cost of updating tables to reflect the batch, updating the triangle counts, and minor rebalancing is $O(\Delta\sqrt{m}+\Delta)$ amortized work and $O(\log^*(\Delta+m))$ depth w.h.p., and $O(\Delta+m)$ space. Step~\ref{bt:scanthroughupdatesagain} can be done in $O(\Delta\sqrt{m})$ work and $O(\log^* m)$ depth as follows. 
We scan through the batch $\mathcal{B}$ in parallel and update the hash tables $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$ in $O(\Delta)$ work and $O(\log^* (\Delta+m))$ depth w.h.p. For all updates in $\mathcal{B}$ containing one high-degree vertex and one low-degree vertex, we update the table $\mathcal{T}$ in parallel by scanning the neighbors in $\mathcal{LH}$ of the low-degree vertex. This step takes $O(\Delta \sqrt{m}+\Delta)$ work and $O(\log^* (\Delta+m))$ depth w.h.p. Major rebalancing (step~\ref{bt:majorrebalancing}) takes $O((\Delta+m)^{3/2})$ work and $O(\log^*(\Delta+ m))$ depth by re-initializing the data structures. The rebalancing happens every $\Omega(m)$ updates, and so the amortized work per update is $O(\sqrt{\Delta+m})$ and depth is $O(\log^* (\Delta+m))$ w.h.p. Therefore, our update algorithm takes $O(\Delta\sqrt{\Delta+m})$ amortized work and $O(\log^* (\Delta+m))$ depth w.h.p., and $O(\Delta+m)$ space overall using atomic-add as stated in Theorem~\ref{thm:linear-space-update}. \myparagraph{Bounds without Atomic-Add} Without the atomic-add instruction, we can use a parallel reduction~\cite{JaJa92} to sum over values when needed. This is work-efficient and takes logarithmic depth, but uses space proportional to the number of values summed over in parallel. For updates, this is bounded by $O(\Delta\sqrt{m}+\Delta)$, and for initialization and major rebalancing, this is bounded by $O(\alpha m)$~\cite{ShunT2015}. This would give an overall bound of $O(\Delta(\sqrt{\Delta+m}))$ work and $O(\log (\Delta+m))$ depth w.h.p., and $O(\alpha m+\Delta\sqrt{m})$ space. 
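The inclusion-exclusion weights used in step (6), which underlie the correctness argument above, can be checked on a toy instance. The sketch below (our own illustration; the function name is ours) shows why dividing each count by the number of updated edges counts every changed triangle exactly once:

```python
from fractions import Fraction

def triangle_count_delta(c):
    """c = (c1, ..., c6): per-update triangle reports, grouped by how many
    batch updates the triangle contains (1-3 insertions, 1-3 deletions).
    A triangle with k updated edges is reported once per updated edge, so
    dividing its contribution by k counts it exactly once in the total."""
    c1, c2, c3, c4, c5, c6 = (Fraction(x) for x in c)
    return c1 + c2 / 2 + c3 / 3 - c4 - c5 / 2 - c6 / 3

# One new triangle containing a single inserted edge (c1 = 1), plus one
# new triangle whose three edges are all in the batch: each of its three
# edges reports it, so c3 = 3, but it contributes 3 * (1/3) = 1.
assert triangle_count_delta((1, 0, 3, 0, 0, 0)) == 2

# A deleted triangle with two deleted edges is reported twice (c5 = 2)
# and contributes -2 * (1/2) = -1 to the count.
assert triangle_count_delta((0, 0, 0, 0, 2, 0)) == -1
```

Using exact fractions here is only for illustration; the actual counts $c_2$, $c_3$, $c_5$, and $c_6$ are always divisible by $2$ and $3$, respectively, so the resulting delta is an integer.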
\ifCameraReady \subsection{Comparison with Existing Algorithms}\label{sec:makkar}~ \fi \ifFull \subsection{Comparison with Existing Algorithms}\label{sec:makkar} \fi \myparagraph{Comparison with Ediger et al.} We compared our implementation with a shared-memory implementation of the Ediger et al.\ algorithm~\cite{Ediger2010}, which is implemented as part of the STINGER dynamic graph processing system~\cite{ediger2012stinger}. Unfortunately, we found that their implementation is much slower than ours due to bottlenecks in the update time for the underlying dynamic graph data structure. We note that recent work on streaming graph processing observed similar results for using STINGER~\cite{dhulipala2019low}. To obtain a fair comparison, we chose to focus on implementing a more recent GPU \batchdynamic{} triangle counting algorithm ourselves, which we discuss next. \myparagraph{Comparison with Makkar et al.} The Makkar et al.\ algorithm~\cite{Makkar2017} is a state-of-the-art parallel \batchdynamic{} triangle counting implementation designed for GPUs. To the best of our knowledge, there is no multicore implementation of this algorithm, and so in this paper we implement an optimized multicore version of their algorithm. The algorithm works as follows. First, their algorithm separates the batch of updates into batches for insertions and deletions. Then, for each batch, it creates an \emph{update graph}, $\hat{G}$, consisting of only the updates within that batch. Then, it merges the updates from each batch with the original edges in the graph to create an updated graph for each of the batches, $G'$. Note that this graph contains both the edges previously in the graph, as well as the new edges. The merging process to construct $G'$ first sorts the batch to obtain sorted lists of neighbors to add/delete from the adjacency lists of vertices in the graph. 
Then, the algorithm performs a simple linear-work procedure to merge each existing adjacency list with the sorted updates. In particular, doing $t$ edge updates on a vertex with degree $d$ takes $O(d+t)$ work. Finally, the algorithm counts the triangles by intersecting the adjacency lists of the endpoints of each edge in the batch. For each edge $(u, v)$, they intersect $G'(u)$ with $G'(v)$, $G'(u)$ with $\hat{G}(v)$, and $\hat{G}(u)$ with $\hat{G}(v)$. The count of the number of triangles can be obtained from the number of intersections obtained from each of these cases using a simple inclusion-exclusion formula. They provide a further optimization by only intersecting \emph{truncated} adjacency lists in some of the cases, where a truncated adjacency list is one that only contains vertices with IDs less than the ID of the vertex that the adjacency list belongs to. Their algorithm has a worst-case work bound of $O(n^2)$. \myparagraph{Implementation} We developed a new multicore implementation of the Makkar et al.\ algorithm using the same parallel primitives and framework described earlier for the implementation of our algorithm. We implemented several optimizations that improved performance. First, we handle vertices with degree lower than $16$ by storing their incident edges in a special array of size $16n$, and only allocate memory for vertices with larger degree. Second, we note that their algorithm does not specify how to handle redundant insertions that are already present in the graph. We remove these edge updates by modifying the merge algorithm that constructs $G'$ from $G$. Specifically, during the merge, if we identify that a given edge is already present in $G$, we mark it in the sorted sequence of batch updates that we are merging in. Removing these marked updates to construct $\hat{G}$ without redundant updates is done by using a parallel filter. 
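The $O(d+t)$ merge with redundant-insertion marking described above can be sketched as a sequential two-pointer merge in Python (the name \texttt{merge\_adjacency} and the tuple encoding are ours; the actual implementation runs one such merge per vertex in parallel):

```python
def merge_adjacency(adj, updates):
    """Merge a sorted adjacency list with a sorted batch of updates in
    O(d + t) time. updates: sorted list of (neighbor, op) pairs with op
    'insert' or 'delete'. Returns the merged list together with the
    redundant insertions (edges already present), which are marked so
    they can later be filtered out when building the update graph."""
    out, redundant = [], []
    i = j = 0
    while i < len(adj) or j < len(updates):
        if j == len(updates) or (i < len(adj) and adj[i] < updates[j][0]):
            out.append(adj[i]); i += 1          # existing edge, untouched
        elif i == len(adj) or updates[j][0] < adj[i]:
            v, op = updates[j]; j += 1
            if op == 'insert':
                out.append(v)                   # new edge; absent delete is a no-op
        else:                                   # adj[i] == updates[j][0]
            v, op = updates[j]; i += 1; j += 1
            if op == 'insert':
                out.append(v); redundant.append(v)  # mark: already present
            # op == 'delete': skip the existing neighbor, removing the edge

    return out, redundant

# Degree-3 list, three updates: insert 2 (new), insert 4 (redundant),
# delete 7 (removes an existing edge).
merged, dup = merge_adjacency([1, 4, 7],
                              [(2, 'insert'), (4, 'insert'), (7, 'delete')])
assert merged == [1, 2, 4] and dup == [4]
```

Each element of \texttt{adj} and \texttt{updates} is inspected exactly once, which is where the $O(d+t)$ bound comes from.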
\myparagraph{Performance Comparison} Table~\ref{table:ourtimes} shows the running times of the Makkar et al.\ algorithm on batches of insertions and deletions of different sizes. The data points for the Twitter graph are also plotted in Figure~\ref{fig:twitter}. We observe that the Makkar et al.\ algorithm is faster than our algorithm on the Orkut graph, especially for large batches. On the other hand, for the Twitter graph, our algorithm is consistently faster for both insertions and deletions across all batch sizes. This is because there are no vertices with very high degree in the Orkut graph, and so the Makkar et al.\ algorithm does less work in merging adjacency lists with updates, while the Twitter graph has vertices with extremely high degree, which are costly to merge. Both algorithms are significantly faster than simply applying the static triangle counting algorithm for the range of batch sizes that we considered. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{figs/twitter.pdf} \caption{ This figure plots the average insertion and deletion round times for each batch size on Twitter using 72 cores with hyper-threading (log-log scale). The lines for our algorithm are solid (blue for insertion and red for deletion) while the lines for the Makkar et al.\ algorithm are dashed (green for insertion and yellow for deletion). The update time of the Makkar et al.\ algorithm for Twitter batch size $2 \times 10^{3}$ is missing because the experiment timed out (due to cumulative runtime being too large). }\label{fig:twitter} \end{figure} Next, we evaluate the performance of insertion batches in our algorithm and the Makkar et al.~algorithm on the synthetic rMAT graph with 3.2 billion generated edges (which contain duplicates). This synthetic experiment allows us to study how both algorithms perform as the graph becomes more dense. We evaluate the performance for different insertion batch sizes. 
The experiment uses prefixes of the rMAT graph (the number of unique edges per prefix is shown in Table~\ref{table:rmatduplicate}) to control the density of the graph. The vertex set in this experiment is fixed, and thus a larger number of unique edges corresponds to a denser graph. Figure~\ref{fig:makkar} plots the running time of both implementations for varying batch sizes as a function of the graph density. We observe that for small batch sizes, the performance of the Makkar et al.\ algorithm degrades significantly as the graph grows more dense and contains more high-degree vertices. On the other hand, our algorithm's performance generally does not degrade as the graph grows denser, across all batch sizes. We also significantly outperform the Makkar et al.\ algorithm for small batch sizes. Specifically, we obtain a maximum speedup of $3.31 \times$ for a batch of size $2 \times 10^{4}$. This is because the overhead of updating high-degree vertices in the Makkar et al.\ algorithm becomes relatively higher, as work proportional to the vertex degree must be done regardless of the number of new incident edges. \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/makkar} \caption{Comparison of the performance of our implementation (DLSY, solid line) and the Makkar et al.\ algorithm~\cite{Makkar2017} (makkar, dotted line) for batches of insertions. The figure shows the average batch time for different batch sizes on the rMAT graph with varying prefixes of the generated edge stream to control density. The number of unique edges in the prefix is shown on the $x$-axis. The number of vertices is fixed at 16,384. The dark blue, red, green, and light blue lines are for batches of size $2\times 10^{3}$, $2\times 10^{4}$, $2\times 10^{5}$, and $2\times 10^{6}$, respectively. We see that our new algorithm is faster for small batches and on denser graphs. 
}\label{fig:makkar} \end{figure} \section{Conclusion} In this paper, we have given new dynamic algorithms for the $k$-clique problem. We study this fundamental problem in the \batchdynamic{} setting, which is better suited for parallel hardware that is widely available today, and enables dynamic algorithms to scale to high-rate data streams. We have presented a work-efficient parallel \batchdynamic{} triangle counting algorithm. We also gave a simple, enumeration-based algorithm for maintaining the $k$-clique count. In addition, we have presented a novel parallel \batchdynamic{} $k$-clique counting algorithm based on fast matrix multiplication, which is asymptotically faster than existing dynamic approaches on dense graphs. Finally, we provide a multicore implementation of our parallel \batchdynamic{} triangle counting algorithm and compare it with state-of-the-art implementations that have weaker theoretical guarantees, showing that our algorithm is competitive in practice. \section{Experimental Results}\label{sec:exps} \input{experimental_setup} \input{our_implementation} \input{comparison} \section{Introduction} Subgraph counting algorithms are fundamental graph analysis tools, with numerous applications in network classification in domains including social network analysis and bioinformatics. A particularly important type of subgraph for these applications is the triangle, or $3$-clique---three vertices that are all mutually connected~\cite{newman2003structure}. Counting the number of triangles is a basic and fundamental task that is used in numerous social and network science measurements~\cite{granovetter1977strength, watts1998collective}. In this paper, we study the triangle counting problem and its generalization to higher cliques from the perspective of dynamic algorithms. A $k$-clique consists of $k$ vertices and all ${k \choose 2}$ possible edges among them (for applications of $k$-cliques, see, e.g.,~\cite{hanneman05-introduction}). 
As many real-world graphs change rapidly in real time, it is crucial to design dynamic algorithms that efficiently maintain $k$-cliques upon updates, since the cost of re-computation from scratch can be prohibitive. Furthermore, due to the fact that dynamic updates can occur at a rapid rate in practice, it is increasingly important to design \defn{\batchdynamic{}} algorithms that can take arbitrarily large batches of updates (edge insertions or deletions) as their input. Finally, since the batches, and the corresponding update complexity, can be large, it is also desirable to use parallelism to speed up maintenance and design algorithms that map to modern parallel architectures. Due to the broad applicability of $k$-clique counting in practice and the fact that $k$-clique counting is a fundamental theoretical problem in its own right, there has been a large body of prior work on the problem. Theoretically, the fastest static algorithm for arbitrary graphs uses fast matrix multiplication, and counts $3\ell$-cliques in $O(n^{\ell\omega})$ time, where $\omega$ is the matrix multiplication exponent~\cite{nevsetvril1985complexity}. Considerable effort has also been devoted to efficient combinatorial algorithms. Chiba and Nishizeki~\cite{Chiba1985} show how to compute $k$-cliques in $O(\alpha^{k-2}m)$ work, where $m$ is the number of edges in the graph and $\alpha$ is the arboricity of the graph. This algorithm was recently parallelized by Danisch et al.~\cite{Danisch2018} (although not in polylogarithmic depth). Worst-case optimal join algorithms can perform $k$-clique counting in $O(m^{k/2})$ work as a special case~\cite{Ngo2018,AbergerLTNOR17}. Alon, Yuster, and Zwick~\cite{AYZ97} design an algorithm for triangle counting in the sequential model, based on fast matrix multiplication. Eisenbrand and Grandoni~\cite{EG04} then extend this result to $k$-clique counting based on fast matrix multiplication. 
Vassilevska designs a space-efficient combinatorial algorithm for $k$-clique counting~\cite{vassilevska2009efficient}. Finocchi et al.\ give clique counting algorithms for MapReduce~\cite{Finocchi2015}. Jain and Seshadri provide probabilistic algorithms for estimating clique counts~\cite{Jain2017}. The $k$-clique problem is also a classical problem in parameterized complexity, and is known to be $W[1]$-complete~\cite{downey1995fixed}. Work on maintaining $k$-cliques under dynamic updates began more recently. Eppstein et al.~\cite{Eppstein2009,Eppstein2012} design sequential dynamic algorithms for maintaining size-3 subgraphs in $O(h)$ amortized time and $O(mh)$ space, and size-4 subgraphs in $O(h^2)$ amortized time and $O(mh^2)$ space, where $h$ is the $h$-index of the graph ($h=O(\sqrt{m})$). Ammar et al.\ extend the worst-case optimal join algorithms to the parallel and dynamic setting~\cite{Ammar2018}. However, their update time is not better than the static worst-case optimal join algorithm. Recently, Kara et al.~\cite{KNNOZ19} present a sequential dynamic algorithm for maintaining triangles in $O(\sqrt{m})$ amortized time and $O(m)$ space. Dvorak and Tuma~\cite{Dvorak2013} present a dynamic algorithm that maintains $k$-cliques as a special case in $O(\alpha^{k-2} \log n)$ amortized time and $O(\alpha^{k-2} m)$ space by using low out-degree orientations for graphs with arboricity $\alpha$. \myparagraph{Designing Parallel Batch-Dynamic Algorithms} Traditional dynamic algorithms receive and apply updates one at a time. However, in the \defn{parallel \batchdynamic{}} setting, the algorithm receives \emph{batches of updates} one after the other, where each batch contains a mix of edge insertions and deletions. Unlike traditional dynamic algorithms, a parallel \batchdynamic{} algorithm can apply \emph{all} of the updates together, and also take advantage of parallelism while processing the batch.
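As a small illustration of batch preprocessing in this model, the following sketch reduces a timestamp-ordered batch to at most one effective update per edge, dropping updates that would not change the current graph (the function name and the batch encoding are ours):

```python
def normalize_batch(current_edges, batch):
    """Reduce a timestamp-ordered batch of ('insert'/'delete', u, v)
    updates to at most one effective update per edge.  current_edges
    is a set of frozensets, one per undirected edge in the graph."""
    last_op = {}
    for op, u, v in batch:  # later updates override earlier ones
        last_op[frozenset((u, v))] = op
    effective = []
    for edge, op in last_op.items():
        u, v = sorted(edge)
        # Drop insertions of present edges and deletions of absent ones.
        if op == 'insert' and edge not in current_edges:
            effective.append(('insert', u, v))
        elif op == 'delete' and edge in current_edges:
            effective.append(('delete', u, v))
    return effective
```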
We note that the edges inside a batch may also be ordered (e.g., by a timestamp). If there are duplicate edge insertions within a batch, or an insertion of an edge followed by its deletion, a batch-dynamic algorithm can easily remove such redundant or nullifying updates. The key challenge is to design the algorithm so that updates can be processed in parallel while ensuring low work and depth bounds. The only existing parallel batch-dynamic algorithms for $k$-clique counting are triangle counting algorithms by Ediger et al.~\cite{Ediger2010} and Makkar et al.~\cite{Makkar2017}, which take linear work per update in the worst case. The algorithms in this paper make use of efficient data structures such as parallel hash tables, which let us perform parallel batches of edge insertions and deletions with better work and (polylogarithmic) depth bounds. To the best of our knowledge, no prior work has designed dynamic algorithms for the problem that support parallel batch updates with non-trivial theoretical guarantees. Theoretically-efficient parallel dynamic (and batch-dynamic) algorithms have been designed for a variety of other graph problems, including minimum spanning tree~\cite{Kopelowitz2018,Ferragina1994,Das1994}, Euler tour trees~\cite{TsengDB19}, connectivity~\cite{Simsiri2018,Acar2019,Ferragina1994}, tree contraction~\cite{Reif1994,Acar2017}, and depth-first search~\cite{Khan2017}. Very recently, parallel dynamic algorithms were also designed for the Massively Parallel Computation (MPC) setting~\cite{italiano2019dynamic, dhulipala2020parallel}. \ifFull \myparagraph{Other Related Work} There has been a significant amount of work on practical parallel algorithms for static 3-clique counting, also known as triangle counting (e.g.,~\cite{Suri2011,Arifuzzaman2013,Park2013,Park14, ShunT2015}, among many others). Due to the importance of the problem, there is even an annual competition for parallel triangle counting solutions~\cite{GraphChallenge}.
Practical static counting algorithms for the special cases of $k=4$ and $k=5$ have also been developed~\cite{Hocevar2014,Elenberg2016,Pinar2017,AhmedNRDW17,Dave2017}. Dynamic algorithms have been studied in distributed models of computation under the framework of \emph{self-stabilization}~\cite{schneider1993self}. In this setting, the system undergoes various changes, for example, topology changes, and must quickly converge to a stable state. Most of the existing work in this setting focuses on a single change per round~\cite{censor2016optimal, bonne19distributedclique, assadi2019fully}, although algorithms studying multiple changes per round have been considered very recently~\cite{bamberger2019local, censor2019fast}. Understanding how these algorithms relate to parallel \batchdynamic{} algorithms is an interesting question for future work. \fi \myparagraph{Summary of Our Contributions} In this paper, we design parallel algorithms in the \batchdynamic{} setting, where the algorithm receives a batch of $\Delta \geq 1$ edge updates that can be processed in parallel. Our focus is on parallel \batchdynamic{} algorithms that admit strong theoretical bounds on their work and have polylogarithmic depth with high probability. Note that although our work bounds may be amortized, our depth will be polylogarithmic with high probability, leading to efficient $\mathsf{RNC}$ algorithms. As a special case of our results, we obtain algorithms for parallelizing single updates ($\Delta=1$). We first design a parallel \batchdynamic{} triangle counting algorithm based on the sequential algorithm of Kara et al.~\cite{KNNOZ19}. For triangle counting, we obtain an algorithm that takes $O(\Delta\sqrt{\Delta+m})$ amortized work and $O(\log^* (\Delta+m))$ depth w.h.p.\footnote{We use ``with high probability'' (w.h.p.) to mean with probability at least $1-1/n^c$ for any constant $c>0$.} assuming a fetch-and-add instruction that runs in $O(1)$ work and depth, and uses $O(\Delta+m)$ space.
The work of our parallel algorithm matches that of the sequential algorithm performing one update at a time (i.e., it is work-efficient), and we can perform all updates in parallel with low depth. We then present a new parallel \batchdynamic{} algorithm based on fast matrix multiplication. Using the best currently known parallel matrix multiplication~\cite{Williams12,LeGall14}, our algorithm dynamically maintains the number of $k$-cliques in $O\left(\min\left(\Delta m^{0.469k - 0.235}, (\Delta+m)^{0.469k + 0.469}\right)\right)$ amortized work w.h.p.\ per batch of $\Delta$ updates, where $m$ is defined as the maximum number of edges in the graph before and after all updates in the batch are applied. Our approach is based on the algorithms of~\cite{AYZ97,EG04,nevsetvril1985complexity}, and maintains triples of $k/3$-cliques that together form $k$-cliques. The depth is $O(\log (\Delta+m))$ w.h.p.\ and the space is $O\left((\Delta+m)^{0.469k + 0.469}\right)$. Our results also imply an amortized time bound of $O\left(m^{0.469k - 0.235}\right)$ per update for dense graphs in the sequential setting. Of potential independent interest, we present the first proof of logarithmic depth for the parallelization of any tensor-based fast matrix multiplication algorithm. We also give a simple batch-dynamic $k$-clique listing algorithm, based on enumerating smaller cliques and intersecting them with edges in the batch. The algorithm runs in $O(\Delta(m+\Delta)\alpha^{k-4})$ expected work, $O(\log^{k-2}n)$ depth w.h.p., and $O(m + \Delta)$ space. Finally, we implement our new parallel batch-dynamic triangle counting algorithm for multicore CPUs, and present experimental results on large graphs with varying batch sizes using a 72-core machine with two-way hyper-threading. We found our parallel implementation to be much faster than the multicore implementation of Ediger et al.~\cite{Ediger2010}.
We also developed an optimized multicore implementation of the GPU algorithm by Makkar et al.~\cite{Makkar2017}. We found that our new algorithm is up to an order of magnitude faster than our CPU implementation of the Makkar et al.\ algorithm, and our new algorithm achieves 36.54--74.73x parallel speedup on 72 cores with hyper-threading. Our code is publicly available at {\small \url{https://github.com/ParAlg/gbbs}}. \section{Dynamic $k$-Clique via Fast Matrix Multiplication}\label{sec:mm} In this section, we present our final result, which is a parallel \batchdynamic{} algorithm for counting $k$-cliques based on fast matrix multiplication in general graphs (which may be dense). For bounded-arboricity graphs, we can also count cliques in $O(\Delta(m + \Delta)\alpha^{k-4})$ expected work and $O(\log^{k-2} n)$ depth w.h.p., using $O(m + \Delta)$ space. Due to the similarity of this result to the static parallel $k$-clique counting algorithm given in~\cite{shi2020parallel}, we do not present the details of the proof of this result here, but instead refer the interested reader to Appendix~\ref{sec:arboricityclique}. Using parallel matrix multiplication (discussed in Section~\ref{sec:pmm}), we achieve a better work bound (in terms of $m$) for large values of $k$ than our bound of $O(\Delta(\Delta+m)\alpha^{k-4})$ obtained from the simple algorithm presented in Section~\ref{sec:arboricityclique}. To the best of our knowledge, our algorithm (when made sequential) also achieves the best runtime of any sequential dynamic $k$-clique counting algorithm on dense graphs for large $k$ when using the best currently known matrix multiplication algorithm~\cite{Williams12,LeGall14}.
For values of $k > 9$, our MM-based algorithm achieves $o(m^{k/2 - 1})$ amortized time compared to the arboricity-based algorithm of~\cite{Dvorak2013} that dynamically counts cliques in $\tilde{O}(\alpha^{k-2})$ amortized time, where $\alpha$ is the arboricity of the graph (or $\tilde{O}\left(m^{k/2-1}\right)$ amortized time when $\alpha = \Omega\left(\sqrt{m}\right)$), or the trivial $O\left(m^{k/2-1}\right)$ algorithm of choosing all $k/2 - 1$ combinations of edges containing neighbors of the incident vertices of the inserted edge. Our dynamic algorithm modifies the algorithm of~\cite{AYZ97} for counting triangles based on fast matrix multiplication and combines it with a dynamic version of the static $k$-clique counting algorithm of~\cite{EG04} to count the number of $k$-cliques under edge updates in batches of size $\Delta$. Sections~\ref{sec:kmod3}--\ref{sec:matrix-analysis} prove the following theorem for the case when $k \bmod 3 = 0$. Section~\ref{sec:all-k-alg} describes the changes needed for the case when $k \bmod 3 \neq 0$. \begin{theorem}\label{thm:mm-main} There exists a parallel \batchdynamic{} algorithm for counting the number of $k$-cliques, where $k\bmod 3=0$, that takes $O\left(\min\left(\Delta m^{\frac{(2k - 3)\omega_p}{3(1+\omega_p)}}, (m + \Delta)^{\frac{2k\omega_p}{3(1+\omega_p)}}\right)\right)$ amortized work and $O(\log (m + \Delta))$ depth w.h.p., in $O\left((m + \Delta)^{\frac{2k\omega_p}{3(1+\omega_p)}}\right)$ space, given a parallel matrix multiplication algorithm with exponent $\omega_p$. \end{theorem} Using the best currently known matrix multiplication algorithms with exponent $\omega_p = 2.373$, we obtain the following work and space bounds.
\begin{corollary}\label{cor:strassen-ws} There exists a parallel \batchdynamic{} algorithm for counting the number of $k$-cliques, where $k\bmod 3=0$, which takes $O\left(\min(\Delta m^{0.469k - 0.704}, (m + \Delta)^{0.469k})\right)$ work and $O(\log (m + \Delta))$ depth w.h.p., in $O\left((m + \Delta)^{0.469k}\right)$ space by Corollary~\ref{cor:matrix-exp}. \end{corollary} Specifically, when amortized over the total number of edge updates $\Delta$, we obtain an amortized work bound of $O(m^{0.469k - 0.704})$ per edge update, which is asymptotically better than the combinatorial bound of $O\left(m^{k/2 - 1}\right)$ per update for $k > 9$. To the best of our knowledge, this is also the best known worst-case bound for dense graphs in the sequential setting. Observe that our update algorithm only needs to handle batches of size $0 < \Delta \leq m^{\omega_p/(1+\omega_p)}$. For batches of size $\Delta > m^{\omega_p/(1+\omega_p)}$, we can reinitialize our data structures in $O((m + \Delta)^{0.469k})$ work ($O\left(m^{0.469k - 0.704}\right)$ amortized work per update in the batch), $O(\log \Delta)$ depth, and $O((m + \Delta)^{0.469k})$ space using our initialization algorithm described in Lemma~\ref{lem:mmpreprocessing} and the fast parallel matrix multiplication of Corollary~\ref{cor:matrix-exp}, which is faster than using the update algorithm (in general, we can use any fast matrix multiplication algorithm that has low depth, but the cutoff for when to reinitialize would be different). The analysis of the reinitialization procedure (similar to the static case presented by Alon, Yuster, and Zwick~\cite{AYZ97}) is provided in Section~\ref{sec:matrix-analysis}. Thus, in the following sections, we only describe our dynamic update procedures for batches of size $0 < \Delta \leq m^{\omega_p/(1+\omega_p)}$.
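The cutoff between running the update procedure and reinitializing from scratch can be computed directly from the batch size; a minimal sketch, with $\omega_p = 2.373$ as above (the helper name is ours):

```python
def should_reinitialize(m, delta, omega_p=2.373):
    """Reinitialize the data structures from scratch when the batch
    size delta exceeds the m^{omega_p / (1 + omega_p)} threshold;
    otherwise run the batch-dynamic update procedure."""
    return delta > m ** (omega_p / (1.0 + omega_p))
```

With $\omega_p = 2.373$, the exponent $\omega_p/(1+\omega_p) \approx 0.7035$, so a graph with $m = 100$ edges has a threshold of roughly $25$ updates.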
\ifCameraReady \subsection{Our Algorithm}\label{sec:kmod3}~ \fi \ifFull \subsection{Our Algorithm}\label{sec:kmod3} \fi In what follows, we assume that $k \bmod 3 = 0$ (please refer to Section~\ref{sec:all-k-alg} for $k \bmod 3 \neq 0$). We use a batch-dynamic triangle counting algorithm as a subroutine for our batch-dynamic $k$-clique algorithm. Our algorithm for maintaining triangles is a batch-dynamic version of the triangle counting algorithm by Alon, Yuster, and Zwick (AYZ)~\cite{AYZ97}. However, our dynamic algorithm cannot be used directly for the case of $k = 3$ (and only applies for $k > 3$) due to the following challenge, which we resolve in Section~\ref{sec:alg-overview}. Furthermore, our analysis assumes $k > 6$, both for simplicity and because for smaller $k$ our algorithm from Section~\ref{sec:arboricityclique} is faster. \myparagraph{Adapting the Static Algorithm} We face a major challenge when adapting the algorithm of Alon, Yuster, and Zwick~\cite{AYZ97} to our setting, as well as to the sequential dynamic setting. Because the AYZ algorithm counts triangles in the static setting, it suffices to consider two different types of triangles and count the triangles of each type separately. The two types of triangles considered are triangles that contain at least one low-degree vertex and triangles that contain only high-degree vertices. In the static case, we can find all low-degree vertices, but in the dynamic case, we cannot afford to look at all low-degree vertices. If we only look at low-degree vertices incident to edge updates, then the following case may occur: an edge update between two high-degree vertices forms a new triangle incident to a low-degree vertex. In such a case, only looking at the vertices adjacent to this edge update will not find this triangle. We resolve this issue for $k > 3$ via Lemma~\ref{lem:one-low-high} in Section~\ref{sec:alg-overview}.
\myparagraph{Definitions and Data Structures} Given a graph $G$, we construct an auxiliary graph $G'$ in which each vertex represents a clique of size $\ell = k/3$ in $G$.\footnote{We use a hash table $\mathcal{Q}$ that stores each vertex in $G'$ as an index to a set of vertices in $G$ and also stores each set of vertices composing an $\ell$-clique in $G$ (the vertices are sorted lexicographically and concatenated into a string) as an index to a vertex in $G'$.} An edge $(u, v)$ between two vertices in $G'$ exists if and only if the cliques represented by $u$ and $v$ form a clique of size $2\ell$ in $G$. Our algorithm maintains a dynamic total triangle count $C$ on $G'$. Let $M=2m+1$ and let a \defn{low-degree} vertex in $G'$ be a vertex with degree less than $M^{t \ell}/2$ (for some $0<t<1$ to be determined later) and a \defn{high-degree} vertex in $G'$ be a vertex with degree greater than $3M^{t\ell}/2$. The vertices with degree in the range $[M^{t\ell}/2, 3M^{t\ell}/2]$ can be classified as either low-degree or high-degree. In addition to the total triangle count, we maintain a count, $C_{\mathcal{L}}$, of all triangles involving a low-degree vertex. Following the algorithm of AYZ~\cite{AYZ97}, we maintain a two-level hash table, $\mathcal{L}$, representing the neighbors of low-degree vertices in $G'$ (a table mapping a low-degree vertex to another hash table containing its incident edges). We also maintain the adjacency matrix $A$ of high-degree vertices in $G'$ used in AYZ as a two-level hash table for easy insertion and deletion of additional high-degree vertices. Finally, we maintain another hash table $\mathcal{D}$ that dynamically maintains the degrees of the vertices. A simplified version of the algorithm is given in Algorithm~\ref{alg:mmcliquesimple}.
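To make the construction of $G'$ concrete, the following naive sketch enumerates the $\ell$-clique vertices of $G'$ and connects two of them exactly when they are disjoint and their union is a $2\ell$-clique in $G$. It recomputes $G'$ from scratch, whereas the data structures above maintain this information dynamically:

```python
from itertools import combinations

def build_auxiliary_graph(edges, ell):
    """Vertices of G' are the ell-cliques of G; (S, T) is an edge of G'
    iff S and T are disjoint and S union T is a 2*ell-clique of G."""
    adj = {frozenset(e) for e in edges}
    vertices = sorted({v for e in edges for v in e})

    def is_clique(vs):
        return all(frozenset(p) in adj for p in combinations(vs, 2))

    nodes = [frozenset(c) for c in combinations(vertices, ell)
             if is_clique(c)]
    gp_edges = {frozenset((s, t)) for s, t in combinations(nodes, 2)
                if not (s & t) and is_clique(s | t)}
    return nodes, gp_edges

# For G = K6 and ell = 2 (i.e., k = 6): G' has 15 vertices (the edges
# of K6) and 45 edges (pairs of disjoint edges, each spanning a 4-clique).
K6_edges = [(u, v) for u, v in combinations(range(6), 2)]
```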
\begin{mdframedalg}{Simplified matrix multiplication $k$-clique counting algorithm.}\label{alg:mmcliquesimple} \begin{algorithmic}[1] \Function{Count-Cliques}{$\mathcal{B}$} \State Update graph $G'$ with $\mathcal{B}$ by inserting new $\ell$- and $2\ell$-cliques. \State Find batch of insertions into $G'$, $\mathcal{B}'_I$, and batch of deletions, $\mathcal{B}'_D$. \State Determine the final degrees of every vertex in $G'$ after performing updates $\mathcal{B}'_I$ and $\mathcal{B}'_D$. \ParFor{$\mathtt{insert}(u, v) \in \mathcal{B}'_I, \mathtt{delete}(u, v) \in \mathcal{B}'_D$}\footnotemark \If{either $u$ or $v$ is low-degree: $d(u) \leq \delta$ or $d(v) \leq \delta$} \State Enumerate all triangles containing $(u, v)$. Let this set be $T$. \State By Lemma~\ref{lem:one-low-high}, find all possible triangles representing the same triangle $t \in T$. \State Correct for duplicate counting of triangles. \Else \State Update $A$ (adjacency list for high-degree vertices). \EndIf \EndParFor \State Compute $A^3$. The diagonal provides the triangle counts for all triangles containing only high-degree vertices. \State Sum the counts of all triangles. \State Correct for duplicate counting of cliques. \EndFunction \end{algorithmic} \end{mdframedalg} \footnotetext{Some care must be taken to ensure that rebalancing does not incur too much work. The details of how to deal with rebalancing are given in the full implementation, Algorithm~\ref{alg:mmclique}.} \ifCameraReady \subsection{Overview}\label{sec:alg-overview}~ \fi \ifFull \subsection{Overview}\label{sec:alg-overview} \fi Our algorithm proceeds as follows. Each edge in an update in the batch (edges in $G$) can either create at most $O(m^{k/3 - 1})$ new $(2k/3)$-cliques or disrupt $O(m^{k/3 - 1})$ existing $(2k/3)$-cliques in $G$. We treat each of these newly created or destroyed cliques as an edge insertion or deletion in $G'$. 
Since we preprocess the updates to $G$ such that there are no duplicate or nullifying updates, a destroyed clique cannot be created again or vice versa. This means that the set of updates to $G'$ will also contain no nullifying updates. Importantly, the AYZ algorithm does not take into account edge insertions and deletions between two high-degree vertices that create or destroy triangles containing at least one low-degree vertex.\footnote{Note that this is fine for the static case but not for the dynamic case.} Thus, we must prove the following lemma for any edge insertion/deletion in $G$ that results in an edge update in $G'$ between two high-degree vertices which creates or destroys a triangle containing a low-degree vertex. This lemma is crucial for our algorithm, since it ensures that a triangle formed by two high-degree vertices and a low-degree vertex will be discovered by enumerating all triangles formed or deleted by an edge update incident to the low-degree vertex, and its current edges. Furthermore, this lemma is the reason why our algorithm does not work for the case of $k = 3$. \begin{lemma}\label{lem:one-low-high} Given a graph $G=(V, E)$, the corresponding $G' = (V', E')$, and for $k > 3$, suppose an edge insertion (resp.\ deletion) between two high-degree vertices in $G'$ creates a new triangle, $(u_H, w_H, x_L)$, in $G'$ which contains a low-degree vertex $x_L$. Let $R(y)$ denote the set of vertices in $V$ represented by a vertex $y \in V'$. Then, there exists a new edge insertion (resp.\ deletion) in $G'$ that is incident to $x_L$ and creates a new triangle $(u', w', x_L)$ such that $R(u') \cup R(w') = R(u_H) \cup R(w_H)$. \end{lemma} \begin{proof} We prove this lemma for edge insertions in $G$. The proof can be easily modified to account for the case of edge deletions in $G$.
Suppose an edge insertion $(y, z)$ in $G$ leads to an edge insertion in $G'$ between the two high-degree vertices $u_H$ and $w_H$ that creates the new triangle $(u_H, w_H, x_L)$. The creation of the new triangle signifies that a new clique was created in $G$ consisting of vertices $R(u_H) \cup R(w_H) \cup R(x_L)$. Then, the edge insertion $(y, z)$ created a new $2k/3$-clique in $G$ consisting of the vertices in $R(u_H) \cup R(w_H)$. Since the edge $(y, z)$ between $y, z \in V$ did not exist previously but now exists, ${2k/3 - 2 \choose k/3 - 2}$ new cliques were created using the set of vertices in $R(u_H) \cup R(w_H)$. Each of these new cliques corresponds to a new vertex in $G'$. Suppose $u'$ is one such new vertex representing vertex set $R(u') \subseteq R(u_H) \cup R(w_H)$ and $w'$ represents vertex set $R(w') = \left(R(u_H) \cup R(w_H)\right) \setminus R(u')$. Then, new edges are inserted between $u'$ and $w'$ and between $u'$ and $x_L$ (the edge $(w', x_L)$ might be a newly inserted edge or it is already present in the graph) since all triangles representing the clique of vertices $(u_H, w_H, x_L)$ must be present in $G'$. Thus, the new triangle $(u', w', x_L)$ is created in $G'$. \end{proof} We now describe our dynamic clique counting algorithm that combines the AYZ algorithm~\cite{AYZ97} with the clique counting algorithm of~\cite{EG04}. Given the batch of edge insertions/deletions into $G$, we first compute the duplicate and nullifying updates and remove them. Then, for a set of insertions/deletions into $G'$, we form two batches, one containing the edge insertions and one containing the edge deletions. Given the batch of updates to $G'$, we now formulate a dynamic version of the AYZ algorithm~\cite{AYZ97} on the updates to $G'$. For the batch of updates, we first look at the updates pertaining to the low-degree vertices. 
For every update $(u, v)$ that contains at least one low-degree vertex (without loss of generality, let $v$ be a low-degree vertex), we search all of $v$'s $O\left(3M^{t\ell}/2\right)$ neighbors and check whether a triangle is formed (resp.\ deleted). For each triangle formed (resp.\ deleted), we update the total triangle count of the graph $G'$. For high-degree vertices, we update our adjacency matrix $A$ containing vertices with high-degree. To compute the triangles containing high-degree vertices, we need only compute $A^3$ (the diagonal will then provide us with the triangle counts). Lastly, one clique results in many different copies of triangles. We must obtain the correct clique count by dividing the number of triangles by the number of ways we can partition the vertices in a $k$-clique into triples of subcliques of size $k/3$. \subsection{Detailed Parallel Batch-Dynamic Matrix Multiplication Based Algorithm}\label{sec:matrix-full-impl} The analysis we perform in Section~\ref{sec:matrix-analysis} on the efficiency of our algorithm is with respect to the detailed implementation. We provide the detailed description and implementation of our algorithm below in Algorithm~\ref{fig:detailed-matrix}. \begin{breakablealgorithm}{}\label{alg:mmclique} \caption{Detailed matrix multiplication based parallel batch-dynamic $k$-clique counting algorithm.}\label{fig:detailed-matrix} \begin{enumerate}[label=(\textbf{\arabic*}),topsep=0pt,itemsep=0pt,parsep=0pt,leftmargin=20pt] \item Given a batch $\mathcal{B}$ of non-nullifying edge updates,\footnote{Recall that we can always remove nullifying edge updates as given in Section~\ref{sec:update-alg}.} first update the graph $G'$. If the update is an insertion, $\mathtt{insert}(u, v)$, add all new $\ell$-cliques created by it into $G'$. 
If the update is a deletion, $\mathtt{delete}(u, v)$, mark all $\ell$-cliques destroyed by it in $G'$.\footnote{We check in our hash table $\mathcal{Q}$ whether each newly created (deleted) $\ell$-clique is already represented (non-existent) in the graph $G'$. If not, we insert the new clique and/or remove an old clique from $\mathcal{Q}$.} For each update, $\mathtt{insert}(u, v)$ or $\mathtt{delete}(u, v)$, determine all $2\ell$-cliques that include it. This will determine the set of edge insertions/deletions into $G'$. Let all edge updates that destroy $2\ell$-cliques be a batch $\mathcal{B}'_{D}$ of edge deletions in $G'$. Then, let all $2\ell$-cliques formed by edge updates be a batch of edge insertions $\mathcal{B}'_{I}$ into $G'$. Note that edge insertions in the batch could be edges for newly created vertices; for each such newly created vertex, we also add the vertex into $G'$ and its associated data structures. \label{matrix:determineedgeinsertions} \item Determine the final degree of each vertex after all insertions in $\mathcal{B}'_{I}$ and all deletions in $\mathcal{B}'_{D}$. (We do not perform the updates yet--only compute the final degrees.) For all vertices, $X$, which become low-degree after the set of all updates (and were originally high-degree), we create a batch of updates $\mathcal{B}'_{I, L}$ consisting of old edges (not update edges) that are adjacent to vertices in $X$ and were not deleted by the batches of updates. For all vertices, $Y$, which become high-degree after the set of updates (and were originally low-degree), we create a batch of updates $\mathcal{B}'_{D, H}$ consisting of old edges adjacent to vertices in $Y$ that were not deleted after the batches of updates. \footnote{The batch of updates $\mathcal{B}'_{I, L}$ is used to rebalance the data structures when vertices need to be removed from $A$ after becoming low-degree. 
Because the edges adjacent to these vertices need to be inserted into the structures maintaining low-degree vertices, $\mathcal{B}'_{I, L}$, then, can be thought of as a set of edge insertions to update low-degree data structures. Similarly, vertices which become high-degree need to be deleted from low-degree structures, and hence, $\mathcal{B}'_{D, H}$ can be thought of as a set of edge deletions from low-degree structures.} \label{matrix:determinefinaldegree} \item Let the edges in $\mathcal{B}'_{D} \cup \mathcal{B}'_{D, H}$ be the batch of edge deletions to $G'$. For each of the edges in $\mathcal{B}'_{D} \cup \mathcal{B}'_{D, H}$, we first count the number of triangles it is a part of that contain at least one low-degree vertex. We call this the set of deleted triangles. Let this number of deleted triangles be $T_D$ (initially set $T_D = 0$).\label{matrix:computetd} \begin{enumerate} \item To count the number of triangles that contain at least one low-degree vertex, we first check for each edge whether one of its endpoints is low-degree. Let this set of edge deletions be $D'_L \subseteq \mathcal{B}'_{D} \cup \mathcal{B}'_{D, H}$. \item For every edge $(u', v') \in D'_L$, without loss of generality let $u'$ be the lexicographically\footnote{The specific lexicographical order for the vertices in $G'$ is fixed but can be arbitrary.} first low-degree vertex. For every edge $(u', w')$ incident to $u'$, check whether $(u', v')$ forms a triangle with $(u', w')$. \item For every $(u', v', w')$ triangle deleted (where $(u', v', w')$ is sorted lexicographically), call\\ $t \gets \counttriangles{(u', v', w'), (u', v')}$, and atomically update $T_D \gets T_D + t$. 
\end{enumerate} \item Update $C_{\mathcal{L}} \leftarrow C_{\mathcal{L}} - T_D$.\label{matrix:updatelowcountone} \item Update the data structures using the batches of edge insertions and deletions, $\mathcal{B}'_D$ and $\mathcal{B}'_I$:\label{matrix:updatestructs} \begin{enumerate} \item Using $\mathcal{B}'_{D}$, delete the relevant edges in $\mathcal{L}$ (containing neighbors of low-degree vertices) and then change the relevant values in $A$ to $0$. We also update $\mathcal{D}$ with the new degrees of the vertices for which an adjacent edge was deleted. \label{matrix:updatedegreeone} \item For the batch of edge insertions into $G'$, $\mathcal{B}'_{I}$, we first insert the relevant edges into $\mathcal{L}$. Then, we change the relevant entries in $A$ from $0$ to $1$. Finally, we update $\mathcal{D}$ with the new degrees of the vertices following the edge insertions. \label{matrix:updateinsertions} \item Remove all vertices which are no longer high-degree (i.e.\ their degree is now less than $M^{t\ell}/2$) from $A$. Create entries in $\mathcal{L}$ for all edges adjacent to each vertex that was removed from $A$. \label{matrix:rebalancelowtohigh} \item Remove the edges of all vertices which are no longer low-degree (i.e.\ their degree is now greater than $3M^{t\ell}/2$) from $\mathcal{L}$ and create new entries in $A$ with the new high-degree vertices. Set the relevant entries in $A$ corresponding to edges adjacent to the new high-degree vertices to $1$.\label{matrix:rebalancehigh} \end{enumerate} \item Let the edges in $\mathcal{B}'_{I} \cup \mathcal{B}'_{I, L}$ be the batch of edge insertions to $G'$. For each of the edges in $\mathcal{B}'_{I} \cup \mathcal{B}'_{I, L}$, we first count the number of triangles it is a part of that contain at least one low-degree vertex. We call this the set of inserted triangles. Let this value be $T_I$ ($T_I = 0$ initially).
\label{matrix:countinsertions} \begin{enumerate} \item To count the number of triangles that contain at least one low-degree vertex, we first check for each edge whether one of its endpoints is low-degree. Let this set of edge insertions be $I'_L \subseteq \mathcal{B}'_{I} \cup \mathcal{B}'_{I, L}$. \item For every edge $(u', v') \in I'_L$, without loss of generality let $u'$ be the lexicographically first low-degree vertex. For every edge $(u', w')$ of $u'$, check whether $(u', v')$ forms a triangle with $(u', w')$. \item For every newly inserted triangle $(u', v', w')$ (where $(u', v', w')$ is sorted lexicographically), call\\ $t = \counttriangles{(u', v', w'), (u', v')}$, and atomically update $T_I \leftarrow T_I + t$. \end{enumerate} \item Update $C_{\mathcal{L}} \leftarrow C_{\mathcal{L}} + T_I$.\label{matrix:updatelowcounttwo} \item We perform parallel matrix multiplication after all entries in $A$ have been modified to calculate $S = A^3$. Then, $C_{\mathcal{H}} = \frac{1}{2}\sum_{i \in n} S_{i, i}$. \label{matrix:computeacubed} \item Update $C \leftarrow C_{\mathcal{L}} + C_{\mathcal{H}}$. \label{matrix:updatecount} \item Compute the number of $k$-cliques by dividing $C$ by ${k \choose k/3}{2k/3 \choose k/3}$. \label{matrix:computetriangles} \item If $m$ falls outside the range $[M/4,M]$, then reinitialize the degree thresholds and data structures.\label{matrix:reinitialize} \end{enumerate} \end{breakablealgorithm} Algorithm~\ref{alg:mmclique} uses a subroutine defined below in Algorithm~\ref{alg:subroutine}. \begin{mdframedalg}{Subroutine used in our detailed matrix multiplication $k$-clique counting algorithm that counts the number of unique triangles containing an edge.} \label{alg:subroutine} \begin{enumerate}[label=(\textbf{\arabic*}),topsep=0pt,itemsep=0pt,parsep=0pt,leftmargin=20pt] \item Let $u', v', w' \in V'$ represent the sets of vertices $U', X', W' \subseteq V$, respectively. 
\item Enumerate all possible triangles that represent the clique containing vertices $U' \cup X' \cup W'$.\label{count:enum} \item Sort the vertices of each triangle lexicographically to obtain tuples of vertices representing the triangles. Let $\mathsf{ID}(u', v')$ be the ID of edge $(u', v')$.\footnote{There are many possible ways to assign IDs to edges--for example, the ID of an edge could be the concatenation of the IDs of the vertices composing the edge.} \item For each enumerated tuple $(x', y', z')$, create a label containing the tuple representing the triangle concatenated with all labels (sorted lexicographically) of edges that are updates in the triangle. Thus, each label can have $4$ to $6$ entries consisting of the three vertices of a triangle tuple and at most $3$ edge labels. For example, suppose that $(x', y')$ is the only edge that is an updated edge in triangle $(x', y', z')$. Then, the label representing this triangle is $(x', y', z', \mathsf{ID}(x', y'))$ where the ID of the edge is given by $\mathsf{ID}(x', y')$. The IDs of all deleted or inserted edges are appended to the end of the label in the order $\mathsf{ID}(x', y'), \mathsf{ID}(y', z'), \mathsf{ID}(z', x')$. \item Sort all labels lexicographically. \item Without loss of generality, let $L = (x', y', z', \mathsf{ID}(x', y'))$ be the lexicographically-first of these triangle labels which contains at least one edge deletion (resp.\ edge insertion) of an edge that is incident to at least one low-degree vertex. \item If $(u', v', w')$ corresponds to the lexicographically-first label $L$ \emph{and} $\mathsf{ID}(u', v')$ is the first edge ID in the label that contains a low-degree vertex, then $(u', v')$ performs the following steps: \begin{enumerate} \item Count the number of unique triangles (using the labels, one can count the unique triangles) containing at least one edge deletion (resp.\ insertion) and at least one low-degree vertex as $T_D$ (resp.\ $T_I$). 
We count using the generated labels for the triangles enumerated in step~\ref{count:enum} of this procedure. \item Return $T_D$ (resp.\ $T_I$). \end{enumerate} \item If $(u', v', w')$ is not equal to $L$ \emph{or} $\mathsf{ID}(u', v')$ is not the first edge ID that contains a low-degree vertex in the label, return $0$. \end{enumerate} \end{mdframedalg} \ifCameraReady \subsection{Analysis}\label{sec:matrix-analysis}~ \fi \ifFull \subsection{Analysis}\label{sec:matrix-analysis} \fi In Theorem~\ref{lem:correctness-matrix}, we prove that the procedure correctly returns the exact number of $k$-cliques in $G$. The proof is similar to that of AYZ~\cite{AYZ97}, except that each $\ell$-clique can appear multiple times in $G'$, so we need to normalize by the constant stated in step~\ref{matrix:computetriangles} of Algorithm~\ref{alg:mmclique}. \begin{restatable}{theorem}{matrixcorrectness}\label{lem:correctness-matrix} Algorithm~\ref{alg:mmclique} correctly computes the exact number of $k$-cliques in a graph $G = (V, E)$ when $k \bmod 3 = 0$. \end{restatable} \begin{proof} We first show that all triangles in $G'$ represent a $k$-clique in $G$. A vertex exists in $G'$ if and only if it is a $(k/3)$-clique in $G$. Similarly, an edge exists in $G'$ if and only if it connects two vertices in $G'$ that form a $(2k/3)$-clique in $G$. Thus, a triangle connects $3$ pairs of $3$ distinct $(k/3)$-cliques. Each pair of these $(k/3)$-cliques induces a complete subgraph, so every edge among the $k$ underlying vertices of $G$ is present and the triangle represents a $k$-clique. Now we show that for each unique $k$-clique in $G$, there exist exactly ${k \choose k/3}{2k/3 \choose k/3}$ triangles representing it in $G'$. For each $k$-clique in $G$, there are ${k \choose k/3}$ distinct $(k/3)$-subcliques. Each of these subcliques is represented by a vertex in $G'$. Each triple of pairwise vertex-disjoint subcliques forms a triangle in $G'$.
There are ${k \choose k/3}$ ways to choose the first subclique, ${2k/3 \choose k/3}$ ways to choose the second subclique, and ${k/3 \choose k/3}$ ways to choose the third subclique in the triple. Thus, the total number of duplicate triangles is ${k \choose k/3}{2k/3 \choose k/3}$. We conclude by proving that our algorithm finds the exact number of triangles in $G'$. All triangles containing edge updates where at least one endpoint is low-degree can be found by searching all of the neighbors of the low-degree vertex. All such neighbors are stored in $\mathcal{L}$; thus, searching through the entries in $\mathcal{L}$ is enough to find all triangles containing at least one low-degree vertex and an edge update to a low-degree vertex. By Lemma~\ref{lem:one-low-high}, all triangles with a low-degree vertex that contain a single edge update between high-degree vertices can be found via the $\mathtt{count\_new\_low\_degree\_triangles}$ procedure. The same logic handles vertices that change status from high-degree to low-degree, since we treat edges incident to these vertices as new edge insertions. Finally, the procedure ensures that no duplicate triangles are added to the update triangle count because the lexicographically first triangle counts all possible triangles representing the same clique (and no others increment the count). Table $A$ is used to compute, via the trace of $A^3$, the number of triangles that contain no low-degree vertices. Thus, by computing $A^3$, we find the remaining triangles which only contain high-degree vertices. Finally, dividing by the total number of different triangles that are created per unique clique gives us the precise count of the number of $k$-cliques in $G$. \end{proof} \myparagraph{Cost} We now analyze the work, depth, and space of the dynamic algorithm.
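Before diving into the cost bounds, a quick standalone sketch tabulates the normalization constant ${k \choose k/3}{2k/3 \choose k/3}$ used in step~\ref{matrix:computetriangles} (an illustration only, not part of the algorithm):

```python
from math import comb

def normalization(k: int) -> int:
    """Per-clique multiplicity C(k, k/3) * C(2k/3, k/3), for k divisible by 3."""
    assert k % 3 == 0
    return comb(k, k // 3) * comb(2 * k // 3, k // 3)

print(normalization(3))  # 6  = C(3,1) * C(2,1)
print(normalization(6))  # 90 = C(6,2) * C(4,2)
print(normalization(9))  # 1680 = C(9,3) * C(6,3)
```

For example, with $k = 6$ the final triangle count is divided by $90$ to recover the number of $6$-cliques.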
Our analysis assumes that $m^{\omega_p/(1+\omega_p)} = O(m^{t\ell})$ so that the $O(m^{t\ell})$ terms in our analysis are only affected by a constant factor for our batch size of $\Delta \leq m^{\omega_p/(1+\omega_p)}$. This is true for $k > 6$ because $t\geq 1/3$ and $\ell \geq 3\omega_p/(1+\omega_p)$. For small $\ell$ we use the combinatorial algorithm from Section~\ref{sec:arboricityclique}, which is also faster. First, we compute the work and depth bound of performing preprocessing on an initial graph $G = (V, E)$ with $m$ edges. We can also apply this preprocessing directly without running the update algorithm whenever we receive a batch of size $\Delta > m^{\omega_p/(1+\omega_p)}$. For preprocessing, we use a different threshold $m^{t'\ell}$ for low-degree and high-degree vertices. Searching for all the triangles containing at least one low-degree vertex takes $O\left(m^{(1+t')\ell}\right)$ work by a similar calculation as in Lemma~\ref{lem:compute-low-deg} and searching for triangles containing all high-degree vertices takes $O\left(m^{(1-t')\ell\omega_p}\right)$ work by Lemma~\ref{lem:compute-from-scratch}. Thus, the optimal value $t'$ is when $m^{(1+t')\ell} = m^{(1-t')\ell\omega_p}$, which gives $t'=\frac{\omega_p -1}{\omega_p + 1}$ as in~\cite{AYZ97}. \begin{restatable}{lemma}{preprocessing}\label{lem:mmpreprocessing} Preprocessing the graph $G = (V, E)$ with $m$ edges into $G'$, creating the data structures $\mathcal{L}$, $A$, and $\mathcal{D}$, and counting the number of $k$-cliques takes $O\left( m^{\frac{2k\omega_p}{3(1+\omega_p)}}\right)$ work and $O(\log m)$ depth w.h.p., and $O\left(m^{\frac{2k\omega_p}{3(1+\omega_p)}}\right)$ space assuming a parallel matrix multiplication algorithm with coefficient $\omega_p$. Using the fastest parallel matrix multiplication currently known (\cite{LeGall14}, Corollary~\ref{cor:matrix-exp}), preprocessing takes $O\left(m^{0.469k}\right)$ work and $O(\log m)$ depth w.h.p., and $O(m^{0.469k})$ space. 
\end{restatable} \ifFull \begin{proof} The graph $G'$ has size $O(m^{\ell})$ by Lemma~\ref{lem:edge-clique-bound}. We can find all $\ell$-cliques using $O(m^{\ell/2})$ work and $O(1)$ depth and all $2\ell$-cliques using $O(m^{\ell})$ work and $O(1)$ depth. Initializing the data structures $\mathcal{L}$ and $\mathcal{D}$ with $O(m^{\ell})$ entries requires insertions into two parallel hash tables. This takes $O(m^{\ell})$ work and $O(\log^* m)$ depth w.h.p., and $O(m^{\ell})$ space. There are $O\left(m^{\frac{2\ell}{(1+\omega_p)}}\right)$ high-degree vertices, which means that initializing $A$, the adjacency matrix, requires creating a $2$-level hash table with $O\left(m^{\frac{4\ell}{(1+\omega_p)}}\right)$ entries. This takes $O\left(m^{\frac{4\ell}{(1+\omega_p)}}\right)$ work and $O(\log^* m)$ depth w.h.p., and $O\left(m^{\frac{4\ell}{(1+\omega_p)}}\right)$ space. Computing $A^3$ requires $O\left(m^{\frac{2\ell\omega_p}{(1+\omega_p)}}\right)$ work, $O(\log m)$ depth, and $O\left(m^{\frac{2\ell\omega_p}{(1+\omega_p)}}\right)$ space. Finally, counting all the triangles with at least one low-degree vertex requires $O\left(m^{\frac{2\ell\omega_p}{(1+\omega_p)}}\right)$ work and $O(1)$ depth (by performing $O\left(m^{(1+t')\ell}\right)$ lookups in $\mathcal{L}$). By Corollary~\ref{cor:matrix-exp}, $\omega_p = 2.373$, and since $\ell = k/3$, preprocessing takes $O\left(m^{0.469k}\right)$ work, $O(\log m)$ depth, and $O(m^{0.469k})$ space. \end{proof} \fi Next, we analyze the update procedure of our dynamic algorithm. To start, we bound the number of vertices and edges in $G'$ (representing the number of $\ell$-cliques and $2\ell$-cliques in $G$, respectively) in terms of $m$ (the number of edges in $G$) below. \begin{lemma}[\cite{Chiba1985}]\label{lem:edge-clique-bound} Given a graph $G = (V, E)$ with $m$ edges, the number of $k$-cliques that $G$ can have is bounded by $O(m^{k/2})$. \end{lemma} \begin{lemma}\label{lem:space} $G'$ uses $O(m^{\ell})$ space.
\end{lemma} \begin{proof} Each vertex in $G'$ represents an $\ell$-clique. By Lemma~\ref{lem:edge-clique-bound}, $G'$ has $O(m^{\ell/2})$ vertices and thus $O(m^{\ell})$ edges. \end{proof} Before we compute the number of triangles in $G'$, we must update $G'$ and the data structures associated with $G'$ with our batch of updates. \begin{restatable}{lemma}{updatingstructs}\label{lem:updating-structs} Updating $G'$ and the associated data structures $\mathcal{L}$ and $A$ after a batch of $\Delta$ edge updates in $G$ takes $O(\Delta m^{\ell - 1} + \Delta m^{(2-2t)\ell - 1})$ amortized work and $O(\log^* m)$ depth w.h.p., and $O\left(m^{\ell} + m^{(2-2t)\ell}\right)$ space. \end{restatable} \ifFull \begin{proof} In step~\ref{matrix:determineedgeinsertions} we first add and/or delete vertices in $G'$. Since each vertex in $G'$ represents a different clique of size $\ell$, one edge update in $G$ can result in $O(m^{(\ell/2) - 1})$ new vertices (or vertex deletions) since given two vertices (the endpoints of the edge update) that must be in the $\ell$-clique, we only need to look for all $(\ell - 2)$-cliques in $G$. For a batch of size $\Delta$, the total number of vertices added or deleted in $G'$ is $O(\Delta m^{(\ell/2) -1})$. In steps~\ref{matrix:updatedegreeone} and~\ref{matrix:updateinsertions}, updating the data structures $\mathcal{L}$, $A$, and $\mathcal{D}$ by insertions/deletions into parallel hash tables requires $O(\Delta m^{\ell-1})$ amortized work and $O(\log^*m)$ depth w.h.p. Recall that the number of edges in $G'$ is determined by the total number of $2\ell$-cliques in $G$. One edge update can affect at most $O(m^{\ell - 1})$ $2\ell$-cliques in $G$, thus, given a $\Delta$-batch of edge updates in $G$, there will be $O(\Delta m^{\ell - 1})$ edge updates in $G'$, separated into a deletion batch $\mathcal{B}'_D$ and an insertion batch $\mathcal{B}'_I$. We now analyze the cost for steps~\ref{matrix:rebalancelowtohigh} and~\ref{matrix:rebalancehigh}. 
Adding/removing a row and column from $A$ takes $O(m^{(1-t)\ell})$ amortized work. Since there are $O(m^{\ell-1})$ edge updates in $G'$ per update in $G$, the total work for resizing is $O(m^{(2-t)\ell-1})$ per edge update in $G$. The work for adding/removing a vertex from $\mathcal{L}$ is $O(m^{t\ell})$, and since there are $O(m^{\ell-1})$ edge updates per update in $G$, the total work is $O(m^{(1+t)\ell-1})$ per update in $G$. We must have $\Omega(m^{t\ell})$ updates in $G'$ before a vertex changes status (becomes high-degree if it originally was low-degree and vice versa) and needs to update $A$ and $\mathcal{L}$. Therefore, we can charge the work of updating $A$ and $\mathcal{L}$ against $\Omega(m^{t\ell})$ updates in $G'$. Thus, the amortized work for updating $A$ and $\mathcal{L}$ given a batch of $\Delta$ updates in $G$ is $O\left(\Delta\left(m^{(2-2t)\ell-1}+m^{\ell-1}\right)\right)$ for steps~\ref{matrix:determineedgeinsertions} and~\ref{matrix:updatestructs}. The depth is $O(\log^*m)$ w.h.p.\ due to hash table operations. The data structures $\mathcal{L}$, $\mathcal{D}$, and $A$ use a combined $O(m^{\ell} + m^{(2-2t)\ell})$ space because there are $O(m^\ell)$ edges in the graph and $A$ contains $O(m^{(2-2t)\ell})$ entries. \end{proof} \fi By Lemma~\ref{lem:updating-structs}, step~\ref{matrix:determinefinaldegree} takes $O\left(\Delta m^{\ell-1}\right)$ amortized work to determine the final degrees and $O(\Delta m^{\ell - 1} + \Delta m^{(2-2t)\ell - 1})$ amortized work to compute $B'_{I, L}$ and $B'_{D, H}$. In total, step~\ref{matrix:determinefinaldegree} takes $O(\Delta m^{\ell - 1} + \Delta m^{(2-2t)\ell - 1})$ amortized work, $O(\log m)$ depth (dominated by computing the final degrees), and $O(m^{\ell} + m^{(2-2t)\ell})$ space by Lemma~\ref{lem:updating-structs}. Steps \ref{matrix:updatelowcountone}, \ref{matrix:updatelowcounttwo}, \ref{matrix:updatecount}, and \ref{matrix:computetriangles} of the algorithm take $O(1)$ work.
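The dense step~\ref{matrix:computeacubed} relies on the fact that the diagonal of $A^3$ counts closed walks through triangles: each triangle of an undirected graph contributes six closed walks of length three, one per start vertex and direction. A toy standalone illustration of this identity (not the parallel implementation, which applies the analogous diagonal sum to the high-degree table $A$):

```python
import numpy as np

def triangles_via_trace(adj: np.ndarray) -> int:
    """Count triangles of an undirected graph: trace(A^3) counts each one 6 times."""
    cube = adj @ adj @ adj
    return int(np.trace(cube)) // 6

# K4 (complete graph on 4 vertices) contains C(4,3) = 4 triangles.
k4 = np.ones((4, 4), dtype=np.int64) - np.eye(4, dtype=np.int64)
print(triangles_via_trace(k4))  # 4
```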
The following lemmas bound the cost for the remaining steps. Lemma~\ref{lem:compute-low-deg} below bounds the cost for steps \ref{matrix:computetd} and \ref{matrix:countinsertions}. The proof is based on counting the number of new edge updates necessary in $G'$. \begin{restatable}{lemma}{mmlowdeg}\label{lem:compute-low-deg} Computing all new $k$-cliques represented by triangles that contain at least one low-degree vertex in $G'$ takes $O(\Delta m^{(t + 1)\ell - 1})$ work and $O(\log^* m)$ depth w.h.p., and $O(m^{\ell})$ space. \end{restatable} \ifFull \begin{proof} We first bound the work necessary to perform steps~\ref{matrix:computetd} and~\ref{matrix:countinsertions} for new edge insertions and deletions. Given one edge update in $G$, there can be at most $O(m^{\ell - 1})$ edge updates necessary in $G'$ by Lemma~\ref{lem:edge-clique-bound}. For each of these edge updates in $G'$, we check whether it contains a low-degree vertex. By Lemmas~\ref{lem:one-low-high} and~\ref{lem:correctness-matrix}, to find all updated triangles containing at least one low-degree vertex, it is only necessary to consider edge updates to low-degree vertices. For every edge update to a low-degree vertex, we search the neighbors of that low-degree vertex to see if new triangles are formed/destroyed. Since each low-degree vertex has degree $O(m^{t\ell})$, this results in a total of $O(m^{(t + 1)\ell - 1})$ work per update in $G$ to perform the search. For each triangle found that contains the low-degree vertex, we need to perform the additional work of computing every triangle that contains the set of vertices represented by the triangle, sort the labels, and determine which triangle is responsible for incrementing the count of triangles by all ${k \choose k/3}{2k/3 \choose k/3}$ triangles representing the same clique. This additional work is done by calling $\counttriangles{(u', v', w'), (u', v')}$ on each triangle $(u', v', w')$ and each edge update $(u', v')$.
The total amount of additional work done for each triangle that is passed into $\mathtt{count\_updated\_low\_degree\_triangles}$ is then $O\left(k(3e^2)^k\right)$, where the number of triangles corresponding to the same $k$-clique is given by $O\left((3e^2)^k\right)$ and an additional $O(k(3e^2)^k)$ work is required to sort all the labels. Since we assume that $k$ is constant, this results in $O(1)$ additional work per call to $\mathtt{count\_updated\_low\_degree\_triangles}$. The depth is $O(\log^*m)$ w.h.p.\ due to hash table lookups. Now we bound the work of performing steps~\ref{matrix:computetd} and~\ref{matrix:countinsertions} for edges that are `inserted' or `deleted' due to rebalancing. Suppose there are $X$ vertices that must be rebalanced in this way. Each of these $X$ vertices must have degree $O(m^{t\ell})$ at the time of rebalancing. Thus, the total work performed for these updates is $O(Xm^{2t\ell})$. However, in order for a rebalancing on a vertex to happen, there must be $\Omega(m^{t\ell})$ updates. Thus, if $X$ vertices are rebalanced, then there must be $\Omega(Xm^{t\ell})$ updates. Hence, we can charge the work of rebalancing to the $\Omega(Xm^{t\ell})$ updates to obtain $O(m^{t\ell})$ amortized work per update in $G'$. Then, we obtain $O(\Delta m^{(t + 1)\ell - 1})$ amortized work for a batch of $\Delta$ updates to $G$. Rebalancing requires $O(\log^*m)$ depth w.h.p.\ due to hash table operations and $O(m^{\ell})$ space (the total number of edges in the graph). \end{proof} \fi Lemma~\ref{lem:compute-from-scratch} bounds the cost for step \ref{matrix:computeacubed} by using the matrix multiplication bounds for the adjacency matrix containing high-degree vertices.
\begin{restatable}{lemma}{mmhighdeg}\label{lem:compute-from-scratch} Computing $A^3$ using parallel matrix multiplication takes $O(m^{(1-t)\ell \omega_p})$ work, where $\omega_p$ is the parallel matrix multiplication constant, $O(\pmmdepth{m})$ depth, and $O(m^{\omega_p(1-t)\ell})$ space, assuming that there exists a parallel matrix multiplication algorithm with coefficient $\omega_p$ and using $O(\pmmdepth{n})$ depth and $O(\pmmspace{n})$ space given $n \times n$ matrices. \end{restatable} \begin{proof} There are $O(m^{(1-t)\ell})$ high-degree vertices because each high-degree vertex has degree $\Omega(m^{t\ell})$ and there are $O(m^{\ell})$ edges in $G'$. Since the table $A$ is an adjacency matrix on the high-degree vertices, by Corollary~\ref{cor:matrix-exp}, parallel matrix multiplication can be done in $O(m^{(1-t)\ell\omega_p})$ work. \end{proof} Lemma~\ref{lem:major-rebalancing} bounds the cost for step \ref{matrix:reinitialize}. The proof is based on amortizing the cost for reconstruction over $\Omega(m)$ updates. \begin{restatable}{lemma}{mmrebalancing}\label{lem:major-rebalancing} Step~\ref{matrix:reinitialize} requires $O(\Delta m^{(2-2t)\ell-1}+\Delta m^{\ell-1})$ amortized work and $O(\log^*m)$ depth w.h.p., and $O(m^{(2-2t)\ell}+ m^\ell)$ space. \end{restatable} \begin{proof} We reconstruct $A$ from scratch, which has one entry for every pair of high-degree vertices, which takes $O(m^{2(1-t)\ell})=O(m^{(2-2t)\ell})$ work and space. However, this is amortized against $\Omega(m)$ updates, and so the amortized work is $O(m^{(2-2t)\ell-1})$ per update. The work and space for creating $\mathcal{L}$ can be bounded by $O(m^{\ell})$, the number of edges in $G'$. Amortized against $\Omega(m)$ updates gives $O(m^{\ell-1})$ work per update. The depth is $O(\log^*m)$ w.h.p.\ using parallel hash table operations. \end{proof} Given these costs, we can now compute the optimal value of $t$ in terms of $\omega_p$ that minimizes the work. 
Note that here we compute $t$ assuming $\Delta = 1$ because adaptively changing our threshold would require too much work to rebalance the data structures. However, if we have a fixed batch size, $\Delta$, we can further optimize our threshold $t$ to take into account the fixed batch size. \begin{restatable}{lemma}{mmoptimalt}\label{lem:t-value} $t = \frac{3 - k + k\omega_p}{k + k\omega_p}$ gives us an optimal work bound assuming $\Delta = 1$. \end{restatable} \begin{proof} From Lemmas~\ref{lem:updating-structs},~\ref{lem:compute-low-deg},~\ref{lem:compute-from-scratch}, and~\ref{lem:major-rebalancing}, we have that the work is $O(\Delta m^{(t + 1)\frac{k}{3} - 1} + m^{\frac{(1-t)k\omega_p}{3}})$ w.h.p. (the $O(\Delta m^{(2-2t)\ell-1})$ term is dominated by the $O(\Delta m^{(1+t)\ell-1})$ term since $\omega_p \geq 2$ implies $t\geq 1/3$). Assuming $\Delta = 1$, balancing the two sides of the equation yields: $$m^{\frac{(1-t)k\omega_p}{3}} = m^{(t + 1)\frac{k}{3} - 1}.$$ Solving for $t$ gives $$t = \frac{3 - k + k\omega_p}{k + k\omega_p}.$$ \end{proof} Plugging in our value for $t$ from Lemma~\ref{lem:t-value}, we prove Theorem~\ref{thm:mm-main} and Corollary~\ref{cor:strassen-ws} for the cost of our algorithm when $0 < \Delta \leq m^{\omega_p/(1+\omega_p)}$. \ifCameraReady \subsection{Accounting for $k \bmod 3 \neq 0$}\label{sec:all-k-alg}~ \fi \ifFull \subsection{Accounting for $k \bmod 3 \neq 0$}\label{sec:all-k-alg} \fi We now modify the algorithm above to account for all values of $k$, following the algorithm presented in~\cite{EG04}. This requires several changes to how we construct our graph $G'$ from a graph $G = (V, E)$, resulting in changes to our data structures, which we detail below. We recall the notation $R(x)$ for vertex $x \in G'$ to denote the vertices in $G$ that $x$ represents.
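As a numerical sanity check of Lemma~\ref{lem:t-value}, the following sketch confirms that this choice of $t$ balances the exponents $(1+t)\frac{k}{3}-1$ and $\frac{(1-t)k\omega_p}{3}$ of the two dominant work terms (using $\omega_p = 2.373$ from Corollary~\ref{cor:matrix-exp}):

```python
def optimal_t(k: int, omega_p: float) -> float:
    """Threshold exponent from the lemma, balancing low- and high-degree work."""
    return (3 - k + k * omega_p) / (k + k * omega_p)

omega_p = 2.373
for k in (9, 12, 15):  # k divisible by 3
    t = optimal_t(k, omega_p)
    low = (1 + t) * k / 3 - 1          # exponent of the low-degree search term
    high = (1 - t) * k * omega_p / 3   # exponent of the matrix multiplication term
    assert abs(low - high) < 1e-9
    print(f"k={k}: t={t:.4f}, balanced work exponent m^{low:.3f}")
```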
\subsubsection{Construction of $G'$}\label{sec:gp-const} For $k \bmod 3 \neq 0$, the fundamental problem in constructing the graph $G'$ is that triangles in $G'$ representing cliques of size $\floor{\frac{k}{3}}$ no longer create $k$-cliques. Instead, they create $(k-1)$-cliques or $(k-2)$-cliques for $k \bmod 3 = 1$ and $k \bmod 3 = 2$, respectively. We modify the creation of $G'$ in the two following ways to account for this issue: \paragraph{$k\bmod3 = 1$:} In this case, we create two sets of vertices. One set of vertices, $A$, represents all $\left(\frac{k-1}{3}\right)$-cliques in the graph $G$. Edges exist between $v_1, v_2 \in A$ if and only if $R(v_1) \cup R(v_2)$ forms a $\left(\frac{2(k-1)}{3}\right)$-clique and there are no duplicate vertices, i.e., $R(v_1) \cap R(v_2) = \emptyset$. We create a second set of vertices, $B$, whose vertices represent cliques of size $\frac{k+2}{3}$. Edges exist between $v \in A$ and $w \in B$ if and only if $R(v)$ and $R(w)$ form a $\left(\frac{2k +1}{3}\right)$-clique and $R(v) \cap R(w) = \emptyset$. \paragraph{$k\bmod3 = 2$:} In this case, we still create two sets of vertices, but $A$ instead represents $\left(\frac{k+1}{3}\right)$-cliques in the graph $G$. Edges exist between $v_1, v_2 \in A$ if and only if $R(v_1) \cup R(v_2)$ forms a $\left(\frac{2(k+1)}{3}\right)$-clique and $R(v_1) \cap R(v_2) = \emptyset$. We create a second set of vertices, $B$, whose vertices represent cliques of size $\frac{k-2}{3}$. Edges exist between $v \in A$ and $w \in B$ if and only if $R(v)$ and $R(w)$ form a $\left(\frac{2k -1}{3}\right)$-clique and $R(v) \cap R(w) = \emptyset$. We first prove the properties of the new graph $G'$, namely the number of vertices and edges it contains.
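The $k \bmod 3 = 1$ construction above can be exercised by brute force on a small example. The sketch below builds the sets $A$ and $B$ for $k = 4$ (so $A$ holds $1$-cliques and $B$ holds $2$-cliques) on the complete graph $K_5$, and checks that every mixed $A$-$A$-$B$ triangle of $G'$ corresponds to a $4$-clique of $G$ (an illustration of the construction only, not the dynamic algorithm):

```python
from itertools import combinations

# K5: every 4-subset is a 4-clique, so there are C(5,4) = 5 of them.
n = 5
edges = {frozenset(p) for p in combinations(range(n), 2)}

def is_clique(vertices):
    return all(frozenset(p) in edges for p in combinations(vertices, 2))

# A-vertices: 1-cliques ((k-1)/3 = 1); B-vertices: 2-cliques ((k+2)/3 = 2).
A = [frozenset([v]) for v in range(n)]
B = list(edges)

four_cliques = {frozenset(s) for s in combinations(range(n), 4) if is_clique(s)}

# Mixed A-A-B triangles: v1, v2 form a 2-clique, and each forms a 3-clique with w.
mixed = 0
for v1, v2 in combinations(A, 2):
    if not (v1.isdisjoint(v2) and is_clique(v1 | v2)):
        continue
    for w in B:
        if w.isdisjoint(v1) and w.isdisjoint(v2) \
                and is_clique(v1 | w) and is_clique(v2 | w):
            assert (v1 | v2 | w) in four_cliques  # each mixed triangle is a 4-clique
            mixed += 1

print(len(four_cliques), mixed)  # -> 5 30
```

On $K_5$, all five $4$-cliques are found, each represented by several mixed triangles (six unordered ones per clique).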
\begin{lemma}\label{lem:gp-struct} $G'$ constructed as in Section~\ref{sec:gp-const} contains $O\left(m^{\frac{k+2}{6}}\right)$ vertices and $O\left(m^{\frac{2k+1}{6}}\right)$ edges if $k\bmod 3 = 1$. $G'$ contains $O\left(m^{\frac{k+1}{6}}\right)$ vertices and $O\left(m^{\frac{k+1}{3}}\right)$ edges if $k \bmod 3 =2$. \end{lemma} \begin{proof} When $k\bmod 3 = 1$, the number of vertices is upper bounded (asymptotically) by the number of $\left(\frac{k+2}{3}\right)$-cliques in the graph. By Lemma~\ref{lem:edge-clique-bound}, the number of vertices is then bounded by $O\left(m^{\frac{k+2}{6}}\right)$. The number of edges is bounded by the number of $\left(\frac{2k +1}{3}\right)$-cliques in the graph which is $O\left(m^{\frac{2k+1}{6}}\right)$. Similarly, when $k\bmod 3 =2$, by Lemma~\ref{lem:edge-clique-bound}, the number of vertices and edges are bounded by $O\left(m^{\frac{k+1}{6}}\right)$ and $O\left(m^{\frac{k+1}{3}}\right)$, respectively. \end{proof} \subsubsection{Data Structure and Algorithm Changes} The major data structure change is to redefine the high-degree and low-degree vertices in terms of the number of edges in the graph. This means that low-degree is defined as having a degree less than $\frac{M^{t\left(\frac{2k+1}{6}\right)}}{2}$ and high-degree as greater than $\frac{3M^{t\left(\frac{2k+1}{6}\right)}}{2}$ for the $k\bmod 3 = 1$ case; similarly we define low-degree to be less than $\frac{M^{t\left(\frac{k+1}{3}\right)}}{2}$ and high-degree to be greater than $\frac{3M^{t\left(\frac{k+1}{3}\right)}}{2}$ for the $k\bmod 3 = 2$ case. Another key difference between this case and the case when $k$ is divisible by $3$ is that the number of duplicate cliques is different for these two cases. For the $k\bmod 3=1$ case, each $k$-clique in $G$ will be represented by ${k \choose (k+2)/3}{(2k-2)/3 \choose (k-1)/3}$ triangles found by the algorithm. 
For the $k\bmod 3 = 2$ case, each $k$-clique in $G$ will be represented by ${k \choose (k-2)/3}{(2k+2)/3 \choose (k+1)/3}$ triangles. Thus, at the end of our algorithm, we must divide the count of the triangles by their respective number of duplicates. The rest of the algorithm remains the same as before, except that we solve for different values of $t$ depending on the case. Since the proofs for obtaining the following results are nearly identical to the ones for $k \bmod 3 = 0$, we do not restate the proofs and only give our results. \begin{lemma}\label{lem:t-mod1} For the case when $k \bmod 3 = 1$, there exist $O\left(m^{\frac{2k+1}{6}}\right)$ edges in the graph, and solving for the optimal value of $t$ (assuming $\Delta = 1$) gives $t = \frac{2k\omega_p - 2k + \omega_p + 5}{2k\omega_p + 2k + \omega_p + 1}$. For the case when $k \bmod 3 = 2$, there exist $O\left(m^{\frac{k+1}{3}}\right)$ edges in the graph, and solving for the optimal value of $t$ gives $t = \frac{k\omega_p - k + \omega_p + 2}{k\omega_p + k + \omega_p + 1}$. \end{lemma} Using our values for $t$, we can obtain our final theorem, Theorem~\ref{thm:all-k}, for the work and depth bounds for these two cases. \begin{theorem}\label{thm:all-k} Our fast matrix multiplication based $k$-clique algorithm takes\\ $O\left(\min\left(\Delta m^{\frac{2(k - 1)\omega_p}{3(\omega_p + 1)}}, (\Delta+m)^{\frac{(2 k + 1)\omega_p}{3 (\omega_p + 1)}}\right)\right)$ work and $O(\log(m+\Delta))$ depth w.h.p., and $O\left((\Delta+m)^{\frac{(2 k + 1) \omega_p}{3 (\omega_p + 1)}}\right)$ space assuming a parallel matrix multiplication algorithm with coefficient $\omega_p$ when $k \bmod 3 = 1$, and $O\left(\min\left(\Delta m^{\frac{(2k - 1)\omega_p}{3(\omega_p + 1)}}, (\Delta+m)^{\frac{2(k + 1)\omega_p}{3(\omega_p + 1)}}\right)\right)$ work and $O(\log(m+\Delta))$ depth w.h.p., and $O\left((\Delta+m)^{\frac{2(k + 1)\omega_p}{3(\omega_p + 1)}}\right)$ space when $k \bmod 3 = 2$.
\end{theorem} \begin{corollary}\label{cor:strassen-all-k} Using Corollary~\ref{cor:matrix-exp} with $\omega_p = 2.373$, we obtain a parallel fast matrix multiplication $k$-clique algorithm that takes $O\left(\min\left(\Delta m^{0.469k - 0.469}, (\Delta+m)^{0.469k + 0.235}\right)\right)$ work and $O(\log m)$ depth w.h.p., and $O\left((\Delta+m)^{0.469k + 0.235}\right)$ space when $k \bmod 3 = 1$, and $O\left(\min\left(\Delta m^{0.469k - 0.235}, (\Delta+m)^{0.469k + 0.469}\right)\right)$ work and $O(\log m)$ depth w.h.p., and $O\left((\Delta+m)^{0.469k + 0.469}\right)$ space when $k\bmod 3 = 2$. \end{corollary} \subsection{Parallel Fast Matrix Multiplication}\label{sec:pmm} In this section, we show that tensor-based matrix multiplication algorithms (including Strassen's algorithm) can be parallelized in $O(\log n)$ depth and $O(n^{\omega})$ work. Such techniques are used for algorithms that achieve the best currently known matrix multiplication exponents~\cite{Williams12,LeGall14}. We assume, as is common in models such as the arithmetic circuit model, that field operations can be performed in constant work. We refer readers interested in learning more about current techniques in fast matrix multiplication to~\cite{Blaser13,alman19}. Before we prove our main parallel result in this section, we first define the \emph{matrix multiplication tensor} as used in previous literature. \begin{definition}[Matrix Multiplication Tensor (see, e.g., \cite{alman19})]\label{def:mm-tensor} For positive integers $a, b, c$, the matrix multiplication tensor $\langle a, b, c \rangle$ is a tensor over $\left\{x_{ij}\right\}_{i \in [a], j \in [b]}, \left\{y_{jk}\right\}_{j \in [b], k \in [c]}, \left\{z_{ki}\right\}_{k \in [c], i \in [a]}$, where \begin{align*} \langle a, b, c \rangle = \sum_{i=1}^a \sum_{j=1}^b \sum_{k=1}^c x_{ij} y_{jk} z_{ki}. 
\end{align*} \end{definition} The matrix multiplication tensor can be seen as a generating function for $A \times B$ multiplication where the coefficients of the $z_{ki}$ terms are exactly the $(i, k)$ entries in the matrix product $A \times B$ where $A = \begin{pmatrix} x_{11} &\dots & x_{1b}\\ \vdots&\ddots&\vdots \\ x_{a1} & \dots & x_{ab} \end{pmatrix}$ and $B = \begin{pmatrix} y_{11} &\dots & y_{1c}\\ \vdots&\ddots&\vdots \\ y_{b1} & \dots & y_{bc} \end{pmatrix}$. Current fast matrix multiplication algorithms use this fact to obtain the best known exponents. The proof of the following lemma closely follows the proof of Proposition 4.1 given in~\cite{alman19}. \begin{lemma}\label{lem:matrix-mult-tensor-parallel} Let the rank of the matrix multiplication tensor $\langle q, q, q\rangle$ over a field $\mathbb{F}$ satisfy $R\left(\langle q, q, q\rangle\right) \leq r$. Assuming that field operations take $O(1)$ work, there exists a parallel matrix multiplication algorithm that performs $A \times B$ matrix multiplication (where $A, B \in \mathbb{F}^{n \times n}$) over $\mathbb{F}$ using $O\left(n^{\log_q(r)}\right)$ work and $O((\log r + \log q)\log_q n)$ depth, using $O\left(n^{\log_q(r)}\right)$ space. \end{lemma} \begin{proof} By definition of rank, since $R \left(\langle q, q, q\rangle\right) \leq r$, \begin{align*} \langle q, q, q\rangle = \sum_{\ell = 1}^r \left(\sum_{i, j \in [q]} a_{ij\ell}x_{ij}\right)\left(\sum_{j, k \in [q]} b_{jk\ell}y_{jk}\right)\left(\sum_{k, i \in [q]} c_{ki\ell}z_{ki}\right) \end{align*} for some coefficients $a_{ij\ell}, b_{jk\ell}, c_{ki\ell} \in \mathbb{F}$. Computing this matrix multiplication tensor requires at most $O\left(rq^2\right)$ field operations. Using this information, we perform parallel matrix multiplication via the following recursive algorithm. We assume that $n$ is a power of $q$; otherwise, we can pad $A$ and $B$ with $0$'s until such a condition is satisfied--this would increase the dimensions by at most a factor of $q$.
Partition the padded matrices $A$ and $B$ into $q \times q$ block matrices where each block has size $n/q \times n/q$. This algorithm performs, in parallel, the following linear combinations for each $\ell$, \begin{align*} A_{\ell}' = \sum_{i, j \in [q]} a_{ij \ell} A_{ij} \\ B_{\ell}' = \sum_{j, k \in [q]} b_{jk \ell} B_{jk} \end{align*} where $A_{ij}$ and $B_{jk}$ are the $n/q \times n/q$ blocks in $A$ and $B$, respectively. These linear combinations require $O(rq^2)$ field operations; all of the multiplications can be performed in parallel, and the summations can be computed in $O(\log q)$ depth, so this step takes $O(\log q)$ depth overall. Then, for each $\ell \in [r]$, we compute $C_{\ell}' = A_{\ell}' \times B_{\ell}'$ by performing parallel $n/q \times n/q$ matrix multiplication recursively on $A_{\ell}'$ and $B_{\ell}'$ where the base case is $q \times q$ matrix multiplication. All field operations in the same level of the recursion can be performed in parallel. There are $O(\log_q n)$ levels of recursion. Each level of recursion computes a number of field operations in parallel in $O(\log q)$ depth as in the top level. Finally, after obtaining the results $C_{\ell}'$ of the recursive calls, we compute \begin{align*} C_{ki} = \sum_{\ell \in [r]} c_{ki\ell} C_{\ell, ki}' \end{align*} for all $k, i \in [q]$ where $C_{\ell, ki}'$ are the results we obtain from our recursive calls. The blocks $C_{ki}$ for all $k, i \in [q]$ are the results of our matrix multiplication $A \times B$. This final step computes the blocks $C_{ki}$ for all $k, i \in [q]$ in parallel in $O(\log r)$ depth, since the multiplications can be done in parallel and the summations can be computed in $O(\log r)$ depth. Thus, the depth required for this algorithm is $O((\log r + \log q)\log_q n)$.
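Strassen's algorithm is the $q = 2$, $r = 7$ instance of this recursion (witnessing $R(\langle 2, 2, 2 \rangle) \leq 7$). A minimal sequential sketch is below; the parallel algorithm of Lemma~\ref{lem:matrix-mult-tensor-parallel} simply evaluates the seven recursive products, and the block linear combinations, in parallel:

```python
import numpy as np

def strassen(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Recursive Strassen multiplication for n x n matrices, n a power of 2."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    a11, a12, a21, a22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    b11, b12, b21, b22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The r = 7 products; in the parallel algorithm these run concurrently.
    m1 = strassen(a11 + a22, b11 + b22)
    m2 = strassen(a21 + a22, b11)
    m3 = strassen(a11, b12 - b22)
    m4 = strassen(a22, b21 - b11)
    m5 = strassen(a11 + a12, b22)
    m6 = strassen(a21 - a11, b11 + b12)
    m7 = strassen(a12 - a22, b21 + b22)
    # Linear combinations of the products give the output blocks.
    c11 = m1 + m4 - m5 + m7
    c12 = m3 + m5
    c21 = m2 + m4
    c22 = m1 - m2 + m3 + m6
    return np.block([[c11, c12], [c21, c22]])

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (8, 8))
B = rng.integers(-5, 5, (8, 8))
assert np.array_equal(strassen(A, B), A @ B)
```

Each level solves $7$ subproblems of half the size, matching the recurrence $W(n) = 7\,W(n/2) + O(n^2) = O(n^{\log_2 7})$.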
To compute the work and space usage, we compute the total number of field operations performed, which is $O(n^2)$ per level of the recursion. At each level of the recursion, each subproblem makes $r$ recursive calls. Since we assume that each field operation is $O(1)$ work, this results in total work given by \begin{align*} W(n) = r\cdot W(n/q) + O(n^2). \end{align*} Solving the recurrence gives $W(n) = O\left(n^{\log_qr}\right)$ work for the entire algorithm. The space usage is also $O\left(n^{\log_qr}\right)$. \end{proof} Using Lemma~\ref{lem:matrix-mult-tensor-parallel}, we obtain the following parallel matrix multiplication bounds: \begin{corollary}\label{cor:matrix-exp} There exists a parallel matrix multiplication algorithm based on~\cite{Williams12,LeGall14} that multiplies two $n \times n$ matrices with $O\left(n^{2.373}\right)$ work and $O(\log n)$ depth, using $O\left(n^{2.373}\right)$ space. \end{corollary} \ifCameraReady \subsection{Our Implementation}\label{sec:ourimpl}~ \fi \ifFull \subsection{Our Implementation}\label{sec:ourimpl} \fi \myparagraph{Parallel Primitives} We implemented a multicore CPU version of our algorithm using the Graph Based Benchmark Suite (GBBS)~\cite{dhulipala2018theoretically}, which includes a number of useful parallel primitives, including high-performance parallel sorting, and primitives such as prefix sum, reduce, and filter~\cite{JaJa92}. In what follows, a \defn{filter} takes an array $A$ and a predicate function $f$, and returns a new array containing $a \in A$ for which $f(a)$ is true, in the same order that they appear in $A$. Our implementations use the atomic compare-and-swap and atomic-add instructions available on modern CPUs. \myparagraph{Implementation} For $\mathcal{T}$, we used the concurrent linear probing hash table by Shun and Blelloch~\cite{shun2014phase}.
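A sequential sketch of linear probing with tombstone deletion, the folklore scheme our tables rely on (simplified and single-threaded here; the actual GBBS table is concurrent, and the class and method names below are illustrative):

```python
EMPTY, TOMB = object(), object()

class ProbingTable:
    """Open-addressing hash set with tombstone deletion (sequential sketch)."""

    def __init__(self, capacity: int = 16):
        self.slots = [EMPTY] * capacity

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while True:
            yield i
            i = (i + 1) % len(self.slots)

    def insert(self, key) -> bool:
        first_tomb = None
        for i in self._probe(key):
            slot = self.slots[i]
            if slot is TOMB and first_tomb is None:
                first_tomb = i  # reusable, but keep scanning for a duplicate
            elif slot is EMPTY:
                self.slots[first_tomb if first_tomb is not None else i] = key
                return True
            elif slot == key:
                return False    # duplicate key; do not insert

    def delete(self, key) -> bool:
        for i in self._probe(key):
            slot = self.slots[i]
            if slot is EMPTY:
                return False
            if slot == key:
                self.slots[i] = TOMB  # mark instead of emptying the slot
                return True

    def contains(self, key) -> bool:
        for i in self._probe(key):
            slot = self.slots[i]
            if slot is EMPTY:
                return False
            if slot == key:
                return True
```

Deleted slots are marked rather than emptied, so later probe sequences do not terminate early; an insertion may reuse the first tombstone it encounters once it has probed far enough to rule out a duplicate key.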
For each of the data structures $\mathcal{HH}$, $\mathcal{HL}$, $\mathcal{LH}$, and $\mathcal{LL}$, we created an array of size $n$, storing (possibly null) pointers to hash tables~\cite{shun2014phase}. For an edge $(u,v)$ in one of the data structures, the value $v$ will be stored in the hash table pointed to by the $u$'th slot in the array. We also tried using hash tables for both levels, but found it to be slower in practice. For deletions, we used the folklore \emph{tombstone} method. In this method, when an element is deleted, we mark the slot in the table as a tombstone, which is a special value. When inserting, we can insert into a tombstone slot, but we must first continue probing until we see an empty slot, to make sure that we are not inserting a duplicate key. In the preprocessing phase of the algorithm, instead of using approximate compaction, we used filter. To find the last update for duplicate updates, we use a parallel sample sort~\cite{ShunBlellochFinemanEtAl2012} to sort the edges first by both endpoints, and then by timestamp. Then we use filter to remove duplicate updates. When we initialize the dynamic data structures, a vertex is considered high-degree if it has degree greater than $2t_1$ and low-degree otherwise. During minor rebalancing, a vertex only changes its status if its degree drops below $t_1$ or increases above $t_2$ due to the batch update. In major rebalancing, we merge our dynamic data structure and the updated edges into a compressed sparse row (CSR) format graph and use the static parallel triangle counting algorithm by Shun and Tangwongsan~\cite{ShunT2015} to recompute the triangle count. We then build a new dynamic data structure from the CSR graph. We also implement several natural optimizations which improve performance. To reduce the overhead of using hash tables, we use an array to store the neighbors of vertices with degree less than a certain threshold (we used $128$ in our experiments).
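A minimal sequential sketch of this tombstone scheme (a hypothetical class of ours, with a fixed capacity; it assumes the table never fills):

```python
EMPTY, TOMB = object(), object()  # sentinel values for free and deleted slots

class LinearProbingSet:
    """Linear-probing hash set with tombstone deletions (sequential sketch)."""

    def __init__(self, capacity=16):
        self.slots = [EMPTY] * capacity

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while True:
            yield i
            i = (i + 1) % len(self.slots)

    def contains(self, key):
        for i in self._probe(key):
            s = self.slots[i]
            if s is EMPTY:
                return False
            if s is not TOMB and s == key:
                return True

    def insert(self, key):
        first_tomb = None
        for i in self._probe(key):
            s = self.slots[i]
            if s is TOMB and first_tomb is None:
                first_tomb = i   # candidate slot, but keep probing for a duplicate
            elif s is EMPTY:
                # No duplicate found; reuse the first tombstone if we saw one.
                self.slots[first_tomb if first_tomb is not None else i] = key
                return
            elif s is not TOMB and s == key:
                return           # duplicate key: already present

    def delete(self, key):
        for i in self._probe(key):
            s = self.slots[i]
            if s is EMPTY:
                return           # not present
            if s is not TOMB and s == key:
                self.slots[i] = TOMB
                return
```

Note that `insert` must probe past tombstones all the way to an empty slot before reusing one, exactly as described above.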
Moreover, we only keep a single entry for $(u,v)$ and $(v,u)$ in the wedges table $\mathcal{T}$. { \setlength{\tabcolsep}{2pt} \begin{table}[!t] \footnotesize \centering \begin{tabular}[!t]{p{0.2\columnwidth}|l|c|c|c|c|c} \toprule & & \multicolumn{5}{c}{\textbf{Batch Size}} \\ \textbf{Algorithm} & \textbf{Graph} & \textbf{$2\times 10^{3}$} & \textbf{$2\times 10^{4}$} & \textbf{$2\times 10^{5}$} & \textbf{$2 \times 10^{6}$} & $m$ \\ \specialrule{.2em}{.1em}{.1em} \multirow{3}{0.2\columnwidth}{Ours (INS)} & Orkut & 1.90e-3 & 4.76e-3 & 0.0235 & 0.168 & -- \\ & Twitter & \textbf{2.11e-3} & \textbf{7.10e-3} & \textbf{0.0430} & \textbf{0.366} & -- \\ & rMAT & \textbf{6.42e-4} & \textbf{2.09e-3} & \textbf{8.62e-3} & 0.0618 & -- \\ \hline \multirow{3}{0.2\columnwidth}{Makkar et al. (INS) \cite{Makkar2017}} & Orkut & \textbf{9.76e-4} & \textbf{2.69e-3} & \textbf{0.0143} & \textbf{0.0830} & -- \\ & Twitter & time-out & 0.0644&0.437 &3.88 & -- \\ & rMAT & 1.98e-3&6.90e-3 &0.012 & \textbf{0.0335} & -- \\ \specialrule{.2em}{.1em}{.1em} \multirow{3}{0.2\columnwidth}{Ours (DEL)} & Orkut & 1.80e-3&4.37e-3 &0.0189 & 0.124 & -- \\ & Twitter & \textbf{2.14e-3} & \textbf{7.76e-3} & \textbf{0.0486} & \textbf{0.385} & -- \\ & rMAT & 6.48e-4&2.23e-3 &9.21e-3 &0.0723 & -- \\ \hline \multirow{3}{0.2\columnwidth}{Makkar et al. (DEL) \cite{Makkar2017}} & Orkut & \textbf{4.63e-4} & \textbf{1.46e-3} & \textbf{8.12e-3} & \textbf{0.0499} & -- \\ & Twitter & time-out & 0.0597&0.401 & 3.64& -- \\ & rMAT & \textbf{4.47e-4} & \textbf{1.81e-3} & \textbf{5.12e-3} & \textbf{0.027} & -- \\ \specialrule{.2em}{.1em}{.1em} \multirow{3}{*}{Static~\cite{ShunT2015}} & Orkut & -- & -- & -- & -- & 1.027 \\ & Twitter & -- & -- & -- & -- & 32.1 \\ & rMAT & -- & -- & -- & -- & 14.7 \end{tabular} \caption{ Running times (seconds) for our parallel \batchdynamic{} triangle counting algorithm and Makkar et al.~\cite{Makkar2017}'s algorithm on 72 cores with hyper-threading. 
We apply the edges in each graph as batches of edge insertions (INS) or deletions (DEL) of varying sizes, ranging from $2 \times 10^{3}$ to $2 \times 10^{6}$, and report the average time for each batch size. The update time of the Makkar et al.\ algorithm on Twitter with batch size $2 \times 10^{3}$ is missing because the experiment timed out. We also report the running time for the state-of-the-art static triangle counting algorithm of Shun and Tangwongsan~\cite{ShunT2015}, which processes a single batch of size $m$. Note that for the Twitter and Orkut datasets, all of the edges are unique. However, for the rMAT dataset, batches can have duplicate edges. For each batch size of each dataset, we list the fastest time in bold. } \label{table:ourtimes} \end{table} } \myparagraph{Experiments} Table~\ref{table:ourtimes} reports the parallel running times on varying insertion and deletion batch sizes for our implementation of our new parallel \batchdynamic{} triangle counting algorithm. For the two graphs based on static graph inputs (Orkut and Twitter), we generate updates for the algorithm by representing the edges of the graph as an array, and randomly permuting them. The algorithm is then run using batches of the specified size. For insertions, we start with an empty graph and apply batches from the beginning to the end of the permuted array. For deletions, we start with the full graph and apply batches from the end to the beginning of the permuted array. The table also reports the running time for the GBBS implementation of the state-of-the-art static triangle counting algorithm of Shun and Tangwongsan~\cite{ShunT2015, dhulipala2018theoretically}. Across varying batch sizes, our algorithm achieves throughputs of 1.05--16.2 million edges per second for the Orkut graph, 0.935--5.46 million edges per second for the Twitter graph, and 3.08--32.4 million edges per second for the rMAT graph.
We obtain much higher throughput for the rMAT graph due to the large number of duplicate edges found in this graph stream, as illustrated in Table~\ref{table:rmatduplicate}. We observe that in all cases, the average time for processing a batch is smaller than the running time of the static algorithm. The maximum speedup of our algorithm over the static algorithm is $22709\times$ for the rMAT graph with a deletion batch of size $2 \times 10^{3}$, but in general our algorithm achieves good speedups across the entire range of batches that we evaluate. Lastly, Figure~\ref{fig:speed-up} shows the parallel speedup of our algorithm with varying thread count on the Orkut and Twitter graphs, for a fixed batch size of $2 \times 10^{6}$. Our algorithm achieves a maximum of $74.73 \times$ speedup using 72 cores with hyper-threading for this experiment. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{figs/speed-up.pdf} \caption{Running times of our parallel batch-dynamic triangle counting algorithm with respect to thread count (the $x$-axis is in log-scale) on the Orkut (average time across all batches) and Twitter (running time for the 6th batch) graphs, for both insertions (red dashed line) and deletions (blue solid line). ``144'' indicates 72 cores with hyper-threading. The experiment is run with a batch size of $2\times 10^{6}$. The parallel speedup on 144 threads over a single thread is displayed. }\label{fig:speed-up} \end{figure} \section{Technical Overview} In this section, we present a high-level technical overview of our approach. \ifCameraReady \subsection{Parallel Batch-Dynamic Triangle Counting}~ \fi \ifFull \subsection{Parallel Batch-Dynamic Triangle Counting} \fi Our parallel \batchdynamic{} triangle counting algorithm is based on a recently proposed sequential dynamic algorithm due to Kara et al.~\cite{KNNOZ19}.
They describe their algorithm in the database setting, in the context of dynamically maintaining the result of a database join. We provide a self-contained description of their sequential algorithm in Appendix~\ref{app:triangle}. \myparagraph{High-Level Approach} The basic idea of the algorithm from~\cite{KNNOZ19} is to partition the vertex set using degree-based thresholding. Roughly, they specify a threshold $t=\Theta(\sqrt{m})$, and classify all vertices with degree less than $t$ to be low-degree, and all vertices with degree larger than $t$ to be high-degree. This thresholding technique is widely used in the design of fast static triangle counting and $k$-clique counting algorithms (e.g.,~\cite{nevsetvril1985complexity, AYZ97}). Observe that if we insert an edge $(u,v)$ incident to a low-degree vertex, $u$, we can enumerate all vertices $w$ in $N(u)$ in $O(\sqrt{m})$ expected time and check if $(u,v,w)$ forms a triangle (checking if the $(v,w)$ edge is present in $G$ can be done by storing all edges in a hash table). In this way, edge updates incident to low-degree vertices are handled relatively simply. The more interesting case is how to handle edge updates between high-degree vertices. The main problem is that a single edge insertion $(u,v)$ between two high-degree vertices can cause up to $O(n)$ triangles to appear in $G$, and enumerating all of these would require $O(n)$ work---potentially much more than $O(\sqrt{m})$. Therefore, the algorithm maintains an auxiliary data structure, $\mathcal{T}$, over wedges ($2$-paths). $\mathcal{T}$ stores, for every pair of high-degree vertices $(v,w)$, the number of low-degree vertices $u$ that are connected to both $v$ and $w$ (i.e., $(u,v)$ and $(u,w)$ are both in $E$). Given this structure, the number of triangles formed by the insertion of the edge $(v,w)$ going between two high-degree vertices can be found in $O(1)$ time by checking the count for $(v,w)$ in $\mathcal{T}$.
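Setting aside batching, parallelism, and vertex re-classification, the single-edge insertion logic (including the wedge-table maintenance) can be sketched sequentially as follows; the class and field names are ours, not from~\cite{KNNOZ19}, and the set of high-degree vertices is fixed up front for simplicity:

```python
from collections import defaultdict

def _key(a, b):
    """Canonical (unordered) key for a vertex pair."""
    return (a, b) if a < b else (b, a)

class ThresholdTriangleCounter:
    """Sequential sketch of the degree-thresholded scheme: `high` is a *fixed*
    set of designated high-degree vertices (re-classification and rebalancing
    are omitted), and T[(v, w)] counts the low-degree common neighbors of the
    high-degree pair (v, w).  Deletions are symmetric (decrement instead of
    increment)."""

    def __init__(self, high):
        self.adj = defaultdict(set)
        self.high = set(high)
        self.T = defaultdict(int)
        self.triangles = 0

    def insert(self, u, v):
        if u in self.high and v in self.high:
            # O(1) for low-degree third vertices via the wedge table, plus a
            # scan of the O(sqrt(m))-sized high set for high third vertices.
            delta = self.T[_key(u, v)]
            delta += sum(1 for w in self.high
                         if w in self.adj[u] and w in self.adj[v])
        else:
            # Some endpoint is low-degree: scan its O(sqrt(m)) neighborhood.
            lo = u if u not in self.high else v
            other = v if lo == u else u
            delta = sum(1 for w in self.adj[lo] if w in self.adj[other])
            # Wedge-table maintenance: a low vertex gained a high neighbor.
            for x, y in ((lo, other), (other, lo)):
                if x not in self.high and y in self.high:
                    for w in self.adj[x]:
                        if w in self.high and w != y:
                            self.T[_key(y, w)] += 1
        self.adj[u].add(v)
        self.adj[v].add(u)
        self.triangles += delta
```

Note that the wedge table is only touched when a low-degree vertex's neighborhood changes, matching the $O(\sqrt{m})$ update cost discussed next.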
Updates to $\mathcal{T}$ can be handled in $O(\sqrt{m})$ time, since $\mathcal{T}$ need only be updated when a low-degree vertex inserts/deletes a neighbor, and the number of entries in $\mathcal{T}$ that are affected is at most $t$. Some additional care needs to be taken when specifying the threshold $t$ to handle re-classifying vertices (going from low-degree to high-degree, or vice versa), and also to handle rebuilding the data structures, which leads to a bound of $O(\sqrt{m})$ amortized work per update for the algorithm. \myparagraph{Incorporating Batching and Parallelism} The input to the parallel \batchdynamic{} algorithm is a batch containing (possibly) a mix of edge insertions and deletions (vertex insertions and deletions can be handled by inserting or deleting their incident edges). For simplicity, and without any loss in our asymptotic bounds, our algorithm handles insertions and deletions separately. The algorithm first removes all \emph{nullifying} updates, which are updates that have no effect after applying the entire batch (i.e., an insertion which is subsequently deleted within the same batch, an insertion of an edge that already exists, or a deletion of an edge that does not exist). This can easily be done within the bounds using basic parallel primitives. The algorithm then updates tables representing the adjacency information of both low-degree and high-degree vertices in parallel. To obtain strong parallel bounds, we represent these sets using parallel hash tables. For each insertion (deletion), we then determine the number of new triangles that are created (deleted). Since a given triangle could incorporate multiple edges within the same batch of insertions (deletions), our algorithm must carefully ensure that the triangle is counted only once, assigning each new inserted (deleted) triangle uniquely to one of the updates forming it.
We then update the overall triangle count with the number of distinct triangles inserted (deleted) into the graph by the current batch of insertions (deletions). The remaining work of the algorithm cleans up mutable state in the hash tables, and also migrates vertices between low-degree and high-degree states. \myparagraph{Worst-Case Optimality} Our work bounds match the combinatorial lower bound obtained via a fine-grained reduction from triangle detection, which is conjectured to take $m^{3/2 - o(1)}$ work (by the \emph{Strong Triangle conjecture} of~\cite{AW14} for combinatorial algorithms). The combinatorial lower bound for the Strong Triangle conjecture is based on the standard lower bound conjecture for combinatorial algorithms that solve Boolean Matrix Multiplication (BMM). Our reduction proceeds as follows. Given any input graph to the triangle detection problem, we divide the edges into batches of edge insertions arbitrarily, without knowledge of the existence of any triangles. Then, the batches of updates are applied one after the other. Suppose the amortized work per update for this procedure is $O(X)$. Then, the total work for applying all the batches of updates is $O(Xm)$. The algorithm returns the count of the number of triangles in the graph after applying all batches of updates. In this case, the algorithm, when run over all the batches, solves the static problem of triangle detection in the original input graph. If the number of triangles counted by the algorithm after the last batch is $0$, then there does not exist a triangle in the original input graph; otherwise, there exists a triangle in the original input graph. If $X = m^{1/2 - \Omega(1)}$, then we violate the Strong Triangle conjecture. Thus, our work bound is conditionally optimal up to sub-polynomial factors by the Strong Triangle conjecture. It is an interesting open question whether one can obtain $O(1)$ depth bounds on the CRCW PRAM.
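The reduction can be sketched concretely (with a naive stand-in of ours for the dynamic counter; any batch-dynamic counter with amortized work $O(X)$ per update would slot in the same way):

```python
def has_triangle_via_dynamic_counter(edges, batch_size):
    """Sketch of the reduction: feed the edge set of a static graph to a
    dynamic triangle counter in arbitrary batches; the final count is nonzero
    iff the graph contains a triangle.  The `dynamic counter' here is a naive
    one that adds the number of common neighbors of u and v per inserted edge."""
    adj, count = {}, 0
    for i in range(0, len(edges), batch_size):
        for u, v in edges[i:i + batch_size]:   # apply one batch of insertions
            nu = adj.setdefault(u, set())
            nv = adj.setdefault(v, set())
            count += len(nu & nv)              # triangles closed by (u, v)
            nu.add(v)
            nv.add(u)
    return count > 0
```

If the counter did amortized $X = m^{1/2 - \Omega(1)}$ work per update, this loop would detect a triangle in $O(Xm) = m^{3/2 - \Omega(1)}$ total work, contradicting the conjecture.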
\ifCameraReady \subsection{Dynamic $k$-Clique Counting via Fast Static Parallel Algorithms}~ \fi \ifFull \subsection{Dynamic $k$-Clique Counting via Fast Static Parallel Algorithms} \fi Next, we present a very simple and potentially practical algorithm for dynamically maintaining the number of $k$-cliques based on statically enumerating smaller cliques in the graph, and intersecting the enumerated cliques with the edge updates in the input batch. The algorithm is space-efficient, and is asymptotically more efficient than other methods for sparse graphs. Our algorithm is based on a recent and concurrent work proposing a work-efficient parallel algorithm for counting $k$-cliques in $O(m\alpha^{k-2})$ expected work and polylogarithmic depth w.h.p.~\cite{shi2020parallel}. Using this algorithm, we show that updating the $k$-clique count for a batch of $\Delta$ updates can be done in $O(\Delta(m+\Delta)\alpha^{k-4})$ expected work, and $O(\log^{k-2} n)$ depth w.h.p., using $O(m + \Delta)$ space. We do this by using the static algorithm to (i) enumerate all $(k-2)$-cliques, and (ii) check whether each $(k-2)$-clique forms a $k$-clique with an edge in the batch. \ifCameraReady \subsection{Dynamic $k$-Clique via Fast Matrix Multiplication}~ \fi \ifFull \subsection{Dynamic $k$-Clique via Fast Matrix Multiplication} \fi We then present a parallel batch-dynamic $k$-clique counting algorithm using parallel fast matrix multiplication (MM). Our algorithm is inspired by the static triangle counting algorithm of Alon, Yuster, and Zwick (AYZ)~\cite{AYZ97} and the static $k$-clique counting algorithm of~\cite{EG04} that uses MM-based triangle counting. We present a new batch-dynamic algorithm that obtains better bounds than the simple algorithm based on static smaller-clique enumeration above (and also presented in Section~\ref{sec:arboricityclique}) for $k > 9$.
To the best of our knowledge, this is also the best bound for dynamic triangle counting on dense graphs in the sequential model. Specifically, assuming a parallel matrix multiplication exponent of $\omega_p$, our algorithm handles batches of $\Delta$ edge insertions/deletions using $O\left(\min\left(\Delta m^{\frac{(2k - 3)\omega_p}{3(1+\omega_p)}}, (m + \Delta)^{\frac{2k\omega_p}{3(1+\omega_p)}}\right)\right)$ work and $O(\log m)$ depth w.h.p., in $O\left((m + \Delta)^{\frac{2k\omega_p}{3(1+\omega_p)}}\right)$ space, where $m$ is the number of edges in the graph before applying the batch of updates. To the best of our knowledge, the sequential (batch-dynamic) version of our algorithm also provides the best bounds for dynamic $k$-clique counting in the sequential model for dense graphs for large constant values of $k$ (assuming that we use the best currently known matrix multiplication algorithm)~\cite{Dvorak2013}. \myparagraph{High-Level Approach and Techniques} For a given graph $G = (V, E)$, we create an auxiliary graph $G' = (V', E')$ with vertices and edges representing cliques of various sizes in $G$. For a given $k$-clique problem, vertices in $V'$ represent cliques of size $k/3$ in $G$ and edges $(u, v)$ between vertices $u, v \in V'$ represent cliques of size $2k/3$ in $G$. Thus, a triangle in $G'$ represents a $k$-clique in $G$. Specifically, there exist exactly ${k \choose k/3}{2k/3 \choose k/3}$ different triangles in $G'$ for each clique in $G$. Given a batch of edge insertions and deletions to $G$, we create a set of edge insertions and deletions to $G'$. An edge is inserted in $G'$ when a new $2k/3$-clique is created in $G$ and an edge is deleted in $G'$ when a $2k/3$-clique is destroyed in $G$. Suppose, for now, that we have a dynamic algorithm for processing the edge insertions/deletions into $G'$.
Counting the number of triangles in $G'$ after processing all edge insertions/deletions and dividing by ${k \choose k/3}{2k/3 \choose k/3}$ provides us with the exact number of cliques in $G$. There are a number of challenges that we must deal with when formulating our dynamic triangle counting algorithm for counting the triangles in $G'$: \begin{enumerate} \item We cannot simply count all the triangles in $G'$ after inserting/deleting the new edges as this does not perform better than a trivial static algorithm. \item Any trivial dynamization of the AYZ algorithm will not be able to detect all new triangles in $G'$. Specifically, because the AYZ algorithm counts all triangles containing a low-degree vertex separately from all triangles containing only high-degree vertices, if an edge update only occurs between high-degree vertices, a trivial dynamization of the algorithm will not be able to detect any triangle that the two high-degree endpoints make with low-degree vertices. \end{enumerate} To solve the first challenge, we dynamically count \emph{low-degree} and \emph{high-degree} vertices in different ways. Let $\ell=k/3$ and $M = 2m + 1$. For some value of $0<t<1$, we define \emph{low-degree} vertices to be vertices that have degree less than $M^{t\ell}/2$ and \emph{high-degree} vertices to have degree greater than $3M^{t\ell}/2$. Vertices with degrees in the range $[M^{t\ell}/2, 3M^{t\ell}/2]$ can be classified as either low-degree or high-degree. We determine the specific value for $t$ in Lemma~\ref{lem:t-value}. We perform rebalancing of the data structures as needed as they handle more updates. For low-degree vertices, we only count the triangles that include at least one newly inserted/deleted edge, at least one of whose endpoints is low-degree. This means that we do not need to count any pre-existing triangles that contain at least one low-degree vertex.
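The slack band for re-classification can be sketched with a small (hypothetical) helper of ours, writing \texttt{thresh} for the nominal threshold $M^{t\ell}$:

```python
def reclassify(degree, status, thresh):
    """A vertex changes status only when its degree leaves the band
    [thresh/2, 3*thresh/2].  Consequently, a vertex must receive
    Omega(thresh) incident updates between consecutive status changes,
    which amortizes the cost of migrating it between data structures."""
    if degree < thresh / 2:
        return "low"
    if degree > 3 * thresh / 2:
        return "high"
    return status  # inside the band: keep the current classification
```

For example, a vertex classified high-degree stays high-degree until its degree drops below $M^{t\ell}/2$, even if it dips under the nominal threshold.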
For the high-degree vertices, because there is an upper bound on the maximum number of such vertices in the graph, we update an adjacency matrix $A$ containing only edges between high-degree vertices. At the end of all of the edge updates, computing $A^3$ gives us a count of all of the triangles that contain three high-degree vertices. This procedure then immediately leads to our second challenge. To solve this second challenge, we make the observation (proven in Lemma~\ref{lem:one-low-high}) that if there exists an edge update between two high-degree vertices that creates or destroys a triangle that contains a low-degree vertex in $G'$, then there \emph{must} exist at least one new edge insertion/deletion \emph{that creates or destroys a triangle representing the same clique} to that low-degree vertex in the same batch of updates to $G'$. Thus, we can use one of those edge insertions/deletions to determine the new clique that was created and, through this method, find all triangles containing at least one low-degree vertex and at least one new edge update. Some care must be taken in implementing this procedure so as not to increase the running time or space usage; such details can be found in Section~\ref{sec:alg-overview}. \myparagraph{Incorporating Batching and Parallelism} When dealing with a batch of updates containing both edge insertions and deletions, we must be careful when vertices switch from being high-degree to being low-degree, and vice versa. If we intersperse the edge insertions with the edge deletions, there is the possibility that a vertex switches between low-degree and high-degree multiple times in a single batch. Thus, we batch all edge deletions together and perform these updates first before handling the edge insertions. After processing the batch of edge deletions, we must subsequently move any high-degree vertices that become low-degree to their correct data structures.
After dealing with the edge insertions, we must similarly move any low-degree vertices that become high-degree to the correct data structures. Finally, for triangles that contain more than one edge update, we must account for potential double counting by different updates happening in parallel. Such challenges are described and dealt with in Section~\ref{sec:alg-overview} and Algorithm~\ref{alg:mmclique}. \subsection{Implementation and Experimental Evaluation} We present an optimized implementation of our new parallel batch-dynamic triangle counting algorithm using parallel primitives from the Graph Based Benchmark Suite (GBBS)~\cite{dhulipala2018theoretically}, and concurrent hash tables~\cite{shun2014phase} to represent our data structures. We ran experiments on varying batch sizes for both insertions and deletions for several large graphs (the Orkut and Twitter graphs, as well as rMAT graphs of varying densities) using a 72-core machine with two-way hyper-threading, and obtained parallel speedups of 36.54--74.73$\times$. We also compared our performance to the algorithms by Ediger et al.~\cite{Ediger2010} and Makkar et al.~\cite{Makkar2017} (we note that Makkar et al.\ provide a GPU implementation, and we implemented a multicore CPU version of their algorithm), which take linear work per update in the worst case. We found that our Makkar et al.\ implementation outperformed the multicore implementation by Ediger et al. Furthermore, our new algorithm achieves significant speedups (up to an order of magnitude) over the Makkar et al.\ implementation on graphs with high-degree vertices (the Twitter graph and dense rMAT graphs), as well as on smaller batch sizes. In contrast, the Makkar et al.\ implementation outperforms our new algorithm for the smaller Orkut graph, which does not contain vertices with very high degree.
These results are consistent with the theoretical bounds of the algorithms---the work per update of our algorithm is $O(\sqrt{m})$, whereas the work per update of the Makkar et al.\ algorithm is linear in the degrees of the affected vertices. \section{Preliminaries}\label{sec:prelims} Given an undirected graph $G = (V, E)$ with $n$ vertices and $m$ edges, and an integer $k$, a \defn{$k$-clique} is defined as a set of $k$ vertices $v_1,\ldots,v_k$ such that for all $i\neq j$, $(v_i, v_j) \in E$. The \defn{$k$-clique count} is the total number of $k$-cliques in the graph. The \defn{dynamic $k$-clique problem} asks us to maintain the number of $k$-cliques in the graph under edge insertions and deletions, given individually or in a batch. The \defn{arboricity} $\alpha$ of a graph is the minimum number of forests into which the edges can be partitioned, and its value is between $\Omega(1)$ and $O(\sqrt{m})$~\cite{Chiba1985}. In this paper, we analyze algorithms in the work-depth model, where the \defn{work} of an algorithm is defined to be the total number of operations done, and the \defn{depth} is defined to be the longest sequential dependence in the computation (or the computation time given an infinite number of processors)~\cite{JaJa92}. Our algorithms can run in the PRAM model or the fork-join model with arbitrary forking. We use the concurrent-read concurrent-write (CRCW) model, where reads and writes to a memory location can happen concurrently. We assume either that concurrent writes are resolved arbitrarily, or are reduced together (i.e., fetch-and-add PRAM). We use the following primitives throughout the paper. \defn{Approximate compaction} takes a set of $m$ objects in the range $[1, n]$ and allocates them unique IDs in the range $[1, O(m)]$. The primitive is useful for filtering out (i.e., removing) a set of obsolete elements from an array of size $n$, and mapping the remaining $m$ elements to a sparse array of size $O(m)$.
Approximate compaction can be implemented in $O(n)$ work and $O(\log^* n)$ depth w.h.p.~\cite{Gil91a}. We also use a \defn{parallel hash table}, which supports $n$ operations (insertions, deletions) in $O(n)$ work and $O(\log^* n)$ depth w.h.p., and $n$ lookup operations in $O(n)$ work and $O(1)$ depth~\cite{Gil91a}. Our algorithms make use of the widely used \defn{atomic-add} instruction. An atomic-add instruction takes a memory location and atomically increments the value stored at the location. In this paper, we assume that the atomic-add instruction can be implemented in $O(1)$ work and depth. Our algorithms can also be implemented in a model without atomic-add with the same work, a multiplicative $O(\log n)$ factor increase in the depth, and space proportional to the number of atomic-adds done in parallel. \section{Dynamic $k$-Clique Counting via Fast Static Parallel Algorithms}\label{sec:arboricityclique} We present a very simple algorithm for dynamically maintaining the number of $k$-cliques based on statically enumerating smaller cliques in the graph, and intersecting the enumerated cliques with the edge updates in the input batch. The algorithm is space-efficient. Our algorithm is based on a work-efficient parallel algorithm for counting $k$-cliques in $O(m\alpha^{k-2})$ expected work and $O(\log^{k-2}n)$ depth w.h.p.\ by Shi et al.~\cite{shi2020parallel}. Using this algorithm, we show that updating the $k$-clique count for a batch of $\Delta$ updates can be done in $O(\Delta(m+\Delta)\alpha^{k-4})$ expected work, $O(\log^{k-2}n)$ depth w.h.p., and $O(m + \Delta)$ space. For $\Delta \ge m$ we simply call the static algorithm, and for $\Delta < m$ we use the static algorithm to (i) enumerate all $(k-2)$-cliques, and (ii) check whether each $(k-2)$-clique forms a $k$-clique with an edge in the batch. This procedure outperforms re-computation using the static parallel $k$-clique counting algorithm for $\Delta = o(\alpha^{2})$.
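A sequential sketch of steps (i) and (ii) for a batch of new edge insertions (the naive enumerator below stands in for the work-efficient enumerator of~\cite{shi2020parallel}; the function names and the charging rule are ours):

```python
from itertools import combinations

def clique_delta_from_insertions(adj, batch, k):
    """Count the k-cliques created by a batch of *new* edge insertions: apply
    the batch, enumerate (k-2)-cliques of the updated graph, and try to extend
    each one by a batch edge.  A new k-clique contains at least one batch
    edge; to count it exactly once, we charge it to the lexicographically
    smallest batch edge inside it."""
    batch = {tuple(sorted(e)) for e in batch}
    for u, v in batch:                            # apply the batch
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    delta = 0
    for C in combinations(sorted(adj), k - 2):    # naive (k-2)-clique enumeration
        if not all(b in adj[a] for a, b in combinations(C, 2)):
            continue                              # not a (k-2)-clique
        for u, v in batch:
            if u in C or v in C:
                continue
            if all(u in adj[c] and v in adj[c] for c in C):
                K = sorted(set(C) | {u, v})       # a new k-clique
                inside = [e for e in combinations(K, 2) if e in batch]
                if min(inside) == (u, v):         # charge the clique once
                    delta += 1
    return delta
```

Deletions are handled symmetrically (enumerate in the graph before removing the batch, and subtract).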
The full details of our algorithm can be found in the full version~\cite{fullversion} of this paper. \section{Dynamic $k$-Clique via Fast Matrix Multiplication}\label{sec:mm} In this section, we present our parallel \batchdynamic{} algorithm for counting $k$-cliques based on fast matrix multiplication in general graphs (which may be dense). Our algorithm is inspired by the static triangle counting algorithm of Alon, Yuster, and Zwick (AYZ)~\cite{AYZ97} and the static $k$-clique counting algorithm of Eisenbrand and Grandoni~\cite{EG04} that uses matrix multiplication-based triangle counting. We present a new dynamic algorithm that obtains better bounds than the simple algorithm based on static smaller-clique enumeration in Section~\ref{sec:arboricityclique} for larger values of $k$. We define the \defn{parallel matrix multiplication exponent} to be the smallest exponent $\omega_p$ such that there exists a parallel matrix multiplication algorithm that multiplies two $n \times n$ matrices with $O\left(n^{\omega_p}\right)$ work and $O(\log n)$ depth, using $O\left(n^{\omega_p}\right)$ space. We show that $\omega_p \leq 2.373$ in the full version of the paper~\cite{fullversion}. Assuming a parallel matrix multiplication exponent of $\omega_p$, our algorithm handles batches of $\Delta$ edge insertions/deletions using $O\left(\min\left(\Delta m^{\frac{(2k - 3)\omega_p}{3(1+\omega_p)}}, (m + \Delta)^{\frac{2k\omega_p}{3(1+\omega_p)}}\right)\right)$ work and $O(\log m)$ depth w.h.p., and $O\left((m + \Delta)^{\frac{2k\omega_p}{3(1+\omega_p)}}\right)$ space, where $m$ is the number of edges in the graph after applying the batch of updates. To the best of our knowledge, the sequential (batch-dynamic) version of our algorithm also provides the best bounds for dynamic $k$-clique counting in the sequential model for dense graphs for large constant values of $k$ (assuming we use the best currently known matrix multiplication algorithm)~\cite{Dvorak2013}.
More formally, we obtain the following results: \begin{theorem}\label{thm:all-k} Our fast matrix multiplication based $k$-clique algorithm takes\\ $O\left(\min\left(\Delta m^{\frac{2(k - 1)\omega_p}{3(\omega_p + 1)}}, (\Delta+m)^{\frac{(2 k + 1)\omega_p}{3 (\omega_p + 1)}}\right)\right)$ work and $O(\log(m+\Delta))$ depth w.h.p., and $O\left((\Delta+m)^{\frac{(2 k + 1) \omega_p}{3 (\omega_p + 1)}}\right)$ space assuming a parallel matrix multiplication algorithm with exponent $\omega_p$ when $k \bmod 3 = 1$, and $O\left(\min\left(\Delta m^{\frac{(2k - 1)\omega_p}{3(\omega_p + 1)}}, (\Delta+m)^{\frac{2(k + 1)\omega_p}{3(\omega_p + 1)}}\right)\right)$ work and $O(\log(m+\Delta))$ depth w.h.p., and $O\left((\Delta+m)^{\frac{2(k + 1)\omega_p}{3(\omega_p + 1)}}\right)$ space when $k \bmod 3 = 2$. \end{theorem} \begin{corollary}\label{cor:strassen-all-k} Provided the best known parallel matrix multiplication exponent $\omega_p = 2.373$, we obtain a parallel fast matrix multiplication $k$-clique algorithm that takes $O\left(\min\left(\Delta m^{0.469k - 0.469}, (\Delta+m)^{0.469k + 0.235}\right)\right)$ work and $O(\log m)$ depth w.h.p., and $O\left((\Delta+m)^{0.469k + 0.235}\right)$ space when $k \bmod 3 = 1$, and $O\left(\min\left(\Delta m^{0.469k - 0.235}, (\Delta+m)^{0.469k + 0.469}\right)\right)$ work and $O(\log m)$ depth w.h.p., and $O\left((\Delta+m)^{0.469k + 0.469}\right)$ space when $k\bmod 3 = 2$. \end{corollary} \myparagraph{High-Level Approach and Techniques} For a given graph $G = (V, E)$, we create an auxiliary graph $G' = (V', E')$ with vertices and edges representing cliques of various sizes in $G$. For a given $k$-clique problem, vertices in $V'$ represent cliques of size $k/3$ in $G$ and edges $(u, v)$ between vertices $u, v \in V'$ represent cliques of size $2k/3$ in $G$. Thus, a triangle in $G'$ represents a $k$-clique in $G$. Specifically, there exist exactly ${k \choose k/3}{2k/3 \choose k/3}$ different triangles in $G'$ for each clique in $G$.
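For concreteness, here is a small sequential sketch of this construction for $k = 6$ (so $k/3 = 2$ and $2k/3 = 4$), with $G'$-vertices taken as unordered $2$-cliques; in this unordered formulation each $6$-clique of $G$ appears as the $15$ partitions of its vertex set into three pairs, i.e., as $90/3! = 15$ triangles of $G'$ (the function names are ours):

```python
from itertools import combinations

def is_clique(vs, adj):
    return all(b in adj[a] for a, b in combinations(sorted(vs), 2))

def six_cliques_via_auxiliary_graph(edges):
    """G -> G' reduction for k = 6: G'-vertices are the 2-cliques (edges) of
    G, and two of them are joined in G' when their union is a 4-clique of G.
    Every triangle of G' then spans six vertices forming a 6-clique of G, and
    each 6-clique contributes exactly 15 (unordered) triangles."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    Vp = sorted(tuple(sorted(e)) for e in edges)   # G' vertices: 2-cliques

    def gp_edge(e, f):                             # G' edge: union is a 4-clique
        vs = set(e) | set(f)
        return len(vs) == 4 and is_clique(vs, adj)

    tri = sum(1 for a, b, c in combinations(Vp, 3)
              if gp_edge(a, b) and gp_edge(b, c) and gp_edge(a, c))
    assert tri % 15 == 0
    return tri // 15
```

This static check is only a sanity test of the correspondence; the point of the section is to maintain the triangle count of $G'$ dynamically.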
Given a batch of edge insertions and deletions to $G$, we create a set of edge insertions and deletions to $G'$. An edge is inserted in $G'$ when a new $2k/3$-clique is created in $G$ and an edge is deleted in $G'$ when a $2k/3$-clique is destroyed in $G$. Suppose, for now, that we have a dynamic algorithm for processing the edge insertions/deletions into $G'$. Counting the number of triangles in $G'$ after processing all edge insertions/deletions and dividing by ${k \choose k/3}{2k/3 \choose k/3}$ provides us with the exact number of cliques in $G$. There are several challenges that we must deal with when formulating our dynamic triangle counting algorithm for counting the triangles in $G'$: \begin{enumerate}[label=(\textbf{\arabic*}),topsep=1pt,itemsep=0pt,parsep=0pt,leftmargin=15pt] \item We cannot simply count all the triangles in $G'$ after inserting/deleting the new edges as this does not perform better than a trivial static algorithm. \item Any trivial dynamization of the AYZ algorithm will not be able to detect all new triangles in $G'$. Specifically, because the AYZ algorithm counts all triangles containing a low-degree vertex separately from all triangles containing only high-degree vertices, if an edge update only occurs between high-degree vertices, a trivial dynamization of the algorithm will not be able to detect any triangle that the two high-degree endpoints make with low-degree vertices. \item We must ensure that batches of updates can be efficiently processed in parallel without overcounting. \end{enumerate} To solve the first challenge, we dynamically count low-degree and high-degree vertices in different ways. Let $\ell=k/3$ and $M = 2m + 1$. For some value of $0<t<1$, we define \defn{low-degree} vertices to be vertices that have degree less than $M^{t\ell}/2$ and \defn{high-degree} vertices to have degree greater than $3M^{t\ell}/2$.
Vertices with degrees in the range $[M^{t\ell}/2, 3M^{t\ell}/2]$ can be classified as either low-degree or high-degree. We analyze the specific value to use for $t$ in the full version of our paper~\cite{fullversion}. We perform rebalancing of the data structures as needed as they handle more updates. For low-degree vertices, we only count the triangles that include at least one newly inserted/deleted edge, at least one of whose endpoints is low-degree. This means that we do not need to count any pre-existing triangles that contain at least one low-degree vertex. For the high-degree vertices, because there is an upper bound on the maximum number of such vertices in the graph, we update an adjacency matrix $A$ containing edges only between high-degree vertices. At the end of all of the edge updates, computing $A^3$ gives us a count of all of the triangles that contain three high-degree vertices. This procedure then leads directly to our second challenge. To solve this second challenge, we make the observation (stated in Lemma~\ref{lem:one-low-high} below, and proven in the full version of our paper~\cite{fullversion}) that if there exists an edge update between two high-degree vertices that creates or destroys a triangle that contains a low-degree vertex in $G'$, then there \emph{must} exist at least one new edge insertion/deletion \emph{that creates or destroys a triangle representing the same clique} to that low-degree vertex in the same batch of updates to $G'$. Thus, we can use one of those edge insertions/deletions to determine the new clique that was created and, through this method, find all triangles containing at least one low-degree vertex and at least one new edge update. Some care must be taken in implementing this procedure so as not to increase the runtime or space usage; such details can be found in the full version of our paper~\cite{fullversion}.
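The high-degree phase reduces to a single matrix cube. A minimal sketch of that step (schoolbook multiplication standing in for the fast parallel matrix multiplication assumed in Theorem~\ref{thm:all-k}; the function name is ours):

```python
def high_degree_triangles(A):
    """Count triangles among high-degree vertices from the cube of
    their adjacency matrix A: trace(A^3) counts each triangle six
    times (3 starting vertices x 2 directions). Sketch only, using
    naive O(n^3) multiplication."""
    n = len(A)
    def matmul(X, Y):
        return [[sum(X[i][t] * Y[t][j] for t in range(n))
                 for j in range(n)] for i in range(n)]
    A3 = matmul(matmul(A, A), A)
    return sum(A3[i][i] for i in range(n)) // 6
```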
\begin{lemma}\label{lem:one-low-high} Given a graph $G=(V, E)$, the corresponding $G' = (V', E')$, and for $k > 3$, suppose an edge insertion (resp.\ deletion) between two high-degree vertices in $G'$ creates a new triangle, $(u_H, w_H, x_L)$, in $G'$ which contains a low-degree vertex $x_L$. Let $R(y)$ denote the set of vertices in $V$ represented by a vertex $y \in V'$. Then, there exists a new edge insertion (resp.\ deletion) in $G'$ that is incident to $x_L$ and creates a new triangle $(u', w', x_L)$ such that $R(u') \cup R(w') = R(u_H) \cup R(w_H)$. \end{lemma} \myparagraph{Incorporating Batching and Parallelism} When dealing with a batch of updates containing both edge insertions and deletions, we must be careful when vertices switch from being high-degree to being low-degree and vice versa. If we intersperse the edge insertions with the edge deletions, then there is the possibility that a vertex switches between low and high-degree multiple times in a single batch. Thus, we batch all edge deletions together and perform these updates first before handling the edge insertions. After processing the batch of edge deletions, we must subsequently move any high-degree vertices that become low-degree to their correct data structures. After dealing with the edge insertions, we must similarly move any low-degree vertices that become high-degree to the correct data structures. Finally, for triangles that contain more than one edge update, we must account for potential double counting by different updates happening in parallel. Such challenges are described and dealt with in detail in the full version of our paper~\cite{fullversion}. A high-level description of the algorithm is given in Algorithm~\ref{alg:mmcliquesimple}. 
\begin{algorithm}[!t] \caption{Simplified parallel matrix multiplication $k$-clique counting algorithm.}\label{alg:mmcliquesimple} \begin{algorithmic}[1] \Function{Count-Cliques}{$\mathcal{B}$} \State Update graph $G'$ with $\mathcal{B}$ by inserting new $\ell$- and $2\ell$-cliques. \State Find the batch of insertions ($\mathcal{B}'_I$) and batch of deletions ($\mathcal{B}'_D$) \Statex \ \ \ \ \ \ into $G'$. \State Determine the final degrees of every vertex in $G'$ after \Statex \ \ \ \ \ \ performing updates $\mathcal{B}'_I$ and $\mathcal{B}'_D$. \State $\delta \leftarrow \text{threshold for low-degree vs. high-degree}$.\\ \Comment{The precise value of $\delta$ is defined in the full version of our \Statex \ \ \ \ \ \ paper~\cite{fullversion}.} \ParFor{$\mathtt{insert}(u, v) \in \mathcal{B}'_I, \mathtt{delete}(u, v) \in \mathcal{B}'_D$} \If{either $u$ or $v$ is low-degree (degree $\leq \delta$) } \State Enumerate all triangles containing $(u, v)$. Let this \Statex \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ set be $T$. \State By Lemma~\ref{lem:one-low-high}, find all possible triangles \Statex \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ representing the same triangle $t \in T$. \State Correct for duplicate counting of triangles. \Else \State Update $A$ (adjacency matrix for high-degree \Statex \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ vertices). \EndIf \EndParFor \State Compute $A^3$. The diagonal provides the triangle counts for \Statex \ \ \ \ \ \ all triangles containing only high-degree vertices. \State Sum the counts of all triangles. \State Correct for duplicate counting of cliques. \EndFunction \end{algorithmic} \end{algorithm}
\section{Introduction} Flux distributions in type-II superconductors are commonly inferred from magnetization and critical current measurements\cite{reviews} and interpreted in the context of the Bean model\cite{bean} or its variations. The Bean model, which has been widely used for over three decades, {\it postulates\/} that the current density in a hard superconductor ({\em i.e.}, with strong pinning) can only have three values: $-J_c$, $0$, and $+J_c$, where $J_c$ is the critical current density, which is independent of the local magnetic flux density ${\bf B}(x,y,t)$. The Bean model and its many variants make no specific claims with regard to the {\it microscopic\/} mechanism controlling the trapping of vortices. Bean's postulate, $J_c=$constant, was modified several times by Kim {\em et al.}\cite{kim}: $J_c \sim 1/ B \ $ [3$^a$]; $J_c \sim 1/(b_0+B) \ $ [3$^b$,3$^c$]; $J_c \sim 1/(b_0+B+b_2B^2+b_3B^3+\ldots) \ $ [3$^b$], where $b_i$ are constants. On the other hand, Fietz {\em et al.} \cite{fietz64} suggested that $J_c \sim \exp(-B/b_0) \ $, while Yasuk\={o}chi {\em et al.} \cite{yas} suggested $J_c \sim 1/B^{1/2}$. These, and other proposals made during the 1960s, were followed by several other phenomenological modifications of $J_c(H)$ during the following two decades\cite{reviews,zeldov}. A microscopic description of these flux distributions---in terms of interacting vortices and pinning sites, and {\it without\/} assuming any particular $B$-dependence of $J_c$---can be very valuable for a better understanding of commonly measured bulk quantities. One of the most effective methods of investigating the microscopic behaviour of flux in a hard superconductor is with computer simulations (see, e.g., \cite{simulations,ray94}, and references therein). In this paper, we present molecular dynamics (MD) simulations of the evolution of rigid flux lines in a hard superconductor.
We first introduce our model for vortex-vortex and vortex-pin interactions as well as the corresponding antivortex interactions. We then investigate the flux profile which results from a varying applied field; from such flux profiles we obtain full hysteresis loops indicating that our model has the essential microscopic ingredients underlying the experimentally measured macroscopic quantities. We also investigate the behaviour of $J_c(H)$ for a controlled range of pinning parameters. \section{Simulation} Our simulation geometry is that of an infinite slab of superconductor in a magnetic field applied {\em parallel} to the slab surface. Thus, demagnetization effects are unimportant. We also treat the vortices as perfectly stiff, so that we need to model only a two-dimensional (2D) slice of the 3D slab. Our system is periodic in the plane perpendicular to the applied field, and we measure distances in units of the penetration length $\lambda$. Here, we present results for a system of size $36\lambda \times 36\lambda$. The simulation, described in further detail below, consists of slowly ramping an external magnetic field. Flux lines enter the edge of the sample and their positions are allowed to evolve according to a $T=0$ MD algorithm. The resulting vortex distributions at any external field can then be deduced as a function of distance into the sample. \subsection{Sample Geometry and Time-Dependent Field} The actual sample region is heavily pinned, and extends from position $x=6\lambda$ to $x=30 \lambda$ (Fig.~1). Outside the sample itself is a region with no pinning which extends from $x = 0 \lambda $ to $x=6 \lambda $ and from $x = 30 \lambda $ to $x=36 \lambda $ (with $ 36\lambda = 0\lambda $ according to our periodic boundary conditions). \ This sample geometry is shown in the upper panels of Fig.~1. Here, the sample (pinned) region occupies the central $2/3$ of the system, and the unpinned region the outer $1/3$. 
We simulate the ramping of an external field by the slow addition of flux lines to the outside unpinned region. Because there is no pinning in this region, the flux lines there will attain a fairly uniform density, and we may define the applied field $H$ as $\Phi_0$ times this density. Flux lines from the external region will move into the sample through points at the sample edge where the local energy---as determined by the local pinning and vortex interaction---is low. Thus, our simulation models the real situation where vortices nucleate at such low-energy regions at the surface. Further, in a real superconductor, vortices near the surface are not expelled by their interior neighbors because of a field-induced Meissner current flowing at the surface. Again, our external ``bath'' of vortices simulates this behavior by providing a balancing inward force, proportional to the external field, on those vortices near the sample boundary. \subsection{Equations of Motion} The force per unit length \cite{reviews} between two vortices located at ${\bf r}_i$ and ${\bf r}_j$ is \begin{equation} \ f^{vv} = \frac{ \Phi_0^2 }{ 8 \pi^2 \lambda^3 } \ K_1 \left( \frac{ | {\bf r}_i - {\bf r}_j | }{ \lambda } \right) \, . \ \end{equation} We model the vortex-vortex force interaction in its exact form by using the modified Bessel function $K_1$. This force decreases exponentially at distances larger than $\lambda$, and we cut off the (by then negligible) force at distances greater than $6\lambda$. Further, we have cut off the logarithmic divergence of the force for distances less than $0.1\lambda $. These cutoffs were found to produce negligible effects on the dynamics for the range of parameters investigated. Thus, the force (per unit length) on vortex $i$ due to other vortices (ignoring cutoffs) is $ \ {\bf f}_{i}^{vv} = \ \sum_{j=1}^{N_{v}} \ f_{v} \ K_{1}( |{\bf r}_{i} - {\bf r}_{j}| / \lambda ) \ {\bf {\hat r}}_{ij} \, . 
$ Here, the ${\bf r}_{j} $ are the positions of the $N_v$ vortices within a radius $6\lambda$, $ \ {\bf {\hat r}}_{ij} = ({\bf r}_{i} - {\bf r}_{j}) / |{\bf r}_{i} - {\bf r}_{j}| $, $\ f_v = \pm f_0$, and \begin{equation} \ f_0 = \frac{ \Phi_0^2 } {8 \pi^2 \lambda^3} \ . \end{equation} The sign of the interaction is determined by $ f_{v} $; we take $ f_{v} = +f_0 $ for repulsive vortex-vortex interactions and $ f_{v} = -f_0 $ for attractive vortex-antivortex interactions. A vortex and antivortex annihilate and are removed from the system if they come within $0.3 \lambda $ of one another \cite{reviews}. Forces are measured in units of $f_0$, lengths in units of $\lambda$, and fields in units of $\Phi_0/\lambda^2$. We model the pinning potential\cite{pinning} as $N_p$ short-range parabolic wells at positions ${\bf r}_k^{(p)}$. The equation of motion for a vortex moving with velocity $v$ is $f=\eta v$, where $\eta$ is the viscosity ($\approx \Phi_0 H_{c2}/\rho_n$, with $\rho_n$ being the normal-state resistivity). Thus, the overall equation for the overdamped motion of a vortex subject to vortex-vortex and pinning forces is \begin{equation} \ {\bf f}_{i} = {\bf f}_{i}^{vv} + {\bf f}_{i}^{vp} = \eta {\bf v}_{i} \ , \end{equation} where \begin{eqnarray} {\bf f}_{i} & = & \ \sum_{j=1}^{N_{v}}\, f_{v} \, K_{1} \left( \frac{ |{\bf r}_{i} - {\bf r}_{j}| }{ \lambda } \right) \, {\bf {\hat r}}_{ij} \nonumber \\ & + & \sum_{k=1}^{N_{p}} \frac{f_{p}}{\xi_{p}} \ |{\bf r}_{i} - {\bf r}_{k}^{(p)}| \ \Theta\left( \frac{ \xi_{p} - | {\bf r}_{i} - {\bf r}_{k}^{(p)} | }{\lambda} \right) \ {\bf {\hat r}}_{ik} \, . \end{eqnarray} Here, $\Theta $ is the Heaviside step function, $ \xi_{p} $ is the range of the pinning potential, and $f_{p}$ is the strength (maximum pinning force) of each well, measured in units of $f_0$. For all the simulations presented here $\xi_{p} = 0.12\lambda$ and $\eta=1$. 
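The overdamped, $T=0$ dynamics above can be integrated with a simple Euler step. A minimal single-vortex sketch (the integrator, step size, and function names are ours; the parabolic well is attractive, so the force points toward the pin centre, and we use the stated values $\xi_{p}=0.12\lambda$ and $\eta=1$, with lengths in units of $\lambda$ and forces in units of $f_0$):

```python
import math

def pin_force(r, pin, f_p=0.9, xi_p=0.12):
    """Attractive force (units of f0) on a vortex at r from a parabolic
    pinning well centred at `pin`; zero outside the range xi_p."""
    dx, dy = r[0] - pin[0], r[1] - pin[1]
    d = math.hypot(dx, dy)
    if d == 0.0 or d >= xi_p:
        return (0.0, 0.0)
    mag = (f_p / xi_p) * d               # grows linearly up to the well edge
    return (-mag * dx / d, -mag * dy / d)  # points toward the pin centre

def relax(r, pin, dt=0.01, steps=2000):
    """Overdamped T=0 Euler integration with eta = 1, so the velocity
    equals the total force: r -> r + f * dt."""
    for _ in range(steps):
        fx, fy = pin_force(r, pin)
        r = (r[0] + fx * dt, r[1] + fy * dt)
    return r
```

A vortex released inside the well simply relaxes onto the pin centre; the full simulation adds the $K_1$ vortex-vortex terms to the same update.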
The parameters we vary here are the pinning strength $f_p$ and the average distance between pinning sites $d_p$ (which determines the pinning density $n_p$ via $n_p=1/d_p^2$). Many other parameters can be varied, making the systematic study of this problem very complex. A more thorough investigation with different pinning-potential ranges, pinning-potential shapes, non-uniform strength distributions, and non-random pinning positions will be presented elsewhere. Here, the pinning sites have uniform strengths and are placed in the sample at random, but non-overlapping, positions. The pinning strength $f_{p}$ is varied from $0.2 f_0$ to $1.0 f_0$, and $d_p$ is varied from $\lambda/3$ to $\lambda\ $ (i.e., the pin density $n_p$ varies from $1/\lambda^{2}$ to $9/\lambda^{2}$). \section{Magnetic Flux Density Profiles} Several general features of our simulations are shown in Fig.~1. In the upper frame of Fig.~1a, we show a top view of the vortex positions after the external field has been ramped up from zero. As we have stated, this external field is represented by the vortices in the unpinned regions to the left and right of the central, pinned, sample region. Here, vortices have been added to the unpinned region to a final density of about $1.2$ vortices/$\lambda^2$; since each vortex carries a flux $\Phi_0$, this corresponds to a magnetic field of $1.2 \ \Phi_0/\lambda^2$. For a real superconductor with a penetration depth of, e.g., $1000$\AA , this corresponds to $H=2.5$ kOe. We note in Fig.~1(a) that many of the vortices added to the unpinned region have been forced into the central sample region at this stage. They do not do so uniformly due to the presence of 3456 pinning sites (not shown), with a typical intersite distance of $\lambda/2$ and $f_p=0.9 f_0$. We see the characteristic density gradient determined by a balancing of the vortex-vortex forces with the local pinning forces.
Since this gradient was achieved in our simulation solely by the slow ramping of an external magnetic field, we have obtained the field profiles inside a pinned superconductor using only {\em microscopic} information such as vortex-vortex and vortex-pin interactions. We should also contrast our simulations with those modeling {\em current-driven\/} vortices. In such simulations the driving force on each vortex is somewhat artificially modeled by an externally-imposed ``uniform'' current. Our simulation correctly models the driving force as a result of local interactions. The lower frame of Fig.~1a shows the resulting flux density profiles, found by averaging the vortex density over slices parallel to the sample edges. Such profiles clearly show the essentially constant flux density in the external regions, and the detailed nature of the flux gradient within the sample. Of course, these profiles may be obtained at any value of the external field. Figure~1b shows the system after the external field has been ramped {\em down} from a high value to zero. The small field outside the sample is an artifact due to the smearing of the vortex fields. Now, flux remains trapped within the sample and the field gradient has changed sign. We notice that near the sample edges, where the field is small, the gradient in the flux density is quite large. Thus our simulation correctly models the increase in flux gradient (or, equivalently, critical current) at low fields, where intervortex interactions are weak and pinning dominates. In Fig.~2 we show flux density profiles for a complete cycle of the field, with the same sample parameters as in Fig.~1. During the initial ramp-up stage (Fig.~2, left), we increase the external field from zero to a final value of about $1.9\ \Phi_0/\lambda^2$. 
We see the evolution of the internal flux profile from first penetration at low fields, to the first complete penetration at a field $H^* \approx 0.8\ \Phi_0/\lambda^2$, to higher values of $B$ at larger $H$. We again note the flux gradient is quite high at low fields, but becomes flatter---and less field-dependent---at high fields. Of course, in real superconductors no vortices will enter the sample until $H > H_{c1} \approx (\ln \kappa / 4 \pi ) (\Phi_0/\lambda^2) $, where $\kappa=\lambda/\xi$. However, for $\kappa$'s in the wide physically relevant range from $2$ to $100$, $H_{c1}$ varies from $0.05 \; \Phi_0/\lambda^2$ to $0.36 \;\Phi_0/\lambda^2$. Thus, $H_{c1}$ is small in the range of fields we explore. In any event, since we are only interested in the {\it mixed\/} state and not the Meissner phase, we will work in the approximation where $H_{c1}$ is negligible. During the ramp-down stage (Fig.~2, center), the field is lowered through zero to large {\em negative} values. The ramping down is initially effected by simply removing vortices from the unpinned region. However, after the external field reaches zero, it is reversed by the addition of {\em antivortices} in the unpinned region. During the beginning of this ramp-down stage, we note the appearance of the characteristic ``gull-wing'' flux profile as the internal remnant flux located close to the sample edges begins to be removed. Notice that at external fields near zero the internal field hardly changes at all as the external field is swept. This is again because of the very steep gradients possible near zero field, where pinning dominates. Thus, the effect of a change in an external field near zero propagates only a very small distance into the sample. As the field decreases below $H=0$ (in Fig.~2, center), $B(x)$ continues to have its $\wedge$-shaped profile. We note that for small negative fields the sample contains both vortices {\em and\/} antivortices. 
However, the pinning for both types is attractive, and so they remain locally trapped and annihilate only when their mutual attraction overcomes the pinning. This only occurs when they are closely spaced, within $0.3 \lambda$. Finally, in the last ramp-up stage (Fig.~2, right), the full cycle is completed by increasing the field from the large negative value up to a large positive field, where the flux profile looks identical to the initial ramp-up stage of the cycle. One clear advantage of our simulation is that we can obtain direct {\it spatio-temporal\/} information on the distribution of flux {\em inside} the sample. However, experimentally this is quite difficult, especially for bulk samples. Instead, average quantities, like magnetization curves, are typically obtained. From the field cycles shown in Fig.~2, we can easily obtain such magnetization loops from our simulation. Further, in our simulation it is simple to vary microscopic parameters such as pin density and strength. Thus, our simulations allow for a systematic study of the dependence of {\it macroscopic measurements}, such as the magnetization, on {\it microscopic system parameters}. It may also be possible to use our results in the reverse problem, so that some understanding of the microscopics of the pinning \cite{pinning} may be obtained from experimentally determined macroscopic measurements. \section{Magnetization Hysteresis Loops} Experimentally, what is typically measured is the average magnetization over the sample volume. In our simulation, we thus calculate the average magnetization \begin{equation} \overline{M} = \frac{1}{4 \pi V } \int (H - B) \ dV \ . \end{equation} In Fig.~3 we construct magnetization loops as two key sample microscopic parameters---the pinning density and strength---are varied. Fig.~3a shows complete magnetization loops obtained with the density of pins held constant at $4/\lambda^2$, but at three different values of the pinning strength $f_{p}$. 
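For a slab geometry with the profile sampled on a uniform grid, the integral defining $\overline{M}$ reduces to an average over sample points. A minimal sketch (the function name is ours; simulation units):

```python
import math

def average_magnetization(H, B_profile):
    """Average magnetization Mbar = (1 / 4 pi V) * integral (H - B) dV
    for a slab, from a flux-density profile B(x) sampled on a uniform
    grid inside the sample (demagnetization effects neglected)."""
    return sum(H - B for B in B_profile) / (4.0 * math.pi * len(B_profile))
```

Applying this at every step of the field cycle in Fig.~2 traces out the hysteresis loops of Fig.~3.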
One can see clearly that by increasing the pinning strength the hysteresis loops become much wider. This is because a large pinning force yields a large field gradient. Thus $\overline{M}$, which is essentially the difference between the internal and external fields, will be larger for large $f_{p}$. For instance, the remnant $\overline{M}$ is larger for stronger pinning. The $\overline{M}(H)$ loops all show a maximum when the external field is small ($H \leq H^*$) and close to $H^*$. This again is due to the pinning being most effective for low fields ($H \leq H^*$). Figure 3b shows magnetization loops obtained for several pinning densities. Experimentally, one may systematically vary this parameter by the introduction of columnar defects using irradiation \cite{reviews,civale}. \section{Critical Current versus pinning density and strength} Although magnetization loops are very useful for comparison with experimental data, we have emphasized that our simulations allow us to directly compute the local flux distribution inside the sample. Thus, we may directly measure the local critical current density $J_c$ using Maxwell's equation $dB/dx = \mu_0 J$. At every point on flux density profiles such as Fig.~2 we may compute the local slope ($= dB/dx$) and the corresponding local field $B$. This allows us to determine a large number of values of $J_c(B)$. We then bin these values to obtain suitably averaged curves of $J_c$ vs. $B$. As we have discussed, there are in the literature a great variety of functional dependences of $J_c$ on $B$, corresponding to different {\em ad hoc} electrodynamical assumptions. The original Bean model predicts $J_c$ to be independent of $B$. The varying slopes of the flux density in Fig.~2 show that this prediction is not borne out in our simulation (except at relatively high-fields where the vortex-vortex force dominates; e.g., for weak-pinning samples with $\lambda^2 n_p = 4.0$, $f_p=0.2f_0$). 
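Extracting the local slope from a sampled profile is a straightforward finite-difference computation ($\mu_0 = 1$ in simulation units). A sketch (the function name is ours):

```python
def local_critical_current(x, B):
    """Local critical current density from a flux profile via
    dB/dx = mu0 * J, using forward differences on adjacent samples.
    Returns (B_midpoint, |J_c|) pairs, ready to be binned over B."""
    out = []
    for i in range(len(x) - 1):
        slope = (B[i + 1] - B[i]) / (x[i + 1] - x[i])
        out.append(((B[i] + B[i + 1]) / 2.0, abs(slope)))
    return out
```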
Kim {\em et al.}\cite{kim} have proposed that the critical current depends on $B$ as \begin{equation} \alpha = J_c (B + b_0) \ , \end{equation} where $\alpha$ is field-independent and has units of force per unit volume. In this model, plots of $1/J_c$ vs. $B$ should appear as straight lines with slopes $1/\alpha$ and intercept $b_0/\alpha$. The physical interpretation of the constant $b_0$ in Kim's model is unclear\cite{kim}. In Fig.~4 we plot $1/J_c$ vs. $B$, with $J_c$ determined from our flux density plots during the initial ramp up phase. We plot $1/J_c$ for several realizations of the pinning density $n_p$ and strength $f_p$. Fig.~4a shows $1/J_c$ vs $B$ for four different field sweeps with the pinning density varied from $1.0/\lambda^2$ to $9.0/\lambda^2$; in Fig.~4b we vary the pinning strength from $0.2 f_0$ to $0.9 f_0$. Over a large region of the field, we find that $1/J_c$ is indeed linear in field, as in Kim's model. We can then fit the linear portions of each curve to straight lines as shown, and extract the inverse slope $\alpha$. For fields such that $B \gg b_0$, Kim's relation reads $\alpha \approx J_c B$ which is the Lorentz force per unit volume. Since this force is exactly balanced by the pinning force, we can interpret $\alpha$ as the maximum pinning force per unit volume. $b_0$ is typically in the range of $0.4$ to $0.7$ $\Phi_0/\lambda^2$, but even below $b_0$, $\alpha$ is clearly a measure of the relative effectiveness of the pinning. In the inset to Fig.~4a, we plot the values of $\alpha$ determined from the slopes of the $1/J_c$ curves as a function of the pinning strength $f_p$ or density $n_p$. The pinning force per unit volume has an approximate linear rise with $n_p$, and the curve with dark triangles follows $\alpha \sim f_p^{1.6} $ (if we assume that $\alpha=0$ when $f_p=0$). 
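Extracting $\alpha$ and $b_0$ from such data amounts to a straight-line fit of $1/J_c = B/\alpha + b_0/\alpha$. A minimal sketch using plain least-squares normal equations (the function name is ours):

```python
def kim_fit(B_vals, Jc_vals):
    """Least-squares fit of 1/J_c = B/alpha + b0/alpha (Kim model).
    Returns (alpha, b0); no external libraries required."""
    ys = [1.0 / J for J in Jc_vals]
    n = len(B_vals)
    sx = sum(B_vals)
    sy = sum(ys)
    sxx = sum(b * b for b in B_vals)
    sxy = sum(b * y for b, y in zip(B_vals, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope = 1/alpha
    c = (sy - m * sx) / n                          # intercept = b0/alpha
    alpha = 1.0 / m
    return alpha, c * alpha
```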
Even though the vortex dynamics in our samples is not dominated by elastic flow and collective weak-pinning, it is interesting to compare these results with the predictions of the Larkin-Ovchinnikov (LO)\cite{LO} collective-pinning theory---where weakly-pinned vortices interact elastically inside a typical correlated volume. The 2D LO prediction for rigid vortices becomes \begin{equation} J_c B \sim n_p f_p^2 \ , \end{equation} which is somewhat different from \begin{equation} J_c B \ \sim n_p f_p^{1.6} \ \, , \end{equation} obtained from our strongly pinned vortices. The opposite limit to the LO regime of weakly-pinned collective vortices is that of very strongly-pinned independent vortices, where \begin{equation} J_c B \ \sim n_p f_p^{1} \ \, . \end{equation} Thus, our results indicate that our vortices are in an intermediate state between the two extreme regimes described above. We plot our values for $J_c$ in practical SI units. The weakest pinning in our simulation occurs at our highest fields, where $1/J_c$ is about $100 \mu_0 \lambda^3/\Phi_0$. For a $\lambda$ of $1000$ \AA, this corresponds to a critical current $J_c = 1.6 \times 10^6$ A/cm$^2$, which is in practice a very reasonable value. Our highest critical currents, at low fields and high pin strength or density, are about a factor of ten higher. Thus, our parameters generally appear to model realistic materials. \section{Conclusions} To summarize, we have performed molecular-dynamics simulations of vortices and antivortices interacting with a controlled range of pinning strengths and densities. In these simulations we have only considered vortex-vortex and vortex-pin interactions; {\it no\/} extra force was needed to simulate a Lorentz force. Thus, our results show that the Lorentz force can be considered as a consequence of a flux gradient arising strictly from the interactions of vortices and pins.
We compute the flux-density profiles that develop, for both vortices and antivortices, as the external field is cycled through a complete loop. Our computed complete hysteresis loops show realistic behaviour with varying pinning strength and density, indicating that our model contains the essential physics. We have obtained $J_{c}(H)$ by focusing on the flux gradient that develops naturally from the vortex-pin interactions, and find that it monotonically decreases with increasing external field, with the fall-off determined by the microscopic pinning parameters. \section{Acknowledgements} This work was supported in part by the NSF under grant No.~DMR-92-22541, and by SUN microsystems.
\section*{Introduction}\label{introduction} Groups in which all elements belong to the conjugacy class of their inverses are called {\it real groups}. It is well known that all entries in the character tables of finite real groups are real \cite{SerreRepnBook}. However real groups may admit complex representations which are not realizable over $\R$. Such representations are called {\it symplectic}. The complex representation $$1 \mapsto \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right), ~~a \mapsto \left(\begin{array}{cc} i & 0 \\ 0 & -i \end{array}\right), ~~b \mapsto \left(\begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right), ~~c \mapsto \left(\begin{array}{cc} 0 & 1 \\ i & 0 \end{array}\right)$$ of the Quaternion group $Q_2 = \langle -1, a, b, c : (-1)^2 = 1, a^2 = b^2 = c^2 = abc = -1 \rangle$ is symplectic. Real groups which do not admit any symplectic representation are called {\it totally orthogonal}. In 1985, Gow proved that groups $\operatorname{O}_n(q)$ of special isometries of quadratic forms are totally orthogonal \cite{GowOrthogonal}. It was already known by then that every element of $\operatorname{O}_n(q)$ is strongly real \cite{WonenburgerOrthogonal}. {\it Strongly real} elements in a group are those which can be expressed as a product of two self-inverses. A group is called {\it strongly real} if all its elements are strongly real. It is straightforward to see that a group $G$ is strongly real if for every element $x$ of $G$, there exists an element $y$ in $G$ such that $y^2 = 1$ and $x^{-1}=y^{-1}xy$. \\ Like $\operatorname{O}_n(q)$ there are plenty of groups which are both totally orthogonal and strongly real. According to a conjecture of Tiep finite simple groups are totally orthogonal if and only if they are strongly real. However there is no general class of groups known which exhibits one of these properties and not the other. 
In this article we exhibit an infinite class of groups, namely the class of special $2$-groups, for which neither of the notions of strong reality and total orthogonality implies the other. This generalizes an example of a strongly real group admitting a symplectic representation \cite{AmitAnupam}. \\ The plan of the article is as follows: In \S 1 we recall basics on quadratic forms over $\F_2$, where $\F_2$ denotes the field with two elements. Then in \S 2 we recall special $2$-groups, quadratic maps, second cohomology groups, and their interconnection. Understanding this interconnection gives us a characterization of strongly real special $2$-groups in terms of their associated quadratic maps. This characterization is used in the next section to show that all extraspecial $2$-groups, except the quaternion group $Q_2$, are strongly real. This requires a classification of extraspecial $2$-groups as central products of $D_4$ and $Q_2$, where $D_4$ denotes the dihedral group of order $8$ \cite{GorensteinBook}. In the same section we recast this classification in the language of quadratic forms over $\F_2$. In the last section we give examples of groups which are strongly real but not totally orthogonal, and vice-versa. A characterization of strongly real groups (Th. \ref{strongly-real-criterion}) and a characterization of totally orthogonal groups \cite[Th. 3.5]{ObedPaper} play key roles in the construction of these examples. \section{Quadratic forms in characteristic $2$} Let $\F$ be a field of characteristic two and $V$ be an $n$-dimensional vector space over $\F$. A map $b: V\times V\rightarrow \F$ is called {\it $\F$-bilinear} if it satisfies the following properties: \begin{enumerate} \item $b(\alpha v_1+\beta v_2, w)=\alpha b(v_1,w)+\beta b(v_2,w)$ for all $v_1, v_2, w \in V$ and $\alpha, \beta \in \F$.
\item $b(v, \alpha w_1+\beta w_2)=\alpha b(v,w_1)+\beta b(v,w_2)$ for all $v, w_1, w_2 \in V$ and $\alpha, \beta \in \F$. \end{enumerate} A map $q:V\rightarrow \F$ is called a {\it quadratic form} if \begin{enumerate} \item $q(\alpha v)=\alpha^2 q(v)$ for all $v \in V$ and $\alpha \in \F$; \item the map $b_q: V\times V\rightarrow \F$ given by $b_q(v,w):=q(v+w)-q(v)-q(w)$ is $\F$-bilinear. \end{enumerate} For a quadratic form $q$ the map $b_q$ is called the {\it polar map} of $q$. The pair $(V,q)$ is called a {\it quadratic space} over $\F$. A bilinear map $b$ is called {\it alternating} if $b(v,v)=0$ for all $v \in V$.\\ If $B = \{e_1,e_2,\cdots,e_n\}$ is a basis of $V$ then any $n \times n$ matrix $Q$ satisfying $q(x)=x^t Qx$, where $x\in V$ is an indeterminate column vector and $x^t$ denotes the transpose of $x$, is called a {\it matrix of $q$ with respect to the basis $B$}. Every matrix of $q$ with respect to the same basis is of the form $Q + A$, where $A$ is an alternating matrix and $Q$ is the unique upper triangular matrix of $q$ with respect to the basis $B$. If we change the basis and $T$ is the transition matrix for the change of basis, then the upper triangular matrix of $q$ with respect to the new basis is $T^t QT$. \\ Two $n$-dimensional quadratic spaces $(V_1, q_1)$ and $(V_2, q_2)$ over $\F$ are said to be {\it isometric} if there exists an $\F$-linear isomorphism $T:V_1\rightarrow V_2$ such that $q_1(v)=q_2(T(v))$ for all $v\in V_1$. Isometry between two quadratic spaces is denoted by $(V_1, q_1)\cong(V_2, q_2).$ A quadratic space $(V, q)$ is called the {\it orthogonal sum} of $(V_1, q_1)$ and $(V_2, q_2)$ if $V = V_1\oplus V_2$ and $q(v)=q_1(v_1)+q_2(v_2)$ for every element $v = (v_1, v_2) \in V$. In this case we write $q=q_1 \bot q_2$. Conversely, let $(V,q)$ be a quadratic space and $V_i,~1 \leq i\leq m$ be subspaces of $V$ such that $V = V_1\oplus \cdots \oplus V_m$ and $b_q(v_i,v_j) = 0$ for $v_i\in V_i, v_j\in V_j, i\neq j$.
Then $q = q_1 \bot \cdots \bot q_m$, where $q_i$ denotes the restriction of $q$ to $V_i$. \\ The subspace $\operatorname{rad}(V) := \{w\in V : b_q(v,w)=0 ~\forall v\in V\}$ is called the {\it radical} of $(V,q)$. The quadratic space $(V,q)$ is called {\it regular} if $\operatorname{rad}(V)=0$. \\ The following theorem is analogous to the diagonalization of quadratic spaces over fields of characteristic different from $2$.\\ \begin{theorem}[\cite{Pfister}]\label{quad-form-decompo-U-and-rad(V)} Every quadratic space $(V,q)$ has an orthogonal decomposition $V = U \oplus \operatorname{rad}(V)$ such that $(U,q|_U)$ is regular and an orthogonal sum of $2$-dimensional regular quadratic spaces, and $(\operatorname{rad}(V), q|_{\operatorname{rad}(V)})$ is an orthogonal sum of $1$-dimensional quadratic spaces. \end{theorem} More explicitly, the orthogonal decomposition $V = U \oplus \operatorname{rad}(V)$ is such that there exists a basis $\{e_i, f_i, g_j : 1\leq i\leq r, 1\leq j\leq s\}$ of $V$, where $2r+s=\dim(V)$, and elements $a_i, b_i, c_j \in \F$, $1\leq i\leq r$, $1\leq j\leq s$, such that for all $\displaystyle v=\sum_{i = 1}^r (x_i e_i + y_i f_i)+\sum_{j = 1}^s z_j g_j,$ we have $$q(v)=\sum_{i = 1}^r (a_i x_{i}^{2}+x_i y_i+ b_i y_{i}^{2})+\sum_{j = 1}^s c_j z_{j}^{2}.$$ In this case we say that $[a_1,b_1]\bot \cdots \bot [a_r, b_r]\bot \langle c_1,\cdots,c_s \rangle$ is the {\it normalized form} of $q$ and $[a_1,b_1]\bot \cdots \bot [a_r, b_r]$ is the {\it regular part} of $q$. A quadratic form $q$ is said to be {\it non-singular} if $s=0$ and {\it singular} if $s\neq 0$. If, in addition, $r=0$ then $q$ is called {\it totally singular}. If $s>0$ then the regular part of a quadratic form is in general not determined uniquely up to isometry, whereas the part $\langle c_1,\dots,c_s\rangle$ is always determined uniquely up to isometry.
For example, $[1,1]\bot \langle 1 \rangle \cong [0,0]\bot \langle 1 \rangle$, but $[1,1]\cong [0,0]$ holds if and only if the quadratic equation $x^2+x+1=0$ has a solution in $\F$. \\ It follows immediately from the above theorem that every regular quadratic form over a field of characteristic two is even dimensional.\\ A quadratic form is said to be {\it isotropic} if there exists $0\neq v\in V$ such that $q(v)=0$; otherwise it is called {\it anisotropic}. The quadratic form $[0,0]$ is, up to isometry, the only regular $2$-dimensional isotropic quadratic form. It is called the {\it hyperbolic plane} and is denoted by $H$. A quadratic space is said to be a {\it hyperbolic space} if it is an orthogonal sum of hyperbolic planes. \\ The following result is an analogue in characteristic $2$ of the usual Witt decomposition in characteristic not equal to $2$. \begin{proposition}[\cite{HL}]\label{witt-decomposition-in-char-2} Let $q$ be a quadratic form over $\F$. Then $q = i\times H ~\bot~ q_r~\bot~ q_s ~\bot~ j\times \langle 0 \rangle$, where $q_r$ is non-singular, $q_s$ is totally singular and $q_r ~\bot~ q_s$ is anisotropic. The form $q_r~\bot~ q_s$ is uniquely determined up to isometry. \end{proposition} The anisotropic part $q_r ~\bot~ q_s$ of $q$ is denoted by $q_{an}$ for short. Two quadratic forms $q_1$ and $q_2$ are called {\it Witt equivalent} (denoted $q_1 \sim q_2$) if ${q_1}_{an}\cong {q_2}_{an}$. If $q_1$ and $q_2$ are non-singular then $q_1 \sim q_2$ if and only if $q_1 ~\bot~ -q_2$ is hyperbolic (note that $-q_2 = q_2$ in characteristic $2$). The set $W_q(\F)$ of Witt equivalence classes of regular quadratic forms over $\F$ forms an abelian group under the operation of orthogonal sum of quadratic spaces. It is called the {\it Witt group of quadratic forms}. \\ As all regular quadratic forms are of even dimension, the {\it dimension invariant} $e_0 : W_q(\F) \to \mbox{$\mathbb Z$}/2\mbox{$\mathbb Z$}$ given by $q \mapsto \dim(q)\bmod 2$ is trivial.
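The isometry claims above are finite statements over $\F_2$ and can be checked by brute force. The following Python sketch is an illustrative verification only: the substitution $z \mapsto x+y+z$ realizing the isometry $[1,1]\bot\langle 1\rangle \cong [0,0]\bot\langle 1\rangle$ over $\F_2$ is our own choice, not taken from the references.

```python
from itertools import product

F2 = (0, 1)

def q_reg_11(x, y):        # the form [1,1]: x^2 + xy + y^2
    return (x*x + x*y + y*y) % 2

def q_reg_00(x, y):        # the hyperbolic plane [0,0]: xy
    return (x*y) % 2

def isometric_2dim(q1, q2):
    """Search all invertible linear maps T of F_2^2 with q1 = q2 o T."""
    for a, b, c, d in product(F2, repeat=4):
        if (a*d - b*c) % 2 == 0:          # T must be invertible
            continue
        if all(q1(x, y) == q2((a*x + b*y) % 2, (c*x + d*y) % 2)
               for x, y in product(F2, repeat=2)):
            return True
    return False

# [1,1] and [0,0] are not isometric over F_2 (x^2+x+1 has no root there).
assert not isometric_2dim(q_reg_11, q_reg_00)

# Yet [1,1] | <1> = [0,0] | <1>: the substitution z -> x+y+z works,
# because (x+y+z)^2 = x^2 + y^2 + z^2 in characteristic 2.
for x, y, z in product(F2, repeat=3):
    lhs = (q_reg_11(x, y) + z*z) % 2                  # [1,1] | <1>
    rhs = (q_reg_00(x, y) + ((x+y+z) % 2)**2) % 2     # [0,0] | <1> after substitution
    assert lhs == rhs
```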
In the case $\operatorname{char}(\F) = 2$, the invariant at the next level is the Arf invariant, which plays the role of the discriminant of the case $\operatorname{char}(\F) \neq 2$. It was defined by Arf in his classical paper \cite{Arf}. Let $q: V \rightarrow \F$ be a regular $2n$-dimensional quadratic form. As $b_q$ is an alternating form, the space $(V, b_q)$ has a symplectic basis $\{e_i, f_i : 1\leq i\leq n\}$. Let $\wp(\F)=\{x^2+x : ~x\in \F\}$. The set $\wp(\F)$ is a subgroup of $(\F,+)$. In the quotient $\F/\wp(\F)$, the class of the element $\sum_{i = 1}^n q(e_i)q(f_i)$ is called the {\it Arf invariant of $q$} and is denoted by $\operatorname{Arf}(q)$. It is independent of the choice of symplectic basis (see \cite{Scharlau}). Moreover, it defines a homomorphism $\operatorname{Arf} : W_q(\F) \to \F/\wp(\F)$ on the Witt group of $\F$. More explicitly, if $q=[a_1,b_1]~\bot~ \cdots ~\bot~ [a_n,b_n]$ then $\operatorname{Arf}(q) = a_1b_1 + \cdots + a_nb_n \in \F/\wp(\F)$. \\ \section{Special $2$-groups and quadratic forms} Let $G$ be a finite group. Let $\Omega (G)=\langle x\in G~:~x^2=1\rangle$ and $G^{\prime} = [G, G]$, the derived subgroup of $G$. Let $Z(G)$ denote the center of $G$ and $\Phi(G)$ denote the Frattini subgroup of $G$. Recall that the {\it Frattini subgroup} of a non-trivial group is the intersection of its maximal subgroups; for a finite $2$-group these are precisely its subgroups of index $2$. We record that for a $2$-group $\Phi(G) =\langle x^2 : x\in G \rangle$. \\ A $2$-group $G$ is called a {\it special $2$-group} if $G$ is non-commutative and $$G^{\prime} = \Phi (G) = Z(G) = \Omega (Z(G)).$$ Moreover, if $G$ is a special $2$-group such that $|Z(G)|=2$ then $G$ is called an {\it extraspecial $2$-group}. \\ \begin{remark}\label{special-2-group-order-2-or-4} In a special $2$-group, every non-identity element has order $2$ or $4$. \\ \end{remark} From now onwards $\F_2$ will denote the field of two elements and $V$ and $W$ will denote vector spaces over $\F_2$.
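The independence of the Arf invariant from the choice of symplectic basis can be confirmed exhaustively for the two regular $2$-dimensional forms over $\F_2$; since $\wp(\F_2)=\{0\}$, the invariant takes values in $\F_2$ itself. The following Python sketch (an illustrative check, not needed for the sequel) computes $q(e)q(f)$ over all symplectic pairs $(e,f)$:

```python
from itertools import product

F2 = (0, 1)
vecs = list(product(F2, repeat=2))

def polar(q, u, v):
    # b_q(u, v) = q(u+v) + q(u) + q(v) over F_2
    s = tuple((a + b) % 2 for a, b in zip(u, v))
    return (q(*s) + q(*u) + q(*v)) % 2

def arf_values(q):
    """All values q(e)q(f) over symplectic pairs, i.e. pairs with b_q(e,f) = 1.
    A singleton set means the Arf invariant is well defined."""
    return {(q(*e) * q(*f)) % 2
            for e, f in product(vecs, repeat=2) if polar(q, e, f) == 1}

q00 = lambda x, y: (x * y) % 2               # [0,0]
q11 = lambda x, y: (x*x + x*y + y*y) % 2     # [1,1]

assert arf_values(q00) == {0}   # Arf([0,0]) = 0, independent of the basis
assert arf_values(q11) == {1}   # Arf([1,1]) = 1, independent of the basis
```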
A map $c: V \times V \rightarrow W$ is called a {\it normal $2$-cocycle} on $V$ with coefficients in $W$ if for all $v,v_1,v_2,v_3\in V$ it satisfies the following conditions: \begin{enumerate} \item[$i.$] $c(v_2,v_3)-c(v_1+v_2,v_3)+c(v_1,v_2+v_3)-c(v_1,v_2)=0$. \item[$ii.$] $c(v,0)=c(0,v)=0$. \\ \end{enumerate} We denote the set of normal $2$-cocycles on $V$ with coefficients in $W$ by $Z^2(V,W)$ and consider it as an abelian group under pointwise addition. Let $\lambda:V\rightarrow W$ be a map such that $\lambda(0)=0$. Then the map $c_{\lambda} : V \times V \to W$ defined by $c_{\lambda}(v_1,v_2)=\lambda(v_2)-\lambda(v_1+v_2)+\lambda(v_1)$ is a normal $2$-cocycle. Such $2$-cocycles are called {\it normal $2$-coboundaries} and their collection is denoted by $B^2(V,W)$. The set $B^2(V,W)$ forms a subgroup of $Z^2(V,W)$. The quotient $H^2(V,W)=\frac {Z^2(V,W)}{B^2(V,W)}$ is called the {\it second cohomology group of $V$ with coefficients in $W$}. \\ We consider $V$ and $W$ as groups under the operation of vector space addition. A short exact sequence of groups $1\rightarrow W\rightarrow G\rightarrow V\rightarrow 1$ is called a {\it central extension of $V$ by $W$} if the image of $W$ is contained in $Z(G)$. The set of isomorphism classes of central extensions of $V$ by $W$ is in one to one correspondence with $H^2(V,W)$ \cite[\S 6.6]{Weibelbook}. We record that the central extension of $V$ by $W$ corresponding to a cocycle class $[c]\in H^2(V,W)$ is isomorphic to the group $V\dot{\times} W$, where the underlying set of the group $V\dot{\times} W$ is just the cartesian product $V \times W$ and its group operation is defined by \begin{center} $(v,w)(v^{\prime},w^{\prime})=(v+v^{\prime}, c(v,v^{\prime})+w+w^{\prime})$ \end{center} for all $v, v^{\prime} \in V$ and $w, w^{\prime} \in W$. The identity element of this group is $(0,0)$ and the inverse of $(v,w)$ is $(v, c(v,v)+w)$.
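The multiplication rule above is easy to experiment with in code. In the following Python sketch we build $V\dot{\times}\F_2$ for two bilinear cocycles on $V=\F_2^2$; any bilinear map satisfies the normal $2$-cocycle conditions, and our particular choices of $c_1$ and $c_2$ are illustrative, picked so that $c_1(v,v)=xy$ and $c_2(v,v)=x^2+xy+y^2$. Counting involutions distinguishes the two non-abelian groups of order $8$ that arise, the dihedral group (five involutions) and the quaternion group (a single, central involution).

```python
from itertools import product

F2 = (0, 1)
V = list(product(F2, repeat=2))

def make_group(c):
    """Elements of V x. F_2 with (v,w)(v',w') = (v+v', c(v,v')+w+w')."""
    elems = [(v, w) for v in V for w in F2]
    def mul(g, h):
        (v, w), (vp, wp) = g, h
        return (tuple((a + b) % 2 for a, b in zip(v, vp)),
                (c(v, vp) + w + wp) % 2)
    return elems, mul

def involutions(elems, mul):
    e = ((0, 0), 0)
    return sum(1 for g in elems if g != e and mul(g, g) == e)

# Illustrative bilinear cocycles (every bilinear map is a normal 2-cocycle):
c1 = lambda v, vp: (v[1] * vp[0]) % 2                            # c1(v,v) = xy
c2 = lambda v, vp: (v[0]*vp[0] + v[1]*vp[0] + v[1]*vp[1]) % 2    # c2(v,v) = x^2+xy+y^2

g1 = make_group(c1)
g2 = make_group(c2)
assert involutions(*g1) == 5   # dihedral group of order 8
assert involutions(*g2) == 1   # quaternion group of order 8
```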
\\ A map $q : V \to W$ is called a {\it quadratic map} if $q(\alpha v) = \alpha^2 q(v)$ for all $v \in V$, $\alpha \in \F_2$ and the map $b_q : V \times V \to W$ defined by $b_q (v, w) = q(v+w)- q(v) - q(w)$ is bilinear. We denote by $\langle b_q(V \times V) \rangle$ the subgroup of $W$ generated by the image of $b_q$. Let $\operatorname{Quad}(V,W)$ denote the set of quadratic maps from $V$ to $W$. We consider it as a group under pointwise addition of maps. The following proposition gives a correspondence between $H^2(V,W)$ and $\operatorname{Quad}(V,W)$. \begin{proposition}[\cite{ObedPaper}, Prop. 1.2] \label{cohomology-and-quadratic-map-correspondance} The map $\varphi: Z^2(V,W) \rightarrow \operatorname{Quad}(V,W)$ which maps $c\in Z^2(V,W)$ to the quadratic map $q_c$ defined by $q_c(x)=c(x,x)$ induces a homomorphism from $H^2(V,W)$ to $\operatorname{Quad}(V,W)$. If the dimension of $V$ is finite then this homomorphism is an isomorphism. \end{proposition} If the dimension of $V$ is finite then the above proposition, together with the correspondence of elements of $H^2(V,W)$ with central extensions of $V$ by $W$, gives a useful correspondence between $\operatorname{Quad}(V,W)$ and central extensions of $V$ by $W$ \cite{ObedPaper}. \begin{theorem}[\cite{Obedthesis}, Th. 3.4.11] \label{quad-map-of-special-2-group} Let $G$ be a special $2$-group and $q:\frac{G}{Z(G)}\rightarrow Z(G)$ be the map given by $q(xZ(G))=x^2$ for all $x\in G$. Then $q$ is a quadratic map and $b_q(xZ(G),yZ(G))=xyx^{-1}y^{-1}$ for all $x,y\in G$. \end{theorem} Note that the quadratic map $q$ in the above theorem is regular and the image of $b_q$ generates $Z(G)$. This quadratic map $q$ is called the {\it quadratic map associated to the special $2$-group $G$}. \begin{theorem}[\cite{ObedPaper}, Th. 1.4] \label{special-2-group-of-a-quad-map} Let $q:V\rightarrow W$ be a regular quadratic map. Suppose that $ W=\langle b_q(V\times V)\rangle$.
Then there exists a special $2$-group $G$ associated with the quadratic map $q$ such that $W=Z(G)$ and $V=\frac{G}{Z(G)}$. Such a group is unique up to isomorphism. \end{theorem} The special $2$-group $G$ in the above theorem is called {\it the group associated to the regular quadratic map $q$}. We recall the definition of the central product of two groups. Let $G_1$ and $G_2$ be two groups, $Z(G_1)$ and $Z(G_2)$ be their centers and $\theta : Z(G_1) \to Z(G_2)$ be a group isomorphism. Let $N$ denote the normal subgroup $\{(x, y) \in Z(G_1) \times Z(G_2) : \theta(x)y = 1\}$ of $Z(G_1) \times Z(G_2)$. The {\it central product} of $G_1$ and $G_2$ is the quotient of the direct product $G_1 \times G_2$ by $N$. The following lemma relates orthogonal sums of quadratic maps to central products. \begin{lemma}\label{ortho-sum-corresponds-to-central-product} Let $V_1, V_2$ and $W$ be finite dimensional vector spaces over the field $\F_2$. Let $q_{1}:V_{1}\rightarrow W$ and $q_{2} : V_{2}\rightarrow W$ be regular quadratic maps associated to special $2$-groups $G_{1}$ and $G_{2}$, respectively. Then $q_{1}\perp q_{2} : V_1 \oplus V_2 \to W$ defined by $(q_1 \perp q_2)(v_1, v_2) = q_1(v_1) + q_2(v_2)$ is a regular quadratic map and the group associated to $q_{1}\perp q_{2}$ is $G_{1}\circ G_{2}$, where $\circ$ denotes the central product. \end{lemma} \begin{proof} Since the quadratic maps $q_{1}:V_{1}\rightarrow W$ and $q_{2} : V_{2}\rightarrow W$ are associated to special $2$-groups $G_1$ and $G_2$, respectively, we have $V_1=\frac{G_1}{Z(G_1)},~V_2=\frac{G_2}{Z(G_2)}$ and $W=Z(G_1)=Z(G_2)$. Let $c_1\in Z^2(V_1,W)$ and $c_2\in Z^2(V_2,W)$ be normal $2$-cocycles associated to the quadratic maps $q_1$ and $q_2$, respectively.\\ Let $q =q_1 \perp q_2$. Then $b_q((v_1,v_2),(v_1^{\prime},v_2^{\prime}))= b_{q_{1}}(v_1,v_1^{\prime})+b_{q_{2}}(v_2,v_2^{\prime})$ where $ (v_1, v_2)$ and $(v_1^{\prime},v_2^{\prime})\in V_1\oplus V_2$.
Let $c := c_1 \perp c_2 : (V_1\oplus V_2) \times (V_1 \oplus V_2) \to W$ be the map defined by $$c((v_1,v_2),(v_1^{\prime},v_2^{\prime}))= c_1(v_1,v_1^{\prime})+c_2(v_2,v_2^{\prime}).$$ It is straightforward to check that $c$ is a normal $2$-cocycle on $V_1 \oplus V_2$ with coefficients in $W$ and that the association $([c_1], [c_2]) \mapsto [c] \in H^2(V_1 \oplus V_2, W)$ is well-defined. The normal $2$-cocycle $c$ corresponds to the quadratic map $q$. The special $2$-group associated with $q$ is $G=(V_1\oplus V_2)\dot{\times}W$, with the group operation \begin{center} $((v_1,v_2),w)((v_1^{\prime}, v_2^{\prime}),w^{\prime})=((v_1+v_1^{\prime}, v_2+v_2^{\prime}), c((v_1, v_2), (v_1^{\prime},v_2^{\prime}))+w+w^{\prime})$. \end{center} We need to show that $G = G_1 \circ G_2$. By definition $G_1 \circ G_2$ is the quotient of $G_1 \times G_2$ by $\ker(f)$, where $f:W\times W\rightarrow W$ is the homomorphism defined by $f(w,w^{\prime})=w+w^{\prime}$; $w,w^{\prime}\in W$. Define $\phi :G_1\times G_2\rightarrow G$ by \begin{center} $\phi((v_1,w),(v_2,w^{\prime}))=((v_1,v_2),w+w^{\prime})$ \end{center} where $(v_1,w)\in G_1,(v_2,w^{\prime})\in G_2$. We notice that $\phi$ is a group homomorphism, as \begin{align*} &\quad \quad \phi(((v_1,w_1),(v_2,w_1^{\prime}))((v_1^{\prime},w_2),(v_2^{\prime},w_2^{\prime}))) \\ &=\phi((v_1,w_1)(v_1^{\prime},w_2),(v_2,w_1^{\prime})(v_2^{\prime},w_2^{\prime}))\\ &=\phi((v_1+v_1^{\prime},c_1(v_1,v_1^{\prime})+w_1+w_2),(v_2+v_2^{\prime},c_2(v_2,v_2^{\prime})+w_1^{\prime}+w_2^{\prime}))\\ &=((v_1+v_1^{\prime}, v_2+v_2^{\prime}), c_1(v_1,v_1^{\prime})+c_2(v_2,v_2^{\prime})+w_1+w_1^{\prime}+w_2+w_2^{\prime})\\ &=((v_1,v_2) + (v_1^{\prime},v_2^{\prime}), c((v_1,v_2),(v_1^{\prime},v_2^{\prime}))+w_1+w_1^{\prime}+w_2+w_2^{\prime})\\ &=((v_1,v_2),w_1+w_1^{\prime})((v_1^{\prime},v_2^{\prime}),w_2+w_2^{\prime})\\ &=\phi((v_1,w_1),(v_2,w_1^{\prime}))\phi((v_1^{\prime},w_2),(v_2^{\prime},w_2^{\prime})).
\end{align*} where $(v_1,w_1),(v_1^{\prime},w_2)\in G_1$ and $(v_2,w_1^{\prime}),(v_2^{\prime},w_2^{\prime})\in G_2$. The homomorphism $\phi$ is surjective because for an arbitrary $((v_1,v_2),w)\in G$ we have $\phi((v_1,0),(v_2,w))=((v_1,v_2),w)$. Now identifying $W\times W$ with $(0\dot{\times}W)\times (0\dot{\times}W)\subset G_1\times G_2$, it follows that $\ker{\phi}$ gets identified with $\ker{f}$ and we finally have $G \simeq \frac{G_1\times G_2}{\ker(\phi)} = \frac{G_1\times G_2}{\ker(f)} = G_1 \circ G_2$. \hfill $\square$ \\ \end{proof} We shall use this lemma while discussing the classification of extraspecial $2$-groups. The following theorem gives a characterization of strongly real special $2$-groups. \begin{theorem}\label{strongly-real-criterion} Let $q: V \to W$ be a regular quadratic map with $\langle b_q(V\times V)\rangle = W$ and $G$ be the special $2$-group associated with $q$ such that $V = \frac{G}{Z(G)}$ and $W = Z(G)$ (cf. Th. \ref{special-2-group-of-a-quad-map}). Then $G$ is strongly real if and only if for every nonzero $v\in V$ there exists $a\in V$ with $v \neq a$ and $q(a) = q(a - v) = 0$. \end{theorem} \begin{proof} We first suppose that $G$ is strongly real. Let $v \in V$ be non-zero and choose $x \in G$ with $\overline{x} = v$. Since $G$ is strongly real there exists $y \in G$ such that $o(y)=2$ and $yx^{-1}=xy$. We take $a = \overline{y}$. Then $(yx^{-1})^2 = yx^{-1}xy = y^{2} = e$, which translates to $q(a - v) = q(a) = 0$. \\ For the converse part, recall that $G = V\dot{\times} W$ where the group operation is defined by \begin{center} $(v,w)(v^{\prime} ,w^{\prime} )=(v+v^{\prime} ,c(v,v^{\prime})+w+w^{\prime})$\\ $(v,w)^{-1}=(v,c(v,v)+w)$ \end{center} where $[c] \in H^{2}(V,W)$ and $q(x)=c(x,x)$. Let $x = (v, w) \in V \dot{\times} W = G$. By hypothesis there exists $a \in V$ such that $q(a) = q(a-v) = 0$. We take $y = (a - v, 0) \in G$. Since $$(a - v, 0)(a - v, 0) = (2(a-v), c(a-v,a-v)) = (0, q(a-v)) = (0, 0),$$ it follows that $y^2 = 1$.
Moreover \begin{align*} (a-v, 0)(v,w)(a-v, 0) &=(2a - v, c(a-v,v)+c(a,a-v)+w)\\ &=(v, c(v,v) + c(a,a)+w) \\ &=(v, q(a) + c(v,v) + w) \\ &=(v, c(v,v) + w) = (v,w)^{-1}. \end{align*} Therefore $yxy = x^{-1}$. Further, since $y^2 = 1$, we conclude that $yxy^{-1} = x^{-1}$ and $G$ is strongly real. \hfill $\square$\\ \end{proof} In view of the above theorem, we remark that for a strongly real special $2$-group the associated quadratic map is always isotropic. However, the converse is not true. For example, consider the special $2$-group $G$ associated to the quadratic map $q(x,y,z)=(x^2+xy+y^2, xz)$. The quadratic map is isotropic because $q(0,0,1)=(0,0)$. But the group $G$ is not strongly real by the above theorem, because for $v=(1,1,1)$ there does not exist any $a$ such that $q(a)=q(a-v) = 0$. \section{Classification of extraspecial $2$-groups} The aim of this section is to classify extraspecial $2$-groups in terms of quadratic forms over $\F_2$ and to show that all extraspecial $2$-groups except $Q_2$ are strongly real. We start with two quick lemmas. \begin{lemma}\label{for-quad-form-extraspecial-group} Let $V$ be a vector space over $\F_2$ and $q : V\rightarrow \F_2$ be a regular quadratic form. Then the group associated to $q$ is an extraspecial $2$-group; conversely, the quadratic map associated to an extraspecial $2$-group is a regular quadratic form with values in $\F_2$. \end{lemma} \begin{proof} Recall that a special $2$-group $G$ is called an extraspecial $2$-group if $|Z(G)|=2$. The proof follows from Th. \ref{special-2-group-of-a-quad-map} and Th. \ref{quad-map-of-special-2-group}. \hfill $\square$ \end{proof} \begin{lemma}\label{order-of-extraspecial-group-is-odd-power-of-2} The order of an extraspecial $2$-group is $2^{2n+1}$ for some $n\in \N$. \end{lemma} \begin{proof} Recall that a regular quadratic form over a field of characteristic two is even dimensional (see \S 2, Th. \ref{quad-form-decompo-U-and-rad(V)}). If $G$ is an extraspecial $2$-group and $q: V\rightarrow \F_2$ is the regular quadratic form associated to it as in Th.
\ref{quad-map-of-special-2-group}, then $\dim_{\F_2}V=2n$ for some $n\in \N$. From \S 3, $G$ is in bijection with $V\times\F_2$. Hence $|G|=2^{2n}\times 2=2^{2n+1}$. \hfill $\square$\\ \end{proof} In the rest of this article, $D_4$ will denote the dihedral group of order $8$ and $Q_2$ will denote the quaternion group of order $8$. Presentations of these groups are \begin{center} $D_4=\langle a,b~ : ~a^4=b^2=1,bab^{-1}=a^{-1}\rangle$\\ $Q_2=\langle c,d~ : ~c^4=1,d^2=c^2,dcd^{-1}=c^{-1}\rangle.$ \end{center} \begin{proposition}\label{regular-forms-correspond-to Dihedral-and-Quaternion} Let $V$ be a two-dimensional vector space over $\F_2$ and $q:V\rightarrow \F_{2}$ be a regular quadratic form. Then the group associated to $q$ is either $Q_{2}$ or $D_{4}$. \end{proposition} \begin{proof} Up to isometry there are only two regular $2$-dimensional quadratic forms over $\F_{2}$. These are $q_1 = [0,0]$ and $q_2 = [1,1]$ (see \S 2). Recall that $q_{1}(x,y)=xy$ and $q_{2}(x,y)=x^{2}+xy+y^{2}$. We show that the extraspecial $2$-group corresponding to $[0,0]$ is $D_4$. From Th. \ref{special-2-group-of-a-quad-map} the group associated to $q_{1}:V \rightarrow \F_2$ is $V \dot{\times} \F_2$, where the multiplication is defined by \begin{center} $(v,\alpha) (v^{\prime},\alpha^{\prime})=(v+v^{\prime}, c_1(v, v^{\prime})+\alpha+\alpha^{\prime})$ \end{center} where $v,v^{\prime} \in V$, $\alpha,\alpha^{\prime}\in \F_2$ and $c_1\in Z^2(V,\F_2)$ is a normal $2$-cocycle such that $q_1(v)=c_1(v,v)$ for all $v\in V$. \\ Define $\psi:D_4\rightarrow V \dot{\times} \F_2$ by $\psi(a)=((1,1),1)$ and $\psi(b)=((1,0),1)$, where $a$ and $b$ are the generators of $D_4$ in the presentation above. We check that $\psi$ is an isomorphism of groups. Note that both $D_4$ and $V\dot{\times}\F_2$ are groups of order $8$. It is easy to see that the orders of $((1,1),1)$ and $((1,0),1)$ are $4$ and $2$, respectively.
Moreover, \begin{align*} ((1,0),1)((1,1),1)^{-1}&=((1,0),1)((1,1),c_1((1,1),(1,1))+1)\\ &= ((1,0),1)((1,1),q_1(1,1)+1)\\ &= ((1,0),1)((1,1),0)\\ &= ((0,1),c_1((1,0),(1,1))+1)\\ &= ((0,1),1)\\ &= ((1,1),1)((1,0),1). \end{align*} Hence $\psi$ is an isomorphism of groups. On similar lines it can be shown that the group associated with the quadratic form $q_{2} = [1,1]$ is $Q_{2}$. The isomorphism is given by $\psi^{\prime}: Q_2\rightarrow V\dot{\times}\F_2$, where $\psi^{\prime}(c)=((1,1),1)$, $\psi^{\prime}(d)=((1,0),1)$ and $c$, $d$ denote the generators of $Q_2$ in the presentation above. \hfill $\square$ \end{proof} \begin{proposition}\label{[0,0]+[0,0]=[1,1]+[1,1]} Let $q _{1}=[0,0]\bot [0,0]$ and $q_{2}=[1,1]\bot [1,1]$ be two quadratic forms over $\F_2$. Then $q_1$ is isometric to $q_2$. \end{proposition} \begin{proof} We have $q _{1}(w,x,y,z)=wx+yz$ and $q _{2}(w,x,y,z)=w^{2}+wx+x^{2}+y^{2}+yz+z^{2}$. The following change of variables in $q_{1}$ converts it to the form $q_2$: \begin{align*} w &\mapsto x+y+z \\ x &\mapsto w+y+z \\ y &\mapsto w+x+z \\ z &\mapsto w+x+y \end{align*} \hfill $\square$ \end{proof} \begin{proposition} \label{D-4-o-D-4-is-isometric-to-Q-2-o-Q-2} The group $D_4\circ D_4$ is isomorphic to $Q_2\circ Q_2$, where $\circ$ denotes the central product of groups. \end{proposition} \begin{proof} From Lemma \ref{ortho-sum-corresponds-to-central-product} and Prop. \ref{regular-forms-correspond-to Dihedral-and-Quaternion}, it follows that the quadratic form associated to $D_4\circ D_4$ is $q_1 = [0,0]\perp [0,0]$ and that the quadratic form associated to $Q_2\circ Q_2$ is $q_2 = [1,1]\perp [1,1]$. By Prop. \ref{[0,0]+[0,0]=[1,1]+[1,1]} the forms $q_1$ and $q_2$ are isometric, and now the result follows from Th. \ref{special-2-group-of-a-quad-map}.
\hfill $\square$ \end{proof} \begin{theorem}\label{classification-of-exrtraspecial-2-groups} For every $n\in \N$, there are exactly two extraspecial $2$-groups of order $2^{2n+1}$, namely $D_{4}\circ D_{4}\circ \cdots \circ D_{4}$ ($n$ copies of $D_4$) and $Q_{2}\circ D_{4}\circ \cdots\circ D_{4}$ ($Q_2$ and $n-1$ copies of $D_4$). \end{theorem} \begin{proof} Let $G$ be an extraspecial $2$-group and $q : V \to \F_2$ be the associated regular quadratic form. Since $q$ is regular, $\dim_{\F_2}(V)$ is even, say $\dim_{\F_2}(V) = 2n$. Writing $q$ as an orthogonal sum of two-dimensional regular spaces and using the isometry $[0,0]\perp [0,0] \simeq [1,1] \perp [1,1]$ (see Prop. \ref{[0,0]+[0,0]=[1,1]+[1,1]}) we conclude that either $q \simeq [0,0] \perp [0, 0] \perp \cdots \perp [0, 0]$ or $q \simeq [1,1] \perp [0,0] \perp \cdots \perp [0, 0]$. Thus, in view of Prop. \ref{regular-forms-correspond-to Dihedral-and-Quaternion}, the group corresponding to $q$ is either $D_{4}\circ D_{4}\circ \cdots \circ D_{4}$ ($n$ copies of $D_4$) or $Q_{2}\circ D_{4}\circ \cdots\circ D_{4}$ ($Q_2$ and $n-1$ copies of $D_4$). \hfill $\square$ \\ \end{proof} This completes the classification of extraspecial $2$-groups. Their classification is also given in \cite{GorensteinBook} using group theoretic methods. We learn from \cite{Wilson} that the classification of extraspecial $2$-groups using quadratic forms is known. However, we do not know of any reference where it is carried out in as much detail. \\ We shall denote the extraspecial $2$-group $D_{4}\circ D_{4}\circ \cdots \circ D_{4}$ ($n$ copies of $D_4$) by $D_4^{(n)}$ and the extraspecial $2$-group $Q_{2}\circ D_{4}\circ \cdots\circ D_{4}$ ($Q_2$ and $n-1$ copies of $D_4$) by $Q_2D_4^{(n-1)}$. We now study strong reality of these groups. \begin{lemma}\label{central-product-of-strongly-real-groups} The central product of two strongly real groups is a strongly real group. \end{lemma} \begin{proof} Central products are quotients of direct products.
The direct product of two strongly real groups is strongly real, and a quotient of a strongly real group is strongly real. Hence the result follows. \hfill $\square$ \end{proof} \begin{proposition}\label{extraspecial-are-strongly-real} All extraspecial $2$-groups except $Q_{2}$ are strongly real. \end{proposition} \begin{proof} It is easy to verify that the dihedral group $D_{4}$ is strongly real. By Lemma \ref{central-product-of-strongly-real-groups}, the groups $D_4^{(n)}$, $n \in \N$, are strongly real. To show that the groups $Q_2D_4^{(n-1)}$ are strongly real, by repeated use of Lemma \ref{central-product-of-strongly-real-groups} it is enough to show that $Q_{2}\circ D_{4}$ is strongly real. We shall obtain this from Th. \ref{strongly-real-criterion}. The quadratic form associated to $Q_{2}\circ D_{4}$ is $q = [1,1] \perp [0,0]$. As a map $q: V\rightarrow \F_2$ it is given by \begin{center} $q(w,x,y,z)=w^2+wx+x^2+yz.$ \end{center} To show that $Q_{2}\circ D_{4}$ is strongly real using the criterion of Th. \ref{strongly-real-criterion}, for each nonzero $v \in V$ we have to exhibit some $a \in V$ such that $q(a) = q(a - v) = 0$. The following table demonstrates that this is indeed possible. \begin{center} \begin{tabular}{|c|c|} \hline $v$ & $a$ \\ \hline $(0,0,0,1), (0,0,1,0), (1,1,1,1), (1,0,1,1), (0,1,1,1)$ & $(0,0,0,0)$ \\ $(0,0,1,1)$ & $(0,0,0,1)$ \\ $(1,0,0,0),(0,1,0,0),(1,1,1,0),(1,1,0,1)$ & $(1,1,1,1)$ \\ $(0,1,1,0),(0,1,0,1),(1,1,0,0)$ & $(0,1,1,1)$ \\ $(1,0,1,0),(1,0,0,1)$ & $(1,0,1,1)$ \\ \hline \end{tabular} \end{center} Therefore it follows that the groups $Q_2D_4^{(n-1)}$, $n \geq 2$, are strongly real. The only extraspecial $2$-group which is left out is $Q_2$, and it is not strongly real. This is because in $Q_2$ there is only one element of order $2$, which is central. \hfill $\square$\\ \end{proof} We know the group $Q_{2}$ is real (see, for example, \cite{Rose}, p.~304). Therefore Prop.
\ref{extraspecial-are-strongly-real} gives that all extraspecial 2-groups are real. \begin{comment} \section{Totally orthogonal special 2-group which is not strongly real} Let $A:=C_2\times C_2\times C_2$ be elementary abelian group generated by $\{a, b, c\}$ and $B:=D_4:=\{d,f|d^4=f^2=(fd)^2=1\}$ be dihedral group of order 8. Let $H:=A\times B$ and $C:=C_2=\langle l|l^2=1 \rangle$ br groups of order $64$ and $2$ respectively. Consider the automorphism $\phi$ of $A$ of order $2$. \begin{align*} \phi(a)=d^2a\\ \phi(b)=b\\ \phi(c)=c\\ \phi(d)=dbc\\ \phi(f)=bf \end{align*} There is a homomorphism of $C$ into $Aut(H)$ which maps $l$ to $\phi$. Form the corresponding semidirect product $G:=H\rtimes C$. The finite presentation of group G is given by One can check that $G$ is a special 2-group as G is non-commutative and $G^{\prime} = \Phi (G) = Z(G) = \Omega (Z(G))$, each of these subgroups of $G$ are elementary abelian groups of order $8$ and generated by $\{b,c,d^2\}$. According to proposition 3.4, we can associate a quadratic map with special 2-group G. For convenience, identify $Z(G)$ with $(\F_2)^3$ via map \begin{align*} d^2 &\mapsto (1,0,0)\\ b &\mapsto (0,1,0)\\ c &\mapsto (0,0,1) \end{align*} Also $\frac{G}{Z(G)}$ is elementary abelian group of order 16 generated by $\{a,d,f,l\}$, identify it with $(\F_2)^4$ via map \begin{align*} f &\mapsto (1,0,0,0)\\ l &\mapsto (0,0,1,0)\\ a &\mapsto (0,0,0,1)\\ d &\mapsto (1,1,0,0) \end{align*} Using above identification, the regular quadratic map associated with $G$ given by proposition 3.4 is $q(w,x,y,z)=(wx+yz,wy,xy)$ for $(w,x,y,z)\in (\F_2)^4$ \hfill (1) It is easy to check that for $(1,1,1,0)\in (\F_2)^4$ and $(1,1,1,1)\in (\F_2)^4$, there does not exist any element in $(\F_2)^4$ such that the criterion for strong reality of group $G$ given in proposition 3.5 is satisfied so $G$ is not strongly real. 
A Special 2-group $G$ is real if and only if for every $a\in \frac{G}{Z(G)}$ there exist $a^{\prime}\in \frac{G}{Z(G)}$ such that $q(a^{\prime})=q(a+a^{\prime})$, where $q$ is associated quadratic map of $G$ (see \cite{ObedPaper} Theorem 2.1). Since this criterion is satisfied for quadratic map (1) so special group $G$ is real. Next we will show that $G$ is totally orthogonal. It is known that number of linear characters of a group $G$ is equal to $|\frac{G}{G^{\prime}}|$ (\cite{JL}, Theorem 17.11) so group $G$ has $16$ linear characters. Since $G$ is real group so all linear characters are orthogonal. To compute the non linear characters of $G$ we will use the characters of normal subgroup $H$ of $G$ so first we compute characters of $H$. Let $\chi_n, ~0\leq n\leq 7$ be irreducible characters of $A$, where if $(ijk)$ is binary representation of $n$ then \begin{center} $\chi_n(a)=(-1)^i,~\chi_n(b)=(-1)^j,~\chi_n(c)=(-1)^k$ \end{center} Let $\psi_m,~1\leq m\leq 5$ are irreducible character of $B$, where \begin{align*} \psi_1(d)=1,~\psi_1(f)=1\\ \psi_2(d)=-1,~\psi_2(f)=1\\ \psi_3(d)=1,~\psi_3(f)=-1\\ \psi_4(d)=-1,~\psi_4(f)=-1 \end{align*} $\psi_5$ is irreducible character of $B$ of degree $2$ and $\psi_5(1)=2,~\psi_5(d^2)=-2,~\psi_5(d)=\psi_5(f)=\psi_5(fd)=0.$ By Theorem 19.18 (\cite{JL}), irreducible character of H are $\chi_n\times \psi_m,~0\leq n\leq 7,~1\leq m\leq 5$, we will induce irreducible characters of $G$(\cite{JL}, 21.13) using following result. $(\chi\uparrow G)$ denotes character of $G$ induced from character $\chi$ of $H$. \begin{proposition}(\cite{JL}, Proposition 21.23) Let $\psi$ be a character of the subgroup $H$ of $G$, and suppose that $x\in G$ and $x^G$ denotes conjugacy class of $x$ in $G$ . 
\begin{enumerate} \item If no element of $x^G$ lies in $H$, then $(\psi\uparrow G)(x)=0.$ \item If some element of $x^G$ lies in $H$, then \begin{center} $(\psi\uparrow G)(x)=|C_G(x)|(\frac{\psi(x_1)}{|C_H(x_1)|}+\cdots+\frac{\psi(x_m)}{|C_H(x_m)|}).$ \end{center} \end{enumerate} where $x_1,\cdots,x_m\in H$ and $H\cap x^G$ breaks up into $m$ conjugacy classes of $H$, with representatives $x_1,\cdots,x_m$. $|C_G(x)| $ and $|C_H(x)|$ denotes the size of centralizer of $x$ in $G$ and $H$ respectively. \end{proposition} There are $32$ conjugacy classes in $G$ with representatives $S_1\cup S_2 \cup S_3$. where $S_i~ 1\leq i \leq 3$ are as follows \begin{center} $S_1:=\{l^G,al^G,dl_G,fl^G,adl^G,afl^G,dfl^G,adfl^G\}$\\ $S_2:=\{1^G,(d^2)^G,bc^G,b^G,(bcd^2)^G,(bd^2)^G,c^G,(cd^2)^G\}$\\ $S_3:=\{a^G,abc^G,ab^G,ac^G,d^G,f^G,ad^G,af^G,df^G,bd^G,bcf^G,adf^G,abd^G,dbcf^G,bcdf^G,abcdf^G\}$ \end{center} If $x^G\in S_1$ then $H\cap x^G=\Phi$ so \begin{center} $[(\chi_n\times \psi_m)\uparrow G](x)=0$ for $x^G\in S_1$. \end{center} If $x^G\in S_2$ then $x^G$ is conjugacy class of order $1$ and contains only one conjugacy class of $H$, so \begin{align*} [(\chi_n\times \psi_m)\uparrow G](x)&=|C_G(x)|\frac{(\chi_n\times \psi_m)(x)}{|C_H(x)|}\\ &=2(\chi_n\times \psi_m)(x). \end{align*} for $x^G\in S_2$. We divide $S_3$ in two parts \begin{center} $S_{31}=\{a^G,abc^G,ab^G,ac^G\}$\\ $S_{32}=\{d^G,f^G,ad^G,af^G,df^G,bd^G,bcf^G,adf^G,abd^G,dbcf^G,bcdf^G,abcdf^G\}$ \end{center} $S_3$ contains conjugacy classes of $G$ which contains exactly two conjugacy classes of $H$. If $x^G\in S_3$ then $x^G=x^H\cup lxl^H$. Order of conjugacy classes in $S_{31}$ is $2$ and that of in $S_{32}$ is $4$. Using proposition 5.1(2), we have \begin{align*} [(\chi_n\times \psi_m)\uparrow G](x)&=|C_G(x)|(\frac{(\chi_n\times \psi_m)(x)}{|C_H(x_1)|}+ \frac{(\chi_n\times \psi_m)(lxl)}{|C_H(lxl)|})\\ &=(\chi_n\times \psi_m)(x)+(\chi_n\times \psi_m)(lxl) \end{align*} for $x^G\in S_3$. 
A character $\chi$ is irreducible if and only if $\langle \chi,\chi \rangle=1$ (\cite{JL}, Theorem 14.12), where \begin{center} $\langle \chi,\chi \rangle=\frac{1}{|G|}\sum_{g\in G}\chi(g)\chi(g^{-1})$ \end{center} Among $40$ induced characters, there are $16$ irreducible inequivalent characters of $G$, out of which $12$ has degree $2$ and $4$ characters have degree $4$. The set $T_1$ and $T_2$ contains $(n,m)$ such that character $[(\chi_n\times \psi_m)\uparrow G]$ is an irreducible character of degree $2$ and $4$ respectively. \begin{center} $T_1:=\{(1,1),(2,1),(3,1),(5,1),(6,1),(7,1),(2,2),(3,2),(7,2),(1,3),(5,3),(1,4)\}$\\ $T_2:=\{(0,5),(1,5),(2,5),(3,5)\}$ \end{center} Now we check the orthogonality of non linear irreducible characters of $G$. An irreducible character $\chi$ is orthogonal if and only if its Schur index $\iota(\chi)=1$)(see ??). Let $\chi$ be any irreducible character of $G$ then \begin{align*} \iota(\chi)&=\frac{1}{|G|}\sum_{g\in G}\chi{g^2}\\ &=\frac{1}{128}(56\chi(1)+24\chi(d^2)+8\chi(b)+8\chi(c)+8\chi(bc)+8\chi(cd^2)+8\chi(bcd^2)+8\chi(bd^2))\\ &=1 \end{align*} for all irreducible characters of $G$. Hence $G$ is totally orthogonal. \end{comment} \section{Examples} In this section we obtain examples of special $2$-groups which are strongly real but not totally orthogonal, and vice-versa. We first fix our notation in order to state a criterion for total orthogonality of special $2$-groups. \\ Let $q:V\rightarrow W$ be a quadratic map and $s\in \operatorname{Hom}_{\F_2}(W,\F_2)$. Then $s_*(q):=s\circ q:V\rightarrow \F_2$ is a quadratic form with polar form $b_{s_*(q)}:=s\circ b_q:V\times V\rightarrow \F_2$. The form $s_*(q)$ is called the {\it transfer of $q$ by $s$}. 
If the image of the radical $\operatorname{rad}(b_{s_*(q)})$ under $s_*(q)$ vanishes then $s_*(q)$ induces a regular quadratic form $q_s:V_s:=\frac{V}{\operatorname{rad}(b_{s_*(q)})}\rightarrow \F_2$ defined by $q_s(\epsilon_s(x))=s_*(q)(x)$ for every $x\in V$, where $\epsilon_s:V\rightarrow V_s$ is the canonical surjection. \begin{theorem}[\cite{ObedPaper}, Theorem 3.5]\label{totally-orthogonal-criterion} Let $G$ be a special $2$-group with associated quadratic map $q:V\rightarrow W$. Then the following are equivalent. \begin{enumerate} \item[$i.$] The group $G$ is totally orthogonal. \item[$ii.$] For all non-zero $s\in \operatorname{Hom}_{\F_2}(W,\F_2)$ the Arf invariant $\operatorname{Arf}(q_s)$ is trivial. \end{enumerate} \end{theorem} If $G$ is an extraspecial $2$-group with associated quadratic form $q$ then the only non-zero element of $\operatorname{Hom}_{\F_2}(W,\F_2)$ is the identity map, and by Th. \ref{totally-orthogonal-criterion} the group $G$ is totally orthogonal if and only if $\operatorname{Arf}(q)$ is trivial. \\ For all $n\in \N$, the extraspecial $2$-groups $D_4^{(n)}$ are totally orthogonal, because the Arf invariant of the quadratic form $q= [0,0] \perp [0,0] \perp \cdots \perp [0,0]$ associated with the group $D_4^{(n)}$ is trivial. The extraspecial $2$-groups $Q_2 D_4^{(n-1)}$ are not totally orthogonal, because the Arf invariant of the associated quadratic form $q=[1,1] \perp [0,0] \perp \cdots \perp [0,0]$ is not trivial. \\ To sum up, all extraspecial $2$-groups $Q_2 D_4^{(n-1)}$, $n\geq 2$, are examples of strongly real groups which are not totally orthogonal. In fact these groups have exactly one symplectic representation, which is of degree $2^n$ (see \cite{GorensteinBook}). Further, the least order of a strongly real finite group which is not totally orthogonal is $32$, and $Q_2 \circ D_4$ is the only group of order $32$ with this property. We have checked this using the computer algebra system GAP \cite{GAP4}.
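The two Arf computations just mentioned can be checked numerically via the "majority of zeros" characterization of the Arf invariant over $\F_2$: a regular form in $2n$ variables has trivial Arf invariant iff it vanishes on $2^{2n-1}+2^{n-1}$ vectors. A small Python illustration (ours, not from the paper), for $n=2$:

```python
from itertools import product

def arf(q, n):
    # Arf(q) = 0 iff a regular quadratic form on F2^(2n) takes the value 0
    # on the majority (2^(2n-1) + 2^(n-1)) of the 2^(2n) vectors
    zeros = sum(1 for v in product((0, 1), repeat=2 * n) if q(v) == 0)
    return 0 if zeros == 2 ** (2 * n - 1) + 2 ** (n - 1) else 1

# [a,b] denotes the plane a*x^2 + x*y + b*y^2 (x^2 = x over F2)
q_D4D4 = lambda v: (v[0] * v[1] + v[2] * v[3]) % 2                # [0,0] ⊥ [0,0]
q_Q2D4 = lambda v: (v[0] + v[0] * v[1] + v[1] + v[2] * v[3]) % 2  # [1,1] ⊥ [0,0]

assert arf(q_D4D4, 2) == 0   # the D4-type form: trivial Arf invariant
assert arf(q_Q2D4, 2) == 1   # the Q2-type form: nontrivial Arf invariant
```

The zero counts are $10$ and $6$ respectively, matching the claimed invariants.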
The next order in which an example of a strongly real group with symplectic representations occurs is $64$. \begin{example} Let $V$ and $W$ be vector spaces over the field $\F_2$ of dimensions $4$ and $2$ respectively, and let $q(w,x,y,z)=(z^2+wx+wz+xy,wy)$ be a regular quadratic map from $V$ to $W$. We show that the special $2$-group associated to $q$ is strongly real but not totally orthogonal. Let $s: W\rightarrow \F_2$ be the linear map given by $s(w_1,w_2)=w_1+w_2$ for $(w_1,w_2)\in W$. Since $s_*(q) : V \to \F_2$ is the regular quadratic form given by $s_*(q)(w,x,y,z)=z^2+wx+wz+xy+wy$, the quadratic forms $s_*(q)$ and $q_s$ are the same. The following change of variables converts $s_*(q)$ to the form $[1,1]\perp[0,0]$: \begin{align*} w &\mapsto w+x+z \\ x &\mapsto x+y \\ y &\mapsto y+w \\ z &\mapsto y+z \end{align*} The Arf invariant of $[1,1]\perp[0,0]$ is not trivial; hence by Th. \ref{totally-orthogonal-criterion} the special $2$-group $G$ associated with the quadratic map $q$ is not totally orthogonal. Using Th. \ref{strongly-real-criterion} we now show that the special $2$-group $G$ is strongly real. In the following table we give, for every nonzero $v\in V$, an $a\in V$ such that the criterion of Th. \ref{strongly-real-criterion} is satisfied. \begin{center} \begin{tabular}{|c|c|} \hline $v$ & $a$ \\ \hline $(1,0,0,0),(0,1,0,0),(0,0,1,0),(1,0,0,1),(0,1,1,1)$ & $(0,0,0,0)$ \\ $(0,0,0,1),(1,1,0,0),(1,0,1,0),(1,1,1,1)$ & $(1,0,0,0)$ \\ $(0,1,1,0),(0,1,0,1)$ & $(0,0,1,0)$\\ $(0,0,1,1)$ & $(0,1,0,0)$\\ $(1,1,1,0),(1,0,1,1),(1,1,0,1)$ & $(1,0,0,1)$ \\ \hline \end{tabular} \end{center} This is the only special $2$-group of order $64$ which is strongly real and not totally orthogonal. Another group of order $64$ which is strongly real and not totally orthogonal is $\mathcal G = \mu_2 \times( Q_2 \circ D_4)$, where $\mu_2$ is the group of order $2$; the group $\mathcal G$ is not special. We have checked using GAP \cite{GAP4} that $G$ and $\mathcal G$ are the only strongly real groups of order $64$ which are not totally orthogonal.
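Both claims in this example can be verified by brute force over the $16$ vectors of $V$. The Python sketch below (an independent check, not part of the paper) confirms that $s_*(q)$ has $6$ zeros, hence nontrivial Arf invariant (a regular form on $\F_2^4$ has $10$ zeros when the invariant is trivial), and that every nonzero $v$ admits an $a$ as in the table:

```python
from itertools import product

# q(w,x,y,z) = (z^2 + wx + wz + xy, wy); note z^2 = z over F2
q = lambda a: ((a[3] + a[0]*a[1] + a[0]*a[3] + a[1]*a[2]) % 2, (a[0]*a[2]) % 2)
s = lambda w: (w[0] + w[1]) % 2
sq = lambda a: s(q(a))                        # the transfer s_*(q)
add = lambda a, b: tuple((x + y) % 2 for x, y in zip(a, b))

V = list(product((0, 1), repeat=4))
# 6 zeros => Arf(s_*(q)) is nontrivial
assert sum(1 for v in V if sq(v) == 0) == 6
# strong reality: every nonzero v admits some a != v with q(a) = q(a - v) = 0
for v in V:
    if v != (0, 0, 0, 0):
        assert any(a != v and q(a) == (0, 0) == q(add(a, v)) for a in V)
```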
\\ \end{example} We now give an example of a special $2$-group which is totally orthogonal but not strongly real. By Th. \ref{strongly-real-criterion} and Th. \ref{totally-orthogonal-criterion} it is enough to find a quadratic map $q : V \to W$ between vector spaces over the field $\F_2$ such that the Arf invariant $\operatorname{Arf}(q_s)$ is trivial for all non-zero $s\in \operatorname{Hom}_{\F_2}(W,\F_2)$, while for some nonzero $v\in V$ there is no $a\in V$ with $q(a) = q(a - v) = 0$. We assert that such quadratic maps indeed exist. One such example is the following. \\ \begin{example} Let $V$ be a $4$-dimensional vector space over $\F_2$ with standard basis $$\{(1,0,0,0),(0,1,0,0),(0,0,1,0),(0,0,0,1)\}$$ and $W$ be a $3$-dimensional vector space over $\F_2$ with standard basis $$\{(1,0,0),(0,1,0),(0,0,1)\}.$$ \noindent Consider the quadratic map $q: V\rightarrow W$ given by \begin{align} \label{qform-example} q(w,x,y,z)=(wx+yz,wy,xy); \quad (w,x,y,z)\in V \end{align} \end{example} \noindent The polar form associated with $q$ is $b_q : V \times V \to W$, $$ b_q((w_1,x_1,y_1,z_1), (w_2,x_2,y_2,z_2))=(w_1x_2+x_1w_2+y_1z_2+z_1y_2,w_1y_2+y_1w_2,x_1y_2+y_1x_2), $$ where $(w_1,x_1,y_1,z_1), (w_2,x_2,y_2,z_2)\in V$. It is straightforward to check that $\operatorname{rad}(b_q)=0$ and $\langle b_q(V\times V)\rangle = W$. Hence by Th. \ref{special-2-group-of-a-quad-map} there exists a unique special $2$-group $G$ such that $W=Z(G)$ and $\frac{G}{Z(G)} = V$. The order of this group is $|V| \times |W| = 128$. We shall make explicit computations to show that $G$ is not strongly real but is totally orthogonal. For strong reality, take $v = (1,1,1,1) \in V$. We claim that for every $a \in V$ with $q(a) = 0$ we have $q(v - a) \neq 0$. We first identify all $a \in V$ such that $q(a) = 0$. Let $a = (w,x,y,z) \in V$ be one such vector.
Then $q(w,x,y,z) = 0$ implies $$wx + yz = wy = xy = 0.$$ If $y \neq 0$ then the above condition forces $x = w = z = 0$, which gives $a = (0, 0, 1, 0)$. If $y = 0$ then the above condition holds if and only if $wx=0$, i.e., $w = 0$ or $x = 0$. Therefore we conclude that $a \in \{(0,0,1,0),(0,0,0,0),(0,1,0,0), (1,0,0,0),(0,0,0,1),(0,1,0,1), (1,0,0,1)\}$. For each of these possibilities for $a$ we compute $q(a-v)$. \begin{center} \begin{tabular}{|c|c|c|} \hline $a$ & $a-v$ & $q(a-v)$ \\ \hline $(0,0,1,0)$ & $(1,1,0,1)$ & $(1,0,0)$ \\ $(0,0,0,0)$ & $(1,1,1,1)$ & $(0,1,1)$ \\ $(0,1,0,0)$ & $(1,0,1,1)$ & $(1,1,0)$ \\ $(1,0,0,0)$ & $(0,1,1,1)$ & $(1,0,1)$ \\ $(0,0,0,1)$ & $(1,1,1,0)$ & $(1,1,1)$ \\ $(0,1,0,1)$ & $(1,0,1,0)$ & $(0,1,0)$ \\ $(1,0,0,1)$ & $(0,1,1,0)$ & $(0,0,1)$ \\ \hline \end{tabular}\\ \end{center} This table confirms that for $v = (1,1,1,1)$ there is no $a \in V$ such that $q(a) = q(a-v) = 0$, and from Th. \ref{strongly-real-criterion} we conclude that $G$ is not strongly real. \\ Now we show that the special $2$-group $G$ associated to the quadratic map $q$ in (\ref{qform-example}) is totally orthogonal. Since $\dim_{\F_2} W=3$, there exist exactly $7$ non-zero $\F_2$-linear maps from $W$ to $\F_2$, namely $$s_n(x,y,z)=ix+jy+kz; \quad \quad(x,y,z)\in W,\ 1\leq n \leq 7,$$ where $n = 4i+2j+k$ is the binary expansion of $n \in \{1, 2, \cdots, 7\}$. The transfers of $q$ are \begin{align*} s_{1_*}(q)(w,x,y,z)&=xy\\ s_{2_*}(q)(w,x,y,z)&=wy\\ s_{3_*}(q)(w,x,y,z)&=wy+xy\\ s_{4_*}(q)(w,x,y,z)&=wx+yz\\ s_{5_*}(q)(w,x,y,z)&=wx+yz+xy\\ s_{6_*}(q)(w,x,y,z)&=wx+yz+wy\\ s_{7_*}(q)(w,x,y,z)&=wx+yz+wy+xy, \end{align*} where $(w,x,y,z)\in V$. By suitable linear changes of variables each of the above quadratic forms is isometric to either $q_1:V\rightarrow \F_2$ defined by $q_1(w,x,y,z)=wy$ or $q_2:V\rightarrow \F_2$ defined by $q_2(w,x,y,z)=wx+yz$.
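The computations of this example are again small enough to verify by brute force. The Python sketch below (an independent check, not part of the paper) enumerates the seven zeros of $q$, confirms that no $a$ works for $v=(1,1,1,1)$, and confirms that every transfer $s\circ q$ vanishes on a strict majority of $\F_2^4$, i.e. has trivial Arf invariant:

```python
from itertools import product

# q(w,x,y,z) = (wx + yz, wy, xy), the quadratic map of the example
q = lambda a: ((a[0]*a[1] + a[2]*a[3]) % 2, (a[0]*a[2]) % 2, (a[1]*a[2]) % 2)
add = lambda a, b: tuple((x + y) % 2 for x, y in zip(a, b))

V = list(product((0, 1), repeat=4))
zeros = [a for a in V if q(a) == (0, 0, 0)]
assert len(zeros) == 7                     # the seven vectors listed in the text

# not strongly real: no a works for v = (1,1,1,1)
v = (1, 1, 1, 1)
assert all(q(add(a, v)) != (0, 0, 0) for a in zeros)

# totally orthogonal: every transfer s∘q takes the value 0 on more than
# half of the 16 vectors, so its Arf invariant is trivial
for s in product((0, 1), repeat=3):
    if s != (0, 0, 0):
        n0 = sum(1 for a in V if sum(si * qi for si, qi in zip(s, q(a))) % 2 == 0)
        assert n0 > 8
```

The transfer zero counts are $10$ or $12$, consistent with the isometry types $wx+yz$ and $wy$ claimed in the text.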
\\ Now $\operatorname{rad}(b_{q_1}) = \langle (0,1,0,0),(0,0,0,1) \rangle$ and $\frac{V}{\operatorname{rad}(b_{q_1})}$ is spanned by the images of $(1,0,0,0)$ and $(0,0,1,0)$, so $q_1$ induces the regular quadratic form $q^{\prime}_1:\frac{V}{\operatorname{rad}(b_{q_1})}\rightarrow \F_2$ defined by $q^{\prime}_1(\alpha,\beta)=\alpha\beta$, where $(\alpha,\beta)\in \frac{V}{\operatorname{rad}(b_{q_1})}$. Thus $\operatorname{Arf}(q_1) = \operatorname{Arf}(q^{\prime}_1)=0$. Along similar lines, since $\operatorname{rad}(b_{q_2}) = 0$ the quadratic form $q_2$ is regular, and $\operatorname{Arf}(q_2)=0$. \\ As a consequence, for all non-zero $s \in \operatorname{Hom}_{\F_2}(W,\F_2)$ the Arf invariant of the transfer $q_s$ is trivial, and by Th. \ref{totally-orthogonal-criterion} the group $G$ is totally orthogonal. \\ \paragraph{\bf Remark} We remark that the smallest order for which there exists a totally orthogonal special $2$-group that is not strongly real is $128$. We have checked using GAP \cite{GAP4} that the smallest totally orthogonal group which is not strongly real is of order $64$; that group, though, is not a special $2$-group. \bibliographystyle{amsalpha}
\section{Introduction} Complex systems are a ubiquitous phenomenon in natural and scientific disciplines, and how relationships between parts give rise to the global behaviours of a system is a central theme in many areas of study, such as systems biology \cite{biology}, neural science \cite{brain}, and drug and material discovery \cite{drug} \cite{material}. Graph neural networks are a promising architecture for representation learning on graphs - the structural abstraction of complex systems. State-of-the-art performance is observed in various graph mining tasks \cite{GCN2,GCN5,graphSAGE,GNNpower,gat,WLneural,GNNreview, GNNreview2,GNNreview3}. However, due to the non-Euclidean nature of graphs, challenges still exist in graph classification. For example, to generate a fixed-dimensional graph-level representation, a GNN combines information from each node through \emph{graph pooling}. In combined form, a graph collapses into a ``super-node'', where the identities of the constituent sub-graphs and their inter-connections are mixed together. Is this the best way to generate graph-level features? From the complex-systems view, mixing all parts of a system can affect interpretability and model prediction, because the properties of a complex system arise largely from the \emph{interactions} among its components \citep{molecular,book_complex,book_complex2}. The choice of ``collapsing''-style graph pooling is deeply rooted in the lack of natural alignment among graphs that are not isomorphic; the pooling therefore sacrifices structural details for feature compatibility. In recent years, substructure patterns have drawn considerable attention in graph mining, such as motifs \citep{motif1,motif2,motif3,motif4} and graphlets \cite{fast-gkernel}. They provide an intermediate scale for structure comparison or counting, and have been considered in node embedding \cite{motif_embed}, deep graph kernels \cite{Deep-gkernel} and graph convolution \citep{GNNmotif1}.
However, due to their combinatorial nature, only substructures of very small sizes ($4$ or $5$ nodes) can be considered \cite{Deep-gkernel, motif3}, greatly limiting the coverage of structural variations; also, handling substructures as discrete objects makes it difficult to compensate for their similarities, at least computationally, and so the risk of overfitting may rise in supervised learning scenarios. These intrinsic difficulties are related to the concept of \emph{resolution} in graph-structured data processing. Resolution is the scale at which measurements are made and/or information processing algorithms are conducted. Here, we first define two relevant terms, the spatial resolution and the structural resolution, and discuss how they may affect the performance of graph classification. First, \emph{{spatial resolution}} is related to the geometrical scale of the ``elementary component'' of a graph on which an algorithm operates. It can range from nodes, to sub-graphs, to the entire graph. Graph details beyond the effective spatial resolution are algorithmically unidentifiable. For example, graph pooling compresses the whole graph into a single vector, and so the spatial resolution drops to the lowest level: node and edge identities are mixed together, and the subsequent classification layer can no longer exploit any substructure or their connections, but only a global aggregation. We call this \textbf{vanishing spatial resolution}. Insufficient spatial resolution may affect interpretability, and also the predictive power, since the global properties of a complex system arise largely from its inherent interactions \citep{molecular,book_complex,book_complex2}. Second, \emph{{structural resolution}} is the fineness level in differentiating between substructures. Substructures (or sub-graphs) shed light on functional organization and graph alignment.
However, they are treated in a discrete, over-delicate manner: in exact matching, two substructures are considered distinct even if they share significant similarity. We call this \textbf{exploding structural resolution}. It can lead to a risk of overfitting, similar to that observed in deep graph kernels \citep{Deep-gkernel} and dictionary learning \cite{adpt_size}. We believe that both {resolution dilemmas} originate from the way we perform profiling, identification, and alignment of substructures. Substructures are the building blocks of a graph; relations like interaction or alignment are all defined between substructures (of varying scales). However, exact substructure matching is too costly and prone to overfitting, leading to exploding structural resolution; meanwhile, graph alignment becomes infeasible when substructure matching is poorly defined, and so collapsing-style graph pooling becomes the norm, which finally leads to vanishing spatial resolution. \textbf{Our contribution}. In this paper, we propose a simple neural architecture called ``{S}tructural {L}andmarking and {I}nteraction {M}odelling'' - or SLIM, for inductive graph classification. The key idea is to embed substructure instances into a continuous metric space and learn structural landmarks there for explicit interaction modelling. The SLIM network can effectively resolve the resolution dilemmas. More importantly, by fully exploring the diverse structural distribution of the input graphs, any substructure instance, and even unseen examples, can be mapped parametrically to a common and optimizable structural landmark set. This enables a novel, \emph{identity-preserving graph pooling} paradigm, where the interacting relations between the constituent parts of a graph can be modelled explicitly, shedding important light on the functional organization of complex systems. The design philosophy of SLIM comes from the long-standing view of complex systems: complexity arises from interaction.
Therefore, explicit modelling of the parts and their interactions is key to explaining the complexity and improving the prediction. In contrast, graph neural networks are more about ``integration'', where delicate part-modelling like convolution does exist but is finally obscured in the pooling process. It turns out that, by respecting the structural organization of complex systems, SLIM is more interpretable and accurate, and provides new insights in graph representation learning. We discuss the resolution dilemmas and related works in Section~\ref{sec:2}. Sections~\ref{sec:3}, \ref{sec:theory} and \ref{sec:exp} cover the design, analysis, and performance of SLIM, respectively. The last section concludes the paper. \section{Resolution Dilemmas in Graph Classification} \label{sec:2} A complex system is composed of many parts that interact with each other in a non-simple way. Since graphs are structural abstractions of complex systems, accurate graph classification depends on how the global properties of a system relate to its structure. It is believed that the property (and complexity) of a complex system arises from the interactions among its components \cite{book_complex,book_complex2}; accurate interaction modelling should therefore benefit prediction. However, this is non-trivial due to resolution dilemmas. \subsection{Spatial Resolution Diminishes in Graph Pooling} Graph neural networks (GNNs) for graph classification typically have two stages: graph convolution and graph pooling \citep{graphSAGE,GNNpower}. The spatial resolutions of these two stages are significantly different. The goal of convolution is to pass messages among neighboring nodes, in the general form $h_v = \texttt{AGGREGATE}\left(\{h_u, u\in \mathcal{N}_v\}\right)$, where $\mathcal{N}_v$ denotes the neighbors of $v$ \cite{graphSAGE,GNNpower}.
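For concreteness, one aggregation round with a \texttt{MEAN} aggregator can be sketched in a few lines (a minimal illustration of the generic formula, not the architecture proposed in this paper):

```python
import numpy as np

def aggregate_mean(A, H):
    """One message-passing round, h_v = MEAN({h_u : u in N(v)}).

    A : (n, n) 0/1 adjacency matrix without self-loops
    H : (n, d) node feature matrix
    """
    deg = A.sum(axis=1, keepdims=True)
    return (A @ H) / np.maximum(deg, 1)   # guard against isolated nodes

# path graph 0-1-2: node 1 aggregates the mean of its two neighbors
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.array([[1.0], [2.0], [4.0]])
assert float(aggregate_mean(A, H)[1, 0]) == 2.5
```

Stacking several such rounds enlarges the receptive field, which is exactly how the number of layers controls the spatial resolution discussed next.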
Here, the spatial resolution is controlled by the number of convolution layers: more layers capture larger substructures/sub-trees and can lead to improved discriminative power \cite{GNNpower}. In other words, a medium resolution (substructure level) can be a more informative functional marker than a high resolution (node level). In practice, multiple resolutions can be combined via a \texttt{CONCATENATE} function \cite{graphSAGE,GNNpower} for subsequent processing. The goal of graph pooling is to generate compact, graph-level representations that are compatible across graphs. Due to the lack of natural alignment between graphs that are not isomorphic, graph pooling typically ``squeezes'' a graph $\mathcal{G}$ into a single vector (or ``super-node'') in the form $h_\mathcal{G} = \texttt{READOUT}\left(\{f(h_v), \forall v\in \mathcal{V}\}\right)$, where $\mathcal{V}$ is the node set of $\mathcal{G}$. Different readout functions have been proposed, including max-pooling \citep{max_pooling}, sum-pooling \cite{GNNpower}, various pooling functions (\texttt{MEAN}, \texttt{LSTM}, etc.) \cite{graphSAGE}, and deep sets \citep{deep_set}; attention has been used to evaluate node importance in attention pooling \citep{att_pool} and gPool \citep{unet}; besides, hierarchical differentiable pooling has also been investigated \citep{dif_pool}. An important resolution bottleneck occurs in graph pooling, as shown in Figure~\ref{fig:spa_res}. Since all the nodes are mixed into one, the subsequent classifier can no longer identify any individual substructure, nor their interactions, regardless of the resolution used in graph convolution. We call this ``diminishing spatial resolution'', which can be undesirable\footnote{Some works adopt different aggregation strategies: SortPooling arranges nodes in a linear chain and performs 1d-convolution \cite{DGCNN}; SEED uses the distribution of multiple random walks \cite{SEED}; deep graph kernels evaluate graph similarity by subgraph counts \cite{Deep-gkernel}.
Explicit modelling of the interaction between graph parts is not considered.} in that: (1) how much information from a well-designed convolution domain can penetrate through the pooling layer for final prediction is hard to analyze or control; (2) in molecule classification, graph labels hinge on functional modules and how they organize \cite{drug}; an overly coarse spatial resolution will mix up functional modules and conceal their interactions. \begin{figure}[htb] \begin{center} \includegraphics[height=1.8in,width=5.3in]{aresolution5.pdf} \caption{Spatial resolution vanishes after graph pooling. \small{(Note: not all nodes are marked with convolution - the shaded circles; see Appendix Sec~8.4 for more discussion on the relation with hierarchical processing.)}}\label{fig:spa_res} \end{center} \end{figure} Can meaningful spatial resolution(s) survive graph pooling? The answer is yes; it involves substructure alignment, and the notion of structural resolution, discussed below. \subsection{Structural Resolution Explodes in Substructure Identification} Substructures are the basic units that accommodate interacting relations. A global criterion to identify and align substructures is key to preserving substructure identities and comparing the inherent interactions across graphs. Again, the fineness level in determining whether two substructures are ``similar'' or ``different'' is subject to a wide spectrum of choices, which we call ``structural resolution''. \begin{figure}[htb] \begin{center} \includegraphics[height=1.75in,width=5.15in]{sresolution.pdf} \caption{How structural resolution may affect the generalization performance. Only small substructures are shown here for illustration; node types do make a difference in profiling the substructures. }\label{fig:str_res}\end{center} \end{figure} We illustrate this in Figure~\ref{fig:str_res}.
The right end denotes the finest resolution in differentiating between substructures: exact matching, as used when manipulating motifs/graphlets \cite{motif1,motif2,motif3,GNNmotif1,fast-gkernel}. The exponential number of sub-graph configurations will finally lead to an ``exploding'' structural resolution, because maintaining a large number of unique substructures is infeasible and easily overfits. The left end of the spectrum treats all substructures the same and underfits the data. We are interested in a medium structural resolution, where similar substructures are mapped to the same identity, which we believe can benefit the generalization performance (see Figure~\ref{fig:dummy} for empirical evidence). Theoretically, an over-delicate structural resolution corresponds to a highly ``coherent'' basis in representing a graph, leading to unidentifiable dictionary learning \cite{ERC,supervised_dic}. Structural landmarking is aimed exactly at controlling the structural resolution and improving incoherence for graph classification. \section{Structural Landmarking and Interaction Modelling (SLIM)} \label{sec:3} Considering the difficulty of manipulating substructures as discrete objects, we embed them in a continuous space, and transform all structure-related operations from discrete, off-the-shelf versions to continuous, optimizable counterparts. The key idea of SLIM is the identification of structural landmarks in this new space, via both unsupervised compression and supervised fine-tuning, through the distribution of embedded substructures under possibly multiple scales. Structural landmarking resolves the resolution dilemmas and allows explicit interaction modelling in graph classification. \textbf{Problem Setting}.
Given a set of labeled graphs $\{(\mathcal{G}_i, y_i)\}$ for $i = 1,2,..., n$, with each graph defined on the node/edge sets $\mathcal{G}_i = (\mathbf{V}_i,\mathbf{E}_i)$ with adjacency matrix $\mathbf{A}_i\in\mathbb{R}^{n_i\times n_i}$, where $n_i = |\mathbf{V}_i|$ and $y_i\in\{\pm 1\}$, assume that nodes are drawn from $c$ categories and that the node attribute matrix for $\mathcal{G}_i$ is $\mathbf{X}_i\in\mathbb{R}^{n_i\times c}$. Our goal is to train an inductive model to predict the labels of the testing graphs. \begin{figure}[htb] \begin{center}\vskip -2mm \includegraphics[width=1\textwidth]{SLIM1.pdf}\caption{The three main steps of the SLIM network illustrated in molecule graph classification. }\label{fig:slim} \end{center} \end{figure} The SLIM network has three main steps: (1) substructure embedding, (2) substructure landmarking, and (3) identity-preserving graph pooling, as shown in Figure~\ref{fig:slim}. Detailed discussion follows. \subsection{Substructure Embedding} The goal of substructure embedding is to extract substructure instances and embed them in a metric space. One can employ multiple layers of convolution \cite{graphSAGE,GNNpower} to model substructures (rooted sub-trees), or randomly sample sub-graphs \cite{fast-gkernel}. For convenience, we simply extract one sub-graph instance from each node using a $k$-hop breadth-first search, which controls the spatial resolution\footnote{When $k$ is large, one subgraph around each node may be unnecessary. See discussion in Appendix (Sec8.4). }. In Figure~\ref{fig:slim}, the sub-graph in the shaded circle around each atom is a substructure instance. Let $\mathbf{A}_i^{(k)}$ be the $k$th-order adjacency matrix, i.e., its $pq$th entry equals $1$ if and only if nodes $p$ and $q$ are within $k$ hops of each other.
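The $k$th-order adjacency matrix can be obtained from boolean powers of the adjacency matrix; a minimal sketch (whether the diagonal is kept is a convention we assume here, and this is illustrative rather than the paper's code):

```python
import numpy as np

def khop_adjacency(A, k):
    """A^(k): entry (p, q) is 1 iff nodes p and q are within k hops.

    Computed by raising (I OR A) to the k-th power and thresholding;
    the diagonal (each node reaches itself) is included by assumption."""
    step = np.eye(len(A), dtype=bool) | (A != 0)
    reach = np.linalg.matrix_power(step.astype(np.int64), k) > 0
    return reach.astype(int)

# path graph 0-1-2-3: nodes 0 and 2 become "adjacent" in A^(2)
A = np.diag([1, 1, 1], 1) + np.diag([1, 1, 1], -1)
assert khop_adjacency(A, 2)[0, 2] == 1 and khop_adjacency(A, 2)[0, 3] == 0
```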
Since each sub-graph is associated with one node, the sub-graphs extracted from $\mathcal{G}_i$ can be represented as $\mathbf{Z}_i = \mathbf{A}^{(k)}_i\mathbf{X}_i$, whose $j$th row is a $c$-dimensional vector summarizing the counts of the $c$ node-types in the sub-graph around the $j$th node. Variations include: (1) emphasizing the center node, $\mathbf{Z}_i = [\mathbf{X}_i;\; \mathbf{A}_i\mathbf{X}_i]$; (2) layer-wise node distributions, $\mathbf{Z}_i = [\tilde\mathbf{A}^{(1)}_i\mathbf{X}_i; \; \tilde\mathbf{A}^{(2)}_i\mathbf{X}_i;\;...\;\tilde\mathbf{A}^{(k)}_i\mathbf{X}_i]$, where $\tilde\mathbf{A}_i^{(k)}$ specifies whether two nodes in $\mathcal{G}_i$ are \emph{exactly} $k$-hops away; or (3) weighted layer-wise summation, $\mathbf{Z}_i = \sum_{k}\alpha_k\tilde\mathbf{A}^{(k)}_i\mathbf{X}_i$, where the $\alpha_k$'s are non-negative weights that decay with $k$. Next we consider embedding the substructure instances (i.e., the rows of the $\mathbf{Z}_i$'s) into a latent space so that statistical manipulations can better align with the prediction task. The embedding should preserve important proximity relations to facilitate subsequent landmarking: if two substructures are similar, or often inter-connect with each other, their embeddings should be close. In other words, the embedding should be smooth with regard to both structural similarity and geometric interaction. A parametric transform on the $\mathbf{Z}_i$'s with controlled complexity can guarantee the smoothness of the embedding w.r.t. structural similarity, e.g., an autoencoder $ f(\mathbf{Z}_i) = \sigma\left(\sigma\left(\mathbf{Z}_i \mathbf{T}_1 + \mathbf{b}_1\right)\mathbf{T}_2+\mathbf{b}_2\right) $. Let $\mathbf{H}_l=f(\mathbf{Z}_l)\in\mathbb{R}^{n_l\times d}$ be the embedding of the $n_l$ sub-graph instances extracted from $\mathcal{G}_l$. To maintain the smoothness of the $\mathbf{H}_l$'s w.r.t.
geometric interaction, we maximize the log-likelihood of the co-occurrence of substructure instances in each graph, similar to word2vec \citep{wordvec}: \begin{eqnarray} \max \sum_{l = 1}^n \sum_{i=1}^{n_l}\sum_{j\in \mathcal{N}^{l}_i} \log\left(\frac{\exp\langle \mathbf{H}_l(i,:),\mathbf{H}_{l}(j,:)\rangle}{\sum_{j'}\exp\langle \mathbf{H}_l(i,:),\mathbf{H}_l(j',:)\rangle}\right) \label{eq:los} \end{eqnarray} Here $\mathbf{H}_l(i,:)$ is the $i$th row of $\mathbf{H}_l$, $\langle\cdot,\cdot\rangle$ is the inner product, and $\mathcal{N}^l_i$ is the set of neighbors of node $i$ in graph $\mathcal{G}_l$. This loss function tends to embed strongly inter-connecting substructures close to each other. \subsection{Substructure Landmarking} The goal of structural landmarking is to identify a set of informative structural landmarks in the continuous embedding space with: (1) high statistical coverage, namely, the landmarks should faithfully recover the distribution of the substructures from the input graphs, so that we can generalize to new substructure examples from the distribution; and (2) high discriminative power, namely, the landmarks should be able to reflect discriminative interaction patterns for classification. Let $\mathbf{U} = \{ \boldsymbol\mu_1, \boldsymbol\mu_2,...,\boldsymbol\mu_K\}$ be the structural landmarks. In order for them to be representative of the substructure distribution, it is desirable that each sub-graph instance be faithfully approximated by the closest landmark. We minimize the following distortion loss: \begin{eqnarray}\label{eq:ldmk_loss1} \sum_{i = 1}^n\sum_{j = 1}^{n_i}\min_{k=1,2,...,K}\|\mathbf{H}_{i}(j,:)-\boldsymbol\mu_k\|^2. \end{eqnarray} Here $\mathbf{H}_{i}(j,:)$ denotes the $j$th row (substructure) of the embedding of graph $\mathcal{G}_i$.
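A literal hard-assignment implementation of this distortion loss is straightforward; the sketch below uses toy shapes and is illustrative only:

```python
import numpy as np

def distortion(H_list, U):
    """Distortion loss: for every embedded substructure, the squared distance
    to its nearest landmark, summed over all graphs.

    H_list : list of (n_i, d) embedding matrices, one per graph
    U      : (K, d) landmark matrix
    """
    total = 0.0
    for H in H_list:
        d2 = ((H[:, None, :] - U[None, :, :]) ** 2).sum(-1)   # (n_i, K) distances
        total += d2.min(axis=1).sum()                          # nearest landmark
    return total

U = np.array([[0.0, 0.0], [1.0, 1.0]])          # K = 2 landmarks
H_list = [np.array([[0.1, 0.0], [0.9, 1.0]])]   # one graph, two substructures
assert abs(distortion(H_list, U) - 0.02) < 1e-9
```

Each substructure here lies near a different landmark, so the loss is just the sum of the two small residuals; the soft-assignment version described next replaces the hard $\min$ with a Student's $t$ kernel.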
In practice, we implement a soft assignment by using a cluster indicator matrix $\mathbf{W}_i\in\mathbb{R}^{n_i\times K}$ for each graph $\mathcal{G}_i$, whose $jk$th entry is the probability that the $j$th substructure of $\mathcal{G}_i$ belongs to the $k$th landmark $\boldsymbol\mu_k$. Inspired by deep embedding clustering \citep{DEC}, $\mathbf{W}_i$ is parameterized by a Student's $t$-distribution \begin{eqnarray*} \mathbf{W}_i(j,k) = \frac{\left(1+\|\mathbf{H}_i(j,:)-\boldsymbol\mu_k\|^2/\alpha\right)^{-\frac{\alpha+1}{2}}}{\sum_{k'}\left(1+\|\mathbf{H}_i(j,:)-\boldsymbol\mu_{k'}\|^2/\alpha\right)^{-\frac{\alpha+1}{2}}}, \end{eqnarray*} and the loss function can be greatly simplified by minimizing the KL-divergence \begin{eqnarray}\label{eq:ldmk_loss2} \min_{\mathbf{U},\,\mathbf{H}_i\text{'s}} \sum_i \text{KL}\left(\tilde{\mathbf{W}}_i\,\|\,\mathbf{W}_i\right),\;\;\;\; \text{s.t.}\;\; \tilde{\mathbf{W}}_i(j,k) = \frac{\mathbf{W}_i^2(j,k)/\sum_l \mathbf{W}_i(l,k)}{\sum_{k'} \left[\mathbf{W}_i^2(j,k')/\sum_l \mathbf{W}_i(l,k')\right]}. \end{eqnarray} Here, $\tilde\mathbf{W}_i$ is a self-sharpening version of $\mathbf{W}_i$, and minimizing the KL-divergence forces each substructure instance to be assigned to only a small number of landmarks, similar to sparse dictionary learning. Besides the unsupervised regularization in (\ref{eq:ldmk_loss1}) or (\ref{eq:ldmk_loss2}), the learning of the structural landmarks is also driven by the classification loss, guaranteeing the discriminative power of the landmarks. \subsection{Identity-Preserving Graph Pooling} The goal of identity-preserving graph pooling is to project the structural details of each graph onto the common space of landmarks, so that a compatible, graph-level feature can be obtained that simultaneously preserves the identities of the parts (substructures) and models their interactions. The structural landmarking mechanism allows computing rich graph-level features. First, we can model substructure distributions.
The density of the $K$ substructure landmarks in graph $\mathcal{G}_i$ can be computed as $\mathbf{p}_i = \mathbf{W}_i'\cdot \textbf{1}_{n_i\times 1}$. Furthermore, the first-order moments of the substructures belonging to each of the $K$ landmarks in $\mathcal{G}_i$ are $\mathbf{M}_i = \mathbf{X}_i'\cdot\mathbf{W}_i\cdot \mathbf{P}_i^{-1}$, where $\mathbf{P}_i = \text{diag}(\mathbf{p}_i)$, and the $k$th column of $\mathbf{M}_i$ is the mean of $\mathcal{G}_i$'s substructure instances belonging to the $k$th landmark. Second, we can model how the $K$ landmarks interact with each other in graph $\mathcal{G}_i$. To do this, we project the adjacency matrices $\mathbf{A}_i$'s onto the landmark set and obtain an $\mathbb{R}^{K\times K}$ interaction matrix $\mathbf{C}_i = \mathbf{W}_i'\cdot\mathbf{A}_i\cdot\mathbf{W}_i$, which encodes the interacting relations (geometric connections) among the $K$ structural landmarks. These features can be combined for final classification. For example, they can be reshaped and concatenated to feed into the fully-connected layer. One can also resort to more intuitive constructions; for example, using the first-order and second-order features together, one can transform each graph $\mathcal{G}_i$ into a constant-sized ``landmark'' graph with node features $\mathbf{M}_i$, node weights $\mathbf{p}_i$, and edge weights $\mathbf{C}_i$. Then standard graph convolution can be applied on the landmark graphs to generate graph-level features (without the pains of graph alignment anymore). In the experiments, for simplicity, we compute the normalized interaction matrix $\tilde{\mathbf{C}}_i = \mathbf{P}_i^{-1}\mathbf{C}_i \mathbf{P}_i^{-1}$ and use it as the feature, which works well on all the benchmark datasets. More detailed discussion can be found in Appendix (Sec~8.4 \& 8.7). \section{Theoretic Analysis and Discussions}\label{sec:theory} We provide learning-theoretic support on the choice of structural resolution (landmark set size $K$).
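As a compact reference for the pooling features just described, $\mathbf{p}_i$, $\mathbf{M}_i$ and $\tilde{\mathbf{C}}_i$ amount to a few matrix products; the sketch below uses assumed shapes and forms the interaction as $\mathbf{W}_i^{\top}\mathbf{A}_i\mathbf{W}_i$ so that it is $K\times K$ (illustrative, not the authors' code):

```python
import numpy as np

def landmark_features(A, X, W):
    """Identity-preserving pooling onto K landmarks.

    A : (n, n) adjacency, X : (n, c) node features,
    W : (n, K) soft assignment of substructures to landmarks.
    Returns densities p = W'1, per-landmark means M = X'W P^{-1},
    and the normalized interaction matrix P^{-1} (W'AW) P^{-1}.
    """
    p = W.T @ np.ones(len(W))
    P_inv = np.diag(1.0 / p)
    M = X.T @ W @ P_inv
    C = W.T @ A @ W
    return p, M, P_inv @ C @ P_inv

# toy graph: one edge between two nodes, both assigned to a single landmark
A = np.array([[0.0, 1.0], [1.0, 0.0]])
X = np.array([[1.0, 0.0], [0.0, 1.0]])
W = np.array([[1.0], [1.0]])
p, M, C = landmark_features(A, X, W)
```

Whatever the size of the input graph, the outputs have fixed shapes $(K)$, $(c,K)$ and $(K,K)$, which is what makes the features compatible across graphs.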
Graphs are bags of inter-connected substructure instances, and each instance $\mathbf{z}$ can be represented by the landmarks as $\mathbf{z} = \sum_{k = 1}^K \alpha_k \boldsymbol\mu_k$. Too small a number of landmarks fails to recover basic data structures, whereas too many landmarks will result in overfitting (e.g. in exact substructure matching, where a maximal $K$ is used for reconstruction) \cite{adpt_size}. In dictionary learning, the mutual coherence is a crucial index for evaluating the redundancy of the code-vectors, defined as \begin{eqnarray}\label{eq:ERC} \mu(\mathbf{U}) = \max_{i\neq j} \left|\langle \boldsymbol\mu_i,\boldsymbol\mu_j\rangle\right|, \end{eqnarray} where $\langle \cdot,\cdot\rangle$ denotes the normalized correlation. A lower self-coherence permits better support recovery \citep{ERC}, while a large coherence leads to worse stability in both sparse coding and classification \cite{supervised_dic}. In particular, a faithful recovery of the sparse signal support is guaranteed only when \begin{eqnarray} \|\alpha\|_0 \leq \frac{1}{2}\left(1+\frac{1}{\mu(\mathbf{U})}\right).\label{eq:cond} \end{eqnarray} Obviously, a large $\mu(\mathbf{U})$ leads to unstable solutions. In the following, we quantify a lower bound of the coherence as a function of the landmark set size $K$ in clustering-based basis selection, since sparse coding and the $k$-means algorithm generate very similar code vectors \citep{clusteringdic}. \begin{theorem} The lower bound of the squared mutual coherence of the landmark vectors increases monotonically with $K$, the number of landmarks in clustering-based sparse dictionary learning.
\begin{eqnarray*} \mu^2(\mathbf{U}) &\geq& 1 - \frac{4C_dC_p}{u^2_{max}K^{\frac{1}{d}}} \left(\left\lfloor\left(\frac{K}{2}\right)^{\frac{1}{d}} \right\rfloor^{-1}+1\right) \end{eqnarray*} Here, $d$ is the dimension, $C_d = \frac{3}{2}\left(1 + {\log(d)}/{d}\right)\gamma_dV_d$, where $\gamma_d = 1 + d\log(d\log(d))$ and $V_d = 2\Gamma(\frac{1}{2})^d/d\Gamma(\frac{d}{2})$ is the volume of the $d$-dimensional unit ball; $u_{max}$ is the maximum $\ell_2$-norm of (a subset of) the landmark vectors $\boldsymbol\mu_k$'s, and $C_p$ is a factor depending on the data distribution $p(\cdot)$. \end{theorem} The proof is in Appendix (Sec~8.1). Theorem~1 says that as the landmark set size $K$ increases, the lower bound of the mutual coherence consistently increases, eventually violating the recovery condition (\ref{eq:cond}). In fact, a very high structural resolution (like exact matching) places a heavy burden on subsequent classifiers by failing to compensate for structural similarities. This justifies the SLIM network, where the landmark set size can be controlled conveniently to avoid unstable dictionary learning. \textbf{Discussions}. GNNs have shown great potential in the graph isomorphism test by generating injective graph embeddings, thanks to solid theoretic foundations \cite{GNNpower,WLneural}. However, accurate graph classification needs more thought: classification is not injective; besides, the quality of features is also of notable importance. SLIM provides new insight in both respects: (1) it finds a tradeoff in the duality of handling similarity and distinctness; (2) it explores new ways of generating graph-level features: instead of aggregating all parts together as in GNNs, it taps into the vision of complex systems, so that interaction between the parts is leveraged to explain the complexity and improve the learning. 
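The $K$-dependence of the bound in Theorem~1 can also be checked numerically. The sketch below treats $C_d$, $C_p$ and $u_{max}$ as fixed constants (an assumption made purely for illustration; only the shape of the curve in $K$ matters) and confirms that the bound is non-decreasing in the landmark size:

```python
import math

def coherence_lower_bound(K, d=2, C_d=1.0, C_p=1.0, u_max=1.0):
    """Lower bound on mu^2(U) from Theorem 1, with C_d, C_p, u_max treated
    as fixed (assumed) constants; only the dependence on K matters here."""
    inner = 1.0 / math.floor((K / 2.0) ** (1.0 / d)) + 1.0
    return 1.0 - (4.0 * C_d * C_p) / (u_max ** 2 * K ** (1.0 / d)) * inner

# the bound increases monotonically with the landmark size K
bounds = [coherence_lower_bound(K) for K in range(8, 200)]
```

Both factors of the subtracted term shrink as $K$ grows ($K^{-1/d}$ strictly, the floor term in steps), so the bound can only move up, matching the monotonicity claim.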
More discussions are in Appendix (Sec~8.2-8.8), including the choice of spatial/structural resolutions, interpretability, hierarchical and semi-supervised versions, and comparison with graph kernels \citep{graph_kernels}. \section{Experiments} \label{sec:exp} \textbf{Benchmark data}. We use a number of popular benchmark data sets for graph classification. (1) MUTAG: chemical compound data set with 188 instances and two classes; there are 7 node/atom types and 3 edge/bond types (bond types are ignored). (2) PROTEINS: protein molecule data set with 1113 instances and three classes; there are 3 node types (secondary structure elements). (3) NCI1: chemical compound data set for cancer cell lines with 4110 instances and two classes. (4) PTC: chemical compound data set for toxicology prediction with 417 instances and 8 classes. (5) D\&D: data set for enzyme classification with 1178 instances and two classes. \textbf{Competing methods}. We compare with a number of highly competitive methods proposed in recent years: (1) Graph neural tangent kernel (GNTK) \cite{GNTK}; (2) Graph Isomorphism Network (GIN) \cite{GNNpower}; (3) End-to-end graph classification (DGCNN) \cite{DGCNN}; (4) Hierarchical and differentiable pooling (DiffPool) \cite{dif_pool}; (5) Self-attention Pooling (SAG) \cite{att_pool}; (6) Convolutional network for graphs (PATCHY-SAN) \cite{10}; (7) Graphlet kernel (GK) \cite{GK}; (8) Weisfeiler-Lehman Graph Kernels (WLGK) \cite{WL_kernel}; (9) Propagation kernel (PK) \cite{PK}. For methods (4), (6), (7), (8), and (9) we directly cite their reported results (averaged 10-fold cross-validated error) due to the unavailability of their code; for the other competing methods we ran their code with default settings and report the performance. \textbf{Experimental setting}. 
We follow the experimental setting in \cite{GNNpower} and \cite{10} and perform 10-fold cross-validation; we report the average and standard deviation of validation accuracies across the 10 folds. In the SLIM network, the spatial resolution is controlled by a BFS with 3-hop neighbors, and the structural resolution is simply set to $K = 100$; the FC-layer has one hidden layer with dimension 64; cross-entropy is used for classification; the weights for the loss terms (\ref{eq:los}) and (\ref{eq:ldmk_loss2}) are set to 0.01. No drop-out or batch-normalization is used, considering the size of the benchmark data. The hyper-parameters tuned for each dataset include (1) the dimension of the autoencoder's single hidden layer, chosen from $\{d, d/2, 2d\}$; (2) the optimizer, chosen between SGD and Adagrad, with the learning rate in $\{1e-2, 5e-2, 1e-3, 5e-3, 1e-4\}$; (3) the local graph representation, including the node distribution $\mathbf{A}^{(k)}\mathbf{X}_i$, the layer-wise distribution, and the weighted layer-wise summation (see Sec~3.2 for details); (4) the number of epochs, i.e., the single epoch with the best cross-validated accuracy averaged over all 10 folds is selected. Overall, a minimal SLIM network is used in the experiments in order to test its performance. \begin{wrapfigure}[10]{r}{7cm} \centering \includegraphics[height=1.15in,width=2.4in]{K.pdf} \vskip -1mm \caption{Accuracy vs structural resolution $K$.}\label{fig:dummy} \end{wrapfigure} \textbf{Structural Resolution}. In Figure~\ref{fig:dummy}, we examine the performance of SLIM under different choices of the structural resolution (landmark set size $K$). As can be seen, the accuracy-vs-$K$ curve has a bell shape. When $K$ is either too small (underfitting) or too large (coherent landmarks that overfit), the accuracy is low, and the best performance is typically attained around a median $K$ value. 
This validates the correctness of Theorem~1 and the usefulness of structural landmarking in improving graph classification. \begin{figure}[ht] \centering \subfloat[NCI data.]{ \includegraphics[height=1.5in,width=2.1in]{NCI1.pdf} }\hskip 10mm \subfloat[MUTAG data.]{ \includegraphics[height=1.5in,width=2.1in]{MUTAG.pdf} }\\ \subfloat[Protein data.]{ \includegraphics[height=1.5in,width=2.1in]{protein.pdf} }\hskip 10mm \subfloat[D\&D data.]{ \includegraphics[height=1.5in,width=2.1in]{DD.pdf} } \caption{Testing accuracy of different algorithms over the training epochs. } \label{fig:acc} \end{figure} \textbf{Classification Performance}. We then compare the performance of different methods in Table~\ref{tbl:acc}. As can be seen, overall, neural network based approaches are more competitive than graph kernels, except that graph kernels have lower fluctuations, and the WL-graph kernel performs best on the NCI1 dataset. On most benchmark datasets, the SLIM network generates classification accuracies that are either higher than or at least as good as those of other GNN/graph-pooling schemes. 
\begin{table}\small \caption{Averaged prediction accuracy for different algorithms on 5 benchmark data-sets.} \centering \begin{tabular}{lllllll} \toprule Category & Algorithm & MUTAG & PTC & NCI1 & Protein & D\&D \\ \midrule &GK &81.38$\pm$1.74 & 55.65$\pm$0.46 & 62.49$\pm$0.27 & 71.39$\pm$0.31 & 74.38$\pm$0.69 \\ Graph &PK & 76.00$\pm$2.69 & 59.50$\pm$2.44 & 82.54$\pm$0.47 & 73.68$\pm$0.68 & 78.25$\pm$0.51\\ kernel & WLGK & 84.11$\pm$1.91 & 57.97$\pm$2.49& \textbf{84.46$\pm$0.45} & 74.68$\pm$0.49 & 78.34$\pm$0.62\\ \hline\hline &PATCHY-SAN& 92.63$\pm$4.21 & 60.00$\pm$4.82 & 78.59$\pm$1.89& 75.89$\pm$2.76 & 77.12$\pm$2.41\\ &DGCNN & 85.83$\pm$1.66 & 68.59$\pm$6.47& 74.46$\pm$0.47 & 75.54$\pm$0.94 & 79.37$\pm$1.03 \\ &DiffPool & 90.52$\pm$3.98 & -& 76.53$\pm$2.23 & 75.82$\pm$3.56 & 78.95$\pm$2.40\\ &GNTK & 90.12$\pm$8.58 & 67.92$\pm$6.98 & 75.20$\pm$1.53 &75.61$\pm$4.24 & \textbf{79.42$\pm$2.18}\\ GNN &SAG & 73.53$\pm$9.68 & 75.67$\pm$3.12 & 74.18$\pm$1.29& 71.86$\pm$0.97 & 76.91$\pm$2.12\\ &GIN & 90.03$\pm$8.82 & 76.25$\pm$2.83 & 79.84$\pm$4.57 & 71.28$\pm$2.65 & 77.58$\pm$2.94\\ & SLIM & \textbf{93.28$\pm$3.36} & \textbf{80.41$\pm$6.92}& 80.53$\pm$2.01& \textbf{77.47$\pm$4.34} & \textbf{79.48$\pm$2.66}\\ \bottomrule \end{tabular} \label{tbl:acc} \end{table} \textbf{Accuracy Evolution}. We also plot the evolution of the testing accuracy for different methods on the benchmark datasets, so as to obtain a more comprehensive evaluation of their performance. As can be seen from Figure~\ref{fig:acc}, our approach not only generates accurate classification on the benchmark datasets, but its accuracy also converges relatively faster and remains more stable with respect to the training epochs, making it easier to determine when to stop the training process. 
Other GNN algorithms can also attain a high accuracy on some of the benchmark datasets, but their prediction performance fluctuates significantly across the training epochs (even with large mini-batch sizes). We speculate that the stability of the SLIM network arises from the explicit modelling of the sub-structure distributions. It is also worth noting that on the MUTAG data the proposed method produces a classification with 100\% accuracy on more than half of the runs across different folds (Figure~\ref{fig:acc}(b)). This demonstrates the power of the SLIM network in capturing important graph-level features. \section{Conclusion} Graph neural networks represent the state-of-the-art computational architecture for graph mining. In this paper, we designed the SLIM network, which employs structural landmarking to resolve resolution dilemmas in graph classification and capture inherent interactions in graph-structured systems. We hope this attempt could open up possibilities in designing GNNs with informative structural priors.
\section{Introduction} \IEEEPARstart{W}{ith} the rapid development of the Internet of Things (IoT), more and more IoT devices access wireless networks to support diverse applications, e.g., smart city, intelligent transportation, smart industry and healthcare (eHealth) [1], [2]. It is predicted that the number of IoT devices will exceed 20 billion in 2020 and reach hundreds of billions in 2030 [1]. In this context, the upcoming fifth generation (5G) and beyond 5G (B5G) networks are required to provide seamless access and diverse services for massive IoT devices. Due to the massive access of a large number of devices over a limited radio spectrum, the deluge of spectrum access requests may lead to severe congestion with a low transmission success probability [1], [3]. Considering the explosive increase in the number of devices, it is essential to improve the access efficiency in 5G networks to accommodate massive access with various quality-of-service (QoS) guarantees. Among QoS guarantees in 5G networks, ultra-reliable and low latency communications (URLLC) is one of the most challenging services, with stringent low latency and high reliability requirements; e.g., in 3GPP, a general URLLC requirement for a one-way radio is 99.999\% target reliability within 1 ms latency [4]. Consequently, URLLC entails great difficulty for massive access in 5G and B5G wireless networks. To relieve the radio access network congestion resulting from massive access, one of the simplest spectrum access schemes, termed the random access procedure, has been widely investigated recently. So far, there has been much research on massive random access for massive machine-type communications (mMTC), IoT networks and machine-to-machine (M2M) communications [5]-[12]. The authors in [5] and [6] proposed contention-based random access models to enhance the access success probability and reduce the transmission delay. 
In [7], a two-stage random-access-based massive IoT uplink transmission protocol was presented to deal with the congestion caused by mMTC devices. Liu $et~al.$ in [8] investigated a priority-based multiple access protocol to ensure the fairness of different devices. Furthermore, grant-based and grant-free access are two common random access schemes which determine devices' access statuses after performing device detection and channel estimation [3], [9], [10], but the accurate channel state information (CSI) requirements for a massive number of devices may be impractical. Besides, several methods were presented to enhance the traditional random access performance, such as access class barring (ACB), slotted access, and backoff [11], [12]. For instance, in [12], an efficient random access procedure based on ACB was investigated to decrease the access delay and the power consumption when wireless networks suffer congestion resulting from massive access. The aforementioned random-access-based approaches [5]-[12] are simple and flexible, and can be applied to massive wireless connections without a central coordinator. However, these proposals achieve limited improvement and suffer high access failure rates, which remain the performance bottleneck for massive access. In particular, a high transmission success probability is not easily guaranteed when the devices have strict URLLC requirements. To satisfy the critical requirements of URLLC in massive IoT or mMTC, many studies have presented advanced spectrum access schemes [13]-[19]. Weerasinghe $et~al.$ proposed a priority-based massive access approach to support reliable and low latency access for mMTC devices [13], where devices are categorized into a number of groups with different priority access levels. 
A probability density function of the signal-to-noise ratio (SNR) was derived for a large number of uplink URLLC devices in [14], and numerical results verified that the presented model can satisfy the critical requirements of URLLC. Popovski $et~al.$ in [15] discussed the principles of wireless access for URLLC and provided a perspective on the relationship between latency, packet size and bandwidth. In [16] and [17], grant-free spectrum access was adopted to reduce the transmission latency as well as improve the spectrum utilization in URLLC scenarios. In [18] and [19], different resource management schemes were developed to show how to update the system parameters so as to meet the URLLC requirements in industrial IoT networks, since industrial automation requires strictly low latency and high reliability for manufacturing control. Nevertheless, only a few works, [13] and [14], investigated how to meet strict URLLC requirements in massive access scenarios, and the optimization objective of these two studies is just a single-time-slot optimization problem, where the massive access decision approaches may converge to a sub-optimal solution and obtain greedy-search-like performance because they ignore the historical network state and the long-term benefit. \subsection{Related Works} Recently, several emerging technologies of 5G, i.e., massive multiple-input multiple-output (MIMO), non-orthogonal multiple access (NOMA) and device-to-device (D2D) communications, have been applied to support massive connectivity over limited available radio resources. Chen $et~al.$ in [12] and [20] presented non-orthogonal communication frameworks based on massive NOMA to support massive connections, and the transmit power values were optimized to mitigate severe co-channel interference by using interference cancellation techniques [21]. In addition, an application-specific NOMA-based communication architecture was investigated for the future URLLC Tactile Internet [22]. 
In [14], [15], [23] and [24], the authors presented coordinated and uncoordinated access protocols to support massive connectivity in massive MIMO systems by exploiting the large spatial degrees of freedom to admit massive IoT devices. Specifically, the authors in [14] and [15] discussed that the massive MIMO system can act as a natural enabler for URLLC, since multiple antenna systems can support high capacity, spatial multiplexing and diversity links. Moreover, a potential solution to the massive access congestion problem is to offload the large amount of traffic onto D2D communication links [25], [26], which can directly reduce devices' energy consumption and transmission delay, as well as improve spectrum efficiency. D2D-based transmission protocols for supporting URLLC services were proposed in [27] and [28], where devices are classified into a number of groups based on their QoS requirements, i.e., stringent low latency and high reliability requirements, with radio resources allocated accordingly. In addition, energy efficiency (EE) plays an important role in green wireless networks, because most devices (e.g., sensors, actuators and wearable devices) are power constrained and energy consumption is massive and expensive in high-density device scenarios. In [29] and [30], the authors optimized the joint radio access and power resource allocation to maximize EE while guaranteeing the transmission delay requirements and transmit power constraints of a huge number of devices. To mitigate co-channel interference and further enhance the EE performance of NOMA-based systems with massive IoT devices, subchannel allocation and power control approaches were proposed in [19], [29], [31]. Furthermore, Miao $et~al.$ [32] proposed an energy-efficient clustering scheme to address the spectrum access problem for massive M2M communications. 
Although the authors in [29]-[32] mainly focused on EE-maximization-based massive access, the different QoS requirements (such as latency and reliability) of devices have not been well studied in massive access scenarios. Considering that intelligence is an important characteristic of future wireless networks, many studies have recently investigated the application of reinforcement learning (RL) in the field of massive access management [9], [23], [33]-[42]. Different distributed RL frameworks were proposed to address the massive access management problem under massive scale and stringent resource constraints [33], [34], where each device has the ability to intelligently make its informed transmission decision by itself without a central controller. The authors in [9] and [23] adopted sparse dictionary learning to facilitate massive connectivity for a massive-device multiple access communication system, and the learning structure does not need any prior knowledge of the active devices. Furthermore, delay-aware access control of massive random access for mMTC and M2M was studied in [33], [35] and [36], where spectrum access algorithms based on RL were proposed to determine the access decision with high successful connections and low network access latency. As future wireless networks are complex and large-scale, RL cannot effectively deal with the high-dimensional input state space; deep reinforcement learning (DRL), which combines deep learning with RL to learn from experience, was therefore developed to solve complex spectrum access decision-making tasks under large state spaces [37]-[42]. The authors in [37]-[39] proposed distributed dynamic spectrum access (DSA) approaches based on DRL to search for the optimal solution to the DSA problem under a large state space and local observation information. 
These distributed learning approaches are capable of encouraging devices to make spectrum access decisions according to their own observations without a central controller, and hence they have great potential for finding efficient solutions for real-time services. Hua $et~al.$ in [40] presented a network-powered deep distributional Q-network to allocate radio resources for diversified services in 5G networks. Moreover, Yu $et~al.$ in [41] investigated a DRL-based multiple access protocol to learn the optimal spectrum access policy while considering service fairness, and Mohammadi $et~al.$ in [42] employed a deep Q-network (DQN) algorithm for cognitive radio underlay DSA, which outperforms distributed multi-resource allocation. However, the above works [37]-[42] did not investigate how to address the massive access management problem in their DRL-based spectrum access approaches, and most of them did not incorporate stringent reliability and latency constraints into the optimization problem. \subsection{Contributions} Motivated by the above analysis and observations, in order to address the above-mentioned challenges in massive access for 5G and B5G wireless networks, this paper not only studies how to manage the massive access requests from a huge number of devices, but also takes various QoS requirements (ranging from strictly low latency and high reliability to a minimum data rate) into consideration. Besides, a novel distributed cooperative learning based QoS-aware massive access approach is presented to optimize the joint subchannel assignment and transmission power control strategy without a centralized controller. 
The main contributions of the paper are summarized as follows: \begin{itemize} \item We formulate a joint subchannel assignment and transmission power control problem for massive access considering different practical QoS requirements, and the energy-efficient massive access management problem is modelled as a multi-agent RL problem. Hence, each device has the ability to intelligently make its spectrum access decision according to its own instantaneous observations. \item A distributed cooperative subchannel assignment and transmission power control approach based on DRL is proposed, for the first time, to guarantee both the strict reliability and latency requirements of URLLC services in the massive access scenario, where the latency constraint is transformed into a data rate constraint to make the optimization problem tractable. Specifically, a proper QoS-aware reward function is built to incorporate both the network EE and the devices' QoS requirements into the learning process. \item In addition, we apply transfer learning and cooperative learning mechanisms to enable communication links to work cooperatively in a distributed manner, in order to improve the network performance and transmission success probability based on local observation information. In detail, in transfer learning, if a new device joins the network or applies a new service, or one communication link achieves poor performance (e.g., a low QoS satisfaction level or low convergence speed), it can directly search for an expert agent among its neighbors and utilize the learning model transferred from the expert agent instead of building a new learning model. In cooperative learning, devices are encouraged to share their selected actions with their neighbors and take turns making decisions, which can enhance the overall benefit by choosing the actions jointly instead of independently. 
\item Extensive simulation results are presented to verify the effectiveness of the proposed distributed cooperative learning approach in the massive access scenario, and to demonstrate the superiority of the proposed learning approach in terms of meeting the network EE and improving the transmission success probability compared with other existing approaches. \end{itemize} The rest of this paper is organized as follows. In Section II, the system model and problem formulation are provided. The massive access management problem is modelled as a Markov decision process in Section III. Section IV proposes a distributed cooperative multi-agent learning based massive access approach. Section V provides simulation results and Section VI concludes the paper. \section{System Model and Problem Formulation} \vspace{-2pt} \begin{figure} \centering \includegraphics[width=0.55\columnwidth]{figures/fig1.png} \vspace{-2pt} \caption{{\small System model of the massive-device network.} } \label{fig:Schematic} \vspace{-5pt} \end{figure} We consider a wireless network, as shown in Fig. 1, which consists of a base station (BS) at the center and a massive number of devices, each equipped with a single antenna. The devices are divided into two types: cellular devices (C-devices), which communicate with the BS over orthogonal spectrum subchannels, and D2D devices (D-devices), which establish D2D communication links when two of them want to communicate with each other and are close enough. In the network, D-devices can opportunistically access the subchannels of C-devices while ensuring that the interference generated from D2D pairs to C-devices does not violate the QoS requirements of the C-devices. We assume that each C-device can be allocated multiple subchannels, and each subchannel serves at most one C-device in one time slot. In addition, each D2D pair can share multiple subchannels of C-devices. 
Let $K$, $M$ and $N$ denote the number of C-devices, D2D pairs and subchannels, respectively. The corresponding sets of C-devices, D2D pairs and subchannels are denoted by ${\mathcal{K}} = \{ 1,2,...,K\} $, ${\mathcal{M}} = \{ 1,2,...,M\} $ and ${\mathcal{N}} = \{ 1,2,...,N\} $, respectively. Let $Z$ denote the total number of communication links, $Z = K + M$, and the corresponding communication link set is defined by ${\mathcal{Z}} = \{ 1,2,...,Z\} $. Denote by ${h_k}$ and ${h_m}$ the channel coefficients of the desired transmission links from the $k$-th C-device to the BS, and from the transmitter to the receiver of the $m$-th D2D pair, respectively. Denote by ${g_{k,m}}$, ${g_{m,B}}$ and ${g_{m',m}}$ the interference channel gains from the $k$-th C-device to the receiver of D2D pair $m$, from the transmitter of D2D pair $m$ to the BS, and from the transmitter of the $m'$-th D2D pair to the receiver of the $m$-th D2D pair, respectively. In the spectrum reusing case, C-devices suffer co-channel interference from the transmitters of the D2D pairs that share their subchannels. As a result, the received signal-to-interference-plus-noise ratio (SINR) at the BS for C-device $k$ on the $n$-th subchannel is expressed by \begin{equation} \begin{split} SINR_{k,n}^{\rm{c}} = \frac{{P_{k,n}^{\rm{c}}{h_k}}}{{\sum\nolimits_{m \in {\mathcal{M}}} {{\rho _{m,n}}P_{m,n}^{\rm{d}}{g_{m,B}}} + \delta _k^2}}, \end{split} \end{equation} where $P_{k,n}^{\rm{c}}$ and $P_{m,n}^{\rm{d}}$ denote the transmission power values of the $k$-th C-device and the $m$-th D2D pair's transmitter on the $n$-th subchannel, respectively. ${\rho _{m,n}}$ is the subchannel access indicator, ${\rho _{m,n}} \in \{ 0,1\} $; ${\rho _{m,n}} = 1$ indicates that the $m$-th D2D pair is assigned to the $n$-th subchannel; otherwise, ${\rho _{m,n}} = 0$. $\delta _k^2$ is the additive white Gaussian noise power. 
In (1), $\sum\nolimits_{m \in {\mathcal{M}}} {{\rho _{m,n}}P_{m,n}^{\rm{d}}{g_{m,B}}}$ is the co-channel interference. In addition, subchannel sharing also causes co-channel interference to the D2D pairs, namely the interference generated by the co-channel C-device and the co-channel D2D pairs on the same subchannel. Hence, the received SINR at the $m$-th D2D pair's receiver when it reuses the $n$-th subchannel of the $k$-th C-device is given by \begin{align} & SINR_{m,n}^{\rm{d}}\nonumber \\ & = \frac{{P_{m,n}^{\rm{d}}{h_m}}}{{P_{k,n}^{\rm{c}}{g_{k,m}} + \sum\limits_{m' \in \mathcal{M},m' \ne m} {{\rho _{m,m',n}}P_{m',n}^{\rm{d}}{g_{m',m}}} + \delta _m^2}} , \end{align} where ${\rho _{m,m',n}}$ is the subchannel access indicator, ${\rho _{m,m',n}} \in \{ 0,1\} $; ${\rho _{m,m',n}} = 1$ indicates that the $m$-th and the $m'$-th D2D pairs are assigned to the same $n$-th subchannel in one time slot; otherwise, ${\rho _{m,m',n}} = 0$. $\delta _m^2$ is the additive white Gaussian noise power. Then, the data rates of the $k$-th C-device and the $m$-th D2D pair on their assigned subchannels are respectively expressed by \begin{equation} \begin{split} R_k^{\rm{c}} = \sum\nolimits_{n \in {\mathcal{N}}} {{\rho _{k,n}}{{\log }_2}(1 + SINR_{k,n}^{\rm{c}})}, \end{split} \end{equation} and \begin{equation} \begin{split} R_m^{\rm{d}} = \sum\nolimits_{n \in {\mathcal{N}}} {{\rho _{m,n}}{{\log }_2}(1 + SINR_{m,n}^{\rm{d}})}, \end{split} \end{equation} where ${\rho _{k,n}}$ is the subchannel access indicator, defined analogously to ${\rho _{m,n}}$ and ${\rho _{m,m',n}}$ above. \subsection{Network Requirements} \emph{1) URLLC Requirements:} In 5G and B5G networks, different devices have different QoS requirements, i.e., some devices require ultra-high reliability, some need strictly low-latency services, and some have both stringent low latency and high reliability requirements. 
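The per-subchannel SINR in (1) and the achievable rates in (3)-(4) can be sketched numerically as follows. This is a simplified single-subchannel helper with assumed variable names, not a full system-level simulator:

```python
import numpy as np

def cellular_sinr(P_c, h_k, rho_d, P_d, g_mB, noise_power):
    """Received SINR at the BS for C-device k on subchannel n, Eq. (1):
    desired power over co-channel D2D interference plus noise.
    rho_d, P_d, g_mB : (M,) arrays of D2D access indicators rho_{m,n},
    transmit powers P^d_{m,n} and interference gains g_{m,B}."""
    interference = np.sum(rho_d * P_d * g_mB)
    return P_c * h_k / (interference + noise_power)

def link_rate(rho, sinr):
    """Data rate over the assigned subchannels, Eqs. (3)-(4):
    R = sum_n rho_n * log2(1 + SINR_n)."""
    return np.sum(rho * np.log2(1.0 + sinr))
```

With all D2D indicators set to zero, the SINR collapses to the interference-free ratio $P^{\rm c}_{k,n} h_k / \delta_k^2$, which is a convenient consistency check.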
For example, intelligent transportation and factory automation have stringent URLLC requirements for real-time safety information exchange or hazard monitoring, where the maximum latency is less than 5 ms (or even about 0.1 ms) and the transmission reliability needs to be higher than $1-10^{-5}$ (or even more stringent), but they do not need a high data rate. For the URLLC requirements, we assume that the packet arrival process of the $i$-th $(i \in {\mathcal{Z}})$ communication link is independent and identically distributed and follows a Poisson distribution with arrival rate ${\lambda _i}$ [11], [19]. Let $L_i^{{\rm{packet}}}$ denote the packet size in bits of the $i$-th communication link; it follows an exponential distribution with mean packet size $\bar L_i^{{\rm{packet}}}$. Generally, the total latency mainly includes the transmission delay (${T_{{\rm{tr}}}}$), the queuing delay (${T_{{\rm{qw}}}}$) and the processing/computing delay (${T_{{\rm{pc}}}}$), which can be expressed by [19] \begin{equation} \begin{split} {T_{{\rm{Latency}}}} = {T_{{\rm{tr}}}} + {T_{{\rm{qw}}}} + {T_{{\rm{pc}}}}. \end{split} \end{equation} In (5), the transmission delay of the packet $L_i^{{\rm{packet}}}$ is given by ${T_{{\rm{tr}}}} = L_i^{{\rm{packet}}}/(W\times{R_i})$, where $W$ is the bandwidth of each subchannel and ${R_i}$ is the data rate given in (3) or (4), respectively. Due to the low latency constraint, each packet must be successfully transmitted within a given time period. Let ${T_{\max }}$ denote the maximum tolerable latency threshold; the latency outage probability of URLLC can then be written as \begin{equation} \begin{split} p_i^{{\rm{Latency}}} = \Pr \{ {T_{{\rm{Latency}}}} > {T_{\max }}\} \le p_{\max }^{{\rm{Latency}}} \end{split}, \end{equation} where $p_{\max }^{{\rm{Latency}}}$ is the maximum latency violation probability. 
It is hard to directly calculate a device's packet latency in (5), and hence the outage probability constraint in (6) is difficult to evaluate. However, we can transform the latency constraint (6) into a data rate constraint by using max-plus queuing methods [43]. To guarantee the latency outage probability constraint in (6), the data rate $R_i^{{\rm{URLLC}}}$ of each URLLC service of the $i$-th communication link should satisfy \begin{equation} \begin{split} R_i^{{\rm{URLLC}}} \ge \frac{{\bar L_i^{{\rm{packet}}}}}{{W{T_{\max }}}}[{F_i} - {f_{ - 1}}(p_{\max }^{{\rm{Latency}}}{F_i}{e^{{F_i}}})] \buildrel \Delta \over = R_{i,\min }^{{\rm{URLLC}}} \end{split}, \end{equation} where ${f_{ - 1}}( \cdot ):[ - {e^{ - 1}},0) \to ( - \infty , - 1]$ denotes the lower branch of the Lambert function satisfying $y = {f_{ - 1}}(y{e^y})$ [43], ${F_i} = {\lambda _i}{T_{\max }}/(1 - {e^{{\lambda _i}{T_{\max }}}})$, and $R_{i,\min }^{{\rm{URLLC}}}$ is the minimum data rate that ensures the latency constraint in (6). The relevant proof of (7) can be found in [43, Th. 2]. If the transmission data rate is less than the minimum data rate threshold, in other words, if the transmission latency exceeds the maximum latency threshold, the current URLLC service fails and its packet transmission is stopped. In addition, the SINR value can be used to characterize the reliability of URLLC. In detail, the received SINR at the receiver should be above the minimum SINR threshold; otherwise, the received signal cannot be successfully demodulated. Hence, the outage probability in terms of SINR can be given by \begin{equation} \begin{split} p_{i,n}^{{\rm{outage}}} = \Pr \{ SIN{R_{i,n}} < SINR_{i,n}^{\min }\} \le p_{\max }^{{\rm{outage}}} \end{split}, \end{equation} where $SINR_{i,n}^{\min }$ denotes the minimum SINR threshold of communication link $i$ on the $n$-th subchannel and $p_{\max }^{{\rm{outage}}}$ denotes the maximum violation probability. 
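The rate floor in (7) only requires the lower Lambert branch $f_{-1}$, which can be evaluated by simple bisection since $w \mapsto we^w$ is monotone on $(-\infty,-1]$. The following is a hedged sketch with purely illustrative parameter values, not the paper's implementation:

```python
import math

def lambert_w_minus1(y, tol=1e-12):
    """Lower branch f_{-1} of the Lambert function on [-1/e, 0):
    the solution w <= -1 of w * exp(w) = y, found by bisection
    (w -> w*exp(w) is monotone decreasing on (-inf, -1])."""
    lo, hi = -700.0, -1.0          # f(lo) ~ 0-,  f(hi) = -1/e
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) > y:  # still above target: move left edge
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def urllc_min_rate(L_mean, W_hz, T_max, lam, p_max):
    """Minimum rate of Eq. (7) guaranteeing the latency outage constraint
    Pr{T > T_max} <= p_max for Poisson arrivals of rate lam."""
    F = lam * T_max / (1.0 - math.exp(lam * T_max))
    w = lambert_w_minus1(p_max * F * math.exp(F))
    return L_mean / (W_hz * T_max) * (F - w)
```

Because $F_i \in (-1,0)$, the argument $p_{\max}^{\rm Latency} F_i e^{F_i}$ always falls inside the domain $[-e^{-1},0)$ of the lower branch, so the bisection is well defined.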
\emph{2) Minimum Data Rate Requirements:} In addition to the high reliability and low latency requirements mentioned above, some C-devices and D2D pairs may have minimum data rate requirements. Let $R_{k,\min }^{\rm{c}}$ and $R_{m,\min }^{\rm{d}}$ denote the minimum data rate requirements of the $k$-th C-device and the $m$-th D2D pair, respectively. Then, the minimum data rate requirements are given by \begin{equation} \begin{split} R_k^{\rm{c}} \ge R_{k,\min }^{\rm{c}},\;\forall k;\;\; R_m^{\rm{d}} \ge R_{m,\min }^{\rm{d}},\;\forall m. \end{split} \end{equation} \subsection{Problem Formulation} The objective of this paper is to maximize the overall network EE (i.e., the ratio of the sum data rate to the total energy consumption) while guaranteeing the network requirements described in Section III.A. Then, the massive access management problem (joint subchannel access and transmission power control) is formulated as follows: \begin{equation} \begin{split} \begin{array}{l} \mathop {\max }\limits_{{\bm{\rho }},{\bm{P}}} \;\;\;{\eta _{EE}} = \frac{{\sum\nolimits_{k \in {\mathcal{K}}} {R_k^{\rm{c}}} + \sum\nolimits_{m \in {\mathcal{M}}} {R_m^{\rm{d}}} }}{{\sum\limits_{n \in {\mathcal{N}}} {\left( {\sum\limits_{k \in {\mathcal{K}}} {{\rho _{k,n}}P_{k,n}^{\rm{c}}} + \sum\limits_{m \in {\mathcal{M}}} {{\rho _{m,n}}P_{m,n}^{\rm{d}}} } \right)} + Z{P_{cir}}}}\\ s.t.\;\;({\rm{a}}):\;(7),\;(8),\;(9);\;\\ \;\;\;\;\;\;\;({\rm{b}}):\;{\rho _{k,n}} \in \{ 0,1\} ,\;\;{\rho _{m,n}} \in \{ 0,1\} ,\;\forall k,\;m,\;n;\\ \;\;\;\;\;\;\;({\rm{c}}):\;\sum\nolimits_{k \in {\mathcal{K}}} {{\rho _{n,k}}} \le 1,\,\,\forall n \in {\mathcal{N}};\;\\ \;\;\;\;\;\;\;({\rm{d}}):\;\sum\nolimits_{n \in {\mathcal{N}}} {{\rho _{k,n}}P_{k,n}^{\rm{c}}} \le P_{\max }^{\rm{c}},\;\forall k \in {\mathcal{K}};\\ \;\;\;\;\;\;\;({\rm{e}}):\;\sum\nolimits_{n \in {\mathcal{N}}} {{\rho _{m,n}}P_{m,n}^{\rm{d}}} \le P_{\max }^{\rm{d}},\;\forall m \in {\mathcal{M}}, \end{array} \end{split} \end{equation} where
${\bm{\rho }}$ and ${\bm{P}}$ denote the subchannel assignment and power control strategies, respectively. $P_{\max }^{\rm{c}}$ and $P_{\max }^{\rm{d}}$ denote the maximum transmission power values of each C-device and each D-device, respectively. ${P_{cir}}$ denotes the circuit power consumption of one communication link. Constraint (10c) guarantees that each subchannel is allocated to at most one C-device. Constraints (10d) and (10e) ensure that each device satisfies its transmission power budget. \section{Problem Transformation} Clearly, the optimization problem given in (10) is difficult to solve, as it is a non-convex, combinatorial and NP-hard problem. More importantly, (10) is a single-time-slot optimization problem, where the massive access decision is based only on the current state with a fixed objective function. Single-time-slot massive access decision approaches may converge to suboptimal solutions and obtain only greedy-search-like performance, because they ignore historical network states and long-term benefits. Hence, in this section, model-free RL, as a dynamic programming tool, is applied to address the decision-making problem by learning optimal solutions in a dynamic environment. Similar to most existing studies [32]-[42], we apply a Markov Decision Process (MDP) to model the massive access decision-making problem in the RL framework by transforming the optimization problem (10) into an MDP.
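Before moving to the MDP view, the EE objective of (10) can be sketched directly. The helper below is a hedged illustration with assumed list-of-lists layouts for $\bm{\rho}$ and $\bm{P}$ (rows are links, columns are subchannels); it is not the paper's implementation.

```python
def network_ee(rates_c, rates_d, rho_c, rho_d, P_c, P_d, p_cir):
    """Energy efficiency objective of (10): sum data rate over total power.

    rates_c / rates_d : per-link data rates of C-devices and D2D pairs.
    rho_*[i][n] in {0, 1} : subchannel assignment indicator.
    P_*[i][n]            : transmit power of link i on subchannel n.
    p_cir                : per-link circuit power (Z links in total).
    """
    sum_rate = sum(rates_c) + sum(rates_d)
    # Numerator of (10)'s denominator: only powers on assigned subchannels count.
    tx_power = sum(r * p for rho, P in ((rho_c, P_c), (rho_d, P_d))
                   for rho_i, P_i in zip(rho, P)
                   for r, p in zip(rho_i, P_i))
    z = len(rates_c) + len(rates_d)            # total number of links Z
    return sum_rate / (tx_power + z * p_cir)
```

For example, one C-device on subchannel 1 and one D2D pair on subchannel 2 with rates 2.0 and 1.0 bps/Hz, powers 0.5 W and 0.3 W, and 0.1 W circuit power per link give an EE of 3.0.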
In the MDP model, each communication link acts as an agent interacting with the outside environment, and the MDP model is defined as a tuple $({\mathcal{S}},{\mathcal{A}},{\mathcal{P}},r,\gamma )$, where ${\mathcal{S}}$ is the state space set, ${\mathcal{A}}$ denotes the action space set, ${\mathcal{P}}$ indicates the transition probability: ${\mathcal{P}}({s_{t + 1}}|{s_t},{a_t})$ is the probability of transferring from a current state ${s_t} \in {\mathcal{S}}$ to a new state ${s_{t + 1}} \in {\mathcal{S}}$ after taking an action ${a_t} \in {\mathcal{A}}$, $r$ denotes the immediate reward, and $\gamma \in (0,1)$ denotes the discount factor. The details of the MDP model for massive access management are presented as follows. \textbf{State:} In 5G and B5G networks, the network state is defined as $s = \{ {s_{{\rm{cha}}}},{s_{{\rm{cq}}}},{s_{{\rm{tr}}}},{s_{{\rm{QoS}}}}\} \in {\mathcal{S}}$, where ${s_{{\rm{cha}}}}$ indicates the subchannel working status (idle or busy); ${s_{{\rm{cq}}}}$ depicts the channel quality (i.e., SINR); ${s_{{\rm{tr}}}}$ is the traffic load of each packet; and ${s_{{\rm{QoS}}}}$ represents the QoS satisfaction level (the transmission success probability), such as the satisfaction levels of the minimum data rate, latency and reliability. \textbf{Action:} For the massive access management problem, each agent decides which subchannels to use and how much transmit power to allocate on the assigned subchannels. Hence, the action can be defined as $a = \{ {\rho _{{\rm{cha}}}},{P_{{\rm{pow}}}}\} \in {\mathcal{A}}$, which includes the subchannel assignment indicator (${\rho _{{\rm{cha}}}}$) and the transmission power (${P_{{\rm{pow}}}}$). At each time slot, the action of each device consists of a channel assignment indicator $ {\rho _{{\rm{cha}}}} \in \{0,1\}$ and a transmission power level $ {P_{{\rm{pow}}}} \in \{50, 150, 300, 500\}$ in mW, where the transmission power is discretized into four levels.
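The discrete per-device action set described above can be enumerated explicitly. This is a minimal sketch; in practice, actions with $\rho_{\rm cha}=0$ may be collapsed into one, since the power level is irrelevant when no subchannel is assigned.

```python
from itertools import product

# Per-device action set: subchannel indicator x power level (mW), as in the
# text; the joint action space over Z devices then grows as |A|^Z.
CHANNEL_INDICATOR = (0, 1)
POWER_LEVELS_MW = (50, 150, 300, 500)

ACTIONS = [(rho, p) for rho, p in product(CHANNEL_INDICATOR, POWER_LEVELS_MW)]
```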
We can observe that the action space of each device is not large, but the overall action space of all devices in the massive access scenario is. Hence, we keep the number of discrete transmission power levels as small as possible, which is why four power levels are chosen in this paper rather than a larger number. \textbf{Reward function:} In order to reflect the device experience that the network wants to optimize, RL requires designing a specific reward function, since the learning process is generally driven by the reward. In RL, each agent searches for its decision-making policy by maximizing its reward under the interaction with the environment. Hence, it is important to design an efficient reward function to improve the devices' service satisfaction levels. Here, let ${\mathcal{Z}}'$ denote the set of communication links in the URLLC scenario, where the devices have both reliability and latency requirements, and ${\mathcal{Z}}''$ denote the set of communication links in the normal scenario, where the devices have minimum data rate requirements. $|{\mathcal{Z}}'| = Z'$ and $|{\mathcal{Z}}''| = Z''$. Let $R_i^{{\rm{nor}}}$ and $R_{i,\min }^{{\rm{nor}}}$ denote the instantaneous data rate and the minimum data rate threshold in the normal scenario, respectively.
According to the optimization problem shown in (10) and considering the different QoS requirements, we design a new QoS-aware reward function for the massive access management problem, where the reward function of the $i$-th communication link includes the network EE, as well as the reliability, latency and minimum data rate requirements, and is expressed by \begin{equation} \begin{split} r_{i} = {{\eta _{i, EE}}} - {{c_1} {\chi _i^{{\rm{URLLC}}}} }- {{c_2} {\chi _i^{{\rm{nor}}}} }, \end{split} \end{equation} where \begin{equation} \begin{split} \chi _i^{{\rm{URLLC}}} = \left\{ \begin{array}{l} 1,\;\;{\rm{if}}\;\;(7)\;{\rm{or}}\;(8)\;{\rm{is}}\;{\rm{not}}\;{\rm{satisfied}},\;\\ 0,\;{\rm{otherwise}}{\rm{,}} \end{array} \right. \end{split} \end{equation} and \begin{equation} \begin{split} \chi _i^{{\rm{nor}}} = \left\{ \begin{array}{l} 1,\;\;{\rm{if}}\;R_i^{{\rm{nor}}} < R_{i,\min }^{{\rm{nor}}}\;,\;\\ 0,\;{\rm{otherwise}}{\rm{.}} \end{array} \right. \end{split} \end{equation} In (11), the first term indicates the immediate utility (network EE), while the second and third terms are the cost functions of transmission failures, defined as unsatisfied URLLC requirements and unsatisfied minimum data rate requirements, respectively. The parameters ${c_i}$, $i \in \{ 1,2\} $, are positive weights on the latter two terms in (11), adopted to balance utility and cost [19], [28], [39]. The objectives of (12) and (13) are to reflect the QoS satisfaction levels of the URLLC services and normal services, respectively. In detail, if the URLLC requirement of one packet is satisfied in the current time slot, then $\chi _i^{{\rm{URLLC}}} = 0$; if the minimum data rate is satisfied, then $\chi _i^{{\rm{nor}}} = 0$. This means that there is no cost or punishment in the reward for a successful transmission with QoS guarantees. Otherwise, $\chi _i^{{\rm{URLLC}}} = 1$ or $\chi _i^{{\rm{nor}}} = 1$.
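The reward in (11)-(13) can be sketched as below; the boolean flags standing in for the constraint checks of (7)-(9), and the default punishment factors, are assumptions made purely for illustration.

```python
def reward(ee, urllc_ok, rate_ok, c1=1.0, c2=1.0):
    """QoS-aware reward of (11)-(13): EE minus punishments for an
    unsatisfied URLLC constraint (7)/(8) or minimum-rate constraint.

    urllc_ok / rate_ok: whether the constraints hold in this time slot.
    c1, c2: positive punishment factors balancing utility and cost.
    """
    chi_urllc = 0 if urllc_ok else 1   # indicator of eq. (12)
    chi_nor = 0 if rate_ok else 1      # indicator of eq. (13)
    return ee - c1 * chi_urllc - c2 * chi_nor
```

With $c_1 \ne c_2$ the two failure modes discussed next yield different rewards, matching the role of the punishment factors in the text.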
The reward function shown in (11) may take the same value in different cases. For example, the following two cases may yield the same reward: Case I, the URLLC requirement is not satisfied while the minimum data rate requirement is satisfied, so ${\chi ^{{\rm{URLLC}}}} = 1$ and ${\chi ^{{\rm{nor}}}} = 0$; Case II, the URLLC requirement is satisfied while the minimum data rate requirement is not satisfied, so ${\chi ^{{\rm{URLLC}}}} = 0$ and ${\chi ^{{\rm{nor}}}} = 1$. These two cases have the same reward value, $r = {\eta _{EE}} - {c_1} \times 1 - {c_2} \times 0$ and $r = {\eta _{EE}} - {c_1} \times 0 - {c_2} \times 1$, when the punishment factors satisfy ${c_1} = {c_2}$. If the punishment factors ${c_1} \ne {c_2}$, the two cases have different reward values. We would like to mention that the values of the punishment factors ${c_1}$ and ${c_2}$ have important impacts on the reward function: if ${c_1} > {c_2}$, the URLLC requirement has a higher impact on the final reward value than the minimum data rate requirement; by contrast, if ${c_1} < {c_2}$, the minimum data rate requirement has a higher impact on the final reward value than the URLLC requirement. Furthermore, if ${c_1} = {c_2}$, both the URLLC requirement and the minimum data rate requirement have the same impact on the reward value. In RL, each agent in the MDP model tries to select a policy $\pi $ to maximize a discounted accumulative reward, where $\pi $ is a mapping from each state $s$ to a probability distribution over the actions that the agent can take: $\pi (s):{\mathcal{S}} \to {\mathcal{A}}$. The discounted accumulative reward is also called the state-value function for starting from the state $s$ with the current policy $\pi $, and it is defined by \begin{equation} \begin{split} {V^\pi }(s) = \mathbb{E}\left\{ {\sum\limits_{t = 1}^\infty {{\gamma ^t}{r_t}({s_t},{a_t})|} {s_0} = s,\pi } \right\}.
\end{split} \end{equation} The function ${V^\pi }(s)$ in (14) is usually applied to assess the quality of the selected policy $\pi $ when the agent selects the action $a$. The MDP model tries to find the optimal state-value function ${V^ * }(s)$, which is expressed by \begin{equation} \begin{split} {V^ * }(s) = \mathop {\max }\limits_\pi {V^\pi }(s). \end{split} \end{equation} Once ${V^ * }(s)$ is achieved, the optimal policy ${\pi ^ * }({s_t})$ under the current state ${s_t}$ is determined by \begin{equation} \begin{split} {\pi ^ * }({s_t}) = \arg \mathop {\max }\limits_{{a_t} \in {\mathcal{A}} } {\bar U_t}({s_t},{a_t}) + \sum\limits_{{s_{t + 1}}} {P({s_{t + 1}}|{s_t},{a_t})} {V^ * }({s_{t + 1}}), \end{split} \end{equation} where ${\bar U_t}({s_t},{a_t})$ denotes the expected reward obtained by selecting action ${a_t}$ at state ${s_t}$. To calculate ${V^ * }(s)$, iterative algorithms can be applied. However, it is difficult to obtain the transition probability $P({s_{t + 1}}|{s_t},{a_t})$ in practical environments, so RL algorithms, such as Q-learning, policy gradient and DQN, are widely employed to address MDP problems under environment uncertainty. In the Q-learning algorithm, the Q-function gives the accumulative reward for starting from a state $s$ by taking an action $a$ under the selected policy $\pi $, which can be given by \begin{equation} \begin{split} {Q^\pi }(s,a) = \mathbb{E}\left\{ {\sum\limits_{t = 1}^\infty {{\gamma ^t}{r_t}({s_t},{a_t})|} {s_0} = s,{a_0} = a,\pi } \right\}. \end{split} \end{equation} Similarly, the optimal Q-function is obtained by \begin{equation} \begin{split} {Q^ * }(s,a) = \mathop {\max }\limits_\pi {Q^\pi }(s,a).
\end{split} \end{equation} In the Q-learning algorithm, the Q-function is updated by \begin{equation} \begin{split} \begin{array}{l} {Q_{t + 1}}({s_t},{a_t}) = {Q_t}({s_t},{a_t})\\ + \alpha \left[ {{r_{t + 1}} + \gamma \mathop {\max }\limits_{{a_{t + 1}}} {Q_t}({s_{t + 1}},{a_{t + 1}}) - {Q_t}({s_t},{a_t})} \right], \end{array} \end{split} \end{equation} where $\alpha $ denotes the learning rate. When ${Q^ * }(s,a)$ is achieved, the optimal policy is determined by \begin{equation} \begin{split} {\pi ^ * }(s) = \arg \mathop {\max }\limits_{a \in A} {Q^ * }(s,a). \end{split} \end{equation} \section{Distributed Cooperative Multi-Agent RL Based Massive Access} Even though Q-learning is widely adopted to design resource management policies in wireless networks without knowing the transition probability in advance, it has some key limitations for application in large-scale 5G and B5G networks: Q-learning converges slowly in large state spaces, and it cannot deal with large continuous state-action spaces. Recently, DRL, which combines neural networks (NNs) with Q-learning and is called DQN, has demonstrated great potential; it can efficiently address the above problems and achieve better performance for the following reasons. Firstly, DQN adopts NNs to map the observed state to actions between different layers, instead of using storage memory to store the Q-values. Secondly, large-scale models can be represented from high-dimensional raw data by using NNs. Furthermore, by applying experience replay and the generalization capability brought by NNs, DQN can improve network performance. In 5G and B5G networks shown in Fig.
1, massive numbers of communication links aim to access the limited radio spectrum, which can be modelled as a multi-agent RL problem, where each communication link is regarded as a learning agent that interacts with the network environment to accumulate experience, and the learned experience is then utilized to optimize its own spectrum access strategy. Massive numbers of agents explore the outside network environment and search for spectrum access and power control strategies according to their observations of the network state. The proposed deep multi-agent RL based approach consists of two stages: a training stage and a distributed cooperative implementation stage. The main contributions of the proposed distributed cooperative multi-agent RL based approach for massive access are detailed in the following two subsections. \subsection{Training Stage of Multi-Agent RL for Massive Access} For the training stage, we adopt DQN with experience replay to train the multi-agent RL system for efficient learning of massive access policies. Fig. 2 illustrates the training process. All communication links are regarded as agents and the wireless network acts as the environment. Firstly, each agent observes its current state (e.g., subchannel status (busy or idle), channel quality, traffic load and QoS satisfaction levels) by interacting with the environment. Then, it makes a decision and chooses one action according to its learned policy. After that, the environment feeds back a new state and an immediate reward to each agent. Based on the feedback, all agents learn new policies in the next time step. The optimal parameters of the DQN can be trained over a sufficiently large number of time steps. In addition, the experience replay mechanism is adopted to improve the learning speed, efficiency and stability toward the optimal policy for massive access management.
The training data is stored in the storage memory, and a random mini-batch is sampled from the storage memory and used to optimize the weights of the DQN. \vspace{-2pt} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{figures/fig2.png} \vspace{-2pt} \caption{{\small DQN training based intelligent subchannel assignment and power control for massive access.} } \label{fig:Schematic} \vspace{-5pt} \end{figure} At each training or learning step, each DQN agent updates its weight $\bm{\theta} $ to minimize the loss function defined by \begin{equation} \begin{split} \begin{array}{l} Loss({{\bm{\theta} _t}}) = \\ {\left[ {{r_{t + 1}}({s_t},{a_t}) + \gamma \mathop {\max }\limits_{a' \in \mathcal{A}} {Q_t}({s_{t + 1}},a',{{\bm{\theta} _t}}) - {Q_t}({s_t},{a_t},{{\bm{\theta} _t}})} \right]^2}. \end{array} \end{split} \end{equation} One important reason for adopting DQN is that the loss function given in (21) can be updated at each training step, which decreases the computational complexity for large-scale learning problems [37]-[42]. The DQN weight $\bm{\theta} $ is obtained by using the gradient descent method, which can be expressed as \begin{equation} \begin{split} {\bm{\theta} _{t + 1}} = {\bm{\theta} _t} - \beta \nabla Loss({\bm{\theta} _t}), \end{split} \end{equation} where $\beta $ denotes the learning rate of the weight $\bm{\theta} $, and $\nabla ( \cdot )$ is the gradient operator. Then, each agent selects its action according to the selected policy $\pi {\rm{(}}{s_t},{\bm{\theta} _t}{\rm{)}}$, which is given by \begin{equation} \begin{split} \pi {\rm{(}}{s_t},{\bm{\theta} _t}{\rm{)}} = \arg \mathop {\max }\limits_{a \in {\mathcal{A}}} \left\{ {{Q_t}({s_t},{a_t},{\bm{\theta} _t})} \right\}. \end{split} \end{equation} Pseudocode for training the DQN is presented in \textbf{Algorithm 1}.
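The update in (21)-(22) can be sketched with a linear Q-function standing in for the neural network (an assumption made purely to keep the example self-contained); as is standard for DQN, the semi-gradient treats the bootstrapped target as a constant.

```python
def q_value(theta, feat):
    """Linear Q approximation: Q(s, a; theta) = theta . phi(s, a)."""
    return sum(t * f for t, f in zip(theta, feat))

def dqn_step(theta, feat_sa, reward, next_feats, gamma, beta):
    """One semi-gradient update of (21)-(22) for a linear Q-function:
    theta <- theta - beta * grad Loss(theta). Returns (new theta, loss)."""
    # Bootstrapped target y = r + gamma * max_a' Q(s', a'; theta), held fixed.
    target = reward + gamma * max(q_value(theta, f) for f in next_feats)
    td = q_value(theta, feat_sa) - target
    # grad of (Q - y)^2 w.r.t. theta is 2 * td * phi(s, a)
    new_theta = [t - beta * 2.0 * td * f for t, f in zip(theta, feat_sa)]
    return new_theta, td * td
```

Repeating the step on the same transition drives the loss down, mirroring the descent direction of (22).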
The communication environment contains both the C-devices and D-devices and their positions in the coverage area served by the BS, and the channel gains are generated based on these positions. Each agent has its own trained DQN model that takes the current observed state ${s_t}$ as input and outputs the Q-function used to select the action ${a_t}$. The training loop has a finite number of episodes ${N^{{\rm{epi}}}}$ (i.e., tasks), and each episode has $T$ training iterations. At each training step, after observing the current state ${s_t}$, all agents explore the state-action space by applying the $\varepsilon$-greedy method, where each action ${a_t}$ is selected at random with probability ${\varepsilon _t}$, while the action with the largest Q-value ${Q_t}({s_t},{a_t},{\bm{\theta} _t})$ is chosen with probability $1 - {\varepsilon _t}$. After executing ${a_t}$ (subchannel assignment and power control), agents receive an immediate reward ${r_t}$ and observe a new state ${s_{t + 1}}$ from the environment. Then, the experience ${e_t} = ({s_t},{a_t},r({s_t},{a_t}),{s_{t + 1}})$ is stored in the replay memory $D$. At each episode, a mini-batch of data from the memory is sampled to update the weight ${\bm{\theta} _t}$ of the DQN. \begin{algorithm}[t] \begin{small} \caption{\small DQN Training Stage of Subchannel Assignment and Power Control with Multi-Agent RL for Massive Access} 1: \textbf{Input:} DQN structure, environment simulator and QoS requirements of all devices (e.g., reliability, latency and minimum data rate).
\\ 2: \textbf{for} each episode $j$=1,2,..., ${N^{{\rm{epi}}}}$ \textbf{do} \\ 3: $~$ \textbf{Initialize:} Initial Q-networks for all agents (e.g., Q-function $Q(s,a)$, policy strategy $\pi (s,a)$, and weight $\bm{\theta }$) and experience replay $D$.\\ 4: $~~$ \textbf{for} each iteration step $t$=0,1,2,..., $T$ \textbf{do} \\ 5: $~~~$ Each agent observes its state ${s_t}$;\\ 6: $~~~$ Select a random action ${a_t}$ with the probability $\varepsilon $;\\ 7: $~~~$ Otherwise, choose the action ${a_t} = \arg \mathop {\max }\limits_{a \in {\mathcal{A}}} {Q_t}({s_t},{a_t},{\bm{\theta _t}})$;\\ 8: $~~~$ Execute action ${a_t}$, then obtain a reward ${r_t}$ by (11), and observe a new state ${s_{t + 1}}$; \\ 9: $~~~$ Save experience ${e_t} = ({s_t},{a_t},r({s_t},{a_t}),{s_{t + 1}})$ into the storage memory $D$; \\ 10: $~$ \textbf{end for}\\ 11: $~$ \textbf{for} each agent \textbf{do}\\ 12: $~~$ Sample a random mini-batch data ${e_t}$ from $D$;\\ 13: $~~$ Update the loss function by (21);\\ 14: $~~$ Perform a gradient descent step to update ${\bm{\theta} _{t + 1}}$ by (22);\\ 15: $~~$ Update the policy $\pi $ with maximum Q-value by (23), and choose an action based on $\pi $; \\ 16: $~$ \textbf{end for}\\ 17: \textbf{end for}\\ 18: \textbf{return:} Return trained DQN models. \\ \end{small} \label{alg_lirnn} \end{algorithm} \subsection{Distributed Cooperative Implementation of Multi-Agent RL for Massive Access} \vspace{-2pt} \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{figures/fig3.png} \vspace{-2pt} \caption{{\small Distributed cooperative multi-agent RL framework.} } \label{fig:Schematic} \vspace{-5pt} \end{figure} The computation-intensive training procedure of the DQN models described in Section IV.A can be completed offline at the BS, since the BS has the powerful computing capacity required to train large-scale models. After adequate training, the trained models are utilized for implementation.
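Steps 6-7 of Algorithm 1 correspond to the following $\varepsilon$-greedy rule, sketched here over a plain list of Q-values.

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Exploration rule of Algorithm 1 (steps 6-7): pick a uniformly random
    action index with probability epsilon, else the argmax of the Q-values."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

In practice $\varepsilon_t$ is decayed over training so that exploration gradually gives way to exploitation of the learned Q-values.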
In this subsection, we propose a distributed cooperative learning approach to optimize the network performance in the massive access scenario. During the distributed cooperative implementation stage, at each learning step, each communication link (agent) utilizes its local observation and information to choose its action with the maximum Q-value. In this case, each agent has no knowledge of the actions chosen by other agents if the actions are updated simultaneously, and newly joined agents need to train their own learning models at extra training time or cost. In order to address this issue, motivated by the concepts of transfer learning and cooperative learning, we present a distributed cooperative learning approach to improve the learning efficiency and enhance the service performance of each agent, where devices are encouraged to communicate and share their learned experiences and decisions within a small number of neighbors, and finally learn from each other, as shown in Fig. 3. \emph{1) Transfer Learning:} \textbf{(i) The Expert Agent Selection:} When a new device joins 5G and B5G networks, or a device requests a new communication service, instead of building a new learning model, it can communicate with neighboring devices to search for a suitable expert and utilize the expert's current learning model. In addition, if one communication link has poor performance (e.g., low convergence speed and poor QoS satisfaction levels) under its current learning strategy, it can select one neighboring communication link (agent) as the expert and then utilize the learned model or policy from the expert.
Generally, to find the expert, devices exchange the following metrics with their neighbors: \emph{a)} the type of device, e.g., C-device or D2D device; \emph{b)} the communication services, which mainly refer to URLLC services and normal services; \emph{c)} the related QoS parameters, such as the target thresholds of reliability and latency, and the minimum data rate. The similarity of the agents can be evaluated by adopting manifold learning based on the Bregman ball [19]. The Bregman ball is defined as the minimum manifold with a center ${\Theta _{{\rm{cen}}}}$ (the information of the learning agent, where information refers to the type of device, communication services, and QoS parameters mentioned above) and a radius ${\Psi _{{\rm{rad}}}}$. Any information point ${\Theta _{{\rm{poi}}}}$ (the information of a neighbor) inside this ball is a candidate, and the agent tries to find the information point with the highest similarity to ${\Theta _{{\rm{cen}}}}$. The ball is defined by \begin{equation} \begin{split} B({\Theta _{{\rm{cen}}}},{\Psi _{{\rm{rad}}}}) = \left\{ {{\Theta _{{\rm{poi}}}} \in \Theta :{\rm{Dis}}({\Theta _{{\rm{poi}}}},{\Theta _{{\rm{cen}}}}) \le {\Psi _{{\rm{rad}}}}} \right\}, \end{split} \end{equation} where ${\rm{Dis}}( \cdot , \cdot )$ denotes the distance between an information point and the center ${\Theta _{{\rm{cen}}}}$. After the expert agent with the highest similarity level (the smallest distance within the ball in (24)) is found, the learning agent can use the learned DQN model of the selected expert agent. \textbf{(ii) Learning from Expert Agent:} As analyzed above, after finding the expert agent, the learning agent uses the transferred DQN model ${Q^{{\rm{Transfer}}}}(s,a)$ from the expert agent and its current native DQN model ${Q^{{\rm{Current}}}}(s,a)$ to generate an overall DQN model.
Accordingly, the new Q-table of the learning agent can be expressed as \begin{equation} \begin{split} {Q^{{\rm{New}}}}(s,a) = \mu {Q^{{\rm{Transfer}}}}(s,a) + (1 - \mu ){Q^{{\rm{Current}}}}(s,a), \end{split} \end{equation} where $\mu \in [0,1]$ is the transfer rate, which is gradually decreased after each learning step to reduce the effect of the transferred DQN model from the expert agent on the new DQN model. In the distributed cooperative manner, the policy vector of all agents is updated as follows: \begin{equation} \begin{split} {{\bm{\pi }}_{t + 1}}{\rm{(}}{s_t}{\rm{)}} = \left[ \begin{array}{l} \pi _{t + 1}^1{\rm{(}}s_t^1{\rm{)}}\\ \;\;\;\;\;\;\; \vdots \\ \pi _{t + 1}^i{\rm{(}}s_t^i{\rm{)}}\\ \;\;\;\;\;\;\; \vdots \\ \pi _{t + 1}^Z{\rm{(}}s_t^Z{\rm{)}} \end{array} \right] = \left[ \begin{array}{l} \arg \mathop {\max }\limits_{{a^1} \in {{\mathcal{A}}^1}} \left\{ {Q_{t + 1}^1(s_t^1,a_t^1)} \right\}\\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \vdots \\ \arg \mathop {\max }\limits_{{a^i} \in {{\mathcal{A}}^i}} \left\{ {Q_{t + 1}^i(s_t^i,a_t^i)} \right\}\\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \vdots \\ \arg \mathop {\max }\limits_{{a^Z} \in {{\mathcal{A}}^Z}} \left\{ {Q_{t + 1}^Z(s_t^Z,a_t^Z)} \right\} \end{array} \right] \end{split} \end{equation} where $Q_{t + 1}^i(s_t^i,a_t^i)$ denotes the Q-function of the $i$-th agent (communication link) with its current state-action pair $(s_t^i,a_t^i)$ at the current time slot in its DQN model. When the state-action pairs are visited sufficiently many times, all Q-tables converge to the final point ${Q^ * }$.
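The expert search in (24) and the Q-table fusion in (25) can be sketched as follows; the squared Euclidean distance stands in for the divergence ${\rm Dis}(\cdot,\cdot)$, and the dictionary layouts are assumptions made for illustration.

```python
def select_expert(theta_cen, neighbors, radius):
    """Expert selection via (24): among neighbors whose information point
    lies inside the ball around theta_cen, return the closest one.
    A squared Euclidean distance stands in for the Bregman divergence."""
    def dist(point):
        return sum((a - b) ** 2 for a, b in zip(point, theta_cen))
    inside = [n for n in neighbors if dist(n["info"]) <= radius]
    return min(inside, key=lambda n: dist(n["info"])) if inside else None

def blend_q(q_transfer, q_current, mu):
    """Q-table fusion of (25): mu*Q_transfer + (1-mu)*Q_current, with the
    transfer rate mu decayed over time by the caller."""
    return {sa: mu * q_transfer[sa] + (1.0 - mu) * q_current[sa]
            for sa in q_current}
```

Returning `None` when no neighbor falls inside the ball models the case where the learning agent must keep training its own model from scratch.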
Hence, we can get the final learned policy as follows: \begin{equation} \begin{split} {{\bm{\pi }}^ * }{\rm{(}}s{\rm{)}} = \left[ \begin{array}{l} \arg \mathop {\max }\limits_{{a^1} \in {{\mathcal{A}}^1}} \left\{ {{Q^1}^ * ({s^1},{a^1})} \right\}\\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \vdots \\ \arg \mathop {\max }\limits_{{a^Z} \in {{\mathcal{A}}^Z}} \left\{ {{Q^Z}^ * ({s^Z},{a^Z})} \right\} \end{array} \right]. \end{split} \end{equation} \emph{2) Cooperative Learning} If each action is chosen independently according to local information, each communication link has no information about the actions selected by other communication links when the actions are updated simultaneously. Consequently, the states observed by each communication link may fail to fully characterize the environment. Hence, cooperation and decision sharing among agents in the proposed distributed learning approach can improve the network performance, where a small number of communication links share their actions with their neighbors. In the cooperative manner, the massive number of agents can be classified into $G$ groups, where the $g$-th group consists of ${L_g}$ agents and the agents in the same group are also neighboring agents. The group division can follow the principles studied in [13], [26] and [32]. In general, it is possible to approximate the sum utility of the $g$-th group, ${Q_g}({s_g},{a_g})$, by the sum of each agent's utility ${Q_{g,i}}({s_{g,i}},{a_{g,i}})$ in the same group, where $s_g$ and $a_g$ denote the entire state and action of the $g$-th group, respectively; ${s_{g,i}}$ and ${a_{g,i}}$ are the individual state and action of the $i$-th agent in the $g$-th group, respectively. Hence, the total utility in a small group $g$ can be calculated by \begin{equation} \begin{split} {Q_g}({s_g},{a_g}) = \sum\nolimits_{i = 1}^{{L_g}} {{Q_{g,i}}({s_{g,i}},{a_{g,i}})}.
\end{split} \end{equation} Then, the joint optimal policy learned in the $g$-th group can be expressed by \begin{equation} \begin{split} {\pi _g}{\rm{(}}{s_g}{\rm{)}} = \arg \mathop {\max }\limits_{{a_g} \in {{\mathcal{A}}_g}} \left\{ {{Q_g}({s_g},{a_g})} \right\}, \end{split} \end{equation} where ${{\mathcal{A}}_g}$ denotes the entire action space of the $g$-th group. In fact, cooperation can be realized by allowing communication links (agents) to share their selected actions with neighboring links and take turns making decisions, which can enhance the overall reward by choosing the actions jointly instead of independently. For example, in the fully distributed learning manner, spectrum access may run into collisions when links make their decisions independently and happen to select the same subchannel, leading to increased co-channel interference and reduced performance. By contrast, in the cooperative learning scenario, each communication link includes its neighbors' actions in its observation and tries to avoid assigning the same subchannel, in order to achieve more reward. The distributed cooperative implementation of multi-agent RL for massive access is shown in \textbf{Algorithm 2}. Generally, at each time step, after observing the states (subchannel occupation status, channel quality, traffic load, QoS satisfaction level, etc.) from the environment, the actions (massive subchannel assignment and power control) of the communication links are selected with the maximum Q-value by loading the trained DQN models obtained from \textbf{Algorithm 1}. As mentioned above, a small number of neighboring devices are encouraged to cooperate with each other in the same group to maximize the sum Q-value shown in (28), where their decisions are shared within the group and the joint action strategy ${a_g}$ is selected with the maximum cooperative Q-value.
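The group decision in (28)-(29) amounts to maximizing the sum of the members' Q-values over the joint action space; a tabular sketch, with assumed dictionary-based per-agent Q-tables, is:

```python
from itertools import product

def joint_greedy_action(group_q, state):
    """Joint policy of (28)-(29): maximize the *sum* of the agents' Q-values
    over the Cartesian product of their action sets.
    group_q[i] maps (state, action) -> Q-value of agent i in the group."""
    action_sets = [sorted({a for (s, a) in q if s == state}) for q in group_q]
    best, best_val = None, float("-inf")
    for joint in product(*action_sets):
        val = sum(q[(state, a)] for q, a in zip(group_q, joint))
        if val > best_val:
            best, best_val = joint, val
    return best, best_val
```

With the additive approximation in (28), the joint argmax decomposes per agent; the cooperative gain appears once the shared observations and actions alter each agent's Q-values, e.g. by penalizing collisions on the same subchannel.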
In addition, it is worth noting that if a new device joins the network or requests a new service, or one communication link achieves poor performance (e.g., low transmission success probability or low convergence speed), then it can directly search for the expert agent among the neighbors in the same group and utilize the transferred learning model and policy from the expert agent. Finally, all communication links begin transmission with the subchannel assignment and transmission power strategies determined by their learned policies. \begin{algorithm}[t] \begin{small} \caption{\small Distributed Cooperative Implementation of Multi-Agent RL for Massive Access} 1: \textbf{Input:} DQN structure, environment simulator and QoS requirements of all devices. \\ 2: \textbf{start:} Load DQN models. \\ 3: \textbf{loop}\\ 4: $~$ Each agent (communication link) observes its state $s$; \\ $~~~~$ \textbf{\emph{Transfer learning}}\\ 5: $~$ \textbf{if} the agent is new, or needs a new service or has poor performance, \textbf{then} \\ 6: $~~~$ The agent exchanges information with its neighbors;\\ 7: $~~~$ Search for the expert with the highest similarity by (24);\\ 8: $~~~$ Use the learned model from the expert; \\ 9: $~~~$ Update the overall Q-table by (25); \\ 10: $~~$ Update the transfer rate $\mu $, and select an action by (31); \\ 11: $~~$ Perform learning from step 13 to step 16;\\ 12: $~$ \textbf{else} \\ $~~~~$ \textbf{\emph{Cooperative learning}}\\ 13: $~~$ In each group $g$, each agent shares its observations and actions;\\ 14: $~~$ Each group calculates its cooperative Q-table by (28); \\ 15: $~~$ Update the joint policy ${\pi _g}{\rm{(}}{s_g}{\rm{)}}$ with the largest cooperative Q-value ${Q_g}({s_g},{a_g})$, and select the joint action ${a_g}$; \\ 16: $~~$ Execute action ${a_g}$, then obtain a reward ${r'_g}$ using (11), and observe a new state ${s'_g}$;\\ 17: $~$ \textbf{end if}\\ 18: $~$ Both transfer learning and cooperative learning are jointly updated to optimize the learned
policy;\\ 19: \textbf{end loop}\\ 20: \textbf{output:} Subchannel assignment and power control. \\ \end{small} \label{alg_lirnn} \end{algorithm} The fully distributed form of multi-agent reinforcement learning is also called ``independent DQN'', where each agent independently learns its own policy and treats the other agents as part of the environment. However, combining experience replay with independent DQN is problematic because of the non-stationarity that independent learning introduces into the environment. Hence, we have presented a distributed cooperative multi-agent DQN scheme in which devices are encouraged to communicate and share their learned experiences and actions with a small number of neighbors, and thus learn from each other. In this case, the scheme avoids the non-stationarity of independent Q-learning by having each agent learn a policy conditioned on the shared information about the other agents' policies (behaviors) in the same group. \subsection{Computational Complexity Analysis} For the training phase of the DQN models, let $L$ denote the number of layers, $B_0$ the size of the input layer (proportional to the number of states), and $B_l$ the number of neurons in the $l$-th layer. The complexity for each agent at each training step is $O(B_0B_1 + \sum\nolimits_{l = 1}^{L - 1} {B_lB_{l + 1}} )$. Each mini-batch has ${N^{{\rm{epi}}}}$ episodes with each episode spanning $T$ time steps, each model is trained over $I$ iterations until convergence, and the network has $Z$ agents with $Z$ trained DQN models. Hence, the total computational complexity is $O\left( {ZI{N^{{\rm{epi}}}}T(B_0B_1 + \sum\nolimits_{l = 1}^{L - 1} {B_lB_{l + 1}} )} \right)$. The computationally intensive DQN training can be performed offline for a finite number of episodes at a powerful unit (such as the BS) [38], [39].
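As a quick numerical check of the complexity expressions above, the per-step and total training costs can be computed directly from the layer widths (function names and example widths are illustrative):

```python
def dqn_step_complexity(layer_sizes):
    """Per-agent, per-step cost O(B_0 B_1 + sum_{l=1}^{L-1} B_l B_{l+1}):
    multiply-accumulate count of one pass through a fully connected DQN
    with layer widths [B_0, B_1, ..., B_L]."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def total_training_complexity(Z, I, n_epi, T, layer_sizes):
    """Total training cost O(Z I N^epi T (B_0 B_1 + ...)) over Z agents,
    I iterations, and N^epi episodes of T steps each."""
    return Z * I * n_epi * T * dqn_step_complexity(layer_sizes)
```

For instance, with the hidden-layer widths used later in the simulations (250, 250, 100) and a hypothetical input size, the dominant per-step term is the $250 \times 250$ hidden-to-hidden product.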
For the distributed cooperative phase (also called the testing phase), our proposed approach applies the transfer learning mechanism and allows the expert agent to share its learned knowledge or actions with other agents. Let ${\mathcal{S}}'$ and ${\mathcal{A}}'$ denote the stored state space and action space, respectively. The computational complexities of the classical (fully distributed) DQN approach and the proposed approach are $O(|{\mathcal{S}}{|^2} \times |{\mathcal{A}}|)$ and $O(|{\mathcal{S}}'{|^2} \times |{\mathcal{A}}'| + |{\mathcal{S}}{|^2} \times |{\mathcal{A}}|)$ [19], respectively, indicating that the complexity of the proposed approach is higher than that of the classical DQN learning approach. Nevertheless, the stored state space and action space in the memory of each device are not large, and hence the complexity of the proposed learning approach is only slightly higher than that of the classical DQN approach. For cooperative learning, a small number of agents in each group select their actions jointly instead of independently by sharing their selected actions. Let $a^{co}_{g,i}$ denote the shared action set of the $i$-th agent in the $g$-th group in the current time slot; the computational complexity of the $g$-th group in terms of action sharing is then $O(\sum\nolimits_{i = 1}^{{L_g}} {|a^{co}_{g,i}|})$. As the network has $G$ groups, the total computational complexity of cooperative learning is $O(\sum\nolimits_{g = 1}^{{G}} \sum\nolimits_{i = 1}^{{L_g}} {|a^{co}_{g,i}|})$. \section{Simulation Results and Analysis} In this section, simulation results are provided to evaluate the proposed distributed cooperative multi-agent RL based massive access approach. We consider a single cell with a radius of 500 m and a total of 2000 devices. One fifth of the devices carry normal services, for which the minimum data rate requirement is set to 3.5 bps/Hz. The maximum D2D communication distance is 75 m.
The carrier frequency is 2 GHz, and the total bandwidth is 100 MHz, equally divided into 100 subchannels of 1 MHz each. For the URLLC services, the SINR threshold is 5 dB, the processing/computing delay is ${T_{{\rm{pc}}}}$ = 0.3 ms, the reliability requirement varies between 99.9\% and 99.99999\%, and the maximum latency threshold varies between 1 ms and 10 ms for different simulation settings. The maximum transmit power of each device and the circuit power consumption are 500 mW and 50 mW, respectively. The background noise power is -114 dBm. Each packet in the URLLC links is 1024 bytes. The DQN model consists of three fully connected hidden layers, containing 250, 250, and 100 neurons, respectively. The learning rate is $\alpha = 0.02$ and the discount factor is $\gamma = 0.95$. \begin{table}[!t] \renewcommand{\arraystretch}{1.0} \caption{Simulation Parameters} \centering \includegraphics[width=0.475\textwidth]{figures/table1.png} \end{table} We compare the proposed distributed cooperative multi-agent RL based massive access approach (denoted as proposed DC-DRL MA, which adopts both the transfer learning and cooperative learning mechanisms) with the following approaches: \emph{1)} The group based massive access approach, where devices are grouped by similarity, with each group having one group leader that communicates with the centralized controller. The subchannel assignment and transmission power control are then adjusted iteratively for the communication links in each group, similar to the group based preamble reservation access approach [13] (denoted as centralized G-MA). \emph{2)} The fully distributed multi-agent RL based massive access approach (denoted as fully D-DRL MA [37]), where each communication link selects its subchannel assignment and transmission power strategy based on its own local information without cooperating with other communication links.
\emph{3)} Random massive access approach (denoted as random MA), where each communication link chooses its subchannel assignment and transmission power strategy in a random manner. \subsection{Convergence Comparisons} Fig. 4 shows the energy efficiency (EE) over the training episodes to illustrate the convergence behavior of the proposed multi-agent DQN approach and the compared approaches. Clearly, the proposed learning approach achieves significantly higher EE than the fully distributed DRL approach [37] and the random MA approach. In particular, the proposed approach converges faster and with fewer fluctuations by adopting the transfer learning and cooperative learning mechanisms to improve the learning efficiency and convergence speed. The fully distributed DRL approach [37] is simple and involves no cooperation among devices, but it achieves poor global performance, leading to a low EE value. Although the random MA approach has the simplest structure, it performs worst and fails to optimize the network energy efficiency as training proceeds. Our proposed approach applies both the transfer learning and cooperative learning mechanisms to enhance the convergence speed and learning efficiency, and the optimized strategy can be learned after a number of training episodes. From Fig. 4, the energy efficiency per episode improves as training continues, demonstrating the effectiveness of the proposed training approach. When the number of training episodes reaches approximately 1900, the performance gradually converges despite some fluctuations due to mobility-induced channel fading in mobile environments. Since we investigate resource management in a massive access scenario, the environment is complex and the action and state spaces are large for all mobile devices, so our learning approach requires about 2000 training episodes to converge appropriately.
\begin{figure} \centering \includegraphics[width=0.85\columnwidth]{figures/fig4.png} \caption{{Convergence comparisons of the compared learning approaches. } } \label{fig:Schematic} \end{figure} \subsection{Performance Comparisons Under Different Thresholds of Reliability and Latency} Fig. 5 and Fig. 6 compare the performance of all approaches under different values of the reliability and latency thresholds, respectively, when the packet arrival rate is 0.03 packets/slot/link and the total number of devices is 2000. From both Fig. 5 and Fig. 6, we find that, for all approaches, both the EE performance and the transmission success probability drop as the required reliability value increases and the maximum latency threshold decreases. The reason is that the more stringent the reliability and latency constraints are, the lower the network EE and transmission success probability that can be achieved. In this case, both the transmission power and subchannel assignment strategies need to be carefully designed to guarantee the stringent reliability and latency constraints, such that the transmission success probability can be maintained at a high level. \vspace{-2pt} \begin{figure} \centering \includegraphics[width=1\columnwidth]{figures/fig5.png} \vspace{-2pt} \caption{{\small Performance comparisons vs. different reliability thresholds.} } \label{fig:Schematic} \vspace{-5pt} \end{figure} \vspace{-2pt} \begin{figure} \centering \includegraphics[width=1\columnwidth]{figures/fig6.png} \vspace{-2pt} \caption{{\small Performance comparisons vs. different latency thresholds.} } \label{fig:Schematic} \vspace{-5pt} \end{figure} We also observe from Fig. 5 (b) and Fig.
6 (b) that, within a reasonable region of the reliability and latency thresholds, the three approaches (excluding the random MA approach) can still achieve a high transmission success probability; however, more transmission links fail to satisfy their requirements when the constraints are extremely strict (e.g., the reliability threshold grows beyond 99.999\% or the maximum latency threshold is less than 4 ms). Compared with the other approaches, our proposed approach achieves higher EE performance and transmission success probability under different reliability and latency requirements, and the performance gap between the proposed approach and the other approaches becomes more significant as the constraints become more stringent. The reason is that our proposed approach employs both the transfer learning and cooperative learning mechanisms to optimize the global subchannel assignment and transmission power strategy, thereby improving the network performance. From Fig. 5 (a) and Fig. 6 (a), an interesting observation is that, compared with the centralized G-MA approach [13] and the random MA approach, the EE curve of our proposed approach declines more quickly when the constraints become stricter. The reason is that the proposed approach designs the QoS-aware reward function shown in (15) to guarantee the QoS requirements (i.e., a high transmission success probability), and hence the network may sacrifice part of its EE performance to support more successful transmission links. \subsection{Performance Comparisons Versus Packet Arrival Rate} Fig. 7 presents the performance comparisons with respect to the increasing packet arrival rate for the different massive access approaches, when the number of devices is 2000 and the reliability and latency thresholds are 99.999\% and 5 ms, respectively. From Fig.
7, with the growing packet arrival rate, both the EE performance and the transmission success probability decrease slightly for all approaches when the packet arrival rate is below a certain threshold, but drop sharply when the packet arrival rate grows beyond the acceptable margin. An increase in the packet arrival rate results in a longer transmission duration (e.g., transmission delay and queue waiting delay), more frequent spectrum access, and possibly higher transmission power in order to maintain the packet transmission success probability. In addition, the increase in the packet arrival rate also leads to stronger co-channel interference over a longer period, which limits the data rate improvement. Hence, as shown in Fig. 7 (a), the EE performance decreases slightly with the packet arrival rate when the rate is not high, and degrades further once the packet arrival rate grows beyond the acceptable margin. \vspace{-2pt} \begin{figure} \centering \includegraphics[width=1\columnwidth]{figures/fig7.png} \vspace{-2pt} \caption{{\small Performance comparisons with different packet arrival rates.} } \label{fig:Schematic} \vspace{-5pt} \end{figure} From Fig. 7 (b), even though the transmission success probability drops for all approaches, the proposed approach still achieves better performance than the other three approaches. Remarkably, the proposed approach attains approximately 100\% transmission success probability when the packet arrival rate is less than 0.03 packets per time slot, and shows noticeable degradation when the packet arrival rate grows beyond 0.03 packets per time slot. Such performance degradation may result from the limited spectrum resources, as the available subchannels cannot fully support the massive number of simultaneous packet transmissions under the increasing packet arrival rate.
\section{Conclusion} In this paper, a distributed cooperative channel assignment and power control approach based on multi-agent RL has been presented to solve the massive access management problem in future wireless networks, where the proposed approach is capable of supporting the different QoS requirements (e.g., URLLC and minimum data rate) of a huge number of devices. The proposed multi-agent RL based approach consists of a centralized training procedure and a distributed cooperative implementation procedure. In order to improve the network performance and QoS satisfaction levels, transfer learning and cooperative learning mechanisms have been employed to enable communication links to work cooperatively in a distributed manner. Simulation results have confirmed the effectiveness of the proposed learning approach and have shown that it outperforms other existing approaches in massive access scenarios.
\section{Introduction} \label{section:intro} In a planetary system, the stellar obliquity is defined as the angle between the stellar spin axis and the net orbital angular momentum vector of the system. While the true stellar obliquity is not currently possible to pinpoint in most exoplanet systems due to incomplete knowledge of where all planets in each system lie, stellar obliquities can be constrained through measurements of individual planets' orbital configurations. These, in turn, offer evidence of the systems' dynamical histories. As a transiting exoplanet occults its host star, it produces a distortion in the net Doppler shift measured across the integrated light from the star. This distortion is known as the Rossiter-McLaughlin (R-M) effect \citep{rossiter1924detection, mclaughlin1924some}, and it enables a precise measurement of the sky-projected angle $\lambda$ between the stellar spin axis and the transiting exoplanet's orbit normal. To date, the angle $\lambda$ has been measured for over 170 transiting planets, revealing a diversity of system architectures \citep{albrecht2022stellar}. However, the vast majority of these spin-orbit measurements have been made in hot Jupiter systems due to their relatively deep and frequent transits. By contrast, relatively few $\lambda$ measurements have been made in systems with wider-orbiting warm Jupiters (e.g. \citealt{Wang2021}), which offer important clues to constrain the dominant formation channels for both hot and warm Jupiters \citep{dawson2018origins, jackson2021observable, rice2022tendency}. For the purposes of this work, we define a ``warm Jupiter'' as a short-period ($P<100$ days), Jovian-mass ($0.3M_J<M_b<13M_J$) exoplanet with star-planet separation $a_b/R_*>11$ such that the planet is ``tidally detached'' -- meaning that it undergoes relatively weak tidal interactions with the host star. 
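The working definition above reduces to three numerical cuts; a minimal sketch (the function name and argument names are ours, for illustration only):

```python
def is_warm_jupiter(period_days, mass_mj, a_over_rstar):
    """Check the working definition used in this work: a short-period,
    Jovian-mass planet that is tidally detached from its host star."""
    return (period_days < 100.0          # short orbital period
            and 0.3 < mass_mj < 13.0     # Jovian mass range (M_J)
            and a_over_rstar > 11.0)     # tidally detached separation
```

By these cuts, Qatar-6 A b ($P = 3.506$ d, $M_b = 0.668\,M_J$, $a_b/R_* = 12.61$) qualifies as a warm Jupiter, while a planet at $a_b/R_* = 5$ with the same mass would not.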
We present a measurement of the Rossiter-McLaughlin effect with the Keck/HIRES instrument \citep{vogt1994hires} across a transit of Qatar-6 A b, which is a warm Jupiter residing in a binary star system. This is the fourth result of our Stellar Obliquities in Long-period Exoplanet Systems (SOLES) survey \citep{rice2021soles, wang2022aligned, rice2022tendency} that is systematically extending the census of $\lambda$ measurements to wider-orbiting, tidally detached exoplanets. It is also the ninth measurement of a warm Jupiter spin-orbit angle in a system with one or more known stellar companions \citep[see the TEPcat catalogue;][]{southworth2011homogeneous}. Qatar-6 A is a young ($1.02\pm0.62$ Gyr), $V=11.5$ early K-type main sequence star that hosts one known sub-Jovian-mass ($M_{b}=0.668\pm0.066M_J$) planet with star-planet separation $a_b/R_*=12.61\pm0.22$ \citep{alsubai2018qatar}. We demonstrate that Qatar-6 A b is likely at or near alignment with the stellar spin axis of its host star, with a projected spin-orbit angle $\lambda=0.1\pm2.6\ensuremath{\,^{\circ}}$ and a true spin-orbit angle $\psi=21.82^{+8.86}_{-18.36}\ensuremath{\,^{\circ}}$. Considering the larger-scale architecture of the system, we also show that the Qatar-6 AB stellar binary system is edge-on ($i_{B}=90.17^{+1.07}_{-1.06}\ensuremath{\,^{\circ}}$) relative to our vantage point from the Earth. This edge-on configuration hints at a potential alignment between the planetary companion's orbit and the orbit of the binary system. We first describe our observations and data reduction in Section \ref{section:observations}. Then, we characterize the system by extracting stellar parameters in Section \ref{section:stellar_parameters} and modeling both the 2D and 3D stellar obliquity in Section \ref{section:spinorbitmodel}.
In Section \ref{section:binary_alignment}, we constrain the Qatar-6 AB binary orbital properties to demonstrate that the system is edge-on, suggesting that the three masses in the system (two stars and one transiting planetary companion) may lie on mutually aligned orbits. We consider relevant timescales for the dynamical evolution of this system in Section \ref{section:dynamical_timescales}, and we discuss the system's potential formation scenarios in Section \ref{section:formation}. Finally, we provide an overview of our findings and their implications in Section \ref{section:conclusions}. \section{Observations} \label{section:observations} We observed the Rossiter-McLaughlin effect across one full transit of Qatar-6 A b from UT 07:25-13:30 on May 30th, 2022 with the Keck/HIRES spectrograph. We obtained thirty-nine 500-second iodine-imprinted radial velocity (RV) exposures during this time span (Table \ref{tab:rv_data_hires}), with a median signal-to-noise ratio (SNR) of 126. Our observing sequence included approximately $150$ and $70$ minutes of pre- and post-transit observations, respectively, to constrain the RV baseline. All RV observations were taken using the C2 decker ($14\arcsec \times 0.861\arcsec$, $R = 60,000$) and reduced using the California Planet Search pipeline \citep{howard2010california}. Conditions were stable throughout most of the observing sequence, with seeing ranging from 1.0\arcsec--1.2\arcsec\, and a short spike in seeing (1.8\arcsec) around UT 08:25-08:45. During the second half of the transit, from UT 11:30-12:25, the presence of clouds substantially reduced the photon count of each spectrum. This is reflected as inflated error bars in the corresponding RV measurements, as shown in Figure \ref{fig:rv_joint_fit}. To calibrate our RV observations and characterize the system's stellar parameters, we also obtained a 2030-second iodine-free template spectrum of Qatar-6 A with Keck/HIRES during the same night.
This single exposure was centered at UT 06:45 and used the spectrograph's B3 decker ($14.0\arcsec \times 0.574\arcsec$, $R = 72,000$). The Qatar-6 A template spectrum was observed in excellent conditions, with 1.1\arcsec\, seeing and SNR$\sim$200. \begin{deluxetable}{rrrrr} \tablecaption{Keck/HIRES radial velocities for the Qatar-6 A b planetary system.\label{tab:rv_data_hires}} \tabletypesize{\scriptsize} \tablehead{ \colhead{Time (BJD)} & \colhead{RV (m/s)} & \colhead{$\sigma_{\rm RV}$ (m/s)} & \colhead{S-index} & \colhead{$\sigma_S$}} \tablewidth{300pt} \startdata 2459729.816537 & 27.56 & 1.74 & 0.555 & 0.001 \\ 2459729.822683 & 28.70 & 1.73 & 0.551 & 0.001 \\ 2459729.828597 & 20.15 & 2.30 & 0.557 & 0.001 \\ 2459729.835425 & 22.04 & 1.83 & 0.571 & 0.001 \\ 2459729.841490 & 24.85 & 1.64 & 0.553 & 0.001 \\ 2459729.847693 & 25.06 & 1.64 & 0.561 & 0.001 \\ 2459729.854035 & 25.88 & 1.58 & 0.553 & 0.001 \\ 2459729.866778 & 20.49 & 2.30 & 0.555 & 0.001 \\ 2459729.872900 & 13.84 & 1.69 & 0.554 & 0.001 \\ 2459729.879208 & 13.98 & 1.64 & 0.541 & 0.001 \\ 2459729.885493 & 9.70 & 1.60 & 0.553 & 0.001 \\ 2459729.891962 & 12.33 & 1.68 & 0.544 & 0.001 \\ 2459729.898073 & 6.03 & 1.57 & 0.554 & 0.001 \\ 2459729.904358 & 11.27 & 1.56 & 0.547 & 0.001 \\ 2459729.910630 & -1.82 & 1.66 & 0.554 & 0.001 \\ 2459729.916833 & -0.66 & 2.04 & 0.570 & 0.001 \\ 2459729.923049 & 2.91 & 1.98 & 0.552 & 0.001 \\ 2459729.929438 & 4.36 & 2.65 & 0.558 & 0.001 \\ 2459729.936196 & -1.59 & 2.06 & 0.541 & 0.001 \\ 2459729.942122 & -1.22 & 2.05 & 0.551 & 0.001 \\ 2459729.948545 & 12.43 & 1.73 & 0.542 & 0.001 \\ 2459729.954760 & 8.68 & 1.77 & 0.543 & 0.001 \\ 2459729.961033 & 7.73 & 1.71 & 0.550 & 0.001 \\ 2459729.967294 & 6.28 & 1.66 & 0.549 & 0.001 \\ 2459729.973555 & -5.04 & 1.74 & 0.553 & 0.001 \\ 2459729.979874 & -11.39 & 2.04 & 0.553 & 0.001 \\ 2459729.985847 & -17.88 & 2.52 & 0.547 & 0.001 \\ 2459729.99234 & -29.59 & 3.47 & 0.615 & 0.001 \\ 2459729.998357 & -34.45 & 5.74 & 0.587 & 0.001 \\ 
2459730.005718 & -18.23 & 5.48 & 0.455 & 0.001 \\ 2459730.011725 & -12.29 & 2.58 & 0.528 & 0.001 \\ 2459730.017974 & -27.38 & 2.24 & 0.559 & 0.001 \\ 2459730.024097 & -18.28 & 1.92 & 0.561 & 0.001 \\ 2459730.030254 & -14.63 & 1.88 & 0.551 & 0.001 \\ 2459730.036561 & -23.86 & 2.02 & 0.559 & 0.001 \\ 2459730.042881 & -17.00 & 1.92 & 0.550 & 0.001 \\ 2459730.049154 & -21.06 & 2.00 & 0.560 & 0.001 \\ 2459730.055426 & -22.27 & 1.88 & 0.545 & 0.001 \\ 2459730.061792 & -27.15 & 2.16 & 0.533 & 0.001 \enddata \end{deluxetable} \section{Stellar Parameters} \label{section:stellar_parameters} We first characterized the stellar properties of Qatar-6 A through a spectroscopic analysis of our iodine-free Keck/HIRES template spectrum. To accomplish this, we applied the machine learning model described in \citealt{rice2020stellar}, which is designed to extract precise stellar atmospheric parameters from Keck/HIRES spectra. Our spectroscopic model is trained on 1,202 FGK spectra from the Spectral Properties of Cool Stars (SPOCS) catalogue \citep{valenti2005spectroscopic, brewer2016spectral} and built on the generative machine learning program \textit{The Cannon} \citep{ness2015cannon}. We applied this model to extract four key stellar properties that were directly characterized in the SPOCS catalog: $T_{\rm eff}$, log$g$, $v\sin i_*$, and [Fe/H]. Input spectra to the \citealt{rice2020stellar} model must be continuum-normalized and shifted to the rest frame for direct comparison with the SPOCS training set. We first fit the continuum baseline of Qatar-6 A using the Alpha-shape Fitting to Spectrum (AFS) algorithm described in \citealt{xu2019modeling}, with $\alpha=1/8$ the span of each echelle order. We then divided our initial spectrum by this baseline model to produce a normalized spectrum. Finally, we cross-correlated the normalized spectrum with the solar atlas provided by \citealt{wallace2011optical} to shift the wavelength solution into the rest frame. 
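The rest-frame shift step can be illustrated with a simple cross-correlation. The toy sketch below works in integer pixel lags on a common wavelength grid (the names are ours; a full implementation would cross-correlate in velocity space on a log-wavelength grid):

```python
import numpy as np

def rest_frame_shift(wave, flux, ref_wave, ref_flux, max_lag=200):
    """Estimate the pixel lag that best aligns an observed, normalized
    spectrum with a reference atlas via cross-correlation."""
    f = flux - flux.mean()
    # Resample the reference onto the observed wavelength grid.
    r = np.interp(wave, ref_wave, ref_flux)
    r = r - r.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    # Cross-correlation: overlap of the observed spectrum with the
    # cyclically shifted reference at each trial lag.
    cc = [np.sum(f * np.roll(r, k)) for k in lags]
    return lags[int(np.argmax(cc))]
```

Applying the recovered shift to the wavelength solution places the spectrum in the rest frame for comparison with the training set.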
We characterized our uncertainties by training and applying our model separately for each of the 16 echelle orders in the Keck/HIRES spectrum, then finding the standard deviation of our results added in quadrature to the training set uncertainties reported in \citealt{brewer2016spectral}. We excluded the fourth echelle order from this analysis due to a poorly fitted baseline subtraction caused by the close proximity of the 5995 \AA\, Na I absorption line to the edge of the echelle order. Our results, provided in the top portion of Table \ref{table:results}, comprise the mean and the associated uncertainty derived for each parameter from the 15 remaining model iterations. We then used these spectroscopically determined stellar parameters, together with parallax constraints from \textit{Gaia} DR3 and archival photometry, as inputs to derive the mass and radius of Qatar-6 A while placing further constraints on $T_{\rm eff}$, $\log g$, and [Fe/H]. We applied the \texttt{isoclassify} Python package \citep{huber2017isoclassify, huber2017asteroseismology, berger2020gaia} to derive posterior distributions for each stellar parameter by fitting a grid of isochrones to our input constraints. Photometry incorporated within our model included magnitudes from Tycho-2 \citep[B and V bands;][]{hog2000tycho}; 2MASS \citep[J, H, and K bands;][]{cutri20032mass}; \textit{Gaia} DR3 \citep[G, Bp, and Rp bands;][]{brown2022gaiadr3}; and the Sloan Digital Sky Survey \citep[u, g, r, i, and z bands;][]{ahn2012ninth}. We set an uncertainty floor of 0.1 mag in each photometric band to facilitate model convergence. We also used an all-sky dust model, implemented through the \texttt{mwdust} Python package \citep{bovy2016galactic}, to fit for extinction. Our results, which are provided in Table \ref{table:results}, are all in agreement with previously published values within $2\sigma$.
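The quadrature combination of the per-order scatter with the training-set uncertainty described above amounts to the following (function name illustrative):

```python
import numpy as np

def combined_uncertainty(per_order_values, training_sigma):
    """Combine the scatter across echelle-order results with the
    training-set uncertainty in quadrature:
    sigma_total = sqrt(std(orders)^2 + sigma_train^2)."""
    scatter = np.std(per_order_values, ddof=1)  # sample standard deviation
    return np.sqrt(scatter**2 + training_sigma**2)
```

In this scheme, the reported error for each parameter reflects both the order-to-order repeatability of the model and the intrinsic precision of the SPOCS training labels.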
\begin{deluxetable*}{lllll} \tablecaption{Parameters, Priors, and Results for the Qatar-6 A b Planetary System \label{table:results}} \tablehead{} \tablewidth{300pt} \startdata & Keck/HIRES & Photometry + Keck/HIRES & Photometry + Keck/HIRES & TRES \\ & This work, spectroscopic fit & This work, isochrone fit & This work, RM fit & Alsubai+ 2018 \\ & \textit{The Cannon} & \texttt{isoclassify} & \texttt{allesfitter} & \texttt{SPC} \\ \hline \multicolumn{5}{l}{Stellar Parameters:}\\ $M_*$ (\ensuremath{\,{\rm M_\Sun}}) & - & $0.829^{+0.019}_{-0.022}$ & - & $0.822\pm0.021$ \\ $R_*$ (\ensuremath{\,{\rm R_\Sun}}) & - & $0.785^{+0.033}_{-0.023}$ & - & $0.722\pm0.020$ \\ $T_{\rm eff}$ (K) & $4895\pm 77$ & $5063\pm42$ & - & $5052\pm66$ \\ $\log{g}$ (cm/s$^2$) & $4.41\pm0.20$ & $4.56^{+0.03}_{-0.04}$ & & $4.64\pm0.01$ \\ $v\sin i_{\star}$ (km/s) & 2.4$\pm$1.2 & - & $2.88^{+0.95}_{-0.66}$ & $2.9\pm0.5$ \\ $[\rm{Fe/H}]$ (dex) & $0.06\pm0.05$ & $0.04\pm0.05$ & - & $-0.025\pm0.093$ \\ \\ \hline Parameter & Description & Priors & Value \\ \hline \\ \multicolumn{5}{l}{Fitted Parameters:}\\ $R_b / R_\star$& Planet-to-star radius ratio& $\mathcal U(0.151;0;1)$ & $0.1516_{-0.0046}^{+0.0055}$ & \\ $(R_\star + R_b) / a_b$& Sum of radii divided by orbital semimajor axis& $\mathcal U(0.077;0;1)$ & $0.0930_{-0.0023}^{+0.0026}$ & \\ $\cos{i_b}$& Cosine of the orbital inclination& $\mathcal U(0.0696;0;1)$ & $0.0707\pm0.0041$ & \\ $T_{0, b}$& Mid-transit epoch (BJD)-2450000 & $\mathcal U(8611.50257;8610.5025;8612.5025)$ & $8611.4968\pm0.0082$ & \\ $P_b$& Orbital period (days) & $\mathcal U(3.506195;0;10)$ & $3.506200\pm0.000026$ & \\ $K_b$& RV semi-amplitude (m/s) & $\mathcal U(100;0;200)$ & $110.8_{-5.5}^{+5.8}$ & \\ $\sqrt{e_b} \cos{\omega_b}$& Eccentricity parameter 1 & $\mathcal U(0;-1;1)$ & $0.130_{-0.13}^{+0.094}$ & \\ $\sqrt{e_b} \sin{\omega_b}$& Eccentricity parameter 2 & $\mathcal U(0;-1;1)$ & $0.139_{-0.12}^{+0.087}$ & \\ $\lambda$& Sky-projected spin–orbit angle 
($\ensuremath{\,^{\circ}}$) & $\mathcal U(0;-180;180)$ & $0.1\pm2.6$ & \\ $v \sin i_{*}$& Sky-projected stellar rotational velocity (km/s) & $\mathcal U(2.9;0;20)$ & $2.88_{-0.66}^{+0.95}$ & \\ $q_{1; \mathrm{TESS}}$ & Transformed limb-darkening coefficient 1 &$\mathcal U(0.5;0;1)$ & $0.73_{-0.28}^{+0.20}$ & \\ $q_{2; \mathrm{TESS}}$ & Transformed limb-darkening coefficient 2&$\mathcal U(0.5;0;1)$ &$0.61_{-0.36}^{+0.28}$ & \\ $q_{1; \mathrm{RM}}$ & Transformed limb-darkening coefficient 1&$\mathcal U(0.5;0;1)$ & $0.36_{-0.26}^{+0.35}$ & \\ $q_{2; \mathrm{RM}}$ & Transformed limb-darkening coefficient 2&$\mathcal U(0.5;0;1)$ & $0.50\pm0.34$ & \\ \\ \multicolumn{5}{l}{Derived Parameters:}\\ $R_{b}$ & Planetary radius (R$_{J}$) & ... & $1.164_{-0.057}^{+0.063}$ & \\ $M_{b}$& Planetary mass (M$_{J}$) & ... & $0.683_{-0.046}^{+0.050}$ & \\ $a_b/R_\star$ & Planetary semi-major axis over host star radius&... &$12.39\pm0.31$ & \\ $b$ & Impact parameter & ... & $0.852_{-0.043}^{+0.029}$ & \\ $T_{\rm 14}$& Total transit duration (hours) & ... & $1.636_{-0.037}^{+0.039}$ & \\ $i_b$ & Inclination ($\ensuremath{\,^{\circ}}$) & ... & $85.95\pm0.24$ & \\ $e_b$ & Eccentricity & ... & $0.051_{-0.030}^{+0.032}$ & \\ $\omega_b$& Argument of periastron ($\ensuremath{\,^{\circ}}$) & ...& $58_{-32}^{+79}$ & \\ $a_b$ & Semi-major axis (AU) & ...& $0.045\pm0.002$ & \\ $u_\mathrm{1; TESS}$ &Limb-darkening parameter 1, TESS &...& $0.95\pm0.57$ & \\ $u_\mathrm{2; TESS}$ & Limb-darkening parameter 2, TESS &...&$-0.17_{-0.46}^{+0.58}$ & \\ $u_\mathrm{1; RM}$ &Limb-darkening parameter 1, RM &...&$0.50_{-0.36}^{+0.55}$ & \\ $u_\mathrm{2; RM}$ & Limb-darkening parameter 2, RM &... &$0.00\pm0.38$ & \\ \enddata \end{deluxetable*} \section{Obliquity Modeling} \label{section:spinorbitmodel} We used the \texttt{allesfitter} Python package \citep{gunther2021allesfitter} to jointly model our new Rossiter-McLaughlin measurements together with other publicly available datasets for Qatar-6 A. 
Qatar-6 A was observed by the Transiting Exoplanet Survey Satellite \citep[TESS;][]{ricker2015tess} at a 2-minute cadence during Sectors 50 and 51, and both sectors of data were included within our analysis.\footnote{The TESS data used in this paper can be found in MAST: \dataset[10.17909/t9-nmc8-f686]{http://dx.doi.org/10.17909/t9-nmc8-f686}.} Our model also incorporated published radial velocity data from the TRES spectrograph, drawn from \citealt{alsubai2018qatar}. We corrected for systematic additive offsets between RV datasets by fitting and subtracting a quadratic function from each dataset. The additive offsets account for any correlated noise, instrumental drift, or astrophysical phenomena on timescales longer than $\sim6$ hours. The free parameters within our model include the companion's orbital period ($P_b$), transit mid-times ($T_{0, b}$), cosine of the planetary orbital inclination ($\cos{i_b}$), planet-to-star radius ratio ($R_{b}/R_{\star}$), sum of radii divided by the orbital semi-major axis ($(R_{\star}+R_{b})/a_b$), RV semi-amplitude ($K_b$), parameterized eccentricity and argument of periastron ($\sqrt{e_b}\,\cos{\,\omega_b}$, $\sqrt{e_b}\,\sin{\,\omega_b}$), sky-projected spin-orbit angle ($\lambda$), sky-projected stellar rotational velocity ($v\sin i_{\star}$), and four limb-darkening coefficients ($q_{1; \mathrm{TESS}}$, $q_{2; \mathrm{TESS}}$, $q_{1; \mathrm{RM}}$, and $q_{2; \mathrm{RM}}$). Each parameter was initialized with uniform priors within the bounds provided in Table \ref{table:results}. We leveraged an affine-invariant Markov Chain Monte Carlo (MCMC) analysis with 100 walkers to thoroughly sample the posterior distribution for each free parameter, allowing each Markov chain to run over 30$\times$ the autocorrelation length ($\geq500,000$ accepted steps per walker) to ensure convergence.
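The quadratic offset correction applied to each RV dataset can be sketched as a simple polynomial detrend (function name ours, for illustration):

```python
import numpy as np

def subtract_quadratic_baseline(times, rvs):
    """Fit a quadratic trend to an RV time series and subtract it,
    removing slow drifts and long-timescale signals from each dataset."""
    t = times - times.mean()            # center times for numerical stability
    coeffs = np.polyfit(t, rvs, deg=2)  # quadratic least-squares fit
    return rvs - np.polyval(coeffs, t)
```

Any signal varying on timescales much longer than the transit observation (here, $\gtrsim 6$ hours) is absorbed into the quadratic baseline and removed before the joint fit.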
The optimized model results and associated uncertainties are provided in Table \ref{table:results} and displayed in Figure \ref{fig:rv_joint_fit}. We measured a sky-projected stellar obliquity $\lambda=0.1\pm2.6\ensuremath{\,^{\circ}}$ for Qatar-6 A b, demonstrating that the system is consistent with alignment. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{RM.pdf} \caption{Keck/HIRES observations from the UT 5/30/22 transit of Qatar-6 A b, with the best-fitting Rossiter-McLaughlin model, corresponding to $\lambda=0.1\pm2.6\ensuremath{\,^{\circ}}$, overplotted. The associated residuals from the best-fitting model are provided below.} \label{fig:rv_joint_fit} \end{figure*} Following the routine described in \citealt{Southworth2008}, we adopted the residual-shift method to characterize the uncertainties that may result from unmodeled red noise. We shifted the residuals about the best-fit model, point by point, until they cycled back to their original positions. After each shift, a new best fit was calculated using the Nelder-Mead algorithm. We ended up with 39 best fits, equal to the number of RV measurements across our RM curve. The $\lambda$ value derived from the resulting distribution of fitted values is $\lambda=-0.5\pm2.2^{\circ}$, which is consistent with the result from the \texttt{allesfitter} fit ($\lambda=0.1\pm2.6^{\circ}$). To be conservative, the latter was used since its error is larger than that of the former. We also measured the stellar rotation period of Qatar-6 A to determine the 3D stellar obliquity, $\psi$. We employed a Generalised Lomb-Scargle periodogram \citep[GLS;][]{Zechmeister2009} to analyze the TESS light curves from Sectors 50 and 51 and to extract key periodicities. There are currently two types of light curves provided by the TESS SPOC pipeline: the Simple Aperture Photometry (SAP) light curves and the Pre-search Data Conditioning SAP \cite[PDCSAP;][]{Jenkins2016} light curves.
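The residual-shift procedure amounts to cyclically permuting the residuals about the best-fit model and refitting at each shift; a minimal sketch (the refitting step is abstracted into a user-supplied callable, and all names are ours):

```python
import numpy as np

def residual_shift_fits(model_rv, data_rv, refit):
    """Residual-shift error estimate: cyclically permute the residuals
    about the best-fit model, rebuild a synthetic dataset at each shift,
    and collect the refitted parameter values. `refit` is a callable
    returning the best-fit parameter for a synthetic RV series."""
    residuals = data_rv - model_rv
    n = len(residuals)
    # One synthetic dataset per cyclic shift, n fits in total.
    return np.array([refit(model_rv + np.roll(residuals, k))
                     for k in range(n)])
```

With 39 RV points, this yields 39 refitted values of $\lambda$, and the scatter of that distribution gives a red-noise-aware uncertainty estimate.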
SAP light curves were used in our work, since the stellar rotation signals could be mistaken for spacecraft-related systematics and removed in the PDCSAP detrending process. The SAP light curves were downloaded using the \texttt{lightkurve} Python package \citep{cardoso2018lightkurve}, and all transits and flagged measurements were masked. From the reduced SAP light curves, Qatar-6 A shows apparent rotational modulation with a period of $P_* = 12.962\pm0.015$ days (see Figure~\ref{fig:Rotation}). Our result is in excellent agreement with the rotation period $12.75\pm1.75$ days derived in \citealt{alsubai2018qatar}. We then used this result to derive the stellar inclination $i_*$, following the methods described in \citealt{Masuda2020}. We adopted the affine-invariant Markov chain Monte Carlo method implemented in the \texttt{emcee} Python package \citep{foremanmackey2013} to derive posterior distributions for three independent variables -- $R_*$, $P_*$, and $\cos{i_*}$ -- with measurement-informed priors on $R_*$, $P_*$, and $(2 \pi R_*/P_*)\sqrt{1-\cos^{2}{i_*}}$. From this, we obtained the stellar inclination $i_*=66.8^{+9.7}_{-23.3}\ensuremath{\,^{\circ}}$. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{LC.pdf} \caption{Simple Aperture Photometry (SAP) light curve for Qatar-6 A with transits masked. The star shows significant rotational modulation with a periodicity of $P_* = 12.962\pm0.015$ days based on a Generalised Lomb-Scargle analysis.} \label{fig:Rotation} \end{figure*} Combining the derived stellar inclination $i_*$ with our newly constrained sky-projected spin-orbit angle $\lambda$ and the orbital inclination measurement ($i_b$) derived from our global fit to the Qatar-6 A b system, we calculated the true spin-orbit angle $\psi$ using the relation \begin{equation} \cos \psi = \sin i_* \cos \lambda \sin i_b + \cos i_* \cos i_b.
\end{equation} We obtained $\psi=21.82^{+8.86}_{-18.36}\ensuremath{\,^{\circ}}$, indicating that Qatar-6 A b is consistent with spin-orbit alignment. The large error bars in $\psi$ are primarily driven by the relatively large uncertainty in our measured $v\sin i_*$. This value may be better constrained with higher-resolution spectrographs mounted on large-aperture telescopes amenable to observations of relatively dim stars, such as the Keck Planet Finder \citep{gibson2016kpf}. \section{Binary System Alignment} \label{section:binary_alignment} The Qatar-6 system contains two stars, Qatar-6 A and Qatar-6 B, together with the companion planet Qatar-6 A b. The relative orbital configurations of the full 3-body system can, therefore, offer clues to the system's most likely formation mechanism. In this section, we leverage astrometric data from \textit{Gaia} DR3 \citep{brown2022gaiadr3} to constrain the orbital properties of the Qatar-6 AB binary star system. \subsection{Confirming the System's Stellar Multiplicity} \label{subsection:multiplicity} The system's secondary star, Qatar-6 B, was first identified by \citealt{mugrauer2019search}, who used the \textit{Gaia} DR2 astrometric dataset \citep{Gaia2016, brown2018gaia} to demonstrate that Qatar-6 B is bound to the primary. In this section, we use the updated \textit{Gaia} DR3 dataset to confirm that Qatar-6 B is the only candidate stellar companion to Qatar-6 A. We queried for all sources within $10\arcmin$ of Qatar-6 A, then followed the methods of \citealt{el2021million} to vet potential companions. Our search included any sources with parallaxes $\varpi>1$ mas, fractional parallax uncertainties $\sigma_{\varpi}/\varpi<0.2$, and absolute parallax uncertainties $\sigma_{\varpi}<2$ mas.
For each source that passed this initial cut, we checked the following three requirements: \vspace{2mm} \noindent(i) The sky-projected separation $s$ between the two stars must be less than 1 pc; that is, \begin{equation} \Big(\frac{\theta_s}{\mathrm{arcsec}}\Big) < 206.265 \Big(\frac{\varpi}{\mathrm{mas}}\Big) \end{equation} for projected angular separation $\theta_s$ calculated as \begin{equation} \cos\theta_s = \sin\delta_{p}\sin\delta_{cc}+\cos\delta_{p}\cos\delta_{cc}\cos(\alpha_p - \alpha_{cc}). \end{equation} The subscript $p$ refers to the primary (Qatar-6 A), whereas the subscript $cc$ refers to a companion candidate. RA and Dec are given as $\alpha$ and $\delta$, respectively. \vspace{2mm} \noindent(ii) The parallax of the primary and the candidate companion must be consistent within $b\sigma$, such that \begin{equation} |\varpi_{cc} - \varpi_p| < b \sqrt{\sigma_{\varpi_{cc}}^2 + \sigma_{\varpi_{p}}^2}. \end{equation} Following \citealt{el2021million}, we set $b = 3$ for pairs with angular separation $\theta_s>4.0\arcsec$, or $b = 6$ for pairs with $\theta_s<4.0\arcsec$. The weaker threshold for pairs at small angular separation is set to counteract the systematic underestimate of parallax uncertainties at close angular separations \citep{el2021million}. \vspace{2mm} \noindent(iii) The two stars must have relative proper motion measurements consistent with a bound Keplerian orbit \begin{equation} \Delta\mu < \Delta\mu_{\rm orbit} + 2\sigma_{\Delta\mu}. \end{equation} Here, $\Delta\mu_{\rm orbit}$ is given by \begin{equation} \Delta\mu_{\rm orbit} = (0.44\, \mathrm{mas/yr})\,\Big(\frac{\varpi}{\mathrm{mas}}\Big)^{3/2}\Big(\frac{\theta_s}{\mathrm{arcsec}}\Big)^{-1/2}. \end{equation} This relation, which is drawn from \citealt{el2018imprints}, provides the maximum difference in proper motion for a circular binary orbit with total system mass 5$M_{\odot}$.
We use this as a conservative estimate to encapsulate a range of potential candidate stellar companion velocities. The uncertainty $\sigma_{\Delta\mu}$ in the proper motion difference between the two stellar components is given as \begin{equation} \sigma_{\Delta\mu} = \frac{1}{\Delta\mu}\sqrt{(\sigma_{\mu^*_{\alpha, 1}}^2 + \sigma_{\mu^*_{\alpha, 2}}^2)\Delta{\mu^*_{\alpha}}^2 + (\sigma_{\mu_{\delta, 1}}^2 + \sigma_{\mu_{\delta, 2}}^2)\Delta\mu_{\delta}^2}, \end{equation} while the proper motion difference $\Delta\mu$ is \begin{equation} \Delta\mu = \sqrt{\Delta{\mu^*_{\alpha}}^2 + \Delta\mu_{\delta}^2}. \end{equation} The proper motion differences $\Delta\mu^*_{\alpha}$ and $\Delta\mu_{\delta}$ in the RA and Dec directions, respectively, are calculated as \begin{equation} \Delta{\mu^*_{\alpha}}^2 = (\mu^*_{\alpha,1} - \mu^*_{\alpha, 2})^2 \end{equation} and \begin{equation} \Delta{\mu_{\delta}}^2 = (\mu_{\delta,1} - \mu_{\delta, 2})^2. \end{equation} We note that proper motions reported by \textit{Gaia} DR3 in the RA direction already account for a $\cos{\delta}$ corrective factor,\footnote{See the documentation for \textit{Gaia} source parameters; \url{https://gea.esac.esa.int/archive/documentation/GEDR3/Gaia_archive/chap_datamodel/sec_dm_main_tables/ssec_dm_gaia_source.html}} such that $\mu^*_{\alpha} \equiv \mu_{\alpha}\cos\delta$. Our relations implicitly include this correction; for clarity, we mark each variable that includes the corrective factor with a star superscript. A single source passed all three of the tests outlined above. We first compared the projected separation and orientation of the system to confirm that the recovered source is the previously identified companion Qatar-6 B. Then, we verified that the same binary companion was also identified in \citealt{el2021million}, which reports a low fractional chance-alignment probability $R\approx1.9\times10^{-6}$ for the binary pair.
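For concreteness, the three vetting requirements above can be combined into a single function. The sketch below is illustrative rather than a reproduction of the full \citealt{el2021million} pipeline, and the numerical values in the usage example are mock inputs, not \textit{Gaia} measurements:

```python
import math

def passes_companion_cuts(plx_p, splx_p, plx_cc, splx_cc, theta_s, dmu, sdmu):
    """Apply the three binary-vetting criteria to a candidate companion.

    Inputs: parallaxes and their uncertainties in mas, angular separation
    theta_s in arcsec, and the proper-motion difference dmu with its
    uncertainty sdmu in mas/yr.
    """
    # (i) projected separation below 1 pc: theta_s < 206.265 * parallax
    if theta_s >= 206.265 * plx_p:
        return False
    # (ii) parallax agreement within b sigma (b = 6 below 4", else b = 3)
    b = 6.0 if theta_s < 4.0 else 3.0
    if abs(plx_cc - plx_p) >= b * math.hypot(splx_cc, splx_p):
        return False
    # (iii) relative proper motion consistent with a bound Keplerian orbit,
    # using the 5-Msun circular-orbit envelope of El-Badry & Rix (2018)
    dmu_orbit = 0.44 * plx_p**1.5 * theta_s**-0.5  # mas/yr
    return dmu < dmu_orbit + 2.0 * sdmu

# Mock Qatar-6-like pair versus an unrelated foreground/background source
bound = passes_companion_cuts(9.9, 0.02, 9.93, 0.03, 4.8, 1.0, 0.1)
field = passes_companion_cuts(9.9, 0.02, 2.0, 0.03, 4.8, 1.0, 0.1)
```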
We conclude that the previously identified companion Qatar-6 B -- the only candidate companion that we identify for Qatar-6 A -- is likely not a chance alignment. We also confirmed that the projected separation of the binary is $s\ll30{,}000$ au, a threshold above which \citealt{el2021million} finds that chance alignments dominate over true bound companions. Using \textit{Gaia} DR3, we measured a sky-projected separation $s=482$ au between Qatar-6 A and Qatar-6 B. This separation is similar to, but slightly smaller than, the $s=486$ au separation determined by \citealt{mugrauer2019search} using \textit{Gaia} DR2. Lastly, we checked each star's Renormalized Unit Weight Error (RUWE) parameter -- a $\chi^2$-based metric provided by \textit{Gaia} to quantify the robustness of the astrometric fit for a star. RUWE $\sim1.0$ typically corresponds to a high-quality single star fit, while RUWE $>1.4$ generally indicates a poor astrometric fit that may result from the presence of an unresolved companion. For Qatar-6 A and B, respectively, \textit{Gaia} DR3 reports RUWE $=1.22$ and RUWE $=1.20$. Although both Qatar-6 stars fall comfortably below the commonly-adopted limit RUWE $<1.4$, the astrometric fit for each star deviates substantially from RUWE $=1.0$. With this caveat in mind, we proceed to further characterize the properties of the binary star orbits. \subsection{Constraining the Binary Star Orbit} \label{subsection:binary_alignment} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{gamma_Q6.pdf} \caption{Geometry of the Qatar-6 AB stellar system. We orient our schematic with north upwards and east to the left. The position vector between the two stars (black) and the relative proper motion vector (purple), scaled to demonstrate the geometry of the system, are nearly linear within the sky plane.
Qatar-6 B is moving in a sky-projected direction $\gamma=179.11\ensuremath{\,^{\circ}}$ away from the primary, Qatar-6 A.} \label{fig:gamma_Q6} \end{figure} To examine the relative orbital configuration of the Qatar-6 AB binary star system, we first measured the angle $\gamma$ between the position vector connecting the two binary star components and the relative proper motion vector \citep{tokovinin1998distribution, tokovinin2015eccentricity}. We adopted the convention that $\gamma=180\ensuremath{\,^{\circ}}$ when the secondary star is moving directly away from the primary, whereas $\gamma=0\ensuremath{\,^{\circ}}$ when the secondary star is moving directly toward the primary. We obtained $\gamma=179.11\ensuremath{\,^{\circ}}$ for Qatar-6 AB, and the geometry of the system is visualized in Figure \ref{fig:gamma_Q6} for reference. This nearly linear configuration indicates that the position and velocity vectors are well-aligned within the sky plane, suggesting an edge-on orbit for the binary system. Next, we further constrained the highest-likelihood binary orbits for the Qatar-6 AB system using the \texttt{lofti\_gaia} Python package \citep{pearce2020orbital}. The LOFTI (Linear OFTI) algorithm implemented by \texttt{lofti\_gaia} was designed to constrain the orbital properties of binary star systems based on the linear sky-plane velocity vector of each star measured by the \textit{Gaia} mission. LOFTI builds upon the Orbits For The Impatient (OFTI) Bayesian rejection sampling method \citep{blunt2017orbits} for orbit fitting to short orbital arcs. We ran LOFTI up to 100,000 accepted orbits using astrometric constraints provided by \textit{Gaia} DR3. Stellar masses $M_{A}=0.829^{+0.019}_{-0.022} M_{\Sun}$ (this work) and $M_{B}=0.244^{+0.013}_{-0.020}M_{\Sun}$ \citep{mugrauer2019search} were adopted for Qatar-6 A and Qatar-6 B, respectively.
Because LOFTI requires a symmetric mass uncertainty, we used the larger uncertainties $\sigma_{M_{A}}= 0.022M_{\Sun}$ for Qatar-6 A and $\sigma_{M_{B}}=0.020M_{\Sun}$ for Qatar-6 B. A subsample of 1,000 accepted orbits is shown in Figure \ref{fig:sample_orbits}, demonstrating a clear tendency toward aligned orbits in agreement with our $\gamma$ analysis. The posteriors of our orbit fit are provided in Figure \ref{fig:histograms}. We find that the current set of \textit{Gaia} DR3 astrometry is unable to provide a strong constraint on the system's eccentricity. There is also a degeneracy between position angles $PA=180\ensuremath{\,^{\circ}}$ and $PA=360\ensuremath{\,^{\circ}}$ due to the relatively small, $2.2\sigma$ difference in parallaxes between the two components, which produces a corresponding degeneracy in the argument of periapsis $\omega$. Regardless, we find a strong preference for an edge-on binary orbit, with inclination $i_{B}=90.17^{+1.07}_{-1.06}\ensuremath{\,^{\circ}}$. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{orbit_samples.pdf} \caption{Selection of 1,000 accepted orbits from the posterior of LOFTI orbit fits to the Qatar-6 AB stellar system. All accepted orbits have nearly edge-on inclinations, with $i_{B}=90.17^{+1.07}_{-1.06}\ensuremath{\,^{\circ}}$. Here, the RA and Dec of the primary star have been centered at (0,0).} \label{fig:sample_orbits} \end{figure} \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{histograms_qatar-6.pdf} \caption{Posteriors from LOFTI fits to the Qatar-6 AB binary star system. The inclination ($i$) and longitude of ascending node ($\Omega$) for the binary system are well-constrained by \textit{Gaia} DR3 astrometry. 
The median and 68\% minimum credible interval for each parameter are shown in gray and provided along the top of each panel.} \label{fig:histograms} \end{figure*} Ignoring selection effects \citep[see e.g.,][]{el2018imprints, ferrer2021biases}, the probability distribution function for inclinations $i_B$ drawn from an isotropically distributed set of orbits is proportional to $\sin(i_B)$ (equivalently, uniform in $\cos(i_B)$), where $i_B=90\ensuremath{\,^{\circ}}$ is defined as an edge-on orbit. In the case that stellar binary orbits are randomly oriented, there would therefore be only a $\sim 2 \%$ chance of finding $i_B$ within $1.2\ensuremath{\,^{\circ}}$ of edge-on -- that is, a $\sim 2 \%$ chance that the observed alignment is a coincidence among a randomly distributed set of orbits. Selection biases favor relatively edge-on orbits, such that even an isotropically distributed set of orbits (with density proportional to $\sin(i_B)$) should include an overdensity toward $i_B\sim90\ensuremath{\,^{\circ}}$. Furthermore, orbit fitting with little to no orbital coverage suffers from known degeneracies between inclination and eccentricity \citep{ferrer2021biases}. To examine the robustness of our edge-on orbit fit, we produced a comparison sample of ten systems with similar binary separation, parallax, and magnitude properties to that of Qatar-6 AB. This sample includes the ten systems within the \citealt{el2021million} catalogue with the most similar properties to that of Qatar-6 AB based on the metric adopted in \citealt{christian2022possible}. Masses were extracted using the \texttt{isoclassify} Python package in the same configuration described in Section \ref{section:stellar_parameters}, but with an uncertainty floor of 0.5 mag to facilitate convergence. We integrated each comparison system to 1,000 accepted orbits using LOFTI, with results that are shown in Figure \ref{fig:comparison_sample}.
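As a quick parenthetical check, the $\sim2\%$ isotropic chance-alignment rate quoted earlier in this subsection follows directly from integrating the $\sin(i_B)$ density:

```python
import math

def prob_near_edge_on(delta_deg):
    """Probability that an isotropically oriented orbit has an inclination
    within delta_deg of 90 degrees, using p(i) = sin(i)/2 on [0, 180] deg
    (equivalently, cos(i) uniform on [-1, 1])."""
    lo = math.radians(90.0 - delta_deg)
    hi = math.radians(90.0 + delta_deg)
    return 0.5 * (math.cos(lo) - math.cos(hi))

p = prob_near_edge_on(1.2)  # ~0.021, i.e. the ~2% chance-alignment rate
```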
This exercise reaffirms the edge-on nature of Qatar-6 AB, which has an inclination distribution that is restricted to a much narrower range of edge-on values than the comparison sample. \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{comparison_sample.pdf} \caption{Gallery of 10 comparison systems with similar properties to Qatar-6, together with the normalized density of accepted orbital inclinations across all samples (bottom left). 1,000 accepted orbits, fit to \textit{Gaia} DR3 astrometric constraints, are shown for each system. While we do find a tendency toward edge-on systems, the distribution of accepted orbits in our comparison sample is much broader than that of the Qatar-6 AB binary system.} \label{fig:comparison_sample} \end{figure*} Recent population studies have demonstrated that hosts of transiting planets tend to have stellar binary orbits closer to an edge-on configuration than field binary star systems with no known planets, suggesting a systematic trend toward alignment between the binary plane and the planetary orbits \citep{christian2022possible, dupuy2022orbital}. Our results are consistent with this finding: that is, we show that the Qatar-6 AB binary system, which includes one confirmed transiting planet, lies in a precisely edge-on configuration. Combined with previous findings, this suggests that the planet's orbital plane may be closely aligned with the stellar binary orbital plane (a configuration that we refer to as ``orbit-orbit alignment''). However, we emphasize that the only well-constrained angle in our binary system is inclination. That is, the sky-plane direction in which the planet is transiting is not constrained relative to the plane of the stellar orbit. While the line-of-sight orientation of the system is consistent with alignment, it is not currently possible to measure the sky-plane angle between the planetary orbit and the stellar orbit in most planetary systems. 
Additional population-wide studies examining the prevalence of orbit-orbit alignment in hot and warm Jupiter systems may further inform the role of stellar multiplicity in the evolution of planetary systems. \section{Dynamical Timescales} \label{section:dynamical_timescales} The timescales of relevant dynamical mechanisms can be compared with the system age to better constrain the past evolution of a given planetary system. In this section, we examine several important timescales at play in the Qatar-6 system: the tidal alignment timescale (Section \ref{subsection:tidal_align_timescale}), the tidal circularization timescale (Section \ref{subsection:tidal_circ_timescale}), the Kozai-Lidov timescale (Section \ref{subsection:kozai_timescale}), the apsidal precession timescale from general relativity (Section \ref{subsection:apsidal_timescale}), and the timescales for changes to each orbital element under the influence of the Qatar-6 B secondary star (Section \ref{subsection:orb_elem_timescale}). We then discuss the joint implications of these timescales in Section \ref{subsection:timescale_implications}. \subsection{Tidal Alignment Timescale} \label{subsection:tidal_align_timescale} \subsubsection{Angular Momentum of the System} To evaluate the feasibility of tidal alignment within the system, we first compared the orbital angular momentum of Qatar-6 A b with that of the convective layer of Qatar-6 A. In the case that tidal alignment occurs prior to the completion of the orbital decay process (and the subsequent disruption of the Jovian planet), the companion planet's orbit should host more angular momentum than the host star's convective layer. We calculated the angular momentum $L_{CZ}$ of the star's convective zone using the relation \begin{equation} L_{CZ} = \omega\int l^2 dm \label{eq:L_cz} \end{equation} for distance $l$ from each mass element $dm$ to the spin axis, with $l=r\sin\phi$ and \begin{equation} dm = \rho dV = \rho r^2 \sin\phi dr d\phi d\theta. 
\end{equation} Here, $\omega=v/R_*$ is the angular velocity of the convective layer's rotation, where we assume no shear between layers. We integrated the density $\rho$ over each volume element $dV$ of the convective layer, using spherical coordinates to integrate from the radius of the convective zone boundary ($r=R_{CZ}$) to the full radius of the star ($r=R_*$). The radius of the convective zone boundary was set as $R_{CZ}/R_*=0.69$ based on the models of \citealt{van2012sensitivity} for the mass and age of Qatar-6 A. The density of the convective layer was approximated as uniform, with a total mass $M_{CZ}=10^{-1.35}M_{\odot}$ drawn from the stellar interior models of \citealt{pinsonneault2001mass}. Comparing this with the planet's orbital angular momentum, which is given by \begin{equation} L_{p, orb}=M_b v_b r_b \end{equation} at a given orbital distance $r_b$ and momentary velocity $v_b$, we find that $L_{p, orb}/L_{CZ}\sim15$. This excess of angular momentum in the planet's orbit indicates that tidal alignment could feasibly occur within this system. \subsubsection{Equilibrium Tides} Qatar-6 A is a cool star ($T_{\rm eff}=5063\pm42$ K) that lies well below the Kraft break -- a rotational discontinuity at roughly $T_{\rm eff}\sim6100$ K, below which stars typically have convective envelopes \citep{kraft1967studies}. The tidal alignment timescale $\tau_{CE}$ for stars with convective envelopes can be approximated as \begin{equation} \tau_{CE} = \frac{10^{10} \rm{yr}}{(M_b/M_*)^2}\Big(\frac{a_b/R_*}{40}\Big)^{6}, \label{eq:tau} \end{equation} where $M_b/M_*$ is the planet-to-star mass ratio \citep{zahn1977tidal, albrecht2012obliquities}. This scaling is calibrated based on the observed synchronization of stellar binaries under the framework of equilibrium tides. Consequently, it includes an implicit assumption that the tidal realignment timescale for planetary systems scales similarly to that of stellar systems.
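Equation \ref{eq:tau} can be evaluated directly. The inputs below are representative assumptions (a $\sim0.67\,M_{\rm J}$ planet around a $\sim0.82\,M_{\odot}$ star with $a_b/R_*\approx12.6$), not the fitted values from Table \ref{table:results}:

```python
def tau_ce_years(mass_ratio, a_over_rstar):
    """Equilibrium-tide alignment timescale for convective-envelope stars
    (Zahn 1977; Albrecht et al. 2012):
    tau_CE = 1e10 yr / (Mb/M*)^2 * ((a/R*)/40)^6."""
    return 1e10 / mass_ratio**2 * (a_over_rstar / 40.0)**6

# Representative (assumed) Qatar-6 A b values: Mb/M* ~ 7.8e-4, a/R* ~ 12.6
tau = tau_ce_years(7.8e-4, 12.6)  # ~1.6e13 yr, far longer than a Hubble time
```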
Based on Equation \ref{eq:tau}, Qatar-6 A b has $\tau_{CE}\sim 1.6\times10^{13}$ years, which is longer than the age of the Universe. Therefore, under the assumption that equilibrium tides well approximate the dynamical behavior of this system, the companion planet's spin-orbit angle likely has not changed significantly since its initial formation. \subsubsection{Dynamical Tides} \label{subsubsection:dynamical_tides} Alternatively, the system may have been aligned through the dissipation of inertial waves, which are driven by the Coriolis force in a rotating star. The tidal disturbances produced through this mechanism are collectively known as ``dynamical tides''. We focus on a specific mode of the dynamical tide -- known as the ``obliquity tide'', with $m=\pm1$ and $m'=0$ -- that damps only the stellar obliquity without altering the orbital semimajor axis of the companion planet \citep{lai2012tidal}. In this case, the obliquity of a planetary system evolves as \begin{equation} \begin{split} \frac{d\psi}{dt}\bigg\rvert_{10} = -\frac{3}{4}\frac{k_{2}}{Q_{10}}\Big(\frac{M_b}{M_*}\Big)\Big(\frac{R_*}{a_b}\Big)^5\Omega_K\sin(\psi) \cos^2(\psi) \\ \times \Big[1 + \frac{L_{p, orb}}{L_{*, spin}}\cos(\psi)\Big], \end{split} \end{equation} where $L_{*, spin}$ is the stellar spin angular momentum, $\Omega_*$ is the star's spin frequency, $k_{2}$ is the planet's Love number, $Q_{10}$ is the tidal quality factor for the obliquity tide, and $\Omega_K$ is the Keplerian orbital angular frequency \citep{lai2012tidal, spalding2022tidal}. We use $L_{*, spin}=L_{CZ}$ to consider the most conservative case in which only the convective zone realigns with the companion orbit. If the convective zone is not decoupled from the stellar core, then the timescale for realignment would be longer due to the larger value of $L_{*, spin}$. 
We numerically integrate this expression from $t=30$ Myr (a rough starting age for a K-type star to enter the main sequence) to $t=1$ Gyr (the age of the system) to determine the maximum initial $\psi$ such that the system would today be observed to be aligned at $\psi=1\ensuremath{\,^{\circ}}$. The ratio $k_{2}/Q_{10}$ remains poorly constrained and provides the limiting uncertainty within our model. We consider a range of $k_{2}/Q_{10}$ values in Figure \ref{fig:dynamical_tides_constraint}, where we include both the case in which Qatar-6 A b has a true obliquity $\psi=22\ensuremath{\,^{\circ}}$ and the case in which the true obliquity is within $1\ensuremath{\,^{\circ}}$ of alignment ($\psi=1\ensuremath{\,^{\circ}}$). As demonstrated in Figure \ref{fig:dynamical_tides_constraint}, a relatively large value of $k_{2}/Q_{10}\gtrsim 10^{-4}$ would be required to push Qatar-6 A b from a highly misaligned state to its currently observed spin-orbit angle over the system lifetime. Empirical measurements of tidal dissipation in hot Jupiter hosts typically range from $k_2/Q=10^{-5}$ to $10^{-7}$ \citep{penev2018empirical}, with larger values for wider-orbiting planets. The solar system's much wider-orbiting Jupiter, for comparison, has been measured with $k_2/Q=(1.102\pm0.203)\times10^{-5}$ \citep{lainey2009strong}. While the effective tidal quality factor $Q$ differs for each mode of tidal dissipation, $Q$ values for hot and warm Jupiters are typically expected to be high, leading to correspondingly low $k_2/Q$ values. As a result, unless tidal dissipation in warm Jupiter systems is more efficient than previous estimates have suggested, it is unlikely that Qatar-6 A b has been realigned from a large previous misalignment over the post-disk system lifetime. 
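The obliquity-tide evolution described above can be sketched with a simple forward-Euler integration. All parameter values here (mass ratio, $R_*/a_b$, orbital period, and the angular momentum ratio $L_{p, orb}/L_{*, spin}\sim15$) are assumed, representative numbers rather than the fitted solution:

```python
import math

def integrate_obliquity(psi0_deg, k2_over_q10, t_gyr=0.97,
                        mass_ratio=7.8e-4, rstar_over_a=1.0 / 12.6,
                        period_days=3.5, l_ratio=15.0, n_steps=20000):
    """Forward-Euler integration of the obliquity-tide damping rate
    (Lai 2012): dpsi/dt = -(3/4)(k2/Q10)(Mb/M*)(R*/a)^5 Omega_K
                          * sin(psi) cos^2(psi) [1 + (L_orb/L_spin) cos(psi)].
    Returns the obliquity (deg) after t_gyr Gyr of evolution."""
    omega_k = 2.0 * math.pi / period_days * 365.25          # rad/yr
    pref = 0.75 * k2_over_q10 * mass_ratio * rstar_over_a**5 * omega_k
    psi = math.radians(psi0_deg)
    dt = t_gyr * 1e9 / n_steps                              # yr
    for _ in range(n_steps):
        dpsi = -pref * math.sin(psi) * math.cos(psi)**2 \
               * (1.0 + l_ratio * math.cos(psi))
        psi += dpsi * dt
    return math.degrees(psi)

# Damping of an initially misaligned orbit over ~1 Gyr for k2/Q10 = 1e-4
psi_final = integrate_obliquity(40.0, 1e-4)
```

Scanning over initial obliquities and $k_2/Q_{10}$ values with this kind of integration is what produces the constraint curves in Figure \ref{fig:dynamical_tides_constraint}.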
\begin{figure} \centering \includegraphics[width=0.46\textwidth]{dynamical_tides.pdf} \caption{Maximum starting stellar obliquity at the time of protoplanetary disk dispersal, for a range of $k_{2}/Q_{10}$ values under the framework of dynamical tides. We consider the cases in which the true obliquity $\psi$ is either within $1\ensuremath{\,^{\circ}}$ of alignment ($\psi=1\ensuremath{\,^{\circ}}$) or is set to $\psi=22\ensuremath{\,^{\circ}}$ -- the central value determined in Section \ref{section:spinorbitmodel}.} \label{fig:dynamical_tides_constraint} \end{figure} \subsection{Tidal Circularization Timescale} \label{subsection:tidal_circ_timescale} Our model demonstrates that Qatar-6 A b may lie on a slightly eccentric orbit, with $e_b=0.051^{+0.032}_{-0.030}$ (Section \ref{section:spinorbitmodel}). Over time, the orbit will evolve along a path of constant orbital angular momentum characterized by \begin{equation} a_{b, \mathrm{final}} = a_b (1 - e_b^2), \end{equation} towards $e\rightarrow0$ as energy is removed from the system through tidal dissipation within the planet. As a result, the orbit will ultimately settle to a slightly smaller separation if it currently has a true nonzero eccentricity. We follow the formulation of \citealt{rice2022origins} to calculate the timescale $\tau_{\rm circ}$ for orbital circularization, with methods summarized here for convenience. $\tau_{\rm circ}$ can be characterized as \begin{equation} \tau_{\rm circ}\sim e_b/(de_b/dt), \end{equation} where \begin{equation} \frac{de_b}{dt} = \frac{dE}{dt}\frac{a_b(1-e_b^2)}{GM_* M_b e_b}\, \label{eq:dE_dt} \end{equation} and \begin{equation} \frac{dE}{dt} = \frac{21k_2 GM_*^2 \Omega R_b^5}{2Qa_b^6}\zeta(e_b) \end{equation} for an incompressible, synchronously rotating planet. 
Here, $G$ is the gravitational constant, $E$ is the orbital energy of the planet, $Q$ is the planet's effective tidal dissipation parameter, $k_2$ is the planet's Love number, and $\Omega$ is the pseudosynchronous rotation rate, given by \begin{equation} \Omega = \frac{1 + \frac{15}{2}e_b^2 + \frac{45}{8}e_b^4 + \frac{5}{16}e_b^6}{(1 + 3e_b^2 + \frac{3}{8}e_b^4)(1 - e_b^2)^{3/2}}n_b \end{equation} for planetary mean motion $n_b$. The corrective factor $\zeta(e_b)$ in Equation \ref{eq:dE_dt} was derived in \citealt{wisdom2008tidal} and is defined as \begin{equation} \zeta(e_b) = \frac{2}{7}\Big[\frac{f_0(e_b)}{\beta^{15}} - \frac{2f_1(e_b)}{\beta^{12}} + \frac{f_2(e_b)}{\beta^9}\Big], \end{equation} where \begin{equation} f_0(e_b) = 1 + \frac{31}{2}e_b^2 + \frac{255}{8}e_b^4 + \frac{185}{16}e_b^6 + \frac{25}{64}e_b^8 \end{equation} \begin{equation} f_1(e_b) = 1 + \frac{15}{2}e_b^2 + \frac{45}{8}e_b^4 + \frac{5}{16}e_b^6 \end{equation} \begin{equation} f_2(e_b) = 1 + 3e_b^2 + \frac{3}{8}e_b^4 \end{equation} \begin{equation} \beta = \sqrt{1-e_b^2}. \end{equation} Considering typical expected ranges of $k_2/Q\sim10^{-5}$ to $10^{-7}$ for close-in giant planets \citep{penev2018empirical}, we obtain $\tau_{\rm circ}\sim10^{7}-10^{9}$ years. Based on these relatively short timescales, we cannot exclude the possibility that Qatar-6 A b, with an age $\tau=(1.0\pm0.5)\times10^9$ yr \citep{alsubai2018qatar}, began its dynamical evolution at a higher eccentricity that has been damped over time. However, because the planet's orbit is consistent with $e_b=0$ within $2\sigma$, we find no strong evidence requiring that the planet's orbit must have had a higher eccentricity in the past.
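The circularization timescale above can be reproduced with a short script. The planetary and stellar parameters below are representative assumptions (e.g., $R_b\sim1.06\,R_{\rm J}$, $a_b\sim0.042$ au), not the fitted values:

```python
import math

# Physical constants (SI)
G = 6.674e-11
M_SUN, M_JUP = 1.989e30, 1.898e27
R_JUP, AU, YR = 7.149e7, 1.496e11, 3.156e7

def tau_circ_years(e, a_au, m_star, m_p, r_p, k2_over_q):
    """Circularization timescale tau ~ e / (de/dt) following the Wisdom (2008)
    formulation quoted in the text. Inputs: eccentricity, semimajor axis in au,
    stellar mass in Msun, planet mass in Mjup, planet radius in Rjup, k2/Q."""
    a = a_au * AU
    ms, mp, rp = m_star * M_SUN, m_p * M_JUP, r_p * R_JUP
    n = math.sqrt(G * (ms + mp) / a**3)            # mean motion
    beta = math.sqrt(1.0 - e**2)
    f0 = 1 + 31/2*e**2 + 255/8*e**4 + 185/16*e**6 + 25/64*e**8
    f1 = 1 + 15/2*e**2 + 45/8*e**4 + 5/16*e**6
    f2 = 1 + 3*e**2 + 3/8*e**4
    zeta = (2/7) * (f0/beta**15 - 2*f1/beta**12 + f2/beta**9)
    omega = f1 / (f2 * beta**3) * n                # pseudosynchronous rotation
    dE_dt = 21/2 * k2_over_q * G * ms**2 * omega * rp**5 / a**6 * zeta
    de_dt = dE_dt * a * (1 - e**2) / (G * ms * mp * e)
    return e / de_dt / YR

# Assumed Qatar-6 A b-like parameters with k2/Q = 1e-5
tau = tau_circ_years(0.051, 0.042, 0.82, 0.67, 1.06, 1e-5)  # a few 1e7 yr
```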
\subsection{Kozai-Lidov Timescale} \label{subsection:kozai_timescale} While, in the line-of-sight direction, the orbit of Qatar-6 A b appears to be aligned with the Qatar-6 AB binary system orbit ($i_b\sim i_{B}\sim90\ensuremath{\,^{\circ}}$), we cannot rule out the possibility that its orbit may be misaligned in the sky-plane and therefore inclined relative to the Qatar-6 AB binary system. If this is the case, then Qatar-6 A b could be located along a low-eccentricity trough of a Kozai-Lidov cycle, where the z-component of the angular momentum vector \begin{equation} L_z = \sqrt{1-e_b^2}\cos i_{tot} \end{equation} is conserved at the quadrupole level. Here, $i_{tot}$ refers to the true inclination between the Qatar-6 A b warm Jupiter orbit and the Qatar-6 AB stellar binary orbit. At the quadrupole level of approximation, the Kozai-Lidov timescale in a hierarchical 3-body system is given by \begin{equation} \tau_{KL} = \frac{16}{30\pi}\frac{P_2^2}{P_1}(1 - e_2^2)^{3/2} \Big(\frac{m_1 + m_2 + m_3}{m_3}\Big), \end{equation} where subscripts 1 and 2 refer to the inner and outer orbits, respectively \citep{naoz2016eccentric}. In our case, $m_1 \ll m_2, m_3$ such that we can approximate $(m_1 + m_2 + m_3) \sim (m_2 + m_3)$. Simplifying our general expression and reconfiguring it for our system, we obtain \begin{equation} \tau_{KL} \approx \frac{16}{30\pi}\frac{P_{B}^2}{P_b}(1 - e_{B}^2)^{3/2} \Big(\frac{M_{A} + M_{B}}{M_{B}}\Big). \end{equation} The primary star Qatar-6 A is referred to here with the subscript $A$, whereas the orbital properties of the stellar binary companion Qatar-6 B are denoted by the subscript $B$. From our LOFTI orbital fitting results, we adopt a semimajor axis of $a_{B}=5.36\arcsec$, which translates to $a_{B}=541$ au at a distance of 100.95 pc measured through the parallax reported in \textit{Gaia} DR3.
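The simplified $\tau_{KL}$ expression can be evaluated directly, with Kepler's third law supplying $P_B$ from $a_B$ and the total mass. The planet's $\sim3.5$-day orbital period is assumed here for illustration rather than taken from the fitted solution:

```python
import math

def tau_kl_years(a_binary_au, m_total, m_perturber, p_planet_days, e_binary):
    """Quadrupole-level Kozai-Lidov timescale (Naoz 2016):
    tau_KL = (16/30pi)(P_B^2/P_b)(1 - e_B^2)^(3/2)(M_A + M_B)/M_B.
    Masses in solar masses; Kepler's third law gives P_B in years."""
    p_binary_yr = math.sqrt(a_binary_au**3 / m_total)
    p_planet_yr = p_planet_days / 365.25
    return (16.0 / (30.0 * math.pi)) * p_binary_yr**2 / p_planet_yr \
        * (1.0 - e_binary**2)**1.5 * m_total / m_perturber

# a_B = 541 au, M_A + M_B ~ 1.07 Msun, M_B = 0.244 Msun
tau_circular = tau_kl_years(541.0, 1.073, 0.244, 3.5, 0.0)   # ~1e10 yr
tau_eccentric = tau_kl_years(541.0, 1.073, 0.244, 3.5, 0.9)  # ~1e9 yr
```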
Given our poor constraint on the orbital eccentricity of the stellar binary system, we approximate this timescale for a range of possible eccentricities from $e=0$ to $e=0.9$. We find a timescale ranging from $\tau_{KL}\sim10^{9}$ yr for $e=0.9$ to $\tau_{KL}\sim10^{10}$ yr for $e=0$. These timescales are comparable to the estimated age of the system ($(1.0\pm0.5) \times 10^9$ yr \citep{alsubai2018qatar}), indicating that Kozai-Lidov migration likely has not played a major role in the evolution of this system if Qatar-6 A b formed near its current location. The same planet would have a much shorter Kozai-Lidov timescale if it instead formed on a wider orbit (ranging from $\tau_{KL}\sim10^{7}$ yr for $e=0.9$ to $\tau_{KL}\sim10^{8}$ yr for $e=0$ if the planet began with an orbital period $P_b=1$ year), such that migration through Kozai-Lidov orbital evolution may have occurred in the past. \subsection{Apsidal Precession Timescale} \label{subsection:apsidal_timescale} Kozai-Lidov oscillations can be suppressed by additional perturbations that produce apsidal precession at a rate more rapid than that of the Kozai-Lidov mechanism, reducing the orbit-averaged torque induced by the companion star \citep{holman1997chaotic, wu2003planet}. We consider, in particular, the timescale $\tau_{GR}$ for apsidal precession due to general relativity \begin{equation} \tau_{GR} = \frac{2\pi c^2 (1 - e_b^2) a_b^{5/2}}{3 (GM_{*})^{3/2}}, \end{equation} which is relatively short for short-period giant planets. In this expression, $c$ is the speed of light. For Qatar-6 A b, the timescale for precession induced by general relativistic effects is only $\tau_{GR}=2\times10^4$ years -- much shorter than the system's current Kozai-Lidov timescale $\tau_{KL}\sim10^9 - 10^{10}$ yr (Section \ref{subsection:kozai_timescale}). We can, therefore, rule out the possibility that the Qatar-6 system is currently undergoing Kozai-Lidov oscillations. 
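The short GR precession timescale quoted above can be checked numerically; the system parameters below are representative assumptions:

```python
import math

# Physical constants (SI)
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m s^-1
M_SUN, AU, YR = 1.989e30, 1.496e11, 3.156e7

def tau_gr_years(a_au, e, m_star_msun):
    """General-relativistic apsidal precession period:
    tau_GR = 2 pi c^2 (1 - e^2) a^(5/2) / (3 (G M*)^(3/2))."""
    a = a_au * AU
    gm = G * m_star_msun * M_SUN
    return 2.0 * math.pi * C**2 * (1.0 - e**2) * a**2.5 / (3.0 * gm**1.5) / YR

# Assumed Qatar-6 A b-like values: a = 0.042 au, e = 0.051, M* = 0.83 Msun
tau = tau_gr_years(0.042, 0.051, 0.83)  # ~2e4 yr
```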
A past mutual inclination of $i_{\rm tot}\geq39.2\ensuremath{\,^{\circ}}$ would have been required between the planetary orbit and the binary star orbit to initiate Kozai-Lidov oscillations. In the case of the Qatar-6 system, a significant mutual inclination could remain undetected within the sky-plane direction if Kozai-Lidov oscillations occurred in the past. Alternatively, the system may have settled at a low final mutual inclination after undergoing Kozai-Lidov cycles (as in, e.g., some of the systems simulated by \citealt{naoz2012formation}). \subsection{Precession Timescales} \label{subsection:orb_elem_timescale} The stellar binary companion, Qatar-6 B, provides a small disturbing force \begin{equation} dF = \bar{R}\hat{r}+\bar{T}\hat{\theta}+\bar{N}\hat{z} \end{equation} that perturbs the orbit of Qatar-6 A b, altering the observed orbital elements of the system. Here, $\bar{R}$, $\bar{T}$, and $\bar{N}$ are the radial, tangential, and normal components of the force induced by the perturber. In particular, a nonzero mutual inclination between the planet's and the stellar binary's orbits should induce nodal and inclination precession in the orbit of Qatar-6 A b that may manifest as observable transit duration variations (TDVs) in the system. Our best-fitting solution includes a $\sim$5$\ensuremath{\,^{\circ}}$ mutual inclination between the planetary orbit ($i_b=85.95\pm0.24\ensuremath{\,^{\circ}}$) and the stellar binary orbit ($i_B=90.17^{+1.07}_{-1.06}\ensuremath{\,^{\circ}}$). 
The rates of nodal ($d\Omega_b/dt$) and inclination ($di_b/dt$) precession under the influence of a perturbing force $dF$ are given by \citep{murray1999solar} \begin{equation} \frac{di_b}{dt} = \sqrt{\frac{a_b(1-e_b^2)}{G(M_A+M_b)}}\frac{\bar{N}\cos(\omega_b + f_b)}{1 + e_b\cos f_b} \label{eq:Idot} \end{equation} and \begin{equation} \frac{d\Omega_b}{dt} = \sqrt{\frac{a_b(1-e_b^2)}{G(M_A+M_b)}}\frac{\bar{N}\sin(\omega_b+f_b)}{\sin i_b (1 + e_b\cos f_b)}, \label{eq:bigomegadot} \end{equation} where $f_b$ is the true anomaly of the planetary orbit, and both precession timescales are driven by the normal component of the perturbing force. We consider the range $f_b\in (0, 2\pi)$ and find the largest possible nodal and inclination precession rates across this range, adopting a mutual inclination of $5\ensuremath{\,^{\circ}}$ between the planetary orbital plane and the stellar binary orbital plane and using the values derived in this work (Table \ref{table:results}). We obtain precession rates $d\Omega_b/dt=9.6\times10^{-4}$ deg/yr and $di_b/dt=8.5\times10^{-5}$ deg/yr, corresponding to a projected transit duration change of $<1$ minute over the course of a decade. Thus, we do not expect to observe significant transit duration variations caused by the binary perturber Qatar-6 B. \subsection{Implications of Dynamical Timescales} \label{subsection:timescale_implications} Together, the tidal alignment, tidal circularization, Kozai-Lidov, and apsidal precession timescales indicate that the warm Jupiter Qatar-6 A b likely formed quiescently. The system's long tidal alignment timescale suggests that Qatar-6 A b formed within a protoplanetary disk that was primordially aligned with the Qatar-6 A host star and that no subsequent dynamical upheaval pushed the system out of alignment. The planet's short tidal circularization timescale prevents us from ruling out an initially eccentric orbit within the plane of that protoplanetary disk.
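The maximization over $f_b$ in Equations \eqref{eq:Idot} and \eqref{eq:bigomegadot} can be sketched with a simple grid scan. This is an illustrative sketch, not the code used in this work: the normal force amplitude $\bar{N}$ and the prefactor $\sqrt{a_b(1-e_b^2)/G(M_A+M_b)}$ are left out, so the function returns only the dimensionless angular factors that are maximized over the true anomaly.

```python
import math

def max_precession_factors(e_b, i_b_deg, omega_b_deg, n_grid=100000):
    """Scan the true anomaly f_b over the open interval (0, 2*pi) and return
    the largest dimensionless angular factors entering the precession rates:
        |cos(omega_b + f_b)| / (1 + e_b cos f_b)            for  di_b/dt
        |sin(omega_b + f_b)| / (sin i_b (1 + e_b cos f_b))  for  dOmega_b/dt
    The full rates are these factors times the orbital prefactor
    sqrt(a_b (1 - e_b^2) / (G (M_A + M_b))) and the normal force N_bar,
    neither of which is modeled here."""
    i_b = math.radians(i_b_deg)
    w_b = math.radians(omega_b_deg)
    best_i, best_node = 0.0, 0.0
    for k in range(1, n_grid):
        f = 2.0 * math.pi * k / n_grid
        denom = 1.0 + e_b * math.cos(f)
        best_i = max(best_i, abs(math.cos(w_b + f)) / denom)
        best_node = max(best_node, abs(math.sin(w_b + f)) / (math.sin(i_b) * denom))
    return best_i, best_node

# Example with i_b from the text and placeholder values of e_b and omega_b:
f_incl, f_node = max_precession_factors(e_b=0.0, i_b_deg=85.95, omega_b_deg=0.0)
```

For a circular orbit viewed edge-on both factors reduce to unity, so the precession rates are then set entirely by the prefactor and $\bar{N}$.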
The planet's current low-eccentricity orbit is consistent with either an initially circular orbit or a previously higher-eccentricity orbit that was not pushed to a high inclination relative to the stellar spin axis. A higher eccentricity could have been previously excited within the plane of the protoplanetary disk through planet-disk interactions \citep{goldreich2003eccentricity}, planet-planet scattering \citep{rasio1996dynamical, chatterjee2008dynamical}, or resonance crossings with a planetary companion \citep{chiang2003}. Lastly, the Kozai-Lidov timescale for the Qatar-6 system is comparable to or longer than the age of the system, depending on the true Qatar-6 AB stellar binary orbital properties. If the planet formed near its currently observed orbit, this indicates that the Kozai-Lidov mechanism has likely not played a major role in the evolutionary past of the Qatar-6 AB system. The possibility that the planet is currently undergoing Kozai-Lidov oscillations can also be ruled out based on the short timescale for general relativistic-induced precession within the system. If Qatar-6 A b instead formed on a wider orbit and migrated inward over time, it is possible that the system may have experienced Kozai-Lidov oscillations in the past. To trigger Kozai-Lidov oscillations, however, the system would have needed a past mutual inclination of $i_{\rm tot}\geq39.2\ensuremath{\,^{\circ}}$ between the planetary orbit and the binary star orbit. The line-of-sight orbit-orbit alignment demonstrated in this work, together with the observed evidence for spin-orbit alignment, suggests a fully quiescent formation mechanism: if the system formed with no large mutual inclinations, the requirement of $i_{\rm tot}\geq39.2\ensuremath{\,^{\circ}}$ to initialize Kozai-Lidov oscillations would have never been met, even in the case that Qatar-6 A b began as a much wider-orbiting planet that migrated inwards over time. 
We conclude that Qatar-6 A b most likely reached its current orbit quiescently, either \textit{in situ} or through disk migration. \section{Potential Causes of Orbit-Orbit Alignment} \label{section:formation} \subsection{Binary Formation Scenarios} There are a few potential avenues through which a binary star system could form, some of which are more or less likely to produce the observed line-of-sight orbit-orbit alignment. Three key mechanisms for binary star formation are dynamical capture, disk fragmentation, and turbulent fragmentation. In this section, we examine the likelihood of each of these scenarios in the context of the Qatar-6 AB binary system. In the dynamical capture scenario, the binary companion would have been captured by the gravitational potential of the primary star and its protoplanetary disk \citep[][]{Tokovinin2017WideBinaryCapture}. This formation mechanism produces a relatively high rate of highly misaligned systems, with the orientation of the final system set by the impact parameter of nearby passing stars. Consequently, it is unlikely that the observed line-of-sight orbit-orbit alignment would arise directly from dynamical capture. Furthermore, dynamical capture tends to produce much wider binaries ($a>10^4$ au) and requires a series of specific conditions that must be satisfied, including (1) the presence of a third companion or a highly dissipative disk/envelope system to remove kinetic energy from the binary such that the binding energy of the system becomes negative; (2) a nearby star that falls into a specific range of appropriate relative velocities; and (3) an impact angle that is conducive to capture. Because of these conditions, dynamical capture is thought to be relatively rare in systems with a sub-solar-mass primary star \citep{clarke1991star, heller1995encounters, moeckel2007capture}. Therefore, it is plausible but unlikely that the Qatar-6 AB system was produced by the capture of a companion star. 
Alternatively, Qatar-6 AB may have formed through disk fragmentation, which can naturally produce binary star systems with primordially aligned protoplanetary disks. In this framework, a massive, gravitationally unstable circumprimary disk produces a stellar companion that inherits its angular momentum vector \citep[][]{Addams1989diskinst, BonnellBate1994binaryform}. However, disk fragmentation is expected to produce relatively close-in companions \citep[$a \lesssim 200$ au; e.g.,][]{Krumholz2007,Tobin2016} with separation bounded by the extent of the circumprimary disk. A distant stellar companion with sky-projected separation $s=482$ au would, accordingly, be unexpected for the relatively low-mass ($M_A=0.829M_{\odot}$) Qatar-6 A host star within the disk fragmentation framework. A third possibility is that the binary system formed through turbulent fragmentation \citep[][]{Offner2010, Offner2016, Lee2017}, where a gravitationally unstable over-density already in the contraction phase further fragments into two separate stars. While the turbulent environment can imprint potentially misaligned angular momentum vectors onto the stars, the overall angular momentum of the contracting over-density is expected to preferentially produce relatively aligned systems with $i_{tot}\lesssim 45\ensuremath{\,^{\circ}}$ \citep[][]{Bate2018}. In this case, the Qatar-6 A b planet could have quiescently formed within the relatively aligned protoplanetary disk, producing either a primordial orbit-orbit alignment or a relatively small primordial misalignment. \subsection{Dynamical Orbit-Orbit Alignment During Binary-Driven Disk Precession} If a primordial misalignment existed between the circumprimary disk and the companion star's orbit, the binary potential would drive disk precession about the binary orbit normal \citep[][]{Bate2000, batygin2012primordial, Zanazzi2018}. 
The gas-rich disk can dissipate the energy available from precession into heat, and in the process the disk can be pushed toward alignment with the binary orbit \citep[][]{papaloizou1995dynamics, Bate2000, lubow2000tilting}. As the associated timescale of this mechanism is well within the typical lifetime of protoplanetary disks for binary separations of order $500$ au \citep[][]{christian2022possible}, dynamical alignment via energy dissipation during binary-driven disk precession could robustly explain the sky-projected inclination match between the binary orbit and transiting warm Jupiter orbit in the Qatar-6 system. On the other hand, binary-driven disk precession has been invoked numerous times to explain observed spin-orbit misalignments \citep[][]{batygin2012primordial}. Indeed, the alignment scenario described above would, naively, leave the angular momentum vector of the star unchanged, and thus, given the long tidal realignment timescale in Equation \eqref{eq:tau}, produce a spin-orbit misalignment of order the initial binary-disk misalignment. Our constraint on the true spin-orbit angle $\psi=21.82^{+8.86}_{-18.36}\ensuremath{\,^{\circ}}$ leaves room for a non-negligible spin-orbit misalignment that could have resulted from a dynamical forcing of an initially misaligned protoplanetary disk into alignment with the binary orbit. However, a true joint spin-orbit and orbit-orbit alignment could naturally arise from dynamical orbit-orbit alignment paired with additional gravitational and magnetic processes that occur over the disk lifetime. During the embedded phase of star formation, accretion and gravitational star-disk coupling can efficiently transfer angular momentum between the star and the disk, suppressing the production of large spin-orbit misalignments \citep{spalding2014alignment}. 
Furthermore, strong magnetic torques in young systems with relatively low-mass stars ($M_*\lesssim1.2M_{\odot}$) can push systems with primordial spin-orbit misalignments back to alignment within the protoplanetary disk's lifetime \citep{spalding2015magnetic}. As a result, dynamical orbit-orbit alignment during the protoplanetary disk phase does not necessarily preclude spin-orbit alignment. \section{Conclusions} \label{section:conclusions} In this work, we have demonstrated that all current lines of evidence are consistent with a quiescent formation mechanism for the Qatar-6 system, which includes two stars and one transiting warm Jupiter. Our results are summarized by two key points: \begin{itemize} \item The warm Jupiter Qatar-6 A b is consistent with spin-orbit alignment along the host star's equator, with a projected spin-orbit angle $\lambda=0.1\pm2.6\ensuremath{\,^{\circ}}$ and a true spin-orbit angle $\psi=21.82^{+8.86}_{-18.36}\ensuremath{\,^{\circ}}$. \item Both the planet's orbit ($i_b=85.95\pm0.24\ensuremath{\,^{\circ}}$) and the stellar binary orbit ($i_{B}=90.17^{+1.07}_{-1.06}\ensuremath{\,^{\circ}}$) are edge-on, such that all three bodies are consistent with alignment in the line-of-sight direction. \end{itemize} We have precisely measured the spin-orbit alignment of Qatar-6 A b within the 2D sky plane, and we find that the planet's 3D spin-orbit angle is also consistent with alignment (albeit with larger uncertainties). Interestingly, our results further suggest a joint orbit-orbit alignment across the three-body system: both the transiting planet and the stellar binary lie in an edge-on configuration. Such a 3D alignment may have been produced either primordially or through dynamical alignment of the protoplanetary disk and quiescent formation of the warm Jupiter within that disk. The full 3D alignment of the system cannot be confirmed due to the unconstrained transit direction within the sky plane.
This system offers a detailed case study with multiple lines of evidence pointing toward a likely quiescent formation mechanism. The gaps in data that pervade studies of individual systems, such as the one presented here, may be remedied by applying statistical arguments to a wider sample of planetary systems. Future studies examining population-wide trends in the spin-orbit and orbit-orbit orientations of binary systems will offer further insights into the key dynamical mechanisms that dominate the evolution of exoplanet systems. \section{Acknowledgments} \label{section:acknowledgments} We thank the anonymous referee for their helpful comments that have improved this manuscript. We also thank Andrew Vanderburg and Sam Christian for helpful discussions, and Sam Yee for providing support for our Keck/HIRES observations. M.R. thanks the Heising-Simons Foundation for their generous support. This work is supported by the Astronomical Big Data Joint Research Center, co-founded by National Astronomical Observatories, Chinese Academy of Sciences and Alibaba Cloud. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. This research has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration.
This research has also made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. \software{\texttt{allesfitter} \citep{gunther2021allesfitter}, \texttt{emcee} \citep{foremanmackey2013}, \texttt{isoclassify} \citep{huber2017isoclassify, huber2017asteroseismology, berger2020gaia}, \texttt{lightkurve} \citep{cardoso2018lightkurve}, \texttt{lofti\_gaia} \citep{pearce2020orbital}, \texttt{matplotlib} \citep{hunter2007matplotlib}, \texttt{numpy} \citep{oliphant2006guide, walt2011numpy, harris2020array}, \texttt{pandas} \citep{mckinney2010data}, \texttt{scipy} \citep{virtanen2020scipy}, \textit{The Cannon} \citep{ness2015cannon}} \facility{Keck: I (HIRES), Exoplanet Archive, Extrasolar Planets Encyclopaedia}
\section{Introduction} The issue of a possible loss of quantum coherence in processes in which a black hole is produced and then evaporates has been the subject of much debate since Hawking's claim \cite{Haw} that black holes should emit an exactly thermal spectrum of light quanta (see e.g. \cite{Mathur} for a recent review). Progress from string theory on the microscopic understanding of black-hole entropy \cite{entropy} and on the AdS-CFT correspondence \cite{AdS}, has lent strong support to the belief that no loss of information/quantum-coherence should occur. However, even in the AdS/CFT case, understanding how unitarity on the CFT side teaches us about information recovery on the gravity side remains unclear (see \cite{Malda}, \cite{BR}). Another ``ab initio" approach to the same problem consists of the study of trans-planckian-energy collisions of massless strings as a function of center of mass energy (or of the associated gravitational radius $R$), of impact parameter $b$, and of the string-length scale $l_s$, with the relative ratios of these scales defining different regimes for the process \cite{ACV1}. In this framework it has been possible to recover, within a unitarity-preserving $S$-matrix, both General Relativity expectations and string-size related modifications of it \cite{ACV2}, albeit in regimes in which no black-hole formation is expected according to closed-trapped surface criteria\cite{CTS}. Dealing with the complementary regime (corresponding to $R \gg b, l_s$) has met with more limited success, although some progress has been made in understanding how the threshold of black-hole production can be approached from below \cite{GV04}.
An approximation to deal with the full-collapse regime, proposed a few years ago in \cite{ACV07}, appears to predict correctly the existence and rough values of some critical ratios for the onset of collapse, but, unfortunately, has failed so far to provide a unitary description of the process beyond such critical points \cite{CC}. Given the above difficulties, the attention has been shifted to a supposedly easier problem \cite{DDRV}, that of the scattering of a closed light string off a stack of $N$ $D-p$-branes at small string coupling and large $N$. Here the equivalent of the black-hole formation regime is the one in which the closed string is absorbed by the brane system and its energy is dissipated in open string excitations of the stack itself. In spite of some progress \cite{DDRV} \cite{MWB}, understanding how information about the initial state gets encoded in the final one is still far from settled. One problem is that information, if it is to be eventually recovered, has to start coming out, at the latest, by the so-called Page time \cite{Page}, corresponding roughly to the time by which the evaporating black hole has lost half of its entropy $S$. In order for this to be possible, the rate of information retrieval cannot be too small, e.g. cannot be of order $\exp(-S)$, at least not after the Page time. Information retrieval should instead be easy if ``quantum hair" is inversely proportional to $S$, as recently proposed in a toy model identifying black holes with a self-sustained critical Bose-Einstein condensate of $N \sim S$ gravitons \cite{DG}. Similar claims have been made in \cite{Ramy} on the basis of general uncertainty-principle considerations applied to the geometry itself. Indeed, once an effective classical geometry with an information-free horizon is assumed (even an effective one that corrects the classical horizon), continuous information loss looks inevitable\cite{Mathur}.
In this paper we will address these kinds of questions using the correspondence between strings and black holes \cite{Corr1, HP, DV} that occurs when the mass of the former is tuned to the value $M_{SH} = M_s g_s^{-2}$, giving a Schwarzschild radius $R = O(l_s)$. By going to small enough string coupling we can make the entropy of such ``string-holes" (SH) arbitrarily large: \begin{equation} \label{SSH} S_{SH} = \left( \frac{l_s}{l_P}\right)^{D-2} = \left( \frac{M_P}{M_s}\right)^{D-2} = g_s^{-2} \gg 1 \, . \end{equation} It is particularly appealing that, for SHs, the question of the size of quantum hair becomes one about whether it is perturbative or not in the string coupling constant. In our case, the role of the parameter $N$ of \cite{DG} is played by the string coupling which, for a given string mass, is tuned to a critical value. Unfortunately, and unlike in the simple model of \cite{DG}, we are presently unable to perform a reliable calculation when $g_s$ and/or $M$ are parametrically larger than their critical values. Furthermore, in order to be able to claim that strings of mass $M_{SH} = M_s g_s^{-2}$ can also be seen as black holes, we have to impose that they are compact enough not to exceed in size their own Schwarzschild radius $R = O(l_s)$, and to check that this restriction does not invalidate the entropy estimate (\ref{SSH}). This question was addressed in \cite{DV} (see also \cite{HP}), where it was argued that the entropy of string states of mass $M \le M_{SH} = g_s^{-2} M_s$ is shared among states of different size $r$ according to: \begin{equation} \label{S(M,R)} S(M, r) \sim \frac{M}{M_s} \left( 1- c_1 \frac{l_s^2}{r^2} \right) \left( 1- c_2 \frac{r^2}{(\alpha' M)^2} \right)\left(1 +c_3 \left(\frac{R}{r}\right)^{D-3} \right)\, , \end{equation} where $c_i$ are positive constants of $O(1)$. For $M \ll M_{SH}$ the last term is negligible and the first two factors give a maximal entropy for $r \sim l_s \sqrt{\frac{M}{M_s}}$, the random-walk value.
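The random-walk value follows from a one-line extremization of the first two factors in (\ref{S(M,R)}) (the third being negligible for $M \ll M_{SH}$); the cross terms in the derivative cancel, leaving

```latex
\frac{\partial}{\partial r}\left[\left(1- c_1 \frac{l_s^2}{r^2}\right)\left(1- c_2 \frac{r^2}{(\alpha' M)^2}\right)\right]
= \frac{2 c_1 l_s^2}{r^3} - \frac{2 c_2\, r}{(\alpha' M)^2} = 0
\quad\Longrightarrow\quad
r^4 = \frac{c_1}{c_2}\, l_s^2\, (\alpha' M)^2 \, ,
```

i.e. $r \sim \sqrt{l_s\, \alpha' M} = l_s \sqrt{M/M_s}$ up to constants of $O(1)$.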
However, there is still an entropy $O(M/M_s)$ in ``compact" strings and, furthermore, as one approaches $M= M_{SH}$, the third term in (\ref{S(M,R)}) helps favor such strings. Another way of reaching a similar result consists in counting string states at level $N = \alpha' M^2$ produced by oscillators of index larger than $K$. A simple argument, based on evaluating the corresponding partition function, shows that the entropy of such states is still $O(\sqrt{N})$ if $K \sim \sqrt{N}$. They will generally correspond to occupation numbers $\le O(1)$ for $O(\sqrt{N})$ oscillators (providing the right value for their mass) and will have a size of order: \begin{equation} r^2 \sim l_s ^2 \sum_{n > \sqrt{N}} \frac1n \langle a_n^{\dagger} a_n \rangle \sim l_s^2\, . \end{equation} These are the kinds of states we shall focus our attention on. We recall that, not only entropy, but, qualitatively, many other properties of strings and black holes (decay rates, evaporation time etc.) match on the correspondence line \cite{HP,DV}. The idea, therefore, is to consider a thought experiment in which a massless string probes a stringhole target, a process somewhere in between those discussed in \cite{ACV1} (where both projectile and target are massless) and \cite{DDRV} (where the target is infinitely heavy). Studying such a process at sufficiently large impact parameters for the approximations to be under control turns out to be sufficient to reveal whether the quantum hair of such SHs is perturbative or not in $1/S \sim g_s^{2} $. This appears to be the string-theory counterpart to checking (albeit only at a specific point) whether quantum hair is perturbative in $1/N$ in the approach of \cite{DG}. \section{ A thought experiment revealing quantum hair} We work in flat $10$-dimensional spacetime with $(10- D)$ dimensions compactified at the string-length scale so that the effective large-distance physics lives in $D$ spacetime dimensions.
We are also assuming to be working at very small string coupling $g_s$ so that, as already indicated in (\ref{SSH}), there is a large hierarchy between the string and Planck mass scales. Consider now a process in which a massless ``probe" string collides with a well-defined heavy (and for the moment generic) ``target" string of mass $M \gg M_P \gg M_s$. Let us also take a high-energy limit in which the energy $E$ of the probe string in the rest frame of the heavy one is much larger than $M_s$ and yet much smaller than $M$, \begin{equation} \label{Ebounds} M_s M \ll s - M^2 = -2 p\cdot P = 2 E M \ll M^2\, , \end{equation} so that the light string does indeed act (almost) as a probe and yet we can apply a high-energy limit in which graviton exchange dominates. Following the logic of \cite{ACV1} (see also \cite{Iengo}, \cite{DDRV}) we can argue that, at large-enough impact parameter $b$, the elastic scattering amplitude is given by the semiclassical eikonal formula: \begin{equation} \label{classps} \label{leadeik} S(E, M, b) \sim \exp(i \frac{{\cal A}_{cl}}{ \hbar}) = \exp\left(i \frac{4G E M}{\hbar} c_D b^{4-D}\right) \equiv e^{2i\delta(E,M,b)}~;~ c_D = \Omega_{D-4}^{-1} \equiv \frac{\Gamma(\frac{D-4}{2})}{2 \pi^{\frac{D-4}{2}}}. \end{equation} As a consistency check, we note that, when one goes back from $b$ to $q$-space (or deflection angle $\theta$), one recovers, at the saddle point of the $b$-integral, the classical Einstein relation (generalized to arbitrary $D$) between deflection angle, mass, and impact parameter: \begin{equation} \label{Einstein} \theta = \frac{8 \pi G M}{\Omega_{D-2} b^{D-3}} \sim \left(\frac{R}{b}\right)^{D-3} \ll 1 ~~;~~ (G M)^{\frac{1}{D-3}} \sim R \ll b \, , \end{equation} where $R$ is the Schwarzschild radius of the heavy string. Obviously, the above formula satisfies the ``no-hair" theorem, in the sense that it is sensitive to the mass of the heavy string state but not to its microscopic quantum numbers. 
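Equation (\ref{Einstein}) can be checked against a familiar limit: for $D=4$, with the convention $\Omega_n = 2\pi^{n/2}/\Gamma(n/2)$ (the one matching the definition of $c_D$ above), it reduces to the classic light-bending angle $\theta = 4GM/(c^2 b)$. The following sketch, with factors of $c$ restored and not part of the original text, verifies this on the textbook case of a light ray grazing the Sun.

```python
import math

# SI constants; the text sets c = 1, so we restore c^2 explicitly.
G = 6.674e-11
c = 2.998e8
M_SUN = 1.989e30
R_SUN = 6.957e8
ARCSEC_PER_RAD = 180.0 * 3600.0 / math.pi

def omega(n):
    """Area of the unit sphere in R^n: Omega_n = 2 pi^(n/2) / Gamma(n/2)."""
    return 2.0 * math.pi ** (n / 2.0) / math.gamma(n / 2.0)

def deflection(mass_kg, b_m, D=4):
    """theta = 8 pi G M / (Omega_{D-2} c^2 b^{D-3}), as in Eq. (Einstein)."""
    return 8.0 * math.pi * G * mass_kg / (omega(D - 2) * c**2 * b_m ** (D - 3))

# For D = 4, Omega_2 = 2 pi and theta = 4 G M / (c^2 b) ~ 1.75'' at the
# solar limb, the classic general-relativistic light-bending result.
theta = deflection(M_SUN, R_SUN) * ARCSEC_PER_RAD
print(f"{theta:.2f} arcsec")
```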
Diagrammatically, the result (\ref{leadeik}) comes from exponentiating the exchange of a single graviton between the light and the heavy string. Both (\ref{classps}) and (\ref{Einstein}) are indeed only valid at sufficiently large impact parameter (small deflection angle) and suffer from corrections of higher order in $R/b$ ($\theta$). These will reconstruct, for instance, the deflection formula in the full Schwarzschild (or Kerr if we consider a target with spin) metric. As shown long ago by Duff \cite{Duff}, they correspond, diagrammatically, to exponentiating connected graviton-tree (fan) diagrams in which a single vertex (the trunk of the tree) is attached to the probe string while all the branches terminate on the heavy one, giving the appropriate powers of $R$ and $b$. These classical corrections still satisfy the no-hair condition as well as elastic unitarity. On the other hand, if instead ``hairy" corrections to $\delta(E,M,b)$ are exponentially suppressed for large black holes, we would expect them to show up in the form: \begin{equation} \label{ExpSuppr} \delta(E,M,b) \rightarrow \delta(E,M,b)(1+{\rm classical ~corrections} + e^{-c S} \hat{Q} ) \, , \end{equation} where $c$ is some constant and $ \hat{Q} $ represents, schematically, a quantum-hair operator taking different expectation values depending on the black hole microstate. We will check below whether an ansatz like (\ref{ExpSuppr}) is satisfied for the particular stringhole states introduced in the previous Section. To address this question recall that, as discussed in \cite{ACV1} and \cite{DDRV} in two different contexts, there are also ``string corrections" to the leading eikonal form. These are related to the fact that strings are extended objects and therefore suffer tidal forces when moving in a non-trivial geometry\cite{Giddings}\footnote{Although all calculations are performed in flat spacetime the effects of an effective non-trivial geometry emerge from the calculation.}.
Fortunately, at least at small scattering angle, such corrections are fully under control and lead to a unitary $S$-matrix. Unitarity is now satisfied in a less trivial way: different channels couple, elastic unitarity is violated, but one still obtains a fully unitary $S$-matrix in the Hilbert space of two arbitrary string states. The question is whether this non-trivial S-matrix contains information about the actual state of the heavy string, and at which level. Building on the work of \cite{ACV1} and \cite{DDRV} we can be confident that the tidal excitation of both the light and the heavy string are captured, at leading order in $\theta$, by the replacement: \begin{equation} \label{quantumps} \delta(E, M, b) \rightarrow \hat{\delta}(E, M, b) = \langle \delta (b + \hat{X}_H - \hat{X}_L) \rangle = 2 G E M \hbar^{-1} c_D \langle (b + \hat{X}_H - \hat{X}_L)^{4-D} \rangle \, . \end{equation} Here $\hat{X}_H$ and $\hat{X}_L$ represent the heavy and light string position operators, stripped of their zero modes (which give $b$), evaluated at $\tau =0$, and averaged over $\sigma$. These operations, together with a normal-ordering prescription, are indicated in (\ref{quantumps}) by the brackets, i.e. \begin{equation} \langle (b + \hat{X}_H - \hat{X}_L)^{4-D} \rangle \equiv \frac{1}{4 \pi^2} \int_0^{2 \pi} d \sigma_L \int_0^{2 \pi} d\sigma_H : \left(b + \hat{X}_H(\sigma_H, 0) - \hat{X}_L(\sigma_L, 0)\right)^{4-D} : ~ . \end{equation} In words, the classical phase shift is replaced by the average of a {\it quantum} phase shift in which the impact parameter is affected by a quantum uncertainty encoded in the string position operators. For what concerns the excitation of the light string, further justification of the above formula comes from the study of string-brane collision discussed in \cite{DDRV}, specialized to the case of a stack of $0$-branes. 
For the excitation of the heavy string we can instead appeal to the quantization of the heavy string in the shock-wave metric produced by the light one \cite{GGM}. Following \cite{ACV1}, we now expand (\ref{quantumps}) to quadratic order in the $\hat{X}$ (the linear order clearly averages out to zero) to get the leading correction in an expansion in $(l_s/b)^2$: \begin{equation} \label{quantumps2nd} 2( \hat{\delta} - \delta) = \frac{2 \pi G E M (D-2)}{\hbar \Omega_{D-2} b^{D-2}} \langle Q_H^{ij} + Q_L^{ij} \rangle \hat{b}_i \hat{b}_j \, . \end{equation} Here $Q_H^{ij}$ is the $(D-2)$-dimensional (i.e. Lorentz-contracted in the direction of the incoming momentum) quadrupole operator for the heavy string\footnote{I am grateful to T. Damour for this interesting remark.}. \begin{equation} \label{Q} Q_H^{ij} = \hat{X}_H^i \hat{X}_H^j - \frac{ \delta_{ij}}{D-2} \sum_{i=1}^{D-2} \hat{X}_H^i \hat{X}_H^i \, , \end{equation} and is projected along the unit vector $\hat{b}$ in the direction of the impact parameter. This projection can also be written in the form: \begin{equation} \label{Pi} Q_H^{ij} \hat{b}_i \hat{b}_j = \hat{X}_H^i \hat{X}_H^j \left( \hat{b}_i \hat{b}_j -\frac{ \delta_{ij}}{D-2} \right) \equiv \Pi_{ij} \hat{X}_H^i \hat{X}_H^j \, . \end{equation} As indicated in (\ref{quantumps2nd}), we get a similar term for the probe string. At this order in $l_s/b$ the $S$-matrix thus factorizes in the form: \begin{equation} \label{quantumS} S(E, M, b) = \exp (2i\delta)~ \Sigma_L~ \Sigma_H~;~ \Sigma_{L,H} = \exp \left(i (D-2) \Delta~ \tilde{Q}_{L,H}^{ij} ~ \hat{b}_i \hat{b}_j \right)\, , \end{equation} where we have defined the dimensionless quantities\footnote{The value of $\Delta$, when compared to unity, determines \cite{ACV1} whether the probe string gets excited or not by tidal forces. 
However, once more, these effects will {\it not} depend on the particular state of the target string.}: \begin{equation} \label{Delta} \Delta = \frac{2 \pi G E M l_s^2}{\hbar \Omega_{D-2} b^{D-2}} ~~ \, ; \, ~~ \tilde{Q}^{ij} = l_s^{-2} Q^{ij}\, , \end{equation} the latter being the quadrupole measured in string-length units. Since the quadrupole operators are hermitian (see also below), each factor appearing in (\ref{quantumS}) corresponds to a unitary operator. The first two factors are independent of the particular state chosen for the heavy string. Let us therefore concentrate our attention on $\Sigma_H$ (dropping for simplicity the $H$ suffix). The operator appearing at the exponent in $\Sigma$ can be easily written down: \begin{equation} \label{quantumps2ndosc} \tilde{Q}^{ij} ~ \hat{b}_i \hat{b}_j= \Pi_{ij} ~ \sum_{n=1}^{\infty} \frac1n\left( a^{ \dagger i}_n a_n^j + \tilde{a}^{ \dagger i}_n \tilde{a}_n^j + a_n^i \tilde{a}^{ j}_n + a^{ \dagger i}_n \tilde{a}^{ \dagger j}_n \right)\, . \end{equation} Its diagonal matrix elements are sensitive to the (projected, transverse) quadrupole of the heavy string, while the transitions to other states, induced by terms with two creation or two annihilation operators, correspond to a quadrupole-like excitation of the original string itself. This is hardly surprising in view of the intimate relation between tidal forces and quadrupole moments (see e.g. \cite{tidalQ}), and simply appears as a generalization of known facts to an ultra-relativistic situation involving strings (our quadrupole, in particular, is a purely geometrical object). In order to have an estimate of quantum hair we need to normal order the whole exponential operator occurring in $\Sigma$. 
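Before turning to the normal-ordered result, the quadratic step behind (\ref{quantumps2nd}) can be made explicit (a sketch; the overall numerical factor is fixed by the conventions of the text). Taylor-expanding around $\hat X \equiv \hat{X}_H - \hat{X}_L = 0$, with $\partial_i \partial_j |b|^{4-D} = (4-D)\, b^{2-D}\left[\delta_{ij} + (2-D)\, \hat b_i \hat b_j\right]$, one finds

```latex
\langle (b + \hat X)^{4-D} \rangle \simeq b^{4-D}
+ \frac{(D-4)(D-2)}{2}\, b^{2-D}\, \Pi_{ij}\, \langle \hat X^i \hat X^j \rangle \, ,
\qquad \Pi_{ij} = \hat b_i \hat b_j - \frac{\delta_{ij}}{D-2}\, ,
```

and, since $\langle \hat{X}_H^i \hat{X}_L^j \rangle = 0$, the cross terms drop out, $\langle \hat X^i \hat X^j\rangle \to \langle \hat{X}_H^i \hat{X}_H^j\rangle + \langle \hat{X}_L^i \hat{X}_L^j\rangle$, reproducing the projected quadrupole structure $\langle Q_H^{ij} + Q_L^{ij}\rangle \hat b_i \hat b_j$ of (\ref{quantumps2nd}) and (\ref{Pi}).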
Following again \cite{ACV1}, we find: \begin{eqnarray} \label{NOSmatrix} \Sigma_H &=& \Sigma^{(univ)}~ \Sigma^{(hair)}~;~ \Sigma^{(univ)} = \Gamma (1+ i \Delta)^{D-3} ~ \Gamma (1- i (D-3) \Delta) \nonumber \\ \Sigma^{(hair)} &=& : \exp \left( \sum_{n=1}^{\infty} (a^{ \dagger i}_n + \tilde{a}_n^i)(a_n^j + \tilde{a}^{ \dagger j}_n)\left[ C_n(\Delta)(\delta_{ij} - \hat{b}_i \hat{b}_j ) + \tilde{C}_n(\Delta) \hat{b}_i \hat{b}_j \right] \right): \nonumber \\ C_n(\Delta) &=& - \frac{i \Delta}{n + i \Delta} ~;~ \tilde{C}_n(\Delta) = C_n(- (D-3) \Delta)\, . \end{eqnarray} At this point the explicit calculation of the $S$-matrix is simple, in particular between coherent states. $\Sigma^{(univ)}$, being a $c$-number, does not depend on the internal quantum numbers of the heavy string and, together with similar factors coming from the light string, provides absorption and further contributions to the phase shifts, but no hair. Instead, the operator $\Sigma^{(hair)}$ generates matrix elements that feel the nature of the microstate in which the heavy string actually is. Note that normal ordering has slightly upset the exact quadrupole structure appearing in (\ref{Q}), (\ref{Pi}) (which is however recovered for $n \gg \Delta$). Let us now specify further the process described in the previous section in order to make contact with black-hole physics. To this purpose we shall identify the heavy string with a ``stringhole" state described in Sec. 1. The reason for choosing that precise (within factors $O(1)$) value of $M$ is twofold. Choosing $M$ in the range $M_P \ll M \ll M_{SH}$ leads to reliable results, but the string, in this case, is below the correspondence curve, its size is larger than its Schwarzschild radius and therefore is not a collapsed object \cite{Corr1}. On the other hand, various approximations that can be justified for strings of mass up to $M_{SH}$ cease to be valid for strings with $M \gg M_{SH}$, i.e.
strings that would simulate ``large'' black holes in string-length units. Let us first evaluate the quantity $\Delta$ in (\ref{Delta}) for the SH case. Up to numerical factors: \begin{equation} \Delta = \frac{G E M l_s^2}{\hbar ~b^{D-2}} \rightarrow \frac{E l_s}{\hbar} \left(\frac{ l_s}{b}\right)^{D-2} \sim \frac{E}{M_s} \theta^{\frac{D-2}{D-3}} \, . \end{equation} Given our bounds (\ref{Ebounds}) on $E$ we find: \begin{equation} \theta^{\frac{D-2}{D-3}} \ll \Delta \ll g_s^{-2} \theta^{\frac{D-2}{D-3}} \, . \end{equation} Obviously, even keeping $\theta \ll 1$, but finite and $g_s$-independent, we can make $\Delta \gg 1$ (yet $\ll g_s^{-2}$) for sufficiently small $g_s$ and with $E$ in a parametrically large region. In order to estimate the size of quantum hair we note that the coefficients $C_n(\Delta)$ appearing in (\ref{NOSmatrix}) become $O(1)$ at $n < \Delta$ or of order $\Delta/n$ at $n > \Delta$. As already discussed, typical SHs will have most of the non-vanishing occupation numbers of $O(1)$ in oscillators with $n \sim \sqrt{N} \sim g_s^{-2}$. In that case $C_n \sim \Delta/n$ and eq. (\ref{NOSmatrix}) simplifies further: \begin{equation} \label{simpler} \Sigma^{(hair)} = : \exp \left( - i (D-2) \Delta \sum_{n=1}^{\infty}\frac{1}{n} (a^{ \dagger i}_n + \tilde{a}_n^i)(a_n^j + \tilde{a}^{ \dagger j}_n) \Pi_{ij} \right): ~. \end{equation} The basic observation is that the operator appearing in the exponent of (\ref{simpler}) is completely unrelated to the one giving the mass of the SH\footnote{It is also clearly non-degenerate with the spin of the SH.} and therefore will distinguish degenerate microstates. It contains positive definite diagonal terms that correspond to the transverse, projected quadrupole of the SH. The non-positive definite terms, corresponding to inelastic transitions, are also microstate-dependent through a similar quadrupole operator. There is also a state-dependent absorption from the real part of $C_n$.
This is suppressed by an extra factor $\Delta/n \sim g_s^2 \Delta$ and is not controlled by the quadrupole. The dominant terms sum up to something $O(1)$ but can still take different values of that same order within the whole SH ensemble. Hence, an experiment measuring the phase of the $S$-matrix should be able to reduce our ignorance of the state of the SH by a factor $O(2)$\footnote{If instead we wish to distinguish SH states with differences $O(1)$ in the occupation numbers the sensitivity of (\ref{simpler}) will have a suppression factor of $O(\Delta/n) \sim g_s^2 \Delta$.}. Interpreting this as a reduction of the total number of states $e^S$, it will correspond to a decrease of $O(1)$ in entropy, meaning that the whole information can be recovered after $O(S)$ experiments. The minimal duration of each experiment being $O(l_s)$, namely the light-crossing time for a SH, the total time needed to recover the information will be of the order of the Page/evaporation time $S R \sim g_s^{-2} l_s$. In our approximation quantum hair also appears to be suppressed with respect to the no-hair terms by a power of the scattering angle. While this is still sufficient for our qualitative discussion, we think that our results should qualitatively extend to scattering angles of $O(1)$. Checking this is not easy since, precisely for a SH target, string-size and classical corrections kick in simultaneously as we increase the scattering angle (although the different $\theta$-dependence should help separate the two kinds of corrections). If so, the quantum hair revealed by our scattering process (with higher multipoles appearing besides $Q_{ij}$) will indeed approach $O(1)$ for a probe-energy of the order of the Hawking/Hagedorn temperature of the SH. Such impact parameters and energies are precisely those typical of Hawking's radiation. Actually, as is well known in particle physics (see e.g.
\cite{SW}), a decay amplitude usually has to be corrected by a ``final-state interaction'' which basically amounts to multiplying the naive decay amplitude by a factor $S^{1/2} (E,b) \sim \exp(i \delta(E,b) )$, where the typical values of $E$ and $b$ will be $M_s$ and $l_s$ respectively. In other words, such a quantum hair may directly leave its imprint in the decay of a SH. Admittedly, all these are hand-waving arguments that should be analyzed more carefully. In any case, it appears that the quantum-hair amplitude is {\it not} suppressed, relative to no-hair terms, by $\exp(-S_{SH}) \sim \exp(-g_s^{-2})$ but rather, at most, by a small inverse power of $S \sim g_s^{-2}$, i.e. it is a perturbative effect in the string coupling. Note, however, that a generic individual element of the $S$-matrix is always suppressed by an $\exp(-\Delta)$ ``non-perturbative'' factor, which gets compensated by the exponentially large number of final states contributing to inclusive-enough cross sections. Indeed, given that $\Sigma_H$ is unitary, it is easy to lose its sensitivity to quantum hair if traces over the heavy initial and/or final SH states are taken. Summing individual transition probabilities over final SH states corresponds to considering an inclusive cross section, while tracing/averaging over the initial SH state corresponds to an initial mixed state. In both cases it is quite clear that unitarity of $\Sigma_H$ washes out all the leading-order SH hair discussed so far\footnote{At order $l_s^4 b^{-4}$ the eikonal operator will give terms proportional to $X_L^2 X_H^2$ that destroy factorization.}. Only subleading terms and/or appropriate interference experiments will be able to leave information about the state of the SH on the probe-string. Whether this is in principle sufficient to retrieve enough information on the SH is not completely obvious.
In conclusion, the results we have presented point in the direction of some perturbative quantum hair being revealed in our thought experiment, very likely something of order $1/S \sim g_s^2$ for a probe of $E \sim M_s$ during a collision (horizon-crossing) time $O(l_s)$. At least naively, this would allow one to retrieve the full information about the microstate of the string hole within its evaporation time of order $g_s^{-2} l_s$. This result can be related to the fact that, for a stringhole, the concept of an information-free horizon does not make sense (the horizon being as large as the string itself) and, in this sense, it is similar to what is believed to occur for the so-called fuzzball states of string theory (see \cite{fuzzballs} and references therein). It would be interesting to see whether thought experiments of the kind discussed here using fuzzballs could reveal a similar amount of quantum hair. Of course the issue of whether or not spacetime around the horizon can be considered to be empty is also very relevant in the recent firewalls debate \cite{firewalls}. Can we reconcile our finding with an exponentially small amount of quantum hair for large black holes (i.e. for black holes much heavier than $M_{SH}$, for which our simple analysis fails to provide a reliable answer)? Clearly an expression like that of (\ref{ExpSuppr}) is in contradiction with our findings, but one could instead imagine an ansatz like: \begin{equation} \label{DiffSuppr} \delta \rightarrow \delta(E,M,b)(1+{\rm classical~corrections} + e^{-c \frac{S}{S_{SH}}} \hat{Q} ) \, , \end{equation} which would only give an exponential suppression for black holes that are much heavier than those on the correspondence line. Indeed, a single string may fail to represent black holes above the correspondence curve (seen in that case as a critical line separating two phases), in which case $\Sigma^{(hair)}$ could change quite abruptly above the phase transition.
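For illustration, the two regimes of this ansatz can be made explicit, assuming (for this estimate only) the $D$-dimensional Schwarzschild scaling $S \propto M^{\frac{D-2}{D-3}}$ of the Bekenstein-Hawking entropy: \begin{equation} e^{-c S/S_{SH}} \sim e^{-c} = O(1) ~~{\rm for}~~ M \sim M_{SH} ~~ \, ; \, ~~ e^{-c S/S_{SH}} \sim \exp \left[ -c \left( \frac{M}{M_{SH}}\right)^{\frac{D-2}{D-3}} \right] \ll 1 ~~{\rm for}~~ M \gg M_{SH}\, , \end{equation} so that the perturbative hair found here would survive on the correspondence line, while black holes far above it would retain only an exponentially small amount.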
Another possible objection to drawing strong conclusions from our results lies in the possibility\footnote{This possible loophole was suggested by M. Porrati.} that SHs do {\it not} represent typical black holes but only a tiny fraction of them. In that case, their long hair will make them atypical ``hippie-like'' black holes within a vast majority of ``bald'' ones. \section*{Acknowledgements} This investigation was prompted by a stimulating seminar by Gia Dvali and by subsequent discussions with him and Ramy Brustein. I have also benefitted from interesting discussions and/or correspondence with Daniele Amati, Thibault Damour, Sergei Dubovsky, Gregory Gabadadze, Cesar Gomez, Hovhannes Grigoryan, Matthew Kleban, Samir Mathur, Yaron Oz, Massimo Porrati, Eliezer Rabinovici, Rodolfo Russo and Adam Schwimmer. I also wish to acknowledge the support of an NYU Global Distinguished Professorship.
\section{Introduction} The realization of new coherent radiation sources at THz frequencies has dramatically boosted the development of innovative spectroscopic techniques~\cite{Lee_09}. These spectroscopies are non-invasive methods that provide information complementary to traditional analytical tools. THz radiation poses lower risks in terms of sample preservation, so it is a particularly suitable probe for fragile/sensitive samples. Its potential to provide non-destructive information in the cultural heritage field has been demonstrated in a series of recent studies~\cite{Fukunaga_08,Abraham_09,Labaune_10,Seco_13,Bardon_13,Walker_13,Krugener_15,Jackson_15}. Nowadays, THz spectroscopic investigations are supported by a series of commercial off-the-shelf systems. Typically, these systems do not allow full control of the experimental parameters, so lab-customized set-ups prove more flexible and adaptable to a specific application. In recent years, THz time-domain spectroscopy (THz-TDS) has been recognized as a leading tool to measure the transmission parameters of complex materials. It is a spectroscopic method based on pulsed THz radiation generated by down-conversion of ultrafast optical laser pulses. Even if THz-TDS is nowadays a well-established technique, its application to samples characterized by a complex structure remains an open problem. In this work, we explored the potential of a specific THz-TDS experimental apparatus, based on a table-top set-up, to investigate samples formed by multilayer structures of micrometric thickness. We implemented an efficient data-analysis and fitting procedure that enables the extraction of the material optical parameters (i.e. absorption and index of refraction) on an absolute scale and the measurement of the layer thickness down to tens of micrometers.
In this paper we report on the numerical algorithm that defines the iterative fitting process. We applied the experimental and data-analysis methods to the specific problem of measuring the THz spectral features of thin bilayer samples: a test sample made of a plastic layer on a Teflon substrate, and a prototype sample of interest for artwork studies made of a thin ink layer deposited on a polyethylene support. \section{THz time-domain spectroscopy set-up} The data presented in the current paper have been measured by a home-made THz-TDS system in transmission configuration, which enables us to explore the frequency range 0.1--4 THz. In Figure~\ref{setup} we show a sketch of our THz-TDS set-up. The THz pulses are produced by exciting a low-temperature GaAs photoconductive antenna (PcA)~\citep{Lee_09} with optical laser pulses at $\lambda=780~nm$, with a pulse duration of around $120~fs$ and a repetition rate of $100~MHz$ (produced by a T-Light 780 nm fiber laser from Menlo Systems). The PcA is biased with a sinusoidal voltage of $0$--$30~V$ at a frequency of $10~kHz$. \begin{figure*}[htb] \centering \includegraphics[width=0.8\textwidth]{setup.pdf} \caption{Sketch of the experimental set-up used for THz time-domain spectroscopy; the sample investigation is performed in transmission configuration. The labelled elements are: M – mirror, BS – beam splitter, CC – corner cube, PcA – photoconductive antenna, PM – parabolic mirror.} \label{setup} \end{figure*} The free carriers, generated by the laser pulse focused on the dipole gap, are accelerated by the bias field and quickly recombine, producing a short current pulse. This, in turn, generates a short burst of electromagnetic radiation with a broad spectrum in the THz region. The emitted THz field is efficiently extracted from the chip by a hemispherical silicon lens, and then collimated and focused on the sample by two off-axis parabolic mirrors (PMs).
The THz pulse transmitted through the sample is again collimated and focused, by another pair of PMs, onto a second PcA. Another hemispherical silicon lens optimizes the coupling between the THz field and the dipole of the detection antenna. The THz field, in this case, acts as a bias, which accelerates the free carriers produced by a second optical pulse, the probe, focused again on the dipole gap. Since the temporal width of the laser pulses is much shorter than that of the THz ones, the probe acts as a current gate, and the amplitude of the photocurrent is directly related to the amplitude of the THz field. The whole time evolution of the electric field of the THz pulse is then obtained by recording the photocurrent amplitude while varying the time delay between the pump and probe pulses. The detection current is amplified by a lock-in amplifier, locked to the bias frequency of the source antenna, and digitized by an acquisition board. Home-made software acquires the processed signal together with the reading of the delay-line encoder and retraces the final time-dependent THz field. The whole THz set-up is enclosed in a nitrogen-purged chamber to remove the water-vapour contribution present at the THz frequencies spanned by the experiment. As reported in the next section, the optical properties of a material can be calculated by measuring the amplitude, phase and time-delay modifications that the THz pulse undergoes in crossing the sample. This is obtained by calculating the ratio between the Fourier transform of the THz pulse that has crossed the sample and that of the reference pulse obtained without any sample; this ratio is referred to as a transfer function. In order to improve the data quality and reduce the effects of external perturbations during the acquisition, the sample is mounted on a motorized translation stage that moves it outside and inside the THz path.
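The transfer-function evaluation just described can be sketched numerically as follows. This is a minimal illustration with hypothetical names, not our acquisition code; the "sample" trace is simply a synthetic delayed and attenuated copy of the reference pulse:

```python
import numpy as np

def transfer_function(sample_traces, reference_traces, dt):
    """Average transfer function H(omega) over sample/reference scan pairs:
    the ratio of the Fourier transforms, averaged pair by pair."""
    H_pairs = [np.fft.rfft(s) / np.fft.rfft(r)
               for s, r in zip(sample_traces, reference_traces)]
    freqs = np.fft.rfftfreq(len(sample_traces[0]), d=dt)
    return freqs, np.mean(H_pairs, axis=0)

# Synthetic check: the "sample" pulse is a delayed, 50%-attenuated copy
# of the reference, so |H| should be 0.5 wherever there is signal.
dt = 2e-14                                     # 20 fs sampling step
t = np.arange(4096) * dt
ref = np.exp(-((t - 20e-12) / 0.3e-12) ** 2)   # dummy sub-ps THz transient
smp = 0.5 * np.roll(ref, 50)                   # 1 ps delay, half amplitude
freqs, H = transfer_function([smp, smp], [ref, ref], dt)
```

In the real experiment each pair comes from one sample scan and its adjacent reference scan; here two identical pairs only exercise the averaging.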
We performed several scans for the two configurations, alternating the position of the sample stage between reference and sample, so that for every sample scan we took a reference scan. We then averaged the transfer functions obtained from each pair of sample and reference signals. Each single scan is acquired for $300$ s at a rate of $10~kHz$ with a continuous motion of the probe delay line at a velocity of $0.5~mm/s$. \section{Extraction of the material parameters from experimental data} The ratio between the Fourier transform of the THz pulse transmitted through the sample, $E_t \left( \omega\right)$, and that of the reference pulse, $E_{i}\left( \omega\right)$, describes the amplitude and phase changes due to absorption and refraction in the traversed medium. This ratio is referred to as the material transfer function, $H\left( \omega\right)$. In the simple case of a homogeneous dielectric slab of thickness $d$ and complex refractive index $\hat{n}_s$, surrounded by nitrogen, the theoretical expression of the transfer function can be written, for waves at normal incidence, as~\cite{Withaya_14}: \begin{align} \label{Hfun1} H(\omega) & = \frac{E_t(\omega)}{E_i(\omega)} \nonumber \\ & = \tau\tau' \exp{\left\lbrace -i\left[ \hat{n}_s(\omega)-n_0\right] \frac{\omega d}{c}\right\rbrace } \cdot FP(\omega), \end{align} \begin{align} \label{HfunFP} FP(\omega)& =\sum\limits_{m=0}^\infty \left\lbrace \rho'^2 \exp{\left[-2i\hat{n}_s(\omega) \frac{\omega d}{c}\right]} \right\rbrace^m \nonumber\\ &=\left\lbrace 1-\rho'^2 \exp{\left[-2i\hat{n}_s(\omega) \frac{\omega d}{c}\right]} \right\rbrace^{-1}, \end{align} where \begin{equation} \tau = 2n_0/(n_0+\hat{n}_s) \end{equation} is the nitrogen-sample complex transmission coefficient and \begin{equation} \tau'=2\hat{n}_s/(n_0+\hat{n}_s), \qquad \rho' = (n_0-\hat{n}_s)/(n_0+\hat{n}_s) \end{equation} are the sample-nitrogen complex transmission and reflection coefficients, with $\hat{n}_s=n_s(\omega)-ik_s(\omega)$, where $n_s(\omega)$ is
the refractive index, $k_s(\omega)$ the extinction coefficient, and $n_0$ the refractive index of nitrogen. Also in eqs.~\ref{Hfun1} and \ref{HfunFP}, $c$ is the vacuum speed of light and $FP(\omega)$ represents the Fabry-P\'{e}rot effect due to the multiple reflections inside the sample. In a THz-TDS transmission experiment the optical properties of a material can be completely characterized by measuring the experimental transfer function, $H_{exp}(\omega)$, from which, by using eqs.~\ref{Hfun1} and \ref{HfunFP}, the refractive index, $n_s\left( \omega\right)$, the absorption coefficient, $\alpha_s(\omega)=2\omega k_s(\omega)/c$, and the thickness could in principle be extracted. However, eq.~\ref{Hfun1} is not in closed form and cannot be solved to give analytical expressions for the optical parameters; moreover, the thickness of the sample is generally not known with sufficient accuracy. An iterative process of calculation has to be employed~\cite{Withaya_05,Pupeza_07,Scheller_11,Scheller_09,Scheller_09b}. In our work we followed the numerical optimisation algorithm proposed by Scheller et al.~\cite{Scheller_11}. As a first step we can obtain rough estimates of $n_s$ and $\alpha_s$ by neglecting the $FP$ term and the imaginary part of the refractive index in the Fresnel coefficients of eq.~\ref{Hfun1}. With these approximations, analytical expressions for the optical parameters can be obtained~\cite{Withaya_14}: \begin{align} n_s(\omega)&=n_0-\frac{c}{\omega d}\text{arg}\left[H(\omega)\right] \label{ns}\\ k_s(\omega)&=\frac{c}{\omega d}\left\lbrace \ln \left[ \frac{4n_0n_s}{\vert H(\omega) \vert(n_0+n_s)^2}\right] \right\rbrace \label{alphas} \end{align} Substituting in $H$ the experimental value $H_{exp}$ and assuming as the initial value of $d$ the one measured with a micrometer screw, we obtain approximate frequency-dependent values of $n_s$ and $\alpha_s$, which are, moreover, affected by spurious oscillations due to the neglected $FP$ effect.
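The first-step inversion of eqs.~\ref{ns} and \ref{alphas} is straightforward to implement. The short sketch below is our own illustrative code (not the Matlab routine used for this work); it recovers the parameters of a synthetic loss-free slab whose transfer function is built without the $FP$ term, so the inversion is exact:

```python
import numpy as np

C = 299792458.0  # vacuum speed of light, m/s

def first_step_estimate(H_exp, omega, d, n0=1.0):
    """Raw n_s and k_s from the measured transfer function, neglecting
    the Fabry-Perot term and the imaginary part of the Fresnel
    coefficients (so FP oscillations remain in general)."""
    n_s = n0 - C / (omega * d) * np.unwrap(np.angle(H_exp))
    k_s = C / (omega * d) * np.log(4.0 * n0 * n_s /
                                   (np.abs(H_exp) * (n0 + n_s) ** 2))
    return n_s, k_s

# Synthetic slab with n = 1.5, k = 0, d = 30 um; no FP term included.
omega = 2 * np.pi * np.linspace(0.2e12, 2.0e12, 50)
d, n_true = 30e-6, 1.5
H_syn = (4 * n_true / (1 + n_true) ** 2
         * np.exp(-1j * (n_true - 1) * omega * d / C))
n_est, k_est = first_step_estimate(H_syn, omega, d)
```

With a real (FP-affected) $H_{exp}$ the returned curves would carry the spurious oscillations that the polynomial filtering described below removes.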
If the $FP$ reflection pulses are clearly distinguishable in the sample temporal signal, we can calculate the experimental transfer function from time-shortened signals, in which the reflection pulses have simply been cut off. This returns optical parameters not affected by the spurious oscillations. However, when the reflections are close in time and partially superimposed, owing to a short optical path, the cutting process cannot be applied. Also in the case where the reflection peaks are well separated but the sample signal shows a long time evolution after the main peak, because of a structured absorption of the medium, the method can give wrong evaluations of the optical parameters. It is therefore better to use the full $H_{exp}$ and remove the oscillations in a different way. Here we implement a polynomial fit of the optical parameters, of variable order and fitting range, by which we can capture the real physical frequency behaviour and remove the $FP$ oscillations. After this preliminary evaluation of $n_s$, $\alpha_s$, and $d$, we can calculate the full theoretical expression of $H(\omega)$, eq.~\ref{Hfun1} together with eq.~\ref{HfunFP}, with the summation of the $FP$ term limited to the number of reflections appearing in the time window of the measurement. Then we can compare the result with the experimental one to infer new best values for $n_s$, $\alpha_s$, and $d$. Thus, the second step is to minimize the function \begin{equation}\label{deltaH} \Delta H=\sum\limits_{\omega}\vert H(\omega)-H_{exp}(\omega)\vert \end{equation} with a numerical optimization on $n_s$ and $\alpha_s$ for different fixed values of $d$.
We use a Nelder-Mead simplex algorithm with the two scalars $\xi$ and $\psi$: \begin{align} n_{s,new}(\omega)&=\xi \left[ n_{s,old} (\omega)-1 \right]+1,\label{parn}\\ k_{s,new}(\omega)&=\psi k_{s,old}(\omega).\label{park} \end{align} For every value of $d$, new values of $n_s(\omega,d)$ and $\alpha_s(\omega,d)$ are calculated by eqs.~\ref{ns} and \ref{alphas}, filtered with the polynomial fit, and then optimized by minimizing $\Delta H$. Plotting the minima of $\Delta H$ as a function of $d$, we obtain a curve whose minimum at $d_{min}$ corresponds to the thickness of the sample. The fitting process is then repeated, starting from the triad $n_s(\omega,d_{min})$, $\alpha_s(\omega,d_{min})$, and $d_{min}$, but now with the additional parametrization of the thickness, $d_{new}=\zeta d_{old}$, in order to refine its value. It is worth noting that the parametrizations of $n_s$ and $\alpha_s$ through the scalars $\xi$ and $\psi$ do not change their frequency behaviours, which are still those inferred from the first step and can be affected by the filtering process. Thus, as a third and final step, as already reported by Scheller et al.~\citep{Scheller_11}, we perform an optimization of the optical parameters at every frequency step $\omega_i$ using the function \begin{equation}\label{deltaHwi} \Delta H(\omega_i)=\vert H(\omega_i)-H_{exp}(\omega_i)\vert \end{equation} The starting values for $n_s$ and $\alpha_s$ are the optimal ones found in the previous step, the parametrizations are the same as in eqs.~\ref{parn} and \ref{park} with the same algorithm, whilst $d$ is always kept fixed to the optimal value estimated before. This last optimization restores frequency features of the optical constants that may have been distorted or erased by the first-step evaluation and filtering process.
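The second step can be sketched as follows. This is a minimal, self-contained illustration with hypothetical names (scipy's Nelder-Mead simplex standing in for our Matlab implementation): it builds the full single-slab model of eqs.~\ref{Hfun1} and \ref{HfunFP}, with the $FP$ sum in closed form, and optimizes the two scalars at fixed $d$ against synthetic data generated by the same model:

```python
import numpy as np
from scipy.optimize import minimize

C = 299792458.0  # vacuum speed of light, m/s

def H_model(n, k, d, omega, n0=1.0):
    """Single-slab transfer function with the closed-form Fabry-Perot sum."""
    ns = n - 1j * k
    tau_taup = 4 * n0 * ns / (n0 + ns) ** 2          # product of Fresnel factors
    rhop = (n0 - ns) / (n0 + ns)
    fp = 1.0 / (1.0 - rhop ** 2 * np.exp(-2j * ns * omega * d / C))
    return tau_taup * np.exp(-1j * (ns - n0) * omega * d / C) * fp

def delta_H(params, n_old, k_old, d, omega, H_exp):
    """Second-step objective as a function of the two scalars xi, psi."""
    xi, psi = params
    return np.sum(np.abs(H_model(xi * (n_old - 1) + 1, psi * k_old,
                                 d, omega) - H_exp))

# Synthetic "measured" data for n = 1.5, k = 0.05, d = 100 um,
# with a deliberately wrong starting guess n = 1.4, k = 0.06.
omega = 2 * np.pi * np.linspace(0.2e12, 2.0e12, 80)
d = 100e-6
H_exp = H_model(1.5, 0.05, d, omega)
n_old = np.full_like(omega, 1.4)
k_old = np.full_like(omega, 0.06)
res = minimize(delta_H, x0=[1.0, 1.0],
               args=(n_old, k_old, d, omega, H_exp), method="Nelder-Mead")
xi_opt, psi_opt = res.x
```

Repeating this minimization on a grid of fixed $d$ values and locating the minimum of the resulting curve gives the thickness estimate $d_{min}$ discussed above; the frequency-by-frequency third step then refines $n_s(\omega_i)$ and $\alpha_s(\omega_i)$ individually.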
This new set of curves of $n_s$ and $\alpha_s$ can be used for a new optimization cycle starting again from step one: especially for samples with short optical paths, the optimization must be repeated several times to find reliable values of the thickness and the optical constants. All the calculations and minimization routines described above, by which all the data reported in this work have been processed, were performed with an in-house developed Matlab code. In Figure~\ref{DiagBlock} we report a block diagram of the fitting procedure. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{schema.pdf} \caption{Schematic representation of the fitting procedure used for the extraction of the refractive index, $n_S$, the absorption coefficient, $\alpha_S$, and the sample thickness, $d$. The algorithm can be diagrammed in three main blocks. Step 1: a preliminary and approximate evaluation of the refractive index and absorption coefficient. Step 2: a minimization routine of the function $\Delta H$ enables the estimation of the correct value of the sample thickness and more reliable values of $n_S$ and $\alpha_S$. Finally, step 3: the real frequency dependence of the optical parameters is revealed through the minimization of $\Delta H$ frequency by frequency. For very thin samples successive iterations of the process need to be repeated until the sample thickness value stabilizes. In each cycle the output parameters of step 3 are smoothed by the polynomial fit and used again as input parameters for step 1.} \label{DiagBlock} \end{figure} What has been described so far concerns the analysis of a free-standing single slab or layer; in the case of a bilayer system the optimization process is similar but starts from a different set of equations. The analysis can be carried out if at least the optical properties of one of the layers and its thickness are completely known.
This can be achieved by preliminarily characterizing one of the two layers as a single free-standing layer by means of the analysis just described. The first step is to consider the bilayer system as a single layer and obtain effective optical parameters using the approximate eqs.~\ref{ns} and \ref{alphas} with $d=d_1+d_2$, where $d_1$ and $d_2$ are the thicknesses of the two layers. The effective parameters $n_{eff}$ and $\alpha_{eff}$ can be connected to the optical constants of the two layers by simple considerations on the refractive index and absorption. Denoting the layer under study as 1 and the known layer as 2, we get: \begin{align}\label{n1k1} n_1&=\frac{1}{d_1}\left[(n_{eff}-n_0)(d_1+d_2)-d_2(n_2-n_0)\right]+n_0,\\ k_1&=\frac{1}{d_1}k_{eff}(d_1+d_2)-k_2 \frac{d_2}{d_1}. \end{align} The optical parameters calculated with these expressions need to be filtered from the $FP$ oscillations by applying the same polynomial fit described before. The transfer function for a bilayer system is now more complex and, for waves at normal incidence, can be written as~\cite{MacFarlane_94,Jin_14}: \begin{widetext} \begin{align}\label{Hfun2} H(\omega) &= \frac{E_t(\omega)}{E_i(\omega)} = \frac {\tau_{01}\tau_{12}\tau_{20}~e^{ -i\frac{\omega}{c}\left[ d_1\hat{n}_1+d_2\hat{n}_2-n_0\left( d_1+d_2\right) \right] } } {\left[1-\rho_{21}\rho_{20}~e^{-i\frac{2\omega}{c}d_2\hat{n}_2}\right] \left[ 1-\rho_{12}\rho_{10}~e^{-i\frac{2\omega}{c}d_1\hat{n}_1}-\frac{\rho_{20}\rho_{10}\tau_{21}\tau_{12}~e^{-i\frac{2\omega}{c}\left( d_1\hat{n}_1+d_2\hat{n}_2\right)}}{1-\rho_{21}\rho_{20}~e^{-i\frac{2\omega}{c}d_2\hat{n}_2}} \right]} \end{align} \end{widetext} where $\hat{n}_i$ are the complex refractive indices and $\tau_{ij}$ and $\rho_{ij}$ the complex transmission and reflection coefficients, with $i,j=0$ for nitrogen and $1$, $2$ for the two layers.
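The inversion of the effective parameters in eq.~\ref{n1k1} amounts to removing the known layer's thickness-weighted contribution. A small round-trip sketch (our own illustrative code, with plausible layer values used only as numbers):

```python
def layer1_from_effective(n_eff, k_eff, n2, k2, d1, d2, n0=1.0):
    """First estimate of the unknown layer 1 from the effective
    single-slab parameters of the bilayer (thickness-weighted relations)."""
    n1 = ((n_eff - n0) * (d1 + d2) - d2 * (n2 - n0)) / d1 + n0
    k1 = k_eff * (d1 + d2) / d1 - k2 * d2 / d1
    return n1, k1

# Round trip: build n_eff, k_eff as thickness-weighted averages and invert.
d1, d2 = 39e-6, 31e-6            # e.g. plastic (unknown) and Teflon (known)
n1_true, k1_true, n2, k2 = 1.53, 0.02, 1.18, 0.0
n_eff = 1.0 + (d1 * (n1_true - 1.0) + d2 * (n2 - 1.0)) / (d1 + d2)
k_eff = (d1 * k1_true + d2 * k2) / (d1 + d2)
n1, k1 = layer1_from_effective(n_eff, k_eff, n2, k2, d1, d2)
```

These first estimates then seed the optimization against the full bilayer transfer function of eq.~\ref{Hfun2}.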
Equation~\ref{Hfun2} includes a $FP$ effect with an infinite number of reflections between all three interfaces; this can be considered a valid approximation for thin layers and measurements extended over long time delays. Following the same optimization and minimization procedure as in the single-slab case, but using the correct expression of $H(\omega)$ reported in eq.~\ref{Hfun2}, we can calculate the final values of the refractive index $n_1$ and the extinction coefficient $k_1$. \begin{figure*} [htb] \centering \includegraphics[width=0.7\textwidth]{figure3.pdf} \caption{Refractive index and absorption coefficient vs frequency of the Teflon layer and the plastic layer measured as single free-standing layers. The analysis algorithm for a single-layer sample gives a thickness of $31~\mu m$ for the Teflon layer and $39~\mu m$ for the plastic layer.} \label{singlelayer} \end{figure*} \section{Results and discussion} In order to test in depth the analysis algorithm for the extraction of the optical parameters and thickness in single-layer and bilayer samples, we studied three samples: a single layer made of Teflon, a single plastic layer, and a bilayer sample made by the close overlap of these two layers. The single-layer thicknesses measured by a micrometer screw are $30\pm 2~\mu m$ and $37\pm 2~\mu m$, respectively\footnote{Due to the softness of the two materials, we had to hold the layers between two glass windows and calculate the layer thickness as a difference.}. The single-layer samples have been studied as free-standing samples and then glued to form a bilayer system (the glue is estimated to have sub-micrometric thickness). The Teflon layer is made mainly of polytetrafluoroethylene (PTFE); the plastic layer is made mainly of polypropylene (PP). Nevertheless, some other polymers could be present in these material compositions, hence the absolute values of their refractive indices in the THz range are unknown.
In Figure~\ref{singlelayer} we show the results obtained for the two materials studied individually. The single-layer analysis gave for the Teflon sample a refractive index almost constant over the whole probed frequency range, $n=1.18$, and a negligible absorption coefficient. For the plastic layer, instead, we found a refractive index weakly decreasing with frequency, with a value of $n=1.53$ at $1~THz$, and an absorption coefficient increasing with frequency ($\alpha\sim10~cm^{-1}$ at $1~THz$). The measured values are in fair agreement with other THz measurements on PTFE and PP bulk samples \cite{Jin_06}. The sample thicknesses extracted from the single-layer analysis for the Teflon and the plastic samples were $31~\mu m$ and $39~\mu m$, respectively. These values are in good agreement with the values measured with the micrometer screw. As a second step in the test, we performed the analysis of the bilayer system considering the Teflon layer as the known material and the plastic layer as the unknown one; we extracted the thickness and optical parameters of the plastic layer by using the bilayer analysis algorithm and compared the results with those extracted by the single-layer analysis done before. Figure~\ref{bilayer} reports the comparison between the two analyses. The agreement of thickness, refractive index, and absorption coefficient is very good. The absolute error we estimate on the thickness extraction is about $5~\mu m$ for both analyses. Moreover, the data analysis provides trustworthy thickness values only down to a lower limit of about $10~\mu m$. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{figure4.pdf} \caption{Refractive index and absorption coefficient vs frequency of the plastic layer measured in the bilayer configuration (red line) and extracted from the bilayer analysis considering the Teflon layer as the known material.
For comparison, we report in the figure the plastic layer measured as a single free-standing sample (black line). The optical parameters are in very good agreement with those obtained from the single-layer measurement.} \label{bilayer} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{figure5.pdf} \caption{THz behaviour of the refractive index and absorption coefficient of iron gall black ink studied in the form of a thin film layered on a $10~\mu m$ PE pellicle. The bilayer algorithm allows for the measurement of the ink film thickness, of about $17~\mu m$.} \label{bilayerink} \end{figure*} After this demanding test of the analysis procedure, and in order to get closer to a real-practice experiment on ancient manuscripts and drawings, we measured a black ink film layered on a polyethylene (PE) pellicle of $10~\mu m$ (polyethylene far-IR sample cards by Sigma-Aldrich). As an ink sample we chose a historical iron gall black ink prepared in the lab using a synthesized gallic acid as the source of gallo-tannic acid (more details about the ink preparation and the comparison with other black inks can be found in~\cite{tasseva_17}). PE is the ideal support for absorption spectroscopy in the THz region thanks to its negligible absorption coefficient (below $1~cm^{-1}$, see~\cite{Lee_09} and references therein) and a refractive index equal to 1.4, constant over the whole studied THz frequency range. In Figure \ref{bilayerink} we report the optical parameters for the studied ink. A thickness of about $17~\mu m$ was found. The ink shows an absorption spectrum with features that have to be ascribed to the gallic acid present in the ink \cite{taschin_17}. These features are confirmed by the dispersive character of the refractive index at these frequencies.
\section{Conclusion} We implemented an innovative experimental procedure and data analysis to measure the transmission parameters in the THz frequency range and the thickness of thin-film materials. In the case of thin-film materials, the THz pulse transmission is strongly affected by multiple reflections. For these samples, the extraction of the real physical material parameters requires a proper data analysis taking into account the multiple-reflection contributions to the THz-TDS signal. We implemented an iterative fitting process, based on a polynomial fit of the transmission parameters, that enables a correct extraction of the refractive indices and absorption coefficients for samples with thickness down to $10~\mu m$, both for free-standing layers and for multilayer systems. Using the THz-TDS technique and the iterative fitting procedure, we succeeded in measuring with high confidence the refractive indices and the absorption coefficients of samples made of a single thin layer or a double-layer structure. The study shows that the frequency dependence of the transmission parameters of very thin layered samples can be extracted reliably from THz-TDS measurements, disentangling the single-layer contributions. Moreover, the THz transmission parameters of each layer are measured on an absolute scale and the layer thickness is extracted. We applied this technique to the following samples: a pair of samples made of Teflon and polypropylene, in single-layer and bilayer structures, in order to test the experimental and data-analysis capabilities; and a prototype sample for artwork investigation made of a thin film of black ink layered on a polyethylene pellicle. \section*{Acknowledgement} This work was funded by Regione Toscana, prog. POR-CROFSE-UNIFI-26 and by Ente Cassa di Risparmio Firenze, prog. 2015-0857. We acknowledge M. De Pas, A.
Montori, and M. Giuntini for providing their continuous assistance in the electronic set-ups; and R. Ballerini and A. Hajeb for the mechanical realizations. \section*{References}
\section{Introduction} In a recent paper\cite{wulee12}, we presented predictions of the photo-production of the bound states $[^3He]_{J/\Psi}$ on a $^4He$ target and $[q^6]_{J/\Psi}$ on a $^3He$ target. In this work we apply the same approach to predict the production cross sections of these bound states with pion beams. Our predictions depend on a potential model of the $J/\Psi$-N interaction $v_{J/\Psi N,J/\Psi N}$. All theoretical calculations of $v_{J/\Psi N,J/\Psi N}$, based on the effective field theory method\cite{pesk79,luke92,brodsky-1,russia}, the Pomeron-quark coupling model\cite{brodsky90}, and Lattice QCD\cite{lqcd}, predict an attractive interaction. However, the resulting strength of $v_{J/\Psi N,J/\Psi N}$ is rather uncertain. Here we present results on the deuteron target to facilitate the experimental tests of these models. \section{Production of bound states $[^3He]_{J/\Psi}$ and $[q^6]_{J/\Psi}$} The calculations in Ref.\cite{wulee12} are based on the impulse approximation mechanism illustrated in Fig.\ref{fig:impulse}. The same approach can be used to perform calculations with incident pions by simply replacing the $\gamma + N \rightarrow J/\Psi +N$ amplitude with the $\pi+ N \rightarrow J/\Psi +N$ amplitude.
Following the approach of Refs.\cite{wulee12} and \cite{brodsky-1}, we calculate the $\pi+ N \rightarrow J/\Psi +N$ amplitude from the $\rho$-exchange mechanism defined by the following Lagrangian \begin{eqnarray} L = L_{J/\Psi,\rho\pi} + L_{\rho NN} \label{eq:larg} \end{eqnarray} with \begin{eqnarray} L_{J/\Psi,\rho\pi}&=& -\frac{g_{J/\Psi,\rho\pi}}{m_{J/\Psi}} \epsilon^{\alpha\beta\mu\nu} \partial_\alpha \phi_{J/\Psi,\beta} \partial_\mu \vec{\rho}_\nu \cdot\vec{\phi}_{\pi}\,, \label{eq:L-jrp} \\ L_{\rho NN}&=& g_{\rho NN}\bar{\psi}_N[\gamma^\eta \vec{\rho}_\eta - \frac{\kappa_\rho}{2m_N}\sigma^{\eta\delta}\partial_\delta\vec{\rho}_\eta]\cdot \frac{\vec{\tau}}{2}\psi_N\,, \label{eq:L-rnn} \end{eqnarray} where $g_{J/\Psi,\rho\pi}=0.032$ is determined from the width of $J/\Psi \rightarrow \rho + \pi$, while $g_{\rho NN}= 6.23$ and $\kappa_\rho =1.825$ are taken from a dynamical model\cite{sl96} of $\pi N$ scattering. For the calculations of $\pi +^4He \rightarrow [^3He]_{J/\Psi}+N $, we need to calculate the bound state wavefunction from a $^3He$-$J/\Psi$ potential $V_{3,J/\Psi}= -\alpha_3 \frac{e^{-\mu r}}{r}$. Here we use $\alpha_3=0.33$ and $\mu=257$ MeV, determined\cite{wulee12} from the Pomeron-quark coupling model of Brodsky, Schmidt, and de Teramond\cite{brodsky90}. For the calculations of $\pi +^3He \rightarrow [q^6]_{J/\Psi} +N$, the probability of finding a six-quark cluster $q^6$ in $^3He$ is determined by using the Compound Bag model\cite{fasano} of the $NN$ interaction. The relative wavefunction of $J/\Psi$-$q^6$ is constrained by reproducing the $^3He$ charge form factor, as detailed in Ref.\cite{wulee12}. The results from pion and photon beams are compared in Fig.\ref{fig:he4} for $[^3He]_{J/\Psi}$ production on a $^4He$ target and in Fig.\ref{fig:he3} for $[q^6]_{J/\Psi}$ production on a $^3He$ target. We see that the cross sections from pions are about a factor of 2-3 larger than those from photons.
We also see that the detection of these bound states with hidden charm is favored at energies near the production threshold. \begin{figure}[ht] \centering \epsfig{file=impulse.eps, width=0.4\hsize} \caption{The impulse approximation mechanism of the $\gamma/\pi + A \rightarrow N + [B]_{J/\Psi}$ reaction. $A$ is a nucleus with mass number $A$, and $B$ can be a nucleus with mass number $(A-1)$ or a $[q^{3(A-1)}]$ multi-quark cluster. } \label{fig:impulse} \end{figure} \begin{figure}[ht] \centering \epsfig{file=he4.eps, width=0.4\hsize} \caption{Production cross sections of $\gamma/\pi +^4He \rightarrow [^3He]_{J/\Psi} + n$. } \label{fig:he4} \end{figure} \begin{figure}[ht] \centering \epsfig{file=he3.eps, width=0.4\hsize} \caption{Production cross sections of $\gamma/\pi +^3He \rightarrow [q^6]_{J/\Psi} + n$. } \label{fig:he3} \end{figure} \section{Production on deuteron target} To facilitate the experimental determination of the $J/\Psi$-N interaction, we make predictions for the cross sections of $\gamma/\pi + d \rightarrow J/\Psi + n + p$. In the impulse approximation, the amplitude of this process is the coherent sum of the three mechanisms illustrated in Fig.\ref{fig:diagram}. The $\pi + N \rightarrow J/\Psi+ N$ amplitudes needed in the calculations are computed from the $\rho$-exchange mechanism, as described in section II. The $\gamma + N \rightarrow J/\Psi+ N$ amplitude is taken from Ref.\cite{wulee12}, the $NN \rightarrow NN$ amplitudes are generated from the Bonn potential, and the $J/\Psi+ N \rightarrow J/\Psi+N $ amplitudes are generated from a potential $v_{J/\Psi N,J/\Psi N} = -\alpha \frac{e^{-\mu r}}{r}$. With $\mu = 630$ MeV, the strength $\alpha$ determines the s-wave scattering length $a$. In presenting our results, we use $a$ to indicate the strength of the considered $J/\Psi$-N potential model. We find that the kinematics favoring the determination of $v_{J/\Psi N,J/\Psi N}$ is the region where the outgoing proton emerges at the forward angle $\theta_p=0$.
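To make the dependence on the potential strength concrete, the s-wave scattering length mentioned above can be obtained by integrating the zero-energy radial equation $u''(r)=(2m_{\rm red}/\hbar^2)V(r)\,u(r)$ outward and reading off the intercept of the asymptotic line $u\propto(r-a)$. The sketch below assumes $V(r)=-\alpha\,\hbar c\,e^{-\mu r}/r$ and a $J/\Psi$-$N$ reduced mass built from standard masses; it is an illustration of the standard procedure, not the authors' code:

```python
import numpy as np

HBARC = 197.327                              # hbar*c in MeV*fm
M_RED = 3097.0 * 939.0 / (3097.0 + 939.0)   # J/Psi-N reduced mass, MeV

def scattering_length(alpha, mu_mev=630.0, r_max=8.0, n=20000):
    """s-wave scattering length (fm) of V(r) = -alpha*hbar*c*exp(-mu r)/r."""
    mu = mu_mev / HBARC                # screening mass in fm^-1
    pref = 2.0 * M_RED / HBARC         # 2 m c^2 / (hbar c), in fm^-1
    def upp(r, u):                     # u'' = (2m/hbar^2) V(r) u
        return -pref * alpha * np.exp(-mu * r) / r * u
    r, h = 1e-6, (r_max - 1e-6) / n
    u, up = r, 1.0                     # regular solution: u ~ r near origin
    for _ in range(n):                 # RK4 step for the system (u, u')
        k1u, k1p = up, upp(r, u)
        k2u, k2p = up + 0.5*h*k1p, upp(r + 0.5*h, u + 0.5*h*k1u)
        k3u, k3p = up + 0.5*h*k2p, upp(r + 0.5*h, u + 0.5*h*k2u)
        k4u, k4p = up + h*k3p, upp(r + h, u + h*k3u)
        u  += h * (k1u + 2*k2u + 2*k3u + k4u) / 6.0
        up += h * (k1p + 2*k2p + 2*k3p + k4p) / 6.0
        r  += h
    return r - u / up                  # asymptotically u ~ const*(r - a)
```

With this sign convention, an attraction too weak to support a bound state gives $a<0$, and $|a|$ grows as $\alpha$ increases, mirroring the weak ($a=-0.24$ fm) versus strong ($a=-8.83$ fm) models compared in the figures.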
In Fig.\ref{fig:alld}, we compare the predicted differential cross sections of the outgoing proton at $\theta_p=0$, where $\kappa_{J/\Psi}$ denotes the relative momentum of the outgoing $J/\Psi$-n pair. We see that the cross sections for the pion beam are larger than those for the photon beam in the low-$\kappa_{J/\Psi}$ region, where the $J/\Psi$-N relative motion is slow. We also see that the predicted magnitudes depend on the scattering length $a$ of the $J/\Psi$-N potential model. The results for $a = -8.83$ fm (left) are about a factor of 10 larger than those for $a = -0.24$ fm (right). In Fig.\ref{fig:diffd}, we show the relative importance of the different mechanisms illustrated in Fig.\ref{fig:diagram}. For the case with photon beams (left), the $J/\Psi$-N re-scattering term (Fig.\ref{fig:diagram}(c)) dominates in the considered kinematic region. Thus the measured cross section (solid curve) can be used to sensitively test the considered $J/\Psi$-N potential models. For the results with pion beams (right), determining the $J/\Psi$-N interaction clearly needs an accurate calculation of the impulse term (Fig.\ref{fig:diagram}(a)), which is comparable to the $J/\Psi$-N re-scattering term (Fig.\ref{fig:diagram}(c)). \begin{figure}[ht] \centering \epsfig{file=diagram.eps, width=0.6\hsize} \caption{The mechanisms of $\gamma + d \rightarrow J/\Psi + p + n$. } \label{fig:diagram} \end{figure} \begin{figure}[ht] \centering \epsfig{file=alld.eps, width=0.6\hsize} \caption{The differential cross sections of $\gamma/\pi + d \rightarrow J/\Psi + n + p$. } \label{fig:alld} \end{figure} \begin{figure}[ht] \centering \epsfig{file=diffd.eps, width=0.6\hsize} \caption{Relative importance of the contributions from the three mechanisms illustrated in Fig.\ref{fig:diagram} to the differential cross sections of $\gamma/\pi + d \rightarrow J/\Psi + n + p$. Imp: Fig.4(a), $NN$: Fig.4(b), $J/\Psi N$: Fig.4(c).
} \label{fig:diffd} \end{figure} \section{Discussion} If the predicted bound states $[^3He]_{J/\Psi}$ and $[q^6]_{J/\Psi}$ can be detected, they will provide useful information for understanding the role of the gluon field in determining nuclear properties. Thus experiments on $\gamma/\pi +^4He (^3He) \rightarrow N + [^3He]_{J/\Psi} ([q^6]_{J/\Psi})$ will be very interesting to perform at J-PARC and JLab. However, the data can be analyzed properly only with information that determines the basic $J/\Psi$-N interaction. Our predictions of the cross sections for $\gamma/\pi + d \rightarrow J/\Psi +n + p$ can facilitate future experimental efforts in this direction. \clearpage \begin{acknowledgments} This work is supported by the U.S. Department of Energy, Office of Nuclear Physics Division, under Contract No. DE-AC02-06CH11357. This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and resources provided on ``Fusion,'' a 320-node computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory. \end{acknowledgments}
\section{Introduction} In computer vision and robotics, \acf{hoi}~\cite{liao2020ppdm,liao2022gen,zhang2021mining,yuan2022detecting} is the crux of modern fine-grained human activity understanding. In this work, we tackle the challenging problem of \acf{fahoi}, which requires (i) building on kinematic-agnostic object representations for \textbf{articulated} objects, and (ii) modeling the fine-grained spatial-temporal interactions between objects and human \textbf{whole-bodies}. Specifically, we address the problem of object pose estimation under \ac{fahoi}, as the reconstruction of foreground 3D human poses is relatively easy from front-view cameras. Object pose estimation under \ac{fahoi} is inherently challenging for three primary reasons: \paragraph{Lack of \ac{fahoi} datasets that capture whole-body humans interacting with articulated objects} Despite recent progress in 3D \ac{hoi}, prior works either assume that the objects to be interacted with are rigid~\cite{bhatnagar2022behave,taheri2020grab,zhang2020perceiving,li2020detailed}, or that the interactions involve only parts of the human body (\eg, only hands~\cite{fan2022articulated} or upper limbs~\cite{xu2021d3d,haresh2022articulated}). These assumptions oversimplify daily interactions; humans use different body parts to interact with articulated objects composed of movable parts, such as cabinets and office chairs, calling for a dataset with a finer-grained level of interactions. \paragraph{Large variance of object kinematic structures} Objects related to \ac{fahoi} show significant divergence in their kinematic structures, even within the same category; objects possess various numbers and types of parts and joints. Such diversity is in stark contrast to the articulated objects modeled in the literature~\cite{xu2021d3d,liu2022akb,fan2022articulated,haresh2022articulated}, which assume limited or no variety in kinematic structures.
Reconstructing objects with diverse geometries and structures remains challenging. \paragraph{Complex and subtle relations between human body parts and object parts} Interacting with articulated objects involves complicated spatial and physical relationships, with severe occlusions and rich contacts that defeat conventional pose estimation methods relying on pointcloud template-matching~\cite{zhang2020perceiving,wang2019normalized,he2020pvn3d,peng2019pvnet,li2020category}. The contact-rich property also makes it difficult to capture fine details in reconstruction, as even small errors can result in implausible interactions such as penetration and floating. We devise the following three solutions to tackle the above three challenges, respectively. To address the scarcity of \ac{fahoi} datasets, we present \ac{dataset}, a large-scale \ac{fahoi} dataset with multi-view RGB-D sequences. As shown in \cref{fig:teaser}, \ac{dataset} includes 17.3\xspace hours of diverse interactions among 46\xspace participants and 81\xspace sittable objects (\eg, chairs, sofas, stools, and benches), 28 of which have movable parts; each frame includes 3D meshes of whole-body humans and objects. In this work, we focus on interactions with sittable objects; they are diverse in structure and contain distinct movable parts that afford various whole-body human interactions. To model diverse kinematic structures, we extend the task of object pose estimation to the challenging setting of kinematic-agnostic pose estimation. Existing datasets~\cite{xiang2020sapien,wang2019shape2motion,liu2022akb} and methods~\cite{li2020category,abbatematteo2019learning,mu2021a,tseng2022cla} for articulated objects assume similar or identical kinematic structures for intra-class objects; this assumption fails when dealing with real-world daily objects. The kinematic structures in \ac{dataset} vary from a rigid stool with no articulation to swivel chairs with 7 movable parts.
Specifically, we relax the assumption of limited kinematic structures to an open set of flexible but known structures. Given an observed image, an estimated human body from the image, and the kinematic structure of the object of interest, we aim to reconstruct the pose and shape of the object. To disambiguate the complex and subtle relations during whole-body interactions with articulated objects, we devise a novel pose estimation approach that leverages the fine-grained interaction relationships to reconstruct the interacting object. A common solution in prior works~\cite{zhang2020perceiving,hassan2019resolving,bhatnagar2022behave} is to manually label each object mesh with contact maps corresponding to human body parts. In comparison, our method exploits the complex and fine relationships with a reconstruction model and an interaction prior learned with a \ac{cvae}, which avoids relying on pre-defined knowledge obtained through mundane annotation. Specifically, our approach first reconstructs coarse shapes and poses of the objects, then optimizes the details with the learned interaction prior. Our \textbf{contributions} are four-fold. (i) We present \ac{dataset}, a large-scale multi-view RGB-D dataset with diverse and high-quality 3D meshes of humans and articulated objects. (ii) We extend articulated object pose estimation to the challenging setting of \ac{fahoi}. (iii) We devise an object pose estimation approach agnostic to the articulation structure. (iv) We propose a generic interaction prior that captures the fine-grained interactions with sittable objects and facilitates pose estimation.
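Since the estimator must consume an open set of kinematic structures, an object is naturally described as a tree of parts connected by typed joints; given joint states, global part poses follow by composing transforms down the tree. A minimal numpy sketch of such a kinematic-agnostic representation (the dictionary format and field names are our own illustrative assumptions, not the format used in \ac{dataset}):

```python
import numpy as np

def axis_angle(axis, angle):
    """Rotation matrix about a unit axis (Rodrigues' formula)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

def part_poses(structure, root_R, root_t, joint_states):
    """Global (R, t) of every part in an arbitrary kinematic tree.

    structure: insertion-ordered dict, parents before children:
        part -> (parent, joint_type, axis, origin_in_parent_frame)
    joint_type is "revolute", "prismatic", or "fixed".
    """
    poses = {None: (root_R, root_t)}          # the root frame itself
    for part, (parent, jtype, axis, origin) in structure.items():
        Rp, tp = poses[parent]
        q = joint_states.get(part, 0.0)
        if jtype == "revolute":
            Rj, tj = axis_angle(axis, q), np.zeros(3)
        elif jtype == "prismatic":
            Rj, tj = np.eye(3), q * np.asarray(axis, float)
        else:                                  # "fixed"
            Rj, tj = np.eye(3), np.zeros(3)
        # child frame = parent frame, then translate to joint origin, then move joint
        R = Rp @ Rj
        t = Rp @ (np.asarray(origin, float) + tj) + tp
        poses[part] = (R, t)
    return poses
```

The same routine handles a rigid stool (a single fixed part) and a swivel chair with several revolute and prismatic joints, which is the flexibility the kinematic-agnostic setting requires.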
\section{Related Work} \paragraph{3D \acf{hoi}} \ac{hoi} research has evolved from detecting interactions in 2D images~\cite{chao2015hico,qi2018learning,gkioxari2018detecting,liao2020ppdm,liao2022gen,zhang2021mining,yuan2022detecting} to reconstructing~\cite{savva2016pigraphs,hassan2019resolving,chen2019holistic,weng2021holistic,xu2021d3d,zanfir2018monocular,siwei2021learning} and generating~\cite{hassan2021populating,wang2021synthesizing,xu2020hierarchical,holden2017phase,wang2022humanise} 3D interactions in 3D scenes. Notably, PiGraph~\cite{savva2016pigraphs} captures human daily activities, Rosinol \emph{et al}\onedot~\cite{rosinol20203d} represent the interactions with a graph structure, and Hassan \emph{et al}\onedot~\cite{hassan2019resolving,xu2021d3d} reconstruct 3D human-scene interactions. However, these works rely on visual observations to collect ground-truth 3D poses, which leads to inaccurate reconstruction under partial observation. Meanwhile, MoCap systems~\cite{taheri2020grab,bhatnagar2022behave,fan2022articulated} provide fine-grained 3D interactions between humans and 3D objects. In particular, GRAB~\cite{taheri2020grab} and ARCTIC~\cite{fan2022articulated} focus on interactions with small objects, such as grasping and holding, whereas BEHAVE~\cite{bhatnagar2022behave} captures the interactions with daily objects. However, most existing works focus either on rigid objects or, in the case of articulated objects, only on hand-object interactions. In comparison, our \ac{dataset} dataset provides realistic \textit{whole-body} interactions (\eg, moving the bench, relaxing in the chair) with diverse articulated objects. \paragraph{\acf{ahoi}} \acp{ahoi} build on part-level object representations and model the fine-grained spatial-temporal interactions between humans and articulated objects~\cite{haresh2022articulated}. To date, the most relevant works are D3D-HOI~\cite{xu2021d3d}, ARCTIC~\cite{fan2022articulated}, and 3DADN~\cite{qian2022understanding}.
Specifically, D3D-HOI~\cite{xu2021d3d} collects a video dataset of humans interacting with containers such as microwaves and refrigerators, ARCTIC~\cite{fan2022articulated} collects a motion-captured RGB-D dataset of hand-object interactions with articulated objects, whereas 3DADN~\cite{qian2022understanding} annotates movable object parts from internet videos as 3D planes with rotations. Of note, all objects only have one revolute joint connecting two rigid parts, and all interactions captured focus only on hand-object interactions such as ``open'' and ``close.'' In comparison, we take one step further to study the whole-body \acp{ahoi}; most body parts interact with diverse articulated objects. \begin{table*}[t!] \centering \caption{\textbf{Comparisons between \ac{dataset} and other \ac{hoi} datasets.}} \resizebox{0.97\linewidth}{!}{% \begin{tabular}{cccccccccc} \toprule Dataset & \# object & \# participants & \# instance & \# hours & fps & \# view & articulated objects & human & annotation type \\ \midrule PROX~\cite{hassan2019resolving} & / & 20 & / & 0.9 & 30 & 1 & No & Whole-body & single-kinect\\ GRAB~\cite{taheri2020grab} & 51 & 10 & 4 & 3.8 & 120 & 0 & No & Whole-body & mocap\\ BEHAVE~\cite{bhatnagar2022behave} & 20 & 8 & 6 & 0.14 & 30 & 4 & No & Whole-body & multi-kinect\\ ARCTIC~\cite{fan2022articulated} & 10 & 9 & 1 & 1.2 & 30 & 8+1 & Yes & Two hands & mocap\\ D3D-HOI~\cite{xu2021d3d} & 24 & 5 & / & 0.6 & 3 & 1 & Yes & Whole-body & manual \\ \ac{dataset} (Ours) & 81\xspace & 46\xspace & 32\xspace & 17.3\xspace & 30 & 4 & Yes & Whole-body & mocap\\ \bottomrule \end{tabular}% }% \label{tab:dataset} \end{table*} \begin{figure*}[b!] 
\vspace{6pt} \centering \begin{subfigure}{0.49\linewidth} \includegraphics[width=\linewidth]{data/data-1-optim}% \\ \includegraphics[width=\linewidth]{data/data-2-optim}% \caption{sequences of objects articulating over time} \end{subfigure}% \hfill \begin{subfigure}{0.49\linewidth} \includegraphics[width=\linewidth]{data/data-3-optim}% \\ \includegraphics[width=\linewidth]{data/data-4-optim}% \caption{frames of diverse interactions} \end{subfigure}% \caption{\textbf{Examples from the proposed \ac{dataset} dataset.} \ac{dataset} captures versatile \acp{ahoi} from carefully calibrated multi-view RGB-D cameras and provides fine-grained 3D meshes for both humans and articulated objects. We show (a) RGB frames and ground-truth meshes of \acp{ahoi} in sequences and (b) diverse types of \acp{ahoi}.} \label{fig:dataset} \end{figure*} \paragraph{Contact-Rich \ac{hoi}} \ac{fahoi} requires a more detailed \ac{hoi} understanding. Despite the rapid growth of the literature on 3D \ac{hoi}, only a few works involve full-body contacts, either by reconstruction~\cite{hassan2019resolving} or generation~\cite{zhao2022compositional,wang2022humanise,hassan2021stochastic}. However, these prior works are limited to static scenes and a narrow set of interactions. In comparison, our \ac{dataset} dataset contains diverse articulated objects and interactions. \paragraph{Articulated Object Pose Estimation} Estimating the rotation and translation (\ie, 6-DOF pose estimation) of rigid objects has recently attracted significant attention~\cite{kang2020yolo,he2020pvn3d,braun2016pose,wang2019densefuison,peng2019pvnet,park2019pix2pose,do2018deep}. Template-based methods are a commonly adopted approach~\cite{hinterstoisser2011multimodal,yang2015go,wang2019normalized,kehl2017ssd} and have spurred a series of recent works in articulated object pose estimation~\cite{desingh2019factored,michel2015pose,li2020category}.
Other methods rely on regression models~\cite{abbatematteo2019learning} or implicit functions~\cite{mu2021a,tseng2022cla,yang2021lasr,jiang2022ditto}. Despite recent progress, these methods are based on a simplified assumption of consistent kinematic structures within each object category. Hence, the pose estimation models are designed and trained to estimate the attributes and states of a fixed set of joints. Although recent datasets on articulated objects~\cite{liu2022toward,wang2019shape2motion,liu2022akb} contain different kinematic structures, the diversity of kinematic structures is not the primary focus and thus is still limited. To overcome these shortcomings, we collect the \ac{dataset} dataset with diverse kinematic structures and devise models to handle 3D objects with various parts and kinematics. \section{The \texorpdfstring{\texttt{CHAIRS}}{} Dataset} A major obstacle in modeling \acp{ahoi} is the absence of accurate 3D annotations. In this work, we present \ac{dataset}, a large-scale \ac{ahoi} dataset with multi-view RGB-D sequences. \ac{dataset} provides high-quality 3D meshes of humans and articulated objects during interactions, collected with an inertial-optical hybrid motion capture (MoCap) system and optimized for superior realism and physical plausibility. \cref{tab:dataset} shows the detailed comparison between \ac{dataset} and previous \ac{hoi} datasets. \subsection{Data Collection} \paragraph{Summary} \ac{dataset} has a total of 1390\xspace sequences of articulated interactions between human and sittable objects, such as chairs, sofas, stools, and benches. \cref{fig:dataset} shows exemplar sequences of \ac{dataset} and object gallery. For each object, we asked 6 participants to record three sequences of interactions with it, yielding 18 sequences for each object. In each sequence, a participant was asked to perform 6 different actions. 
The actions were randomly chosen from a list of 32\xspace interactions (\eg, move the stool forward, relax on the sofa, spin the chair); see Supplementary Material for details. We ensure data diversity with 46\xspace participants and 81\xspace objects. Only high-level instructions were provided to the participants to ensure natural performances. \paragraph{Object Gallery} \ac{dataset} features object collections with rich appearances and kinematic structures. The objects were selected and purchased online to maximize style variance; 28 of them have at least one articulated joint. We scanned the 3D meshes of each object with the Scaniverse app on an iPad Pro (11-inch, 2nd generation) and manually refined the geometries to remove artifacts. We define eight object functional parts and use the annotation tool~\cite{mo2019partnet} to segment the 3D meshes accordingly. When interacting with an object, participants were only provided with instructions compatible with the given object. \paragraph{Camera and Hardware Setup} As shown in \cref{fig:setup}, all the sequences were captured exclusively in a controlled laboratory setup, with a designated area of 5m$\times$4m where all actions were fully visible to the cameras. Four front-facing Kinect Azure DK cameras were set up facing the performed interactions. The cameras were well-calibrated and synchronized. To ensure high-quality ground-truth poses for both humans and objects, we adopted a commercial inertial-optical hybrid MoCap system in addition to the Kinect setup; see details in the next section. \begin{figure}[htb!]
\centering \includegraphics[width=\linewidth]{mocap/mocap} \caption{\textbf{The camera and hardware setup of data collection for the \ac{dataset} dataset.} We (a) set up 4 front-facing RGB-D cameras along with a set of motion capture cameras around the capturing site, (b) attach hybrid trackers to movable object parts, and (c) place 5 hybrid trackers and 17 IMUs on participants.} \label{fig:setup} \end{figure} \subsection{Motion Capture (MoCap) System} \paragraph{Hybrid MoCap} Our MoCap system contains a MoCap suit with 5 hybrid trackers and 17 wearable Inertial Measurement Units (IMUs), a pair of gloves with 12 IMUs each, an additional set of hybrid trackers, and a set of 8 high-speed cameras. A hybrid tracker is a rigid assembly of 4 optical markers and an IMU that can measure its own accurate 6D pose even under severe occlusion. We illustrate our data collection setup in \cref{fig:setup}. When capturing the pose of a human or object part, we can either use an IMU to record its global orientation or a hybrid tracker to record its 6D pose. \paragraph{Articulated Object Capture} Collecting the articulated pose of an object during interactions involves three steps. First, we arrange the object in its canonical pose and attach a hybrid tracker to each of its movable parts. Next, we compute the relative transformation between each tracker and its object part. During recording, we calculate the ground-truth 6D pose of each object part in real time based on the trackers' poses. Finally, we fit the rigid parts to the object's kinematic structure to obtain high-quality object poses. \paragraph{Human Body Capture} We adopt the SMPL-X~\cite{pavlakos2019expressive} representation for human poses and shapes. Participants were asked to wear a MoCap suit with 17 IMUs, a pair of MoCap gloves, and 5 hybrid trackers mounted on the head, hands, and feet. Of note, the hybrid trackers capture 6D poses, whereas the IMUs only measure global orientations.
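The tracker-to-part calibration described above reduces to standard rigid-body algebra: the fixed tracker-to-part offset is measured once in the canonical pose and then re-applied to every live tracker reading. A short numpy sketch with $4\times4$ homogeneous transforms (the function names are ours, not the capture system's API):

```python
import numpy as np

def relative_transform(T_tracker0, T_part0):
    """Fixed tracker-to-part offset, computed once at the canonical pose.

    T_tracker0, T_part0: 4x4 homogeneous world poses of tracker and part."""
    return np.linalg.inv(T_tracker0) @ T_part0

def part_pose(T_tracker, T_rel):
    """Live world pose of an object part from its tracker's 4x4 pose."""
    return T_tracker @ T_rel
```

Because the tracker is rigidly attached to the part, the offset `T_rel` stays constant, so each new tracker reading yields the part's 6D pose by a single matrix product.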
We optimize the human model's shape parameters such that the reconstructed SMPL-X mesh aligns with the hybrid tracker positions. The MoCap system produces real-time estimates of human poses and shapes during recording. \subsection{Post-processing} \paragraph{Data Alignment} The Kinect cameras and the MoCap system have separate 3D coordinates and clocks. We align the 3D coordinates of the Kinect sequences with the MoCap reconstructions based on plane-to-plane correspondences~\cite{segal2009generalized}, which alleviates the sensitivity to outliers, disturbances, and partial overlaps. We align the temporal sequences from Kinect and MoCap using time-lagged cross-correlation~\cite{shen2015analysis}, a typical approach for synchronizing two sequences that are shifted relative to each other in time. \paragraph{Penetration Removal} \begin{figure}[b!] \centering \includegraphics[width=\linewidth]{penetration/penetration} \caption{\textbf{Illustration of the penetration removal process.} (a)(d) Small purple points denote human vertices without penetration, whereas large colored points are those with penetration. Red points denote the most significant penetration, and blue barely in contact. (b)(c) Yellow lines denote the original skeleton, red markers the target joints to be optimized, and red lines the optimized skeleton.} \label{fig:optimize} \end{figure} Due to the limited number of sensors and discrepancies in limb lengths, implausible contacts and penetrations still exist in the captured 3D interactions. To address this issue, we fix these physical glitches with a carefully designed optimization algorithm, as shown in \cref{fig:optimize}. Given a parameterized human body and an articulated object point cloud, we first compute the penetration depths between the human and the object point cloud.
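The time-lagged cross-correlation used in the Data Alignment step above can be illustrated with a minimal 1-D version: slide one sequence against the other and keep the integer lag with the highest normalized correlation. This toy sketch (names and the brute-force search are our own simplifications, not the actual synchronization code) operates on any scalar signal extracted from the two streams:

```python
import numpy as np

def best_lag(x, y, max_lag):
    """Integer shift s in [-max_lag, max_lag] maximizing the normalized
    correlation of x[t + s] with y[t]; negative s means y is delayed."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    def corr(lag):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:lag], y[-lag:]
        n = min(len(a), len(b))
        return np.dot(a[:n], b[:n]) / n       # mean product over the overlap
    return max(range(-max_lag, max_lag + 1), key=corr)
```

Once the best lag is found, one stream is shifted by that many frames so the Kinect and MoCap sequences share a common timeline.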
Next, we use the transpose of the linear-blend-skinning weights of SMPL-X to aggregate the maximum penetration depth and direction to the human skeleton joints; this information is used to calculate a target skeleton that offsets the penetration. Finally, we run gradient-based optimization to fit the human model to the new skeleton while keeping the human pose parameters close to the MoCap reconstruction. \paragraph{Privacy Protection} We blur the faces~\cite{blurryfaces} of all participants to hide their identities and informed all participants that they could remove themselves from \ac{dataset} at any time. \section{Articulated Object Pose Estimation} \ac{dataset} can support a wide range of \ac{ahoi} tasks, including detection, motion generation, physics-based analysis, and even language-guided motion generation with additional annotations. We showcase the value of \ac{dataset} on articulated object pose estimation. Despite recent progress in articulated object pose estimation~\cite{xu2021d3d,fan2022articulated,haresh2022articulated} and \ac{hoi} reconstruction~\cite{chen2019holistic,taheri2020grab,zhao2022compositional,wu2021saga}, articulated object pose estimation remains unaddressed in the challenging setting of \ac{fahoi}. Specifically, our setting requires the model to accurately estimate the pose of the articulated objects in the context of heavy occlusion and dense contact. \begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{recon-model_5} \caption{\textbf{The overall architecture of our model.} The reconstruction model uses the predicted voxelized human to guide the pose estimation of the interacting object. We further regress the root 6D pose of the object using the image feature and the SMPL-X parameters.
We utilize both predictions and an interaction prior to optimize the final estimated pose.} \label{fig:overview} \end{figure*} \subsection{Task Definition} Given an observed image $I$, the parameterized human model $H=(\beta,\theta_b,\theta_h,R_b,T_b)$, and the meshes $X=\{X_i, i=1,\cdots,N\}$ of the interacting object that has $N$ parts, the task is to estimate the object pose $O=\{(R_i, T_i), i=0,\cdots,N\}$, where $\beta\in\mathbb{R}^{10}, \theta_b\in\mathbb{R}^{21\times6}, \theta_h\in\mathbb{R}^{30\times6}$, and $R_b\in\mathbb{R}^6$ and $T_b\in\mathbb{R}^3$ are the shape and pose parameters of the SMPL-X~\cite{pavlakos2019expressive} model. $(R_0\in\mathbb{R}^6, T_0\in\mathbb{R}^3)$ is the object root pose, and $\{(R_i\in\mathbb{R}^6, T_i\in\mathbb{R}^3)\}$ denotes the global rotation and translation for each part $X_i$. We use the orthogonal 6D representation~\cite{zhou2019continuity} for the rotations in both human and object poses. \subsection{Model Architecture} We propose an interaction-aware object pose estimation model that leverages fine-grained geometric relationships in \acp{hoi} and the interaction priors. Our method contains two stages: given an image and estimated SMPL-X~\cite{pavlakos2019expressive} parameters, we first estimate the object occupancy grids and root pose with a reconstruction model. Then, we optimize the reconstructed human-object pair with a learned interaction prior. \cref{fig:overview} illustrates the overall framework of our model, and \cref{fig:prior} shows the interaction prior model. \begin{figure}[ht!] \centering \includegraphics[width=\linewidth]{prior-model} \caption{\textbf{An illustration of the interaction prior model.} It is a \ac{cvae} that generates object voxels conditioned on human voxels. 
We minimize the norm of the latent code during optimization.} \label{fig:prior} \end{figure} \subsection{Object Reconstruction and Pose Initialization} Given an observation $I$, we estimate the human pose and shape using an off-the-shelf estimator and voxelize the estimated human shapes $H'$ using Kaolin~\cite{jatavallabhula2019kaolin} to four different resolutions. To better utilize the geometric relationship between the human-object pair, we estimate the object shape and pose with the guidance of the human pose. Specifically, we first extract the ResNet-101~\cite{he2016deep} features from the image and estimate the object voxel from the image features with a 3D decoder, which is composed of three 3DConvT layers and upsampling layers at different resolutions, and two $1\times1$ 3DConv layers. Next, we concatenate the convolutional feature grids with the human voxels at each resolution to enhance the human pose guidance. The last 3DConv layer produces the estimated object occupancy grid $\mathcal{V}'_{O}$. We finally concatenate the image features extracted from ResNet-101 and the SMPL-X parameters, and use an additional MLP to regress the root pose $(R'_0, T'_0)$ of the object. We also use this root pose as the initialization for the optimization. To train the reconstruction model, we first initialize the human shape estimator with the pre-trained weights from the PARE model~\cite{pavlakos2019expressive} and fine-tune it on \ac{dataset}. Next, we freeze the weights of the PARE model and train the reconstruction model with the object pose estimation loss $\mathcal{L}^\mathcal{O}$, which is the L1 loss on object voxels. \subsection{Interaction Prior} To capture the fine-grained relationship between humans and interacting objects, we propose a \ac{cvae}-based interaction prior model, which learns the conditional distribution of object occupancy given the human shape.
Specifically, the prior \ac{cvae} is conditioned on a multi-resolution voxelized human, and the goal is to reconstruct the voxelized object. We use 3DConvNets as the encoder and decoder. During training, we feed the voxelized object through the encoder to obtain object features at different scales. The object features are concatenated with the multi-resolution human voxels in each corresponding layer, and an MLP estimates the latent Gaussian distribution $\mathcal{N}(\mu,\sigma)$. Next, we sample the latent code $z\sim\mathcal{N}(\mu,\sigma)$ by re-parameterization and decode it with the decoder. Finally, we concatenate the feature grids at each layer in the decoder with the corresponding human voxel condition. We train the prior model on \ac{dataset} with four losses: \begin{equation} \mathcal{L}_\mathrm{P}=\mathcal{L}_\mathrm{recon}+\mathcal{L}_\mathrm{KL}+ \mathcal{L}_\mathrm{pene}+\mathcal{L}_\mathrm{contra}, \end{equation} where $\mathcal{L}_\mathrm{recon}$ and $\mathcal{L}_\mathrm{KL}$ are the standard reconstruction and KL divergence losses, respectively. $\mathcal{L}_\mathrm{pene}$ is the penetration loss that penalizes voxel grids occupied by both the human and the object. $\mathcal{L}_\mathrm{contra}$ maximizes the distance between the latent variables of the original data and of augmented noisy data; we augment part of the training data with random noise. \subsection{Pose Optimization with Interaction Prior} To reconstruct the fine-grained human-object relation and recover the final object poses, we employ an additional optimization stage that refines the initialized poses using the kinematic information and the interaction prior.
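Three of the four prior losses admit short closed forms; the NumPy sketch below is our own illustration, not the training code ($\mathcal{L}_\mathrm{contra}$ is omitted since it depends on batch pairing, and the real model operates on network tensors):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (re-parameterization trick)."""
    return mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)

def kl_loss(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ) in closed form."""
    return 0.5 * float(np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var))

def penetration_loss(human_vox, object_vox):
    """Penalize voxels occupied by both the human and the object."""
    return float(np.sum(human_vox * object_vox))
```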
Specifically, given the object's CAD model and URDF, the estimated SMPL-X parameters $H'$, and the object voxels $\mathcal{V}'_{O}$ estimated by the reconstruction model, we initialize the object model $\hat{O}$ with the estimated root transformations and random part states, and iteratively update the parameters of $\hat{O}$ by minimizing the objective $\mathcal{J}_\mathrm{recon} + \mathcal{J}_\mathrm{z}$: \begin{equation} \begin{aligned} \mathcal{J}_\mathrm{recon} = \Vert V(\hat{O}) - \mathcal{V}'_{O} \Vert_2; \hspace{8pt} \mathcal{J}_\mathrm{z} = \Vert\texttt{Enc}(H',\hat{O})\Vert, \end{aligned} \end{equation} where $V(\cdot)$ is the voxelization function, the $\mathcal{J}_\mathrm{recon}$ term penalizes the distance between the voxelized object model and the estimated object voxels, and the $\mathcal{J}_\mathrm{z}$ term constrains the norm of the latent code predicted by the \ac{cvae} encoder to be small, which regularizes the estimated interaction to stay close to the prior. The overall process of pose optimization with the interaction prior is illustrated in \cref{fig:pose_optimize}. \begin{figure}[ht!] \centering \includegraphics[width=\linewidth]{optimize} \caption{\textbf{An illustration of pose estimation with interaction prior.} Starting with the reconstruction output, we optimize the object according to the \textbf{\textcolor{optyellow}{reconstructed voxel}} and the \textbf{\textcolor{optblue}{interaction prior}}.} \label{fig:pose_optimize} \end{figure} \section{Experiments}\label{sec:exp} \paragraph{Experimental Settings} We split \ac{dataset} into training, testing, and evaluation sets; $70\%$ of the objects are used for training, $20\%$ for testing, and the rest for evaluation. We evaluate the performance of our model under two different settings: with (\emph{w/ opt.}) and without (\emph{w/o opt.}) optimization. In the \emph{w/ opt.} setting, we report the chamfer distance between the objects posed with the ground-truth and the estimated transformation parameters.
In the \emph{w/o opt.} setting, however, we do not have the estimated transformation parameters. We therefore report the chamfer distance between the ground-truth object mesh and the mesh obtained by running the marching cubes algorithm on the reconstructed voxels. \paragraph{Evaluation Metrics} We evaluate object pose estimation with the mean rotation and translation errors of each object part, and evaluate object shape reconstruction with the chamfer distance and intersection over union (IoU). We finally evaluate the reconstructed \ac{fahoi} with the penetration depth and contact scores between the human and the object. We compute the penetration depth for a human-object pair as the maximum depth of the object's surface inside the human's body; this metric is zero if there is no penetration. The contact value is the shortest distance between the human and the object. We clip the contact value to $[0, 20]$\,cm to handle human-object pairs that are far apart. \paragraph{Baseline Methods} We compare the performance of articulated object pose estimation with two object reconstruction methods, LASR~\cite{yang2021lasr} and ANCSH~\cite{li2020category}, as baselines; we use the depth map as the input to ANCSH. Both methods are \textit{fine-tuned} on \ac{dataset}. We further compare our model with D3D-HOI~\cite{xu2021d3d}, which jointly estimates the human and object poses. We modify the optimization objectives of D3D-HOI to better fit the data distribution of \ac{dataset}. \begin{table}[b!]
\centering \caption{\textbf{Comparisons against existing methods.} $*$: method requires knowledge of object structure and/or geometry; $\dagger$: method does not require object-related knowledge.} \resizebox{\linewidth}{!}{% \begin{tabular}{l|cccc|cc} \toprule \multirow{2}{*}{Method}&\multicolumn{4}{c}{Object} & \multicolumn{2}{|c}{HOI}\\ & \specialcell{Rot.$\downarrow$\\($^\circ$)} & \specialcell{Transl.$\downarrow$\\(mm)} & \specialcell{CD$\downarrow$\\(mm)} & \specialcell{IoU$\uparrow$\\(\%)} & \specialcell{Pene.$\downarrow$\\(mm)} & \specialcell{Cont.$\downarrow$\\(mm)}\\ \midrule LASR$^\dagger$~\cite{yang2021lasr} & /&/&205.2&/&/&/ \\ ANCSH$^*$~\cite{li2020category} & /&/&90.36&/&/&/ \\ \midrule D3D-HOI$^*$~\cite{xu2021d3d} & 27.31 & 119.2 & 126.9 & 16.60 & 7.472 & \textbf{1.163} \\ Ours (w/o opt.)$^\dagger$& / & / & \textbf{160.2} & \textbf{11.03} & \textbf{4.530} & \textbf{2.720} \\ Ours (w/ opt.)$^*$& \textbf{19.35} & \textbf{66.23} & \textbf{72.30} & \textbf{21.57} & \textbf{1.143} & 1.562\\ \bottomrule \end{tabular}% }% \label{tab:result} \end{table} \begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{results} \caption{\textbf{Qualitative results of our model.} (a)-(e) Results on \ac{dataset} test set. (f)-(g) Results on images taken in the wild. Baseline results are obtained from D3D-HOI~\cite{xu2021d3d}. We show optimized human and object poses in the third and fourth row, and visualize the mesh obtained by running marching cube on the reconstructed voxels in the last row.} \label{fig:result} \end{figure*} \subsection{Results and Analyses} \cref{tab:result} shows the quantitative results. Incorporating the geometrical relationships, our model significantly improves the performance of pose estimation and shape reconstruction compared with existing methods. More specifically, in the \emph{w/o opt.} setting where the object is unknown, our model outperforms the SOTA method LASR, by a wide margin. 
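The penetration and contact metrics reported in the tables can be approximated from sampled surface points; the following is our own simplified sketch, not the paper's implementation (the human body is approximated by a ball, and distances are assumed to be in meters):

```python
import numpy as np

def penetration_depth(object_pts, center, radius):
    """Max depth of object surface points inside a ball approximating the body.

    Depth of a point is radius minus its distance to the center, clipped at
    zero, so the metric is zero when nothing penetrates."""
    d = np.linalg.norm(object_pts - center, axis=1)
    return float(np.clip(radius - d, 0.0, None).max())

def contact_value(human_pts, object_pts, clip=0.2):
    """Shortest human-object distance, clipped to [0, 20 cm]."""
    diff = human_pts[:, None, :] - object_pts[None, :, :]
    return float(min(np.linalg.norm(diff, axis=-1).min(), clip))
```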
Although our model is surpassed by D3D-HOI and ANCSH, both of them assume known object structures. Our model notably outperforms all existing baselines when we provide the object structure to our model in the \emph{w/ opt.} setting. We show qualitative results in \cref{fig:result}. Columns (a)-(e) show reconstruction results on the test set. We visualize the mesh reconstructed before optimization using marching cubes. We observe that our model can reconstruct plausible and accurate interactions before optimization, and the optimization step further improves interaction details. \subsection{Ablations} We verify the design of our model with three ablation studies and report the quantitative results in \cref{tab:ablation}. \begin{table}[b!] \centering \caption{\textbf{Ablation of interaction, prior, and contrastive loss.}} \label{tab:ablation} \resizebox{\linewidth}{!}{ \begin{tabular}{l|cccc|cc} \toprule \multirow{2}{*}{Method}&\multicolumn{4}{c}{Object} & \multicolumn{2}{|c}{HOI}\\ & \specialcell{Rot.$\downarrow$\\($^\circ$)} & \specialcell{Transl.$\downarrow$\\(mm)} & \specialcell{CD$\downarrow$\\(mm)} & \specialcell{IoU$\uparrow$\\(\%)} & \specialcell{Pene.$\downarrow$\\(mm)} & \specialcell{Cont.$\downarrow$\\(mm)}\\ \midrule Full$^\dagger$ & / & / & \textbf{160.2} & \textbf{11.03} & 4.530 & \textbf{2.720}\\ $-\,$prior$^\dagger$ & / & / & 165.3 & 10.52 & \textbf{4.377} & 3.295\\ \midrule Full$^*$ & 19.35 & \textbf{66.23} & \textbf{72.30} & \textbf{21.57} & 1.143 & \textbf{1.562} \\ $-\,$prior$^*$ & 19.97 & 83.39 & 87.90 & 18.81 & 1.749 & 2.081 \\ $-\,$contr.$^*$ & 21.52 & 81.90 & 87.28 & 18.93 & 1.265 & 2.393 \\ $-\,$inter.$^*$ & \textbf{17.88} & 69.53 & 78.12 & 19.50 & \textbf{1.022} & 2.320\\ \bottomrule \end{tabular}% }% \end{table} \paragraph{Prior} We remove the interaction prior model and optimize object poses by minimizing only $\mathcal{J}_{\mathrm{recon}}$.
We observe a large drop in performance in both the $*$ and $\dagger$ settings, confirming that the interaction prior plays a vital role in estimating the object pose accurately. Note that both settings have an optimization step, and the only difference is that the $*$ model has access to the object geometry and structure during optimization. We observe a drop in penetration when the prior model is removed in the $\dagger$ setting, while the contact value increases by a much larger margin. This indicates that our interaction prior model pulls the object toward the human when they are not in contact. \paragraph{Contrast} We remove the contrastive loss $\mathcal{L}_{\mathrm{contra}}$ when training the prior model. We observe results similar to those of the $-$prior experiment. This result shows that the contrastive loss is crucial to learning a robust interaction prior. \paragraph{Interaction} We remove the human voxels concatenated into the 3DConv layers of both the reconstruction model and the interaction prior model. This eliminates the interaction awareness of our model. We observe slight degradation across all object reconstruction metrics, showing the significance of interaction awareness in our model. We also observe that the contact value increases while penetration drops. This is similar to the $-$prior ablation in the \textit{w/o opt.} setting, which shows that the interaction awareness is also pulling the human and object towards each other. Finally, we observe an unexpectedly low rotation error, which we attribute to the rotation symmetries in the dataset. In summary, we conclude that all three components contribute significantly to object pose and shape reconstruction. \paragraph{Failure Cases} Our model fails to estimate the correct orientation of object parts in two typical scenarios. The most common scenario is rotation symmetry, wherein the object is geometrically similar under certain rotations.
Rotation symmetry is common in spherical and cylindrical object parts, such as the base of a stool or a round seat. \cref{fig:failure-b} shows an example of rotation symmetry. Existing methods~\cite{fan2017point,wang2019normalized} bypass this issue with (i) multiple equally correct ground truths and (ii) a min-of-N loss that calculates the smallest distance to any of the ground truths. However, this strategy requires a carefully designed classification of the symmetry type for each object. We attribute another common failure to interaction symmetry: the way a person interacts with an object is identical when the object is in different poses. Interaction symmetry confuses our model when the visual module fails to differentiate poses. We show in \cref{fig:failure-c} that our model leverages fine geometrical relations to reconstruct natural interactions despite an incorrect prediction of the object pose. \begin{figure}[ht!] \centering \hfill \begin{subfigure}{\linewidth} \includegraphics[width=\linewidth]{failure/failure-2-optim} \caption{an example of rotation symmetry} \label{fig:failure-b} \end{subfigure}% \\% \hfill% \hfill \begin{subfigure}{\linewidth} \includegraphics[width=0.95\linewidth]{failure/failure-3-optim} \caption{an example of interaction symmetry} \label{fig:failure-c} \end{subfigure}% \caption{\textbf{Common failure cases caused by symmetry.} The left meshes are ground truths, whereas the right ones are the model predictions. (a) A rotation-symmetric object yields a large rotation error but a small visual error. (b) Interaction symmetry occurs when both the body and legs of the puppy stool are flipped, yet the predicted interactions and structure look reasonable.} \label{fig:failure} \end{figure} \paragraph{In-the-wild Generalization} We curate a small set of images captured in our daily scenes to test the model's generalizability.
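The min-of-N strategy mentioned for rotation symmetry can be sketched generically (our illustration with a discrete $z$-axis symmetry orbit and a chordal distance; this is not the implementation of the cited methods):

```python
import numpy as np

def rot_z(theta):
    """Rotation by theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def min_of_n_loss(pred, gt, symmetry_angles):
    """Smallest chordal (Frobenius) distance from the predicted rotation to
    any ground truth in the symmetry orbit -- the 'min-of-N' loss."""
    return min(float(np.linalg.norm(pred - gt @ rot_z(a))) for a in symmetry_angles)
```

A half-turn prediction on a four-fold symmetric part incurs no loss under the full orbit, but a large loss if the orbit is mis-specified, which is why the symmetry type must be classified carefully per object.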
\cref{fig:result}(f-g) shows two qualitative results in an office and demonstrates that the proposed model generalizes to images taken outside laboratory settings. The model fails to predict an accurate object pose in the last column, where the person is not interacting with the object. Please see additional results and analyses in the Supplementary Material. \section{Conclusion}\label{sec:conclusion} We promote \ac{hoi} towards an articulated, fine-grained, and part-level direction with (i) a novel dataset, \ac{dataset}, (ii) a challenging problem of object reconstruction under \ac{fahoi}, and (iii) a strong baseline. The \ac{dataset} dataset captures a large-scale collection of whole-body \acp{ahoi} with diverse and natural interactions and wildly different sittable objects. The object reconstruction problem removes the oversimplified assumption of kinematic consistency, and our model leverages fine-grained interaction relationships to rule out ambiguities. \paragraph{Limitations} While our model can accurately reconstruct articulated objects under heavy occlusions from \ac{fahoi}, its performance depends heavily on the presence of the interaction that created such ambiguity; it drops significantly when there is no \ac{fahoi}. In addition, our model does not leverage the interaction prior for human pose estimation. We expect the same interaction prior to likewise improve human pose estimation in hard cases where the human is heavily occluded.
Meanwhile, we are aware that the understanding of \acp{fahoi} could be misused in surveillance technology and lead to invasions of privacy; we therefore blur all faces in our dataset to remove personally identifiable information. \paragraph{Acknowledgments} We thank Zhiyuan Zhang for his technical support during his internship at BIGAI. This work is supported in part by the National Key R\&D Program of China (2021ZD0150200) and the Beijing Nova Program. { \small \balance \bibliographystyle{ieee_fullname}
\section{Introduction} Ikeda--Mihalcea--Naruse \cite{IkedaMihalceaNaruse} introduced the {\it double Schubert polynomials} of type $C$ (also $B$ and $D$) by extending Billey--Haiman \cite{BilleyHaiman}'s construction for the single case. These polynomials represent the equivariant cohomology classes of Schubert varieties in type $C$ flag varieties. By the work of Kazarian \cite{Kazarian} and Ikeda \cite{Ikeda2007}, the ones corresponding to Lagrangian Grassmannians can be expressed in terms of the Schur-Pfaffian, and they coincide with the {\it factorial Schur $Q$-functions} of Ivanov \cite{Ivanov04} defined in terms of {\it marked shifted tableaux} of strict partitions (cf. \cite{MacdonaldHall}). Note that the corresponding fact for the single case was established in the earlier work of Pragacz \cite{PragaczPQ}. In \cite{AndersonFulton, AndersonFulton2}, Anderson--Fulton introduced {\it vexillary signed permutations}, a family of signed permutations containing the Lagrangian ones, and showed that the associated double Schubert polynomials can also be expressed by a Schur-Pfaffian formula. The goal of this paper is to give a new tableau formula for this family of double Schubert polynomials, by extending the notion of marked shifted tableaux and Ivanov's factorial Schur $Q$-functions. Our study is motivated by the analogy in type A. Lascoux--Sch\"{u}tzenberger's double Schubert polynomials \cite{ClassesLascoux, SchubertLascoux, LascouxSchutzenberger1985} represent the equivariant cohomology classes of Schubert varieties of type $A$ flag varieties, due to Fulton \cite{FlagsFulton}. A family of permutations including Grassmannian ones, now called {\it vexillary permutations}, was singled out by Lascoux, and their associated double Schubert polynomials are given by a Jacobi--Trudi type determinant formula.
It is worth mentioning that the ones associated to Grassmannians coincide with the factorial Schur polynomials essentially introduced and studied by Biedenharn--Louck \cite{BiedenharnLouck}. The flagged double (or factorial) Schur polynomials generalize such double Schubert polynomials and are defined either by a flagged determinant formula or by flagged semistandard tableaux of a partition, by the work of Chen--Li--Louck \cite{ChenLiLouck} (for the single case, see Gessel--Viennot \cite{GesselViennot}, and Wachs \cite{Wachs}). Below we explain our main results in more detail. Let $\lambda=(\lambda_1,\dots,\lambda_r)$ be a strict partition of length $r$, {\it i.e.}, a strictly decreasing sequence of $r$ positive integers. We identify it with its {\it shifted Young diagram}, obtained from the usual Young diagram by shifting the $i$-th row $(i-1)$ boxes to the right, for each $i\geq 1$. Let $f=(f_1,\dots, f_r)$ be a sequence of nonnegative integers. We call $f$ a {\it flagging} of $\lambda$ and the pair $(\lambda,f)$ a {\it flagged strict partition}. Consider the ordered set of alphabets: {\it unmarked} numbers $1,2,\dots$ and {\it primed} numbers $1',2',\dots$ with $1'<1<2'<2<\cdots$. The classical {\it marked shifted tableau} $T$ of $\lambda$ is an assignment of such an alphabet to each box of the diagram subject to the rules: (1) assigned alphabets are weakly increasing in each column and row; (2) unmarked numbers are strictly increasing in each column; (3) primed numbers are strictly increasing in each row. In order to extend this notion, we add, to the above ordered set of alphabets, {\it circled} numbers $1^{\circ}<2^{\circ}<\cdots $ which are greater than any unmarked or primed number.
We define a {\it (flagged) marked shifted tableau} of $(\lambda,f)$ to be an assignment of an alphabet to each box of $\lambda$ subject to the following rules: in addition to (1), (2), and (3), we require that (4) circled numbers are strictly increasing in each row, and (5) alphabets in the $i$-th row are at most $f_i^{\circ}$. We denote the set of all marked shifted tableaux of $(\lambda,f)$ by $\operatorname{MST}(\lambda,f)$. A {\it signed permutation} $w$ is a permutation on the set $\{1,2,\dots \} \cup \{-1,-2,\dots\}$ such that $w(i)\not= i$ for only finitely many $i$, and $\overline{w(i)}=w(\bar i)$ where we denote $\bar i = -i$. Let $x=(x_i)_{i\in {\mathbb N}}, z=(z_i)_{i\in {\mathbb N}}, b=(b_i)_{i\in {\mathbb N}}$. The double Schubert polynomial associated to a signed permutation $w$ is denoted by ${\mathfrak C}_w(x;z|b)$. Note that the variables $b$ coincide with $-t$ in the notation of \cite{IkedaMihalceaNaruse}. If a signed permutation $w$ is {\it vexillary} in the sense of Anderson--Fulton \cite{AndersonFultonVex}, one can associate to it a unique flagged strict partition $(\lambda,f)$. For each $T \in \operatorname{MST}(\lambda,f)$, we assign \[ (xz|b)^T = \prod_{k \in T}\left(x_k+b_{c(k)-r(k)}\right)\cdot \prod_{k' \in T}\left(x_k-b_{c(k')-r(k')}\right) \cdot \prod_{k^{\circ} \in T} \left(z_k+b_{k+r(k^{\circ})-c(k^{\circ})}\right), \] where $r(\ )$ and $c(\ )$ denote the row and column indices of the entry respectively, and we set $b_{-i}:=-b_{i+1}$ for all $i\geq0$. Our main result is as follows. \vspace{3mm} \noindent{\bf Theorem A} (Theorem \ref{thmmain2}). {\it Let $w$ be a vexillary signed permutation in the sense of Anderson--Fulton \cite{AndersonFultonVex} and $(\lambda,f)$ the corresponding flagged strict partition. Then we have \begin{equation}\label{eqintro1} {\mathfrak C}_w(x;z|b) = \sum_{T\in \operatorname{MST}(\lambda,f)} (xz|b)^T.
\end{equation} } \vspace{0mm} For a general flagged strict partition $(\lambda,f)$, we denote by $Q_{\lambda,f}(x;z|b)$ the function defined by the right hand side of (\ref{eqintro1}). We call it a {\it flagged factorial $Q$-function}, since for $f=(0,\dots,0)$ it is nothing but the original definition of Ivanov's factorial $Q$-function $Q_{\lambda}(x|b)$. The proof of Theorem A is based on the following Schur--Pfaffian formula for $Q_{\lambda,f}(x;z|b)$, which generalizes the corresponding formula for $Q_{\lambda}(x|b)$ in the case $f=(0,\dots,0)$ in \cite[Theorem 9.1]{Ivanov04}. \vspace{3mm} \noindent{\bf Theorem B} (Theorem \ref{mainthm}). {\it Let $(\lambda,f)$ be a flagged strict partition of length $r$. Suppose that $0<\lambda_i-f_i\leq \lambda_j-f_j$ for all $i<j$. Then we have \[ Q_{\lambda,f}(x;z|b)=\operatorname{Pf}\left[q_{\lambda_1}^{[f_1|\lambda_1-f_1-1]}q_{\lambda_2}^{[f_2|\lambda_2-f_2-1]}\dots q_{\lambda_r}^{[f_r|\lambda_r-f_r-1]}\right], \] where $\operatorname{Pf}$ is the Schur--Pfaffian defined in \S \ref{secPf}, and the function $q_m^{[k|\ell]}=q_m^{[k|\ell]}(x;z|b)$ is defined by \[ \sum_{m\geq 0} q_m^{[k|\ell]} u^m := \left(\prod_{i\geq 1} \frac{1+x_iu}{1-x_iu}\right)e^{[k]}_u(z) e^{[\ell]}_u(b), \ \ \ \ e^{[k]}_u(z)=\begin{cases} \displaystyle\prod_{i=1}^k(1+z_i u) & (k\geq 0),\\ \displaystyle\prod_{i=1}^{|k|}\frac{1}{1-z_i u}& (k\leq 0). \end{cases} \] } \vspace{0mm} As an application of Theorem A, we obtain a new tableau formula for Ivanov's factorial $Q$-function $Q_{\lambda}(x|b)$. Ikeda--Mihalcea--Naruse \cite{IkedaMihalceaNaruse} showed that ${\mathfrak C}_w(x;z|b) = {\mathfrak C}_{w^{-1}}(x;b|z)$, and Anderson--Fulton \cite{AndersonFultonVex} showed that if $w$ is vexillary, then so is $w^{-1}$. If $w$ is Lagrangian with strict partition $\lambda$ of length $r$, we can see that the strict partition of $w^{-1}$ is also $\lambda$ and its flag is $f=(\lambda_1-1, \dots, \lambda_r-1)$.
Altogether, we obtain \vspace{3mm} \noindent{\bf Theorem C} (Theorem \ref{thmmain3}). {\it Let $\lambda$ be a strict partition of length $r$ and $f=(\lambda_1-1,\dots,\lambda_r-1)$. Then we have \[ Q_{\lambda}(x|b) = \sum_{T\in \operatorname{MST}(\lambda,f)} (xb)^T \] where $(xb)^T$ is the monomial given by \[ (xb)^T=\prod_{k\in T} x_k \prod_{k'\in T} x_k \prod_{k^{\circ}\in T} b_k. \] } Anderson--Fulton \cite{AndersonFulton2} also introduced a larger family of {\it theta-vexillary signed permutations} (see also Lambert \cite{Lambert}), containing the $k$-Grassmannian signed permutations. They obtained a theta-polynomial (or raising operator, Pfaffian-sum) formula for the double Schubert polynomials associated to such elements, extending the ones for $k$-Grassmannians (\cite{BuchKreschTamvakis1, WilsonThesis, IkedaMatsumura}). The combinatorial aspect of these signed permutations is far more complicated than that of the vexillary ones. In particular, it is worth mentioning that there is a tableau formula for the corresponding {\it single} Schubert polynomials associated to $k$-Grassmannian signed permutations, due to Tamvakis \cite{Tamvakis2011Crelle}. Since some of those polynomials can also be given in terms of the tableaux introduced in this paper, it is an interesting problem to find the relation between these expressions and to extend the formula to all theta-vexillary double Schubert polynomials. This paper is organized as follows. In Section \ref{secprelim}, we introduce a few basic functions and set up an algebraic framework to study double Schubert polynomials and the combinatorially defined functions introduced in this paper. In Section \ref{secffQ}, we introduce flagged marked shifted tableaux and the functions defined by them. We prove a few basic formulas that will be used in the proof of Theorem B. In Section \ref{secPf}, we review the definition of the Schur-Pfaffian and prove Theorem B.
In Section \ref{secSchPol}, we first recall the basic facts about double Schubert polynomials and vexillary signed permutations, following Ikeda--Mihalcea--Naruse \cite{IkedaMihalceaNaruse} and Anderson--Fulton \cite{AndersonFultonVex}. We then explain how Theorem A and Theorem C follow from Theorem B. In the appendix, we give a proof of a Jacobi--Trudi type formula for row-strict skew Schur polynomials, extending the work of Wachs \cite{Wachs} and Chen--Li--Louck \cite{ChenLiLouck}. This formula is used in the proof of Theorem B. \section{Preliminary}\label{secprelim} Before we proceed with our main object of interest, we fix the notation for a few basic functions. The goal is to set up an algebraic framework in which we can study combinatorially defined functions. In particular, our Pfaffian formula for the vexillary double Schubert polynomials (and also for the flagged factorial $Q$-functions) will be in terms of the basic functions that we review here. We use infinite sequences of variables, $x=(x_i)_{i\in {\mathbb N}}, z=(z_i)_{i\in {\mathbb N}}$, and $b=(b_i)_{i\in {\mathbb N}}$. We define functions $q_m=q_m(x)$ in the $x$-variables for integers $m\geq 0$ by the generating function \[ q_u(x)= \sum_{m\geq 0} q_m(x) u^m := \prod_{i\geq 1} \frac{1+x_iu}{1-x_iu}, \] where $u$ is a formal variable. For each integer $k$, we also define polynomials $e^{[k]}_m(b)$ in the $b$-variables for $m\geq 0$ by \[ e^{[k]}_u(b)=\sum_{m\geq 0} e^{[k]}_m(b)u^m:=\begin{cases} \displaystyle\prod_{i=1}^k(1+b_i u) & (k\geq 0)\\ \displaystyle\prod_{i=1}^{|k|}\frac{1}{1-b_i u}& (k\leq 0). \end{cases} \] The polynomials $e^{[k]}_m(b)$ and $e^{[-k]}_m(b)$ are nothing but the elementary and complete symmetric polynomials of degree $m$ in $b_1,\dots, b_k$, respectively.
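As a sanity check on these definitions, the coefficients $q_m(x)$ and $e^{[\pm k]}_m(b)$ can be computed by truncated series arithmetic once finitely many variables are specialized; the following sketch (our own verification aid, not part of the paper) works in exact rational arithmetic:

```python
from fractions import Fraction

def series_mul(a, b, order):
    """Product of truncated power series given as coefficient lists."""
    out = [Fraction(0)] * (order + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= order:
                out[i + j] += ai * bj
    return out

def q_series(xs, order):
    """Coefficients of q_u(x) = prod_i (1 + x_i u)/(1 - x_i u), truncated."""
    s = [Fraction(1)] + [Fraction(0)] * order
    for x in map(Fraction, xs):
        # (1 + x u) times the geometric series 1/(1 - x u)
        factor = series_mul([Fraction(1), x], [x ** m for m in range(order + 1)], order)
        s = series_mul(s, factor, order)
    return s

def e_series(bs, k, order):
    """Coefficients of e^[k]_u(b): elementary (k >= 0) or complete (k < 0)."""
    s = [Fraction(1)] + [Fraction(0)] * order
    for b in map(Fraction, bs[: abs(k)]):
        factor = [Fraction(1), b] if k >= 0 else [b ** m for m in range(order + 1)]
        s = series_mul(s, factor, order)
    return s
```

For instance, with $x=(1/2,1/3)$ one gets $q_1=2(x_1+x_2)=5/3$, matching the expansion $(1+x_iu)/(1-x_iu)=1+2x_iu+2x_i^2u^2+\cdots$.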
For integers $k, \ell\in {\mathbb Z}$, we set \begin{eqnarray*} e^{[k|\ell]}_u(z|b)&=&\sum_{m\geq 0} e_m^{[k|\ell]}(z|b) u^m := e^{[k]}_u(z) e^{[\ell]}_u(b),\\ q^{[\ell]}_u(x|b)&=&\sum_{m\geq 0} q_m^{[\ell]}(x|b) u^m := q_u(x)e^{[\ell]}_u(b),\\ q^{[k|\ell]}_u(x;z|b)&=&\sum_{m\geq 0} q_m^{[k|\ell]}(x;z|b) u^m := q_u(x)e^{[k]}_u(z) e^{[\ell]}_u(b). \end{eqnarray*} We will also denote $h^{[k|\ell]}_m(z|b):=e^{[-k|-\ell]}_m(z|b)$. Moreover, we often suppress the variables when it is clear from the context, {\it e.g.}, $e^{[-k|-\ell]}_m=e^{[-k|-\ell]}_m(z|b)$, $q_m^{[k|\ell]}=q_m^{[k|\ell]}(x;z|b)$, and so on. Occasionally we use the infinite sequence of variables ${\mathbf b}=(b_i)_{i\in {\mathbb Z}}$. With this extended sequence of $b$-variables in mind, we will use the following index shifting operator $\tau$. For each integer $k\in {\mathbb Z}$, let $\tau^k(b)$ be the sequence of variables defined by \[ \tau^k(b) = (b_{1+k},b_{2+k},b_{3+k},\dots). \] Similarly $\tau^k({\mathbf b})$ denotes the sequence of variables such that its $i$-th variable is $b_{i+k}$ for $i\in {\mathbb Z}$. We consider the ring $\Gamma={\mathbb Z}[q_1,q_2,\dots]$. We should note that this is not a polynomial ring since $q_i$'s are not algebraically independent. It is well-known that $\Gamma$ has a ${\mathbb Z}$-basis consisting of Schur $Q$-functions $Q_{\lambda}(x)$ (cf. \cite{MacdonaldHall}). It is also worth mentioning that Ivanov's factorial $Q$-functions $Q_{\lambda}(x|b)$ \cite{Ivanov04} form a ${\mathbb Z}[b]$-basis of the ${\mathbb Z}[b]$-algebra $\Gamma[b]:=\Gamma\otimes_{{\mathbb Z}}{\mathbb Z}[b]$ where ${\mathbb Z}[b]$ denotes the polynomial ring in $b$-variables. All functions defined above are regarded as elements of \[ \Gamma[z,b]:= \Gamma \otimes_{{\mathbb Z}} {\mathbb Z}[z]\otimes_{{\mathbb Z}} {\mathbb Z}[b]. 
\] \section{Flagged factorial $Q$-functions}\label{secffQ} In this section, we introduce {\it flagged factorial $Q$-functions} $Q_{\lambda,f}(x;z|b)$ based on the notion of {\it marked shifted tableaux} of flagged strict partitions $(\lambda,f)$. We will also discuss basic formulas that will be used in the proof of the Schur--Pfaffian formula for $Q_{\lambda,f}(x;z|b)$ in the next section. \subsection{Definition of tableaux and functions}\label{fst} A {\it strict partition} $\lambda=(\lambda_1,\lambda_2,\dots)$ is a sequence of non-negative integers such that $\lambda_i>\lambda_{i+1}$ if $\lambda_i\not=0$ and the number of positive integers in $\lambda$, called the {\it length} of $\lambda$, is finite. We also denote a strict partition of length $r$ as a finite sequence of $r$ positive integers $\lambda=(\lambda_1,\dots,\lambda_r)$ and identify it with its {\it shifted Young diagram}, obtained from the usual Young diagram by shifting the $i$-th row $(i-1)$ boxes to the right, for $1 \leq i\leq r$. Let $\calS\!\calP$ be the set of all strict partitions and $\calS\!\calP_r$ the set of all strict partitions of length at most $r$. Consider the ordered set ${\mathbf P}$ of {\it alphabets}, consisting of {\it unmarked} numbers $1,2,\dots$, {\it primed} numbers $1',2',\dots$, and {\it circled} numbers $1^{\circ},2^{\circ},\dots$, where the total order is given by \[ 1'<1<2'<2 < 3'<3<\cdots < 1^{\circ}<2^{\circ}<\cdots. \] For a given strict partition $\lambda$ of length $r$, a {\it flagging} of $\lambda$ is a sequence $f=(f_1,\dots, f_r)$ of non-negative integers. We call the pair $(\lambda, f)$ a {\it flagged strict partition}.
\begin{defn} A {\it (flagged) marked shifted tableau} of a flagged strict partition $(\lambda,f)$ is a filling of the shifted Young diagram of $\lambda$ which assigns an alphabet in ${\mathbf P}$ to each box, subject to the rules \begin{enumerate} \item alphabets are weakly increasing in each row and column, \item unmarked numbers are strictly increasing in each column, \item primed numbers are strictly increasing in each row, \item circled numbers are strictly increasing in each row, and, \item for $1\leq i \leq r$, the alphabets in the $i$-th row are at most $f_i^{\circ}$. \end{enumerate} \end{defn} \begin{rem}\label{remMST} It is worth noting that, by the total order of ${\mathbf P}$ and the rule (1), the part consisting of unmarked and primed numbers forms a usual marked shifted tableau of the shifted Young diagram of a strict partition (cf. \cite[p.256]{MacdonaldHall}). It is also clear from the order of ${\mathbf P}$ that the part consisting of circled numbers forms a row-strict semistandard Young tableau of a skew shape $\lambda/\mu$ given by a strict partition $\mu \subset \lambda$. \end{rem} \begin{exm} Let $\lambda=(5,3,1)$ and $f=(2,1,0)$.
The following are examples of marked shifted tableaux of the flagged strict partition $(\lambda,f)$: \setlength{\unitlength}{0.5mm} \begin{center} \begin{picture}(60,40) \put(00,30){\line(1,0){50}}\put(00,20){\line(1,0){50}}\put(10,10){\line(1,0){30}}\put(20,00){\line(1,0){10}}\put(00,30){\line(0,-1){10}}\put(10,30){\line(0,-1){20}}\put(20,30){\line(0,-1){30}}\put(30,30){\line(0,-1){30}}\put(40,30){\line(0,-1){20}}\put(50,30){\line(0,-1){10}} \put(03,22){{\small $1$}}\put(13,22){{\small $2'$}}\put(23,22){{\small $2$}}\put(33,22){{\small $2$}}\put(43,22){{\small $3'$}} \put(13,12){{\small $2'$}}\put(23,12){{\small $3$}}\put(33,12){{\small $4$}} \put(23,02){{\small $4'$}} \end{picture} \ \ \begin{picture}(60,40) \put(00,30){\line(1,0){50}}\put(00,20){\line(1,0){50}}\put(10,10){\line(1,0){30}}\put(20,00){\line(1,0){10}}\put(00,30){\line(0,-1){10}}\put(10,30){\line(0,-1){20}}\put(20,30){\line(0,-1){30}}\put(30,30){\line(0,-1){30}}\put(40,30){\line(0,-1){20}}\put(50,30){\line(0,-1){10}} \put(03,22){{\small $1$}}\put(13,22){{\small $2'$}}\put(23,22){{\small $2$}}\put(33,22){{\small $2$}}\put(43,22){{\small $1^{\circ}$}} \put(13,12){{\small $2'$}}\put(23,12){{\small $3$}}\put(33,12){{\small $1^{\circ}$}} \put(23,02){{\small $4'$}} \end{picture} \ \ \begin{picture}(60,40) \put(00,30){\line(1,0){50}}\put(00,20){\line(1,0){50}}\put(10,10){\line(1,0){30}}\put(20,00){\line(1,0){10}}\put(00,30){\line(0,-1){10}}\put(10,30){\line(0,-1){20}}\put(20,30){\line(0,-1){30}}\put(30,30){\line(0,-1){30}}\put(40,30){\line(0,-1){20}}\put(50,30){\line(0,-1){10}} \put(03,22){{\small $1$}}\put(13,22){{\small $2'$}}\put(23,22){{\small $2$}}\put(33,22){{\small $1^{\circ}$}}\put(43,22){{\small $2^{\circ}$}} \put(13,12){{\small $2'$}}\put(23,12){{\small $3$}}\put(33,12){{\small $1^{\circ}$}} \put(23,02){{\small $4'$}} \end{picture} \end{center} The following are non-examples due to rules (2), (5), (4), respectively: \begin{center} \begin{picture}(60,40) 
\put(00,30){\line(1,0){50}}\put(00,20){\line(1,0){50}}\put(10,10){\line(1,0){30}}\put(20,00){\line(1,0){10}}\put(00,30){\line(0,-1){10}}\put(10,30){\line(0,-1){20}}\put(20,30){\line(0,-1){30}}\put(30,30){\line(0,-1){30}}\put(40,30){\line(0,-1){20}}\put(50,30){\line(0,-1){10}} \put(03,22){{\small $1$}}\put(13,22){{\small $2'$}}\put(23,22){{\small $2$}}\put(33,22){{\small $2$}}\put(43,22){{\small $3'$}} \put(13,12){{\small $2'$}}\put(23,12){{\small $2$}}\put(33,12){{\small $4$}} \put(23,02){{\small $4'$}} \end{picture} \ \ \begin{picture}(60,40) \put(00,30){\line(1,0){50}}\put(00,20){\line(1,0){50}}\put(10,10){\line(1,0){30}}\put(20,00){\line(1,0){10}}\put(00,30){\line(0,-1){10}}\put(10,30){\line(0,-1){20}}\put(20,30){\line(0,-1){30}}\put(30,30){\line(0,-1){30}}\put(40,30){\line(0,-1){20}}\put(50,30){\line(0,-1){10}} \put(03,22){{\small $1$}}\put(13,22){{\small $2'$}}\put(23,22){{\small $2$}}\put(33,22){{\small $2$}}\put(43,22){{\small $1^{\circ}$}} \put(13,12){{\small $2'$}}\put(23,12){{\small $3$}}\put(33,12){{\small $1^{\circ}$}} \put(23,02){{\small $1^{\circ}$}} \end{picture} \ \ \begin{picture}(60,40) \put(00,30){\line(1,0){50}}\put(00,20){\line(1,0){50}}\put(10,10){\line(1,0){30}}\put(20,00){\line(1,0){10}}\put(00,30){\line(0,-1){10}}\put(10,30){\line(0,-1){20}}\put(20,30){\line(0,-1){30}}\put(30,30){\line(0,-1){30}}\put(40,30){\line(0,-1){20}}\put(50,30){\line(0,-1){10}} \put(03,22){{\small $1$}}\put(13,22){{\small $2'$}}\put(23,22){{\small $2$}}\put(33,22){{\small $1^{\circ}$}}\put(43,22){{\small $1^{\circ}$}} \put(13,12){{\small $2'$}}\put(23,12){{\small $3$}}\put(33,12){{\small $1^{\circ}$}} \put(23,02){{\small $4'$}} \end{picture} \end{center} \end{exm} We call the alphabet assigned to a box of $\lambda$ by $T$ an {\it entry} of $T$, and denote it by $e\in T$. 
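Rules (1)--(5) are purely mechanical and easy to check by machine. The following Python sketch is our own illustration (not part of the construction): an entry is encoded as a pair (k, m) with m in {"p", "u", "c"} standing for $k'$, $k$, $k^{\circ}$, and we assume the total order $1'<1<2'<2<\cdots<1^{\circ}<2^{\circ}<\cdots$ on ${\mathbf P}$, which is consistent with the examples above.

```python
# Hypothetical encoding of entries and of the assumed total order on P.
def key(entry):
    k, m = entry
    # circled numbers sit above all unmarked/primed ones; k' just below k
    return (1, k) if m == "c" else (0, 2 * k - (m == "p"))

def is_marked_shifted_tableau(rows, f):
    """rows[i] lists the entries of row i+1, occupying columns i+1,...,i+len(rows[i])."""
    for r in rows:                        # (1) rows weakly increasing
        if any(key(a) > key(b) for a, b in zip(r, r[1:])):
            return False
    for i in range(len(rows) - 1):        # compare each cell with the one above it
        for j, e in enumerate(rows[i + 1]):
            a = rows[i][j + 1]            # same column, previous row (shifted indent)
            if key(a) > key(e):           # (1) columns weakly increasing
                return False
            if a[1] == e[1] == "u" and a[0] == e[0]:
                return False              # (2) unmarked strictly increasing in columns
    for r in rows:                        # (3), (4) primed/circled strict in rows
        for m in ("p", "c"):
            vals = [k for k, mm in r if mm == m]
            if len(vals) != len(set(vals)):
                return False
    # (5) every alphabet in row i is at most f_i-circled
    return all(key(e) <= (1, f[i]) for i, r in enumerate(rows) for e in r)
```

With this encoding the checker accepts the three example tableaux above and rejects the three non-examples (which violate rules (2), (5), and (4), respectively).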
Abusing notation slightly, we often denote an entry by its assigned alphabet, and we denote the numeric value of an entry $e\in T$ by $|e|$, {\it i.e.}, $k,k',k^{\circ}\in T$ and $|k|=|k'|=|k^{\circ}|=k$. Let $c(e)$ and $r(e)$ be the column and row indices of an entry $e$, respectively. Let $\operatorname{MST}(\lambda,f)$ be the set of all marked shifted tableaux of $(\lambda,f)$. If $f=(0,\dots, 0)$, we write $\operatorname{MST}(\lambda)$ instead of $\operatorname{MST}(\lambda,f)$. \begin{defn}\label{df2-1} Consider the infinite sequences of variables $x=(x_i)_{i\in {\mathbb N}}, z=(z_i)_{i\in {\mathbb N}}, b=(b_i)_{i\in {\mathbb N}}$ as before. Let $(\lambda,f)$ be a flagged strict partition. To each $T \in \operatorname{MST}(\lambda,f)$, we assign the {\it weight} \[ (xz|b)^T = \prod_{k \in T}\left(x_k+b_{c(k)-r(k)}\right)\cdot \prod_{k' \in T}\left(x_k-b_{c(k')-r(k')}\right) \cdot \prod_{k^{\circ} \in T} \left(z_k+b_{k+r(k^{\circ})-c(k^{\circ})}\right) \] where we set $b_{-i}:=-b_{i+1}$ for all $i\geq0$. We define the {\it flagged factorial $Q$-function} $Q_{\lambda,f}(x;z|b)$ by \[ Q_{\lambda,f}(x;z|b) = \sum_{T\in \operatorname{MST}(\lambda,f)} (xz|b)^T. \] \end{defn} \begin{rem} When $f=(0,\dots, 0)$, the $z$-variables are not involved and $Q_{\lambda,f}(x;z|b)$ coincides with Ivanov's factorial $Q$-function $Q_{\lambda}(x|b)$ \cite{Ivanov04}: \[ Q_{\lambda}(x|b) = \sum_{T\in \operatorname{MST}(\lambda)} (x|b)^T, \ \ \ (x|b)^T = \prod_{k \in T}\left(x_k+b_{c(k)-r(k)}\right)\cdot \prod_{k' \in T}\left(x_k-b_{c(k')-r(k')}\right). \] Furthermore, in view of Remark \ref{remMST}, $Q_{\lambda,f}(x;z|b)$ can be expanded in terms of $Q_{\mu}(x|b)$ for strict partitions $\mu\subset \lambda$. This expansion will be discussed in the next subsection. \end{rem} \begin{exm}\label{remMST2} Let $\lambda=(3,1)$ and $f=(1,0)$.
In this case, $\operatorname{MST}(\lambda,f)$ can be divided into two families of tableaux \setlength{\unitlength}{0.5mm} \begin{center} \begin{picture}(30,20) \put(00,20){\line(1,0){30}}\put(00,10){\line(1,0){30}}\put(10,00){\line(1,0){10}}\put(00,20){\line(0,-1){10}}\put(10,20){\line(0,-1){20}}\put(20,20){\line(0,-1){20}}\put(30,20){\line(0,-1){10}} \put(03,12){{\small $*$}}\put(13,12){{\small $*$}}\put(23,12){{\small $*$}} \put(13,02){{\small $*$}} \end{picture} \ \ \ \ \ \ \begin{picture}(30,20) \put(00,20){\line(1,0){30}}\put(00,10){\line(1,0){30}}\put(10,00){\line(1,0){10}}\put(00,20){\line(0,-1){10}}\put(10,20){\line(0,-1){20}}\put(20,20){\line(0,-1){20}}\put(30,20){\line(0,-1){10}} \put(03,12){{\small $*$}}\put(13,12){{\small $*$}}\put(22,12){{\small $1^{\circ}$}} \put(13,02){{\small $*$}} \end{picture} \end{center} where the part with $*$ consists of unmarked and primed numbers. Thus we have \begin{eqnarray*} Q_{\lambda,f}(x;z|b) &=& Q_{31}(x|b) + Q_{21}(x|b) (z_1 - b_2) \end{eqnarray*} Similarly, if $\lambda=(5,3,1)$ and $f=(2,1,0)$, we have \setlength{\unitlength}{0.5mm} \begin{center} \begin{picture}(50,40) \put(00,30){\line(1,0){50}}\put(00,20){\line(1,0){50}}\put(10,10){\line(1,0){30}}\put(20,00){\line(1,0){10}}\put(00,30){\line(0,-1){10}}\put(10,30){\line(0,-1){20}}\put(20,30){\line(0,-1){30}}\put(30,30){\line(0,-1){30}}\put(40,30){\line(0,-1){20}}\put(50,30){\line(0,-1){10}} \put(03,22){{\small $*$}}\put(13,22){{\small $*$}}\put(23,22){{\small $*$}}\put(33,22){{\small $*$}}\put(43,22){{\small $*$}} \put(13,12){{\small $*$}}\put(23,12){{\small $*$}}\put(33,12){{\small $*$}} \put(23,02){{\small $*$}} \end{picture} \ \ \begin{picture}(50,40) \put(00,30){\line(1,0){50}}\put(00,20){\line(1,0){50}}\put(10,10){\line(1,0){30}}\put(20,00){\line(1,0){10}}\put(00,30){\line(0,-1){10}}\put(10,30){\line(0,-1){20}}\put(20,30){\line(0,-1){30}}\put(30,30){\line(0,-1){30}}\put(40,30){\line(0,-1){20}}\put(50,30){\line(0,-1){10}} \put(03,22){{\small 
$*$}}\put(13,22){{\small $*$}}\put(23,22){{\small $*$}}\put(33,22){{\small $*$}}\put(43,22){{\small $k^{\circ}$}} \put(13,12){{\small $*$}}\put(23,12){{\small $*$}}\put(33,12){{\small $*$}} \put(23,02){{\small $*$}} \end{picture} \ \ \begin{picture}(50,40) \put(00,30){\line(1,0){50}}\put(00,20){\line(1,0){50}}\put(10,10){\line(1,0){30}}\put(20,00){\line(1,0){10}}\put(00,30){\line(0,-1){10}}\put(10,30){\line(0,-1){20}}\put(20,30){\line(0,-1){30}}\put(30,30){\line(0,-1){30}}\put(40,30){\line(0,-1){20}}\put(50,30){\line(0,-1){10}} \put(03,22){{\small $*$}}\put(13,22){{\small $*$}}\put(23,22){{\small $*$}}\put(33,22){{\small $*$}}\put(43,22){{\small $*$}} \put(13,12){{\small $*$}}\put(23,12){{\small $*$}}\put(33,12){{\small $1^{\circ}$}} \put(23,02){{\small $*$}} \end{picture} \ \ \begin{picture}(50,40) \put(00,30){\line(1,0){50}}\put(00,20){\line(1,0){50}}\put(10,10){\line(1,0){30}}\put(20,00){\line(1,0){10}}\put(00,30){\line(0,-1){10}}\put(10,30){\line(0,-1){20}}\put(20,30){\line(0,-1){30}}\put(30,30){\line(0,-1){30}}\put(40,30){\line(0,-1){20}}\put(50,30){\line(0,-1){10}} \put(03,22){{\small $*$}}\put(13,22){{\small $*$}}\put(23,22){{\small $*$}}\put(33,22){{\small $*$}}\put(43,22){{\small $k^{\circ}$}} \put(13,12){{\small $*$}}\put(23,12){{\small $*$}}\put(33,12){{\small $1^{\circ}$}} \put(23,02){{\small $*$}} \end{picture} \ \ \begin{picture}(50,40) \put(00,30){\line(1,0){50}}\put(00,20){\line(1,0){50}}\put(10,10){\line(1,0){30}}\put(20,00){\line(1,0){10}}\put(00,30){\line(0,-1){10}}\put(10,30){\line(0,-1){20}}\put(20,30){\line(0,-1){30}}\put(30,30){\line(0,-1){30}}\put(40,30){\line(0,-1){20}}\put(50,30){\line(0,-1){10}} \put(03,22){{\small $*$}}\put(13,22){{\small $*$}}\put(23,22){{\small $*$}}\put(33,22){{\small $1^{\circ}$}}\put(43,22){{\small $2^{\circ}$}} \put(13,12){{\small $*$}}\put(23,12){{\small $*$}}\put(33,12){{\small $1^{\circ}$}} \put(23,02){{\small $*$}} \end{picture} \end{center} where $k=1$ or $2$ so that \begin{eqnarray*} Q_{\lambda,f}(x;z|b) &=&
Q_{531}(x|b) + Q_{431}(x|b) (z_1-b_4 + z_2-b_3) + Q_{521}(x|b)(z_1-b_2) \\ &&\ \ \ \ \ \ \ + Q_{421}(x|b)(z_1-b_4 + z_2-b_3)(z_1-b_2)+ Q_{321}(x|b)(z_1-b_3)(z_2-b_3)(z_1-b_2). \end{eqnarray*} \end{exm} \subsection{Decomposition into $Q$-functions and skew Schur polynomials} We can expand $Q_{\lambda,f}(x;z|b)$ in terms of Ivanov's factorial $Q$-functions in $x$ and $b$, where the coefficients are a variant, in $z$ and $b$, of the row-strict flagged skew Schur polynomials considered by Wachs \cite[p.288]{Wachs}. A {\it partition} $\lambda$ is a weakly decreasing finite sequence of positive integers and we identify it with its Young diagram. The length of $\lambda$ is $r$ if it consists of $r$ positive integers. We denote the set of all partitions by ${\mathcal P}$. Let $\lambda=(\lambda_1,\dots, \lambda_r)$ and $\mu=(\mu_1,\dots, \mu_r)$ be partitions of length at most $r$ such that $\mu\subset \lambda$, and $\lambda/\mu$ the corresponding skew diagram. A flagging $f=(f_1,\dots, f_r)$ of $\lambda/\mu$ is a sequence of nonnegative integers. We call the pair $(\lambda/\mu,f)$ a {\it flagged skew diagram}. A {\it row-strict (flagged) tableau} $T$ of $(\lambda/\mu, f)$ is a filling of the skew diagram $\lambda/\mu$ which assigns a positive integer to each box of $\lambda/\mu$ subject to the rules: \begin{itemize} \item numbers increase strictly from left to right along rows, \item numbers increase weakly from top to bottom along columns, and \item for each $i=1,\dots,r$, the numbers used in the $i$-th row are at most $f_i$. \end{itemize} Let $\operatorname{SST}^*(\lambda/\mu,f)$ be the set of all row-strict tableaux of the flagged skew diagram $(\lambda/\mu, f)$. \begin{defn}\label{df: row Schur} Let ${\mathbf b}=(b_i)_{i\in {\mathbb Z}}$.
We define the row-strict flagged skew factorial Schur polynomial of a flagged shape $(\lambda/\mu,f)$ by \[ \widetilde{s}_{\lambda/\mu,f}(z|{\mathbf b}) = \sum_{T\in \operatorname{SST}^*(\lambda/\mu,f)} (z|{\mathbf b})^T \] where the weight of each $T$ is given by \[ (z|{\mathbf b})^T = \prod_{e\in T} \big(z_{|e|} + b_{|e|+r(e)-c(e)}\big). \] Note that here we {\it do not} assume $b_{-i}=-b_{i+1}$ for $i\geq 0$. When $\mu=\varnothing$, we denote the corresponding polynomial by $\widetilde{s}_{\lambda,f}(z|{\mathbf b})$. \end{defn} \begin{rem} In \S \ref{app1}, we give a Jacobi--Trudi type determinant formula for the row-strict flagged skew factorial Schur polynomials (Theorem \ref{thm:app1}). The proof uses the lattice path method. \end{rem} \begin{prop}\label{prop1} Let $\lambda$ be a strict partition of length $r$. For a strict partition $\mu\subset \lambda$, let $\bar\mu=(\bar\mu_1,\dots,\bar\mu_r)$ be the sequence defined by $\bar\mu_i=\mu_i+i-1$ for $i=1,\dots, r$. Assume that, if $r\geq 2$, then $\lambda_{r-1} >f_{r-1}$ or $\lambda_r> f_r$. Then we have \begin{equation}\label{eqprop1} Q_{\lambda,f}(x;z|b) = \sum_{\mu \in \calS\!\calP \atop{\mu\subset\lambda \atop{\bar\mu\in {\mathcal P}}}} Q_{\mu}(x|b)\cdot \widetilde{s}_{\bar\lambda/\bar\mu, f}(z|{\mathbf b})^{\star}, \end{equation} where $\star$ is the substitution $b_{-i}\mapsto -b_{i+1}$ for all $i\geq 0$. \end{prop} \begin{proof} The circled numbers form a row-strict flagged skew tableau of a skew {\it shifted} diagram $\lambda/\mu$ since the alphabets must be weakly increasing in each row and column. The assumption ensures that this skew shifted diagram is indeed a skew (unshifted) diagram $\bar\lambda/\bar\mu$, {\it i.e.}, that $\bar\mu$ is a partition contained in the partition $\bar \lambda$.
Thus we see that there is an obvious bijection \[ \operatorname{MST}(\lambda,f) \cong \bigsqcup_{\mu\in \calS\!\calP \atop{\mu \subset \lambda \atop{\bar\mu \in{\mathcal P}}} }\operatorname{MST}(\mu) \times \operatorname{SST}^*(\bar\lambda/\bar\mu,f), \ \ \ \ \ \ T \mapsto (T',T^{\circ}) \] where $T'$ is the part of $T$ with unmarked and primed numbers and $T^{\circ}$ is the part of $T$ with circled numbers. This bijection clearly preserves the weights after the substitution $\star$, and hence we obtain the desired formula. \end{proof} \begin{rem} Proposition \ref{prop1} implies that $Q_{\lambda,f}(x;z|b)$ is an element of $\Gamma[z,b]$ defined in \S\ref{secprelim}. Indeed, this follows from the facts that the summation in (\ref{eqprop1}) is finite, and that both $Q_{\mu}(x|b)$ and $\widetilde{s}_{\bar\lambda/\bar\mu, f}(z|{\mathbf b})^{\star}$ are elements of $\Gamma[z,b]$. \end{rem} \subsection{One row case} In this subsection, we describe $Q_{\lambda,f}(x;z|b)$ in the case where $\lambda$ has only one row. \begin{lem}\label{lem1} Let $r,t$ and $f$ be nonnegative integers such that $r-t\geq 0$. We have \[ \widetilde{s}_{(r)/(t), (f)}(z|{\mathbf b}) = e_{r-t}^{[f|r-t-f-1]}(z|\tau^{-t}b). \] In particular, if $r-t>f$, both sides of the equation are zero. \end{lem} \begin{proof} If $r-t>f$, then the left hand side is zero, since the tableaux are row-strict. The right hand side is also zero, since it is the $(r-t)$-th elementary symmetric polynomial in $r-t-1$ variables. Suppose $r-t\leq f$. If $t=0$, then we have \begin{eqnarray*} \widetilde{s}_{(r), (f)}(z|{\mathbf b}) &=&\sum_{1\leq i_1<\cdots <i_r\leq f} (z_{i_1}+b_{i_1})(z_{i_2}+b_{i_2-1})\cdots (z_{i_r}+b_{i_r+1-r})\\ &=&\sum_{1\leq j_1\leq\cdots \leq j_r\leq f+1-r} (b_{j_1}+z_{j_1})(b_{j_2}+z_{j_2+1})\cdots (b_{j_r}+z_{j_r+r-1}). \end{eqnarray*} Since this is the usual one-row factorial Schur polynomial, we have \[ \widetilde{s}_{(r), (f)}(z|{\mathbf b}) = h_r^{[f+1-r|-f]}(b|z) = e_r^{[f|r-f-1]}(z|b).
\] In the general case $t\geq 0$, set $m:=r-t$; then we have \begin{eqnarray*} \widetilde{s}_{(r)/(t), (f)}(z|{\mathbf b}) &=&\sum_{1\leq i_1<\cdots <i_{m}\leq f} (z_{i_1}+b_{i_1-t})(z_{i_2}+b_{i_2-1-t})\cdots (z_{i_m}+b_{i_m+1-m-t})\\ &=&\widetilde{s}_{(m), (f)}(z|\tau^{-t}{\mathbf b})= e_m^{[f|m-f-1]}(z|\tau^{-t}b). \end{eqnarray*} This completes the proof of the formula. \end{proof} \begin{rem} Suppose that $0\leq r-t \leq f$. The $b$-variables appearing in $\widetilde{s}_{(r)/(t), (f)}(z|{\mathbf b})$ are \[ b_{1-t}, b_{2-t},\dots,b_{f-r},b_{f-r+1}. \] If $r>f$, then $t>0$ and the indices of those $b_i$'s are all nonpositive. \end{rem} Proposition \ref{prop1} and Lemma \ref{lem1} imply the following. \begin{prop}\label{prop2-1} For nonnegative integers $r$ and $f$, we have \[ Q_{(r), (f)}(x;z|b) = \sum_{k=0}^{f} q_{r-k}^{[r-k-1]}(x|b) \cdot e_k^{[f|k-f-1]}(z|\tau^{k-r}b)^{\star}, \] where $\star$ is the substitution $b_{-i}\mapsto -b_{i+1}$ for all $i\geq 0$. \end{prop} \begin{proof} Proposition \ref{prop1} implies that \[ Q_{(r), (f)}(x;z|b) = \sum_{k=0}^r Q_{(r-k)}(x|b) \cdot \widetilde{s}_{(r)/(r-k), (f)}(z|{\mathbf b})^{\star}. \] It is known that $Q_{(m)}(x|b)=q_m^{[m-1]}(x|b)$ (see \cite[\S11]{IkedaMihalceaNaruse}), and thus together with Lemma \ref{lem1} we have \[ Q_{(r), (f)}(x;z|b) = \sum_{k=0}^{r} q_{r-k}^{[r-k-1]}(x|b) \cdot e_k^{[f|k-f-1]}(z|\tau^{k-r}b)^{\star}. \] The upper bound for $k$ in the summation can be $f$ instead of $r$: if $r<f$, the claim holds since $q_{r-k}^{[r-k-1]}(x|b)=0$ for $r<k\leq f$; if $f<r$, the claim holds since $e_k^{[f|k-f-1]}(z|\tau^{k-r}b) = 0$ for $f<k\leq r$. Thus we have proved the desired formula. \end{proof} \subsection{Other formulas} In the rest of the section, we prove Proposition \ref{prop2}, which will be used in the proof of Theorem \ref{mainthm}. Let $\star$ denote the substitution $b_{-i}\mapsto -b_{i+1}$ for all $i\geq 0$ as before. We start with the following lemma.
\begin{lem}\label{lem2} Let $t, m\in {\mathbb Z}$ and $n \in {\mathbb Z}_{\geq 0}$. For each $s\in {\mathbb Z}$, we have \[ \sum_{\ell\leq s} q_{\ell}^{[m]}\cdot e_{t-\ell}^{[-n-1]}(\tau^{-m}b)^{\star} =q_{s}^{[m-1]}\cdot e_{t-s}^{[-n-1]}(\tau^{-m}b)^{\star} +\sum_{\ell \leq s-1} q_{\ell}^{[m-1]}\cdot e_{t-\ell}^{[-n]}(\tau^{1-m}b)^{\star}. \] \end{lem} \begin{proof} By definition, we have $q_u^{[m]} = q_u^{[m-1]}\cdot (1+ b_m^{\star}u)$ so that \begin{equation}\label{eeqq1} q_{\ell}^{[m]} = q_{\ell}^{[m-1]} + q_{\ell-1}^{[m-1]} \cdot b_m^{\star} \ \ \ (\ell\in {\mathbb Z}). \end{equation} Similarly, we have $e_u^{[-n]}(\tau^{1-m} b)^{\star}=e_u^{[-n-1]}(\tau^{-m}b)^{\star} \cdot (1+b_{m}^{\star} u)$ so that \begin{equation}\label{eeqq2} e_{\ell}^{[-n]}(\tau^{1-m} b)^{\star} = e_{\ell}^{[-n-1]}(\tau^{-m}b)^{\star} + e_{\ell-1}^{[-n-1]}(\tau^{-m}b)^{\star}\cdot b_{m}^{\star}\ \ \ \ \ (\ell\in {\mathbb Z}). \end{equation} Using these identities, we can compute: \begin{eqnarray*} &&\sum_{\ell\leq s} q_{\ell}^{[m]}\cdot e_{t-\ell}^{[-n-1]}(\tau^{-m}b)^{\star}\\ &\stackrel{(\ref{eeqq1})}{=}&\sum_{\ell\leq s} q_{\ell}^{[m-1]}\cdot e_{t-\ell}^{[-n-1]}(\tau^{-m}b)^{\star}+\sum_{\ell\leq s} q_{\ell-1}^{[m-1]} \cdot b_m^{\star}\cdot e_{t-\ell}^{[-n-1]}(\tau^{-m}b)^{\star}\\ &=&q_{s}^{[m-1]}\cdot e_{t-s}^{[-n-1]}(\tau^{-m}b)^{\star}+\sum_{\ell\leq s-1} q_{\ell}^{[m-1]}\cdot e_{t-\ell}^{[-n-1]}(\tau^{-m}b)^{\star}+\sum_{\ell\leq s-1} q_{\ell}^{[m-1]} \cdot b_m^{\star}\cdot e_{t-\ell-1}^{[-n-1]}(\tau^{-m}b)^{\star}\\ &\stackrel{(\ref{eeqq2})}{=}&q_{s}^{[m-1]}\cdot e_{t-s}^{[-n-1]}(\tau^{-m}b)^{\star} +\sum_{\ell\leq s-1} q_{\ell}^{[m-1]}\cdot e_{t-\ell}^{[-n]}(\tau^{1-m}b)^{\star}. \end{eqnarray*} Thus we obtain the desired formula. \end{proof} \begin{prop}\label{prop2} For integers $r,f\geq 0$ and an integer $a$, we have \[ q_{r+a}^{[f|r-f-1]}(x;z|b)=\sum_{k=0}^f q_{r-k+a}^{[r-k-1]}(x|b)\cdot e_{k}^{[f|k-1-f]}(z|\tau^{k-r}b)^{\star}.
\] In particular, we have $q_r^{[f|r-f-1]}(x;z|b)= Q_{(r), (f)}(x;z|b)$ in view of Proposition \ref{prop2-1}. \end{prop} \begin{proof} First we observe that $e_u^{[r-1-f]}(b)=e_u^{[r]}(b)\cdot e_u^{[-1-f]}(\tau^{-r}b)^{\star}$. Indeed, if $r>f$, then \begin{eqnarray*} e_u^{[r]}(b)\cdot e_u^{[-1-f]}(\tau^{-r}b)^{\star} &=&e_u^{[r]}(b_1,\dots, b_r)\cdot e_u^{[-1-f]}(b_{1-r},b_{2-r},\dots, b_{f+1-r})^{\star}\\ &=&e_u^{[r]}(b_1,\dots, b_r)\cdot e_u^{[-1-f]}(-b_r,-b_{r-1},\dots, -b_{r-f})\\ &=&e_u^{[r]}(b_1,\dots, b_r)\cdot e_u^{[f+1]}(b_{r-f}, \cdots, b_{r-1}, b_r)^{-1}\\ &=&e_u^{[r-1-f]}(b_1,\dots, b_{r-f-1}) = e_u^{[r-1-f]}(b). \end{eqnarray*} If $r\leq f$, then \begin{eqnarray*} e_u^{[r]}(b)\cdot e_u^{[-1-f]}(\tau^{-r}b)^{\star} &=&e_u^{[r]}(b_1,\dots, b_r)\cdot e_u^{[-1-f]}(\underbrace{b_{1-r},b_{2-r},\dots, b_{-1},b_0}_{r},b_1,\dots, b_{f+1-r})^{\star}\\ &=&e_u^{[r]}(b_1,\dots, b_r)\cdot e_u^{[-1-f]}(\underbrace{-b_r,-b_{r-1},\dots, -b_2,-b_1}_{r},b_1,\dots, b_{f+1-r})\\ &=&e_u^{[r-1-f]}(b_1,\dots, b_{f+1-r})=e_u^{[r-1-f]}(b). \end{eqnarray*} Thus $q_u^{[f|r-f-1]}=q_u^{[r]}\cdot e_u^{[f|-1-f]}(z|\tau^{-r}b)^{\star}$. In particular, we have \begin{equation}\label{eq4443} q_{r+a}^{[f|r-f-1]}=\sum_{\ell\leq r+a} q_\ell^{[r]}\cdot e_{r+a-\ell}^{[f|-1-f]}(z|\tau^{-r}b)^{\star}. \end{equation} On the other hand, by setting $s=r+a-k$, $m=r-k$, $t=r+a$, $n=f-k$ for $k=0,\dots,f$ in the identity of Lemma \ref{lem2}, we obtain \begin{eqnarray*} &&\sum_{\ell\leq r+a-k} q_{\ell}^{[r-k]}\cdot e_{r+a-\ell}^{[f|k-1-f]}(z|\tau^{k-r}b)^{\star}\\ &=&q_{r+a-k}^{[r-k-1]}\cdot e_{k}^{[f|k-1-f]}(z|\tau^{k-r}b)^{\star}+\sum_{\ell \leq r+a-k-1} q_{\ell}^{[r-k-1]}\cdot e_{r+a-\ell}^{[f|k-f]}(z|\tau^{k+1-r}b)^{\star}.
\end{eqnarray*} We apply this to the right hand side of (\ref{eq4443}) consecutively from $k=0$ to $k=f$, and obtain \begin{eqnarray*} q_{r+a}^{[f|r-f-1]} &=&\sum_{\ell\leq r+a} q_\ell^{[r]}\cdot e_{r+a-\ell}^{[f|-1-f]}(z|\tau^{-r}b)^{\star}\\ &=&q_{r+a}^{[r-1]}\cdot e_{0}^{[f|-1-f]}(z|\tau^{-r}b)^{\star}+\sum_{\ell \leq r+a-1} q_{\ell}^{[r-1]}\cdot e_{r+a-\ell}^{[f|-f]}(z|\tau^{1-r}b)^{\star}\\ &=&\sum_{k=0}^1 q_{r+a-k}^{[r-1-k]}\cdot e_{k}^{[f|k-1-f]}(z|\tau^{k-r}b)^{\star}+\sum_{\ell \leq r+a-2} q_{\ell}^{[r-2]}\cdot e_{r+a-\ell}^{[f|1-f]}(z|\tau^{2-r}b)^{\star}\\ &=&\cdots\\ &=&\sum_{k=0}^f q_{r+a-k}^{[r-1-k]}\cdot e_{k}^{[f|k-1-f]}(z|\tau^{k-r}b)^{\star}+\sum_{\ell \leq r+a-f-1} q_{\ell}^{[r-f-1]}\cdot e_{r+a-\ell}^{[f|0]}(z|\tau^{f+1-r}b)^{\star}. \end{eqnarray*} The last summation is zero since $r+a-\ell>f$ and $e_u^{[f|0]}$ is a degree $f$ polynomial in $u$. Thus we obtain the desired equation. \end{proof} \begin{rem}\label{remprop2} The identity of Proposition \ref{prop2} can also be written as \[ q_{r+a}^{[f|r-f-1]}(x;z|b)=\sum_{k\in {\mathbb Z}} q_{r-k+a}^{[r-k-1]}(x|b)\cdot e_{k}^{[f|k-1-f]}(z|\tau^{k-r}b)^{\star} \] since, if $k>f$, then $e_u^{[f|k-1-f]}(z|\tau^{k-r}b)$ is a degree $k-1$ polynomial in $u$ so that $e_{k}^{[f|k-1-f]}(z|\tau^{k-r}b)=0$. \end{rem} \section{Schur-Pfaffian formula}\label{secPf} In this section, we review the basic properties of the Schur-Pfaffian and then prove a Pfaffian formula for the flagged factorial $Q$-functions $Q_{\lambda,f}(x;z|b)$. \subsection{Schur-Pfaffian and factorial $Q$-functions} Let $\alpha=(\alpha_1,\dots,\alpha_r) \in {\mathbb Z}^r$ be a sequence of integers. Consider the Laurent series in variables $t_1,\dots, t_r$ \[ f^{\alpha}(t)=t_1^{\alpha_1}\cdots t_r^{\alpha_r} \prod_{1\leq i<j\leq r}\frac{1-t_i/t_j}{1+t_i/t_j} \] where we expand $\frac{1}{1+t_i/t_j}$ as the series $\sum_{m\geq 0} (- t_it_j^{-1})^m$.
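As a toy illustration of this expansion (our own check, not part of the text): for a single ratio we have $(1-t)/(1+t) = 1 + 2\sum_{m\geq 1}(-1)^m t^m$, which is the source of the coefficients $2(-1)^k$ appearing in the two-row entry formula of Lemma \ref{lem3} (3).

```python
# Truncated expansion of (1 - t)/(1 + t): multiply (1 - t) by the
# geometric series 1/(1 + t) = sum_{m>=0} (-t)^m, as in the text.
def ratio_coeffs(n):
    geom = [(-1) ** m for m in range(n)]
    return [geom[m] - (geom[m - 1] if m else 0) for m in range(n)]
```

The first coefficients are $1, -2, 2, -2, 2, -2, \dots$, matching the closed form above.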
Consider sequences of indeterminates \[ c^{(i)}= \left(\, c^{(i)}_m\, \right)_{m \in {\mathbb Z}} \ \ \ \ \ \ (i=1,\dots,r). \] The {\it Schur-Pfaffian} $\operatorname{Pf}\left[c^{(1)}_{\alpha_1}\cdots c^{(r)}_{\alpha_r}\right]$ associated to $\alpha$ is defined by replacing each monomial $t_1^{m_1}\cdots t_r^{m_r}$ in $f^{\alpha}(t)$ by $c^{(1)}_{m_1}\cdots c^{(r)}_{m_r}$. Below, in Lemmas \ref{lem3} and \ref{lem4}, we list well-known properties without proof (cf. \cite{IkedaMatsumura}). \begin{lem}\label{lem3}\ \begin{enumerate}[$(1)$] \item If $\operatorname{Pf}[c^{(i)}_{\alpha_i}c^{(j)}_{\alpha_j}]+\operatorname{Pf}[c^{(j)}_{\alpha_j}c^{(i)}_{\alpha_i}]=0$ for all $1\leq i,j\leq r$, then for any $w\in S_r$, we have \[ \operatorname{Pf}\left[c^{(1)}_{\alpha_1}\cdots c^{(r)}_{\alpha_r}\right] = {\operatorname{sgn}}(w) \operatorname{Pf}\left[c^{(1)}_{\alpha_{w(1)}}\cdots c^{(r)}_{\alpha_{w(r)}}\right]. \] \item If $c_m^{(i)}=k a_m + \ell b_m$ with variables $a=(a_m)_{m\in {\mathbb Z}}$ and $b=(b_m)_{m\in {\mathbb Z}}$, we have \[ \operatorname{Pf}\left[c^{(1)}_{\alpha_1}\cdots c^{(i)}_{\alpha_i} \cdots c^{(r)}_{\alpha_r}\right] = k \operatorname{Pf}\left[c^{(1)}_{\alpha_1}\cdots a_{\alpha_i} \cdots c^{(r)}_{\alpha_r}\right] + \ell \operatorname{Pf}\left[c^{(1)}_{\alpha_1}\cdots b_{\alpha_i} \cdots c^{(r)}_{\alpha_r}\right].
\] \item If $r$ is even, then \[ \operatorname{Pf}\left[c^{(1)}_{\alpha_1}\cdots c^{(r)}_{\alpha_r}\right] = \operatorname{Pf}\left( \operatorname{Pf}[c^{(i)}_{\alpha_i}c^{(j)}_{\alpha_j}] \right)_{1 \leq i<j\leq r} \] where the right hand side is the Pfaffian of the $r\times r$ skew symmetric matrix $\left( \operatorname{Pf}[c^{(i)}_{\alpha_i}c^{(j)}_{\alpha_j}] \right)_{1 \leq i<j\leq r}$ with $(i,j)$-entry \[ \operatorname{Pf}[c^{(i)}_{\alpha_i}c^{(j)}_{\alpha_j}] = c^{(i)}_{\alpha_i}c^{(j)}_{\alpha_j} + 2\sum_{k\geq 1} (-1)^k c^{(i)}_{\alpha_i+k}c^{(j)}_{\alpha_j-k} \] for $i<j$. \end{enumerate} \end{lem} \begin{rem} Lemma \ref{lem3} (3) follows from the identity \[ \prod_{1\leq i<j\leq r}\frac{1-t_i/t_j}{1+t_i/t_j} = \operatorname{Pf} \left( \frac{1-t_i/t_j}{1+t_i/t_j}\right)_{1\leq i<j \leq r} \] for even $r$, which is due to Schur \cite{Schur}. \end{rem} \begin{lem}\label{lem4} We write $\displaystyle\operatorname{Pf}\left[c^{(1)}_{\alpha_1}\cdots c^{(r)}_{\alpha_r}\right]_{\geq 0}$ for the result of the substitution $c_m^{(i)} = 0$ for all $m<0$ and $i=1,\dots,r$. We have \begin{enumerate}[$(1)$] \item $\displaystyle\operatorname{Pf}\left[c^{(1)}_{\alpha_1}\cdots c^{(r)}_{\alpha_r}\right]_{\geq 0}$ is a polynomial in the $c_m^{(i)}$'s ($m\geq 0$). \item If $\alpha_r=0$, then $\displaystyle\operatorname{Pf}\left[c^{(1)}_{\alpha_1}\cdots c^{(r)}_{\alpha_r}\right]_{\geq 0}=\displaystyle\operatorname{Pf}\left[c^{(1)}_{\alpha_1}\cdots c^{(r-1)}_{\alpha_{r-1}}\right]_{\geq 0}$. \item If $\alpha_r<0$, then $\displaystyle\operatorname{Pf}\left[c^{(1)}_{\alpha_1}\cdots c^{(r)}_{\alpha_r}\right]_{\geq 0}=0$. \end{enumerate}
\end{lem} By the work of Kazarian \cite{Kazarian} and Ikeda \cite{Ikeda2007}, it is known that the factorial $Q$-functions $Q_{\lambda}(x|b)$ of Ivanov \cite{Ivanov04} can be expressed as a Schur-Pfaffian: for a strict partition $\lambda=(\lambda_1,\dots,\lambda_r)$, \[ Q_{\lambda}(x|b) = \operatorname{Pf}\left[q_{\lambda_1}^{[\lambda_1-1]}q_{\lambda_2}^{[\lambda_2-1]}\cdots q_{\lambda_r}^{[\lambda_r-1]}\right]:=\left.\operatorname{Pf}\left[c^{(1)}_{\lambda_1}c^{(2)}_{\lambda_2}\cdots c^{(r)}_{\lambda_r}\right]\right|_{c^{(i)}_{m}=q_{m}^{[\lambda_i-1]}, \forall i,m}. \] \begin{lem}\label{lem5} For $k, \ell\in {\mathbb Z}_{\geq 1}$, we have \[ \operatorname{Pf}\left[q_{k}^{[k-1]}q_{\ell}^{[\ell-1]}\right]+ \operatorname{Pf}\left[q_{\ell}^{[\ell-1]}q_{k}^{[k-1]}\right]=0. \] \end{lem} \begin{proof} The left hand side equals $2\sum_{j\in {\mathbb Z}}(-1)^jq_{k+j}^{[k-1]} q_{\ell-j}^{[\ell-1]}$, which is the coefficient of $u^{k+\ell}$ in $2 q_{-u}^{[k-1]}q_{u}^{[\ell-1]}= 2 e_{-u}^{[k-1]}(b)e_{u}^{[\ell-1]}(b)$, a polynomial in $u$ of degree $k+\ell-2$. Therefore it is zero. \end{proof} Lemma \ref{lem3} (1) and Lemma \ref{lem5} imply the following. \begin{lem}\label{lem6} For a sequence of positive integers $(\alpha_1,\dots,\alpha_r)$ and $w\in S_r$, we have \[ \operatorname{Pf}\left[q_{\alpha_1}^{[\alpha_1-1]}\cdots q_{\alpha_r}^{[\alpha_r-1]}\right] = {\operatorname{sgn}}(w)\operatorname{Pf}\left[q_{\alpha_{w(1)}}^{[\alpha_{w(1)}-1]}\cdots q_{\alpha_{w(r)}}^{[\alpha_{w(r)}-1]}\right]. \] In particular, if $\alpha_i=\alpha_j$ for some $i\not=j$, we have $\operatorname{Pf}\left[q_{\alpha_1}^{[\alpha_1-1]}\cdots q_{\alpha_r}^{[\alpha_r-1]}\right]=0$. \end{lem} \subsection{Schur-Pfaffian formula for flagged factorial $Q$-functions} \begin{thm}\label{mainthm} Let $(\lambda,f)$ be a flagged strict partition of length $r$ such that $(\mathrm{a})$ $\lambda_i-f_i \geq \lambda_j-f_j$ for all $1\leq i<j \leq r$ and $(\mathrm{b})$ $\lambda_{r-1}-f_{r-1}>0$.
We have \[ Q_{\lambda,f}(x;z|b)=\operatorname{Pf}\left[q_{\lambda_1}^{[f_1|\lambda_1-f_1-1]}q_{\lambda_2}^{[f_2|\lambda_2-f_2-1]}\dots q_{\lambda_r}^{[f_r|\lambda_r-f_r-1]}\right]. \] \end{thm} \begin{proof} Let $(\nu_1,\dots, \nu_r) \in {\mathbb Z}^r$. By Proposition \ref{prop2} (and Remark \ref{remprop2}), we have \[ q_{\lambda_i+\nu_i}^{[f_i|\lambda_i-f_i-1]} =\sum_{\alpha_i\in {\mathbb Z}} q_{\alpha_i+\nu_i}^{[\alpha_i-1]}\cdot e_{\lambda_i-\alpha_i}^{[f_i|\lambda_i-\alpha_i-f_i-1]}(z|\tau^{-\alpha_i}b)^{\star} \ \ \ \ \ \ \ (i=1,\dots,r). \] By linearity (Lemma \ref{lem3} (2)), \begin{eqnarray*} &&\operatorname{Pf}\left[q_{\lambda_1}^{[f_1|\lambda_1-f_1-1]}q_{\lambda_2}^{[f_2|\lambda_2-f_2-1]}\cdots q_{\lambda_r}^{[f_r|\lambda_r-f_r-1]}\right]\\&=& \sum_{(\alpha_1,\dots,\alpha_r)\in {\mathbb Z}^r} \left(\prod_{i=1}^r e_{\lambda_i-\alpha_i}^{[f_i|\lambda_i-\alpha_i-f_i-1]}(z|\tau^{-\alpha_i}b)^{\star} \right) \operatorname{Pf}\left[q_{\alpha_1}^{[\alpha_1-1]}q_{\alpha_2}^{[\alpha_2-1]}\cdots q_{\alpha_r}^{[\alpha_r-1]}\right]. \end{eqnarray*} Suppose that $\lambda_r-f_r>0$. In this case, by the assumption (a), we have $\lambda_i-f_i>0$ for all $i=1,\dots,r$ so that $e_{\lambda_i-\alpha_i}^{[f_i|\lambda_i-\alpha_i-f_i-1]}(z|\tau^{-\alpha_i}b)^{\star}=0$ for $\alpha_i\leq 0$. Thus we have \begin{eqnarray*} &&\operatorname{Pf}\left[q_{\lambda_1}^{[f_1|\lambda_1-f_1-1]}q_{\lambda_2}^{[f_2|\lambda_2-f_2-1]}\cdots q_{\lambda_r}^{[f_r|\lambda_r-f_r-1]}\right]\\&=& \sum_{(\alpha_1,\dots,\alpha_r)\in ({\mathbb Z}_{>0})^r} \left(\prod_{i=1}^r e_{\lambda_i-\alpha_i}^{[f_i|\lambda_i-\alpha_i-f_i-1]}(z|\tau^{-\alpha_i}b)^{\star} \right) \operatorname{Pf}\left[q_{\alpha_1}^{[\alpha_1-1]}q_{\alpha_2}^{[\alpha_2-1]}\cdots q_{\alpha_r}^{[\alpha_r-1]}\right].
\end{eqnarray*} By Lemma \ref{lem6}, we have \begin{eqnarray*} &&\operatorname{Pf}\left[q_{\lambda_1}^{[f_1|\lambda_1-f_1-1]}q_{\lambda_2}^{[f_2|\lambda_2-f_2-1]}\cdots q_{\lambda_r}^{[f_r|\lambda_r-f_r-1]}\right]\\ &=&\sum_{\mu\in \calS\!\calP_r \atop{\mu_r>0}} \left(\sum_{w\in S_r} {\operatorname{sgn}}(w) \prod_{i=1}^r e_{\lambda_i-\mu_{w(i)}}^{[f_i|\lambda_i-\mu_{w(i)}-f_i-1]}(z|\tau^{-\mu_{w(i)}}b)^{\star}\right) Q_{\mu}(x|b)\\ &=&\sum_{\mu\in \calS\!\calP_r \atop{\mu_r>0}} \det\left(e_{\lambda_i-\mu_j}^{[f_i|\lambda_i-\mu_j-f_i-1]}(z|\tau^{-\mu_j}b)^{\star}\right)_{1\leq i,j\leq r} Q_{\mu}(x|b). \end{eqnarray*} It is easy to see that the determinant vanishes if there is $k$ such that $\lambda_k-\mu_k<0$. Thus the sum runs over all $\mu\in \calS\!\calP_r$ such that $\mu_r>0$ and $\mu\subset \lambda$. In particular, $\bar\mu$ is a partition since $\mu_r>0$. Finally, by Theorem \ref{thm:app1}, we have \begin{eqnarray*} \det\left(e_{\lambda_i-\mu_j}^{[f_i|\lambda_i-\mu_j-f_i-1]}(z|\tau^{-\mu_j}b)\right)_{1\leq i,j\leq r} &=&\det\left(e_{\bar\lambda_i-\bar\mu_j+j-i}^{[f_i|\bar\lambda_i-\bar\mu_j+j-i-f_i-1]}(z|\tau^{j-\bar\mu_j-1}b)\right)_{1\leq i,j\leq r}\\ &=&\widetilde{s}_{\bar\lambda/\bar\mu,f}(z|{\mathbf b}), \end{eqnarray*} where assumption (a) ensures that $(\bar\lambda,f)$ satisfies the inequalities required in Theorem \ref{thm:app1}. Thus we obtain \[ \operatorname{Pf}\left[q_{\lambda_1}^{[f_1|\lambda_1-f_1-1]}q_{\lambda_2}^{[f_2|\lambda_2-f_2-1]}\cdots q_{\lambda_r}^{[f_r|\lambda_r-f_r-1]}\right]\\ =\sum_{\mu\in \calS\!\calP \atop{\mu\subset \lambda \atop{\bar\mu \in{\mathcal P}}}} \widetilde{s}_{\bar\lambda/\bar\mu,f}(z|{\mathbf b})^{\star}\cdot Q_{\mu}(x|b), \] and finally the claim follows from Proposition \ref{prop1}. Here note that $\widetilde{s}_{\bar\lambda/\bar\mu,f}(z|{\mathbf b})^{\star}=0$ if $\mu_r=0$ since $\lambda_r-f_r>0$. Suppose now that $\lambda_r-f_r\leq 0$.
In this case, we have \begin{eqnarray*} &&\operatorname{Pf}\left[q_{\lambda_1}^{[f_1|\lambda_1-f_1-1]}q_{\lambda_2}^{[f_2|\lambda_2-f_2-1]}\cdots q_{\lambda_r}^{[f_r|\lambda_r-f_r-1]}\right]\\ &=& \sum_{(\alpha_1,\dots,\alpha_r)\in ({\mathbb Z}_{>0})^r} \left(\prod_{i=1}^r e_{\lambda_i-\alpha_i}^{[f_i|\lambda_i-\alpha_i-f_i-1]}(z|\tau^{-\alpha_i}b)^{\star} \right) \operatorname{Pf}\left[q_{\alpha_1}^{[\alpha_1-1]}q_{\alpha_2}^{[\alpha_2-1]}\cdots q_{\alpha_r}^{[\alpha_r-1]}\right] \\ &&+ \sum_{(\alpha_1,\dots,\alpha_{r-1})\in ({\mathbb Z}_{>0})^{r-1}} e_{\lambda_r}^{[f_r|\lambda_r-f_r-1]}(z|b)^{\star} \left(\prod_{i=1}^{r-1} e_{\lambda_i-\alpha_i}^{[f_i|\lambda_i-\alpha_i-f_i-1]}(z|\tau^{-\alpha_i}b)^{\star} \right) \operatorname{Pf}\left[q_{\alpha_1}^{[\alpha_1-1]}q_{\alpha_2}^{[\alpha_2-1]}\cdots q_{\alpha_{r-1}}^{[\alpha_{r-1}-1]}\right] \\ &=&\sum_{\mu\in \calS\!\calP \atop{\mu\subset \lambda \atop{\mu_r>0}}} \widetilde{s}_{\bar\lambda/\bar\mu,f}(z|{\mathbf b})^{\star}\cdot Q_{\mu}(x|b)+ \sum_{\mu\in \calS\!\calP \atop{\mu\subset \lambda \atop{\mu_r=0}}} e_{\lambda_r}^{[f_r|\lambda_r-f_r-1]}(z|b)^{\star} \det\left(e_{\lambda_i-\mu_j}^{[f_i|\lambda_i-\mu_j-f_i-1]}(z|\tau^{-\mu_j}b)^{\star}\right)_{1\leq i,j\leq r-1}Q_{\mu}(x|b). \end{eqnarray*} Since $e_{\lambda_i-\mu_r}^{[f_i|\lambda_i-\mu_r-f_i-1]}(z|\tau^{-\mu_r}b)^{\star}=0$ for all $i=1,\dots,r-1$, we have \[ e_{\lambda_r}^{[f_r|\lambda_r-f_r-1]}(z|b)^{\star} \det\left(e_{\lambda_i-\mu_j}^{[f_i|\lambda_i-\mu_j-f_i-1]}(z|\tau^{-\mu_j}b)^{\star}\right)_{1\leq i,j\leq r-1} =\det\left(e_{\lambda_i-\mu_j}^{[f_i|\lambda_i-\mu_j-f_i-1]}(z|\tau^{-\mu_j}b)^{\star}\right)_{1\leq i,j\leq r}. 
\] Thus we obtain \[ \operatorname{Pf}\left[q_{\lambda_1}^{[f_1|\lambda_1-f_1-1]}q_{\lambda_2}^{[f_2|\lambda_2-f_2-1]}\cdots q_{\lambda_r}^{[f_r|\lambda_r-f_r-1]}\right]\\ =\sum_{\mu\in \calS\!\calP \atop{\mu\subset \lambda \atop{\bar\mu \in{\mathcal P}}}} \widetilde{s}_{\bar\lambda/\bar\mu,f}(z|{\mathbf b})^{\star}\cdot Q_{\mu}(x|b), \] and finally the claim follows from Proposition \ref{prop1}. \end{proof} \section{Vexillary double Schubert polynomials of type C}\label{secSchPol} \subsection{Double Schubert polynomials of type C} In this section, we briefly recall the double Schubert polynomials of Ikeda--Mihalcea--Naruse. Please see \cite{IkedaMihalceaNaruse} for more detail. Let $W_{\infty}$ be the infinite hyperoctahedral group, {\it i.e.}, the Weyl group of type $C_{\infty}$ (or $B_{\infty}$). It is given as the group defined by generators (simple reflections) $\{s_i \ |\ i=0,1,2,\dots\}$ and relations \[ s_i^2 = e \ (i\geq 0), \ \ s_1s_0s_1s_0=s_0s_1s_0s_1, \ \ s_is_{i+1}s_i=s_{i+1}s_is_{i+1} (i\geq 1),\ \ s_is_j=s_js_i (|i-j|\geq 2), \] where $e$ is the identity element. We identify $W_{\infty}$ with the group of {\it signed permutations}, {\it i.e.}, permutations $w$ of the set $\{1,2,\dots \} \cup \{-1,-2,\dots\}$ such that $w(i)\not= i$ for only finitely many $i$, and $\overline{w(i)}=w(\bar i)$ where we denote $\bar i = -i$. Each element of $W_{\infty}$, therefore, can be specified by the sequence $(w(1),w(2),\dots)$ which we call the one-line notation of $w$. The simple reflections are identified with the transpositions $s_0=(1,\bar 1)$ and $s_i=(i,i+1)(\bar i, \overline{i+1})$ for $i\geq 1$. To each $w\in W_{\infty}$, Ikeda--Mihalcea--Naruse \cite{IkedaMihalceaNaruse} associated a unique function ${\mathfrak C}_w={\mathfrak C}_w(x;z|b)$ in the ring $\Gamma[z,b]$\footnote{Note that the parameters $t=(t_1,t_2,\dots)$ in \cite{IkedaMihalceaNaruse} are replaced by $-b=(-b_1,-b_2,\dots)$ in this paper.}. 
They are characterized by left and right divided difference operators $\delta_i$ and $\partial_i$ with $i=0,1,2,\dots$. Namely, there is a unique family of elements ${\mathfrak C}_w(x;z|b) \in \Gamma[z,b]$ ($w\in W_{\infty}$), satisfying \[ \partial_i {\mathfrak C}_w = \begin{cases} {\mathfrak C}_{ws_i} & \mbox{ if } \ell(ws_i)<\ell(w),\\ 0 & \mbox{ otherwise}, \end{cases} \ \ \ \ \ \ \delta_i {\mathfrak C}_w = \begin{cases} {\mathfrak C}_{s_iw} & \mbox{ if } \ell(s_iw)<\ell(w),\\ 0 & \mbox{ otherwise}, \end{cases} \] for all $i=0,1,2,\dots,$ and such that ${\mathfrak C}_w$ has no constant term except for ${\mathfrak C}_e=1$. \subsection{Vexillary signed permutations} We follow Anderson--Fulton \cite{AndersonFultonVex}. A {\it triple} is a set of three $r$-tuples of positive integers, $\tau=({\mathbf k },{\mathbf p},{\mathbf q })$, with ${\mathbf k }=(0<k_1<\cdots<k_r)$, ${\mathbf p}=(p_1\geq \cdots\geq p_r>0)$, and ${\mathbf q }=(q_1\geq \cdots\geq q_r>0)$, satisfying the inequality \[ (*) \ \ \ \ \ \ k_{i+1}-k_i \leq p_i - p_{i+1} + q_i - q_{i+1} \ \ \ \ \ \ (1\leq i\leq r-1). \] A triple is {\it essential} if the inequality ($*$) is strict for all $i$. Each triple reduces to a unique essential triple by successively removing $(k_i,p_i,q_i)$ such that equality holds in ($*$), and two triples are {\it equivalent} if they reduce to the same essential triple. Anderson--Fulton explained how to construct a signed permutation $w=w(\tau)$ in \cite[\S 2]{AndersonFultonVex}, and they define a signed permutation to be {\it vexillary} if it arises from a triple in this way. Equivalent triples give the same vexillary signed permutation. An essential triple $\tau$ also determines a strict partition $\lambda(\tau)$ of length $r$, by setting $\lambda_{k_i}=p_i+q_i - 1$, and filling in the remaining $\lambda_{k}$ minimally so that $\lambda_1>\cdots >\lambda_r$.
Similarly, we introduce a flag $f(\tau)=(f_1,\dots, f_r)$ associated to an essential triple $\tau$ by setting $f_{k_i}:=p_i-1$, and filling in the remaining $f_k$ minimally so that $f_1\geq \cdots \geq f_r$. In this way, we can assign a unique flagged strict partition to each vexillary signed permutation. Note that $m_i:=f_{k_i}$ is nothing but the labeling of $\lambda(\tau)$ given in \cite[\S 4]{AndersonFultonVex}. From the work of Anderson--Fulton \cite{AndersonFulton, AndersonFulton2}, it follows that the double Schubert polynomials associated to vexillary signed permutations can be given by the following Pfaffian formula. \begin{thm}[Anderson--Fulton \cite{AndersonFulton, AndersonFulton2}] Let $w$ be a vexillary signed permutation and $(\lambda,f)$ the associated flagged strict partition. Then we have \[ {\mathfrak C}_w(x;z|b) = \operatorname{Pf}\left[q_{\lambda_1}^{[f_1|\lambda_1-f_1-1]}q_{\lambda_2}^{[f_2|\lambda_2-f_2-1]}\cdots q_{\lambda_r}^{[f_r|\lambda_r-f_r-1]}\right]. \] \end{thm} Since, by construction, the flagged strict partition $(\lambda,f)$ associated to a vexillary signed permutation $w$ satisfies the requirement in Theorem \ref{mainthm}, we obtain the following theorem. \begin{thm}\label{thmmain2} Let $w$ be a vexillary signed permutation and $(\lambda,f)$ the associated flagged strict partition. Then we have ${\mathfrak C}_w(x;z|b) = Q_{\lambda,f}(x;z|b)$. \end{thm} \subsection{A new tableau formula for Ivanov's factorial $Q$ functions} A signed permutation $w$ is {\it Lagrangian} if $w(1)<w(2)<\cdots<w(r)<0<w(r+1)<\cdots$ for some integer $r\geq 1$. A Lagrangian signed permutation is vexillary. Indeed, we can define a triple $\tau$ from which $w$ is constructed by setting $k_i=i, p_i=1$, and $q_i= \overline{w(i)}$ for $i=1,\dots, r$. The associated flagged strict partition $(\lambda,f)$ is given by $\lambda_i=\overline{w(i)}$ and $f_i=0$ for $i=1,\dots, r$. On the other hand, if $w$ is vexillary, then $w^{-1}$ is also vexillary.
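The constructions of $\lambda(\tau)$ and $f(\tau)$ above, including the Lagrangian triples just described, are straightforward to compute. The following is a minimal sketch (the function name is ours, and we read the number of parts off the largest index $k_r$ so that the pinned entries $\lambda_{k_i}$ and $f_{k_i}$ always fit):

```python
def flagged_strict_partition(k, p, q):
    """(lambda(tau), f(tau)) from an essential triple tau = (k, p, q):
    lambda_{k_i} = p_i + q_i - 1 and f_{k_i} = p_i - 1, with the remaining
    entries filled in minimally so that lambda is strictly decreasing and
    f is weakly decreasing.  (Number of parts read off as k_r.)"""
    n = k[-1]
    lam, flg = [None] * (n + 1), [None] * (n + 1)   # 1-indexed; slot 0 unused
    for ki, pi, qi in zip(k, p, q):
        lam[ki], flg[ki] = pi + qi - 1, pi - 1
    for j in range(n - 1, 0, -1):                   # minimal fill, bottom up
        if lam[j] is None:
            lam[j] = lam[j + 1] + 1                 # keep lambda strict
            flg[j] = flg[j + 1]                     # keep f weakly decreasing
    return lam[1:], flg[1:]

# Lagrangian w = (-5, -3, -1): k_i = i, p_i = 1, q_i = -w(i)
print(flagged_strict_partition((1, 2, 3), (1, 1, 1), (5, 3, 1)))
# -> ([5, 3, 1], [0, 0, 0]), i.e. lambda_i = -w(i) with zero flag

# Its inverse, via tau* = (k, q, p): the flag becomes (lambda_i - 1)
print(flagged_strict_partition((1, 2, 3), (5, 3, 1), (1, 1, 1)))
# -> ([5, 3, 1], [4, 2, 0])

# An essential triple with a gap, k = (1, 3): lambda_2, f_2 are filled minimally
print(flagged_strict_partition((1, 3), (2, 1), (3, 1)))
# -> ([4, 2, 1], [1, 0, 0])
```

The last example exercises the minimal fill: only $\lambda_1,\lambda_3$ and $f_1,f_3$ are pinned by the triple.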
In fact, Anderson--Fulton showed that for a triple $\tau=({\mathbf k },{\mathbf p},{\mathbf q })$, we have $w(\tau)^{-1} = w(\tau^*)$ where $\tau^* = ({\mathbf k },{\mathbf q },{\mathbf p})$ (\cite[Lemma 2.3]{AndersonFultonVex}). From this, we can deduce that if $w$ is Lagrangian with the strict partition $\lambda$ of length $r$ (and the flag $(0,\dots,0)$), then $w^{-1}$ is a vexillary signed permutation with the strict partition $\lambda$ and the flag $f=(\lambda_1-1,\dots,\lambda_r-1)$. It is known (\cite{Kazarian}, \cite{Ikeda2007}) that for a Lagrangian signed permutation $w$ with the associated strict partition $\lambda$, we have ${\mathfrak C}_w(x;z |b) = Q_{\lambda}(x|b)$. On the other hand, by \cite[Theorem 8.1 (3)]{IkedaMihalceaNaruse}, we know that for a signed permutation $w$, we have ${\mathfrak C}_{w}(x;z|b)={\mathfrak C}_{w^{-1}}(x;b|z)$, which, by Theorem \ref{thmmain2}, implies that ${\mathfrak C}_w(x;z|b) = Q_{(\lambda,f)}(x;b|z)$, where $f=(\lambda_1-1,\dots,\lambda_r-1)$. Thus we can conclude that $Q_{\lambda}(x|b) = Q_{(\lambda,f)}(x;b|z)$, which shows that the right hand side does not actually depend on the $z$-variables. Now by applying Theorem \ref{mainthm}, we obtain the following theorem. \begin{thm}\label{thmmain3} Let $\lambda=(\lambda_1,\dots,\lambda_r)$ be a strict partition of length $r$, and $f=(\lambda_1-1,\dots, \lambda_r-1)$. Then Ivanov's factorial $Q$ function associated to $\lambda$ can be expressed as \[ Q_{\lambda}(x|b) = \sum_{T\in \operatorname{MST}(\lambda,f)} (xb)^T, \ \ \ \ \ (xb)^T = \prod_{k\in T} x_k\prod_{k'\in T} x_k\prod_{k^{\circ}\in T} b_k. \] \end{thm} \begin{rem} From Theorem \ref{thmmain3} and Proposition \ref{prop1}, we can also write \[ Q_{\lambda}(x|b) = \sum_{\mu \in \calS\!\calP \atop{\mu\subset\lambda \atop{\bar\mu\in {\mathcal P}}}} Q_{\mu}(x)\cdot \widetilde{s}_{\bar\lambda/\bar\mu, f}(b), \] for a strict partition $\lambda$ of length $r$ where $f=(\lambda_1-1,\dots,\lambda_r-1)$.
In view of Theorem \ref{thm:app1}, this recovers \cite[Theorem 10.2]{Ivanov04}. \end{rem} \section{Appendix: Lattice path method for row-strict Schur polynomials}\label{app1} In this section, we prove a Jacobi--Trudi type formula (Theorem \ref{thm:app1} below) for the row-strict flagged skew factorial Schur polynomials defined in Definition \ref{df: row Schur}. It is a factorial generalization of Theorem 3.5$^*$ in \cite{Wachs}. We prove it by interpreting the tableaux as lattice paths and applying \cite[Theorem 1.2]{StembridgePf} (cf. \cite{Lindstrom, GesselViennot, GesselViennot2}). First, we recall the basic notation from \cite{StembridgePf}. Let $D=(V,E)$ be an acyclic oriented graph without multiple edges: $V$ is the set of vertices and $E$ is the set of edges in $D$. For vertices $u$ and $v$, a path from $u$ to $v$ is a sequence of edges $e_1,\dots, e_m$ such that the source of $e_1$ is $u$, the target of $e_m$ is $v$, and the target of $e_i$ coincides with the source of $e_{i+1}$ for all $i=1,\dots, m-1$. Let ${\mathscr P}(u,v)$ be the set of all paths from $u$ to $v$. Let $w: E \to R$ be a weight function where $R$ is some commutative ring. For a path $P$, we also denote by $w(P)$ the product of the weights of all edges in $P$. Let \[ {GF}\left[{\mathscr P}(u,v)\right] = \displaystyle\sum_{P\in {\mathscr P}(u,v) } w(P). \] Let ${\mathbf u}=(u_1,\dots, u_r)$ and ${\mathbf v}=(v_1,\dots, v_r)$ be ordered sets of vertices of $D$. Let ${\mathscr P}_0({\mathbf u},{\mathbf v})$ be the set of all non-intersecting $r$-tuples of paths, ${\mathbf P}=(P_1,\dots, P_r)$, with $P_i\in {\mathscr P}(u_i,v_i)$. We denote \[ {GF}\left[{\mathscr P}_0({\mathbf u},{\mathbf v})\right] = \displaystyle\sum_{{\mathbf P} \in {\mathscr P}_0({\mathbf u},{\mathbf v})} w({\mathbf P}) \] where we set $w({\mathbf P}) = w(P_1)w(P_2)\cdots w(P_r)$.
Finally, we say that ${\mathbf u}$ is $D$-compatible with ${\mathbf v}$ if a path $P \in {\mathscr P}(u_i,v_j)$ intersects with a path $Q\in {\mathscr P}(u_k,v_l)$ whenever $i<k$ and $j>l$. \begin{thm}[Theorem 1.2, \cite{StembridgePf}]\label{appAthm} Let ${\mathbf u}=(u_1,\dots, u_r)$ and ${\mathbf v}=(v_1,\dots,v_r)$ be ordered sets of vertices such that ${\mathbf u}$ is $D$-compatible with ${\mathbf v}$. Then \[ {GF}\left[{\mathscr P}_0({\mathbf u},{\mathbf v})\right] = \det \left({GF}\left[{\mathscr P}(u_i,v_j)\right]\right)_{1\leq i,j\leq r}. \] \end{thm} In order to apply Theorem \ref{appAthm} to the row-strict flagged Schur polynomials, we introduce an acyclic directed graph $D$ as follows: its vertex set $V$ is ${\mathbb Z}\times {\mathbb Z}_{\geq 0}$ and there is an edge $(u,v) \in E$ from the source $u$ to the target $v$ if $u-v$ is $(0,1)$ or $(1,1)$. We call an edge $(u,v)$ {\it diagonal} if $u-v=(1,1)$, and {\it vertical} if $u-v=(0,1)$. We define a weight function $w: E \to {\mathbb Z}[z,{\mathbf b}]$ by setting $w(e) = 1$ if $e$ is vertical and $w(e)=z_t + b_{t-s}$ if $e$ is a diagonal edge with its source at $(s,t)$. Let $\lambda/\mu$ be a skew (unshifted) diagram of length at most $r$ and $f$ its flag. Consider the ordered sets of vertices ${\mathbf u}=(u_1,\dots, u_r)$ and ${\mathbf v}=(v_1,\dots, v_r)$ where \[ u_i=(\lambda_i-i, f_i), \ \ \ v_i=(\mu_i-i,0). \] There is a bijection between $\operatorname{SST}^*(\lambda/\mu,f)$ and ${\mathscr P}_0({\mathbf u},{\mathbf v})$ defined as follows. Let $T \in \operatorname{SST}^*(\lambda/\mu,f)$, and let ${\mathbf P}=(P_1,\dots, P_r)$ be the corresponding $r$-tuple of paths. If $j_m<\cdots<j_1$ are the entries of the $i$-th row of $T$ where $m=\lambda_i-\mu_i$, then we define $P_i$ to be the unique path from $u_i$ to $v_i$ such that the $k$-th diagonal edge has its source at $(\lambda_i-i-k+1, j_k)$ for $k=1,\dots,m$. For example, let $\lambda=(3,2,1)$, $\mu=(1,1,0)$ and $f=(3,2,1)$.
The following is an example of a tableaux $T$ in $\operatorname{SST}^*(\lambda/\mu,f)$ and the corresponding triple of non-intersecting paths. \setlength{\unitlength}{0.6mm} \begin{center} \begin{picture}(90,60) \dottedline{2}(00,40)(30,40) \dottedline{2}(00,30)(30,30) \dottedline{2}(00,20)(20,20) \dottedline{2}(00,10)(10,10) \dottedline{2}(00,40)(00,10) \dottedline{2}(10,40)(10,10) \dottedline{2}(20,40)(20,20) \dottedline{2}(30,40)(30,30) \linethickness{0.3mm} \put(10,40){\line(1,0){20}} \put(10,30){\line(1,0){20}} \put(00,20){\line(1,0){20}} \put(00,10){\line(1,0){10}} \put(00,20){\line(0,-1){10}} \put(10,40){\line(0,-1){30}} \put(20,40){\line(0,-1){20}} \put(30,40){\line(0,-1){10}} \put(14,33){{\small $2$}}\put(24,33){{\small $3$}} \put(14,23){{\small $2$}} \put(04,13){{\small $1$}} \put(10,45){{\small $\lambda/\mu$}} \put(-14,25){{\small $T$}} \put(44,45){{\small $f$}} \put(44,33){{\small $3$}} \put(44,23){{\small $2$}} \put(44,13){{\small $1$}} \end{picture} \ \ \ \begin{picture}(50,60) \put(-2,53){{\footnotesize $v_3$}} \put(-1,49){{\footnotesize $\bullet$}} \put(18,53){{\footnotesize $v_2$}} \put(19,49){{\footnotesize $\bullet$}} \put(28,53){{\footnotesize $v_1$}} \put(29,49){{\footnotesize $\bullet$}} \put(8,35){{\footnotesize $u_3$}} \put(9,39){{\footnotesize $\bullet$}} \put(28,25){{\footnotesize $u_2$}} \put(29,29){{\footnotesize $\bullet$}} \put(48,15){{\footnotesize $u_1$}} \put(49,19){{\footnotesize $\bullet$}} \put(65,48){{\footnotesize $0$}} \put(65,38){{\footnotesize $1$}} \put(65,28){{\footnotesize $2$}} \put(65,18){{\footnotesize $3$}} \put(65,08){{\footnotesize $4$}} \put(65,-2){{\footnotesize $5$}} \put(-1,-8){{\footnotesize $-3$}} \put(09,-8){{\footnotesize $-2$}} \put(19,-8){{\footnotesize $-1$}} \put(29,-8){{\footnotesize $0$}} \put(39,-8){{\footnotesize $1$}} \put(49,-8){{\footnotesize $2$}} \put(59,-8){{\footnotesize $3$}} \dottedline{2}(00,10)(10,00) \dottedline{2}(00,20)(20,00) \dottedline{2}(00,30)(30,00) \dottedline{2}(00,40)(40,00) 
\dottedline{2}(00,50)(50,00) \dottedline{2}(10,50)(60,00) \dottedline{2}(20,50)(60,10) \dottedline{2}(30,50)(60,20) \dottedline{2}(40,50)(60,30) \dottedline{2}(50,50)(60,40) \dottedline{2}(00,00)(00,50) \dottedline{2}(10,00)(10,50) \dottedline{2}(20,00)(20,50) \dottedline{2}(30,00)(30,50) \dottedline{2}(40,00)(40,50) \dottedline{2}(50,00)(50,50) \dottedline{2}(60,00)(60,50) \linethickness{0.2mm} \put(20,50){\line(0,-1){10}} \put(30,50){\line(0,-1){10}} \put(00,50){\line(1,-1){10}} \put(20,40){\line(1,-1){10}} \put(30,40){\line(1,-1){10}} \put(40,30){\line(1,-1){10}} \end{picture} \vspace{5mm} \end{center} It is not difficult to see that this defines a bijection from $\operatorname{SST}^*(\lambda/\mu,f)$ to ${\mathscr P}_0({\mathbf u},{\mathbf v})$. Moreover, this bijection preserves the weights. Namely, suppose that $T$ corresponds to ${\mathbf P}$. Let $j_m < \cdots < j_1$ be the entries of the $i$-th row of $T$. The column index of the entry $j_k$ is $\lambda_i-k+1$ and thus its corresponding weight is $z_{j_k} + b_{j_k + i - (\lambda_i-k+1)}$. On the other hand, $P_i$'s $k$-th diagonal edge has its source at $(\lambda_i-i-k+1, j_k)$ and thus its weight is also $z_{j_k} + b_{j_k + i - (\lambda_i-k+1)}$. For example, the weights of the above example of a tableau and the corresponding paths are both $(z_2 + b_1)(z_3 + b_1)\cdot (z_2 + b_2) \cdot (z_1 + b_3)$. Thus we have \begin{equation}\label{appeq1} \widetilde{s}_{\lambda/\mu,f}(z|{\mathbf b}) = \sum_{T\in \operatorname{SST}^*(\lambda/\mu,f)} (z|{\mathbf b})^T = {GF}\left[{\mathscr P}_0({\mathbf u},{\mathbf v})\right]. \end{equation} The following is an extension of Lemma \ref{lem1} in view of the lattice path interpretation and will be used in the proof of Theorem \ref{thm:app1} below. \begin{lem}\label{lemApp1} Let $u=(s-1,f)$ and $v=(t-1,0)$, where $s,t\in {\mathbb Z}$ and $f\in {\mathbb Z}_{\geq 0}$. Then we have \[ {GF}\left[{\mathscr P}(u,v)\right] = e_{s-t}^{[f|s-t-f-1]}(z|\tau^{-t}b).
\] In particular, both sides of this identity are trivially zero unless $0\leq s-t \leq f$. \end{lem} \begin{proof} If $s-t<0$, both sides are clearly zero. If $0 \leq f < s-t$, then ${\mathscr P}(u,v)=\varnothing$ so that ${GF}\left[{\mathscr P}(u,v)\right]=0$. Furthermore, $e^{[f|s-t-f-1]}(u)$ is a polynomial in $u$ of degree $s-t-1$, so that $e_{s-t}^{[f|s-t-f-1]}=0$. Below we suppose that $0\leq s-t \leq f$. If $t\geq 0$, the claim follows from Lemma \ref{lem1}. If $t<0$, consider $u'=(s-1+n, f)$ and $v'=(t-1+n,0)$ for some $n$ such that $t+n\geq 0$, and then we have, also by Lemma \ref{lem1}, \[ {GF}\left[{\mathscr P}(u',v')\right] = e_{s-t}^{[f|s-t-f-1]}(z|\tau^{-t-n}b). \] Since the paths in ${\mathscr P}(u,v)$ are obtained from the paths in ${\mathscr P}(u',v')$ by shifting horizontally to the left by $n$ units, we obtain ${GF}\left[{\mathscr P}(u,v)\right]$ from ${GF}\left[{\mathscr P}(u',v')\right]$ by adding $n$ to all indices of the $b$ variables. Thus the claim follows. \end{proof} \begin{thm}\label{thm:app1} Let $(\lambda/\mu,f)$ be a flagged skew partition where $\lambda$ is a partition of length $r$. Assume that $\lambda_i-i-f_i\geq \lambda_j-j-f_j$ for all $i<j$. Then we have \[ \widetilde{s}_{\lambda/\mu,f}(z|{\mathbf b})=\det\left(e_{\lambda_i-\mu_j+j-i}^{[f_i|\lambda_i-\mu_j+j-i-f_i-1]}(z|\tau^{j-\mu_j-1}b)\right)_{1\leq i,j\leq r}. \] \end{thm} \begin{proof} By the assumption, it follows that ${\mathbf u}$ is $D$-compatible with ${\mathbf v}$. Thus we can apply Theorem \ref{appAthm} to the right hand side of (\ref{appeq1}), and obtain \[ \widetilde{s}_{\lambda/\mu,f}(z|{\mathbf b}) = \det \left({GF}\left[{\mathscr P}(u_i,v_j)\right]\right)_{1\leq i,j\leq r}. \] Now the claim follows by applying Lemma \ref{lemApp1} with $u=u_i=(\lambda_i-i,f_i)$ and $v=v_j=(\mu_j-j,0)$ so that $f=f_i$, $s-t=\lambda_i-\mu_j+j-i$, and $t=\mu_j-j+1$. \end{proof}
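Theorem \ref{appAthm} and the weight function above can be checked numerically on the running example $\lambda=(3,2,1)$, $\mu=(1,1,0)$, $f=(3,2,1)$ by brute-force path enumeration. A minimal sketch, with the variables $z_t$ and $b_k$ replaced by arbitrary rational stand-ins (the numeric values are placeholders, not from the paper):

```python
from fractions import Fraction
from itertools import product

# Arbitrary rational stand-ins for the variables z_t and b_k
# (b-indices may be negative here); placeholders, not values from the paper.
z = {t: Fraction(2 * t + 1, 3) for t in range(0, 4)}
b = {k: Fraction(k + 7, 5) for k in range(-3, 5)}

def all_paths(u, v):
    """All paths u -> v in the digraph D, as (weight, vertex-set) pairs.
    A step goes from (a, h) to (a, h-1) (vertical, weight 1) or to
    (a-1, h-1) (diagonal, weight z_h + b_{h-a} taken at the source)."""
    (a, h), (c, _) = u, v          # the targets v_i all sit at height 0
    if h == 0:
        return [(Fraction(1), frozenset([u]))] if a == c else []
    out = [(w, s | {u}) for w, s in all_paths((a, h - 1), v)]
    wt = z[h] + b[h - a]
    out += [(wt * w, s | {u}) for w, s in all_paths((a - 1, h - 1), v)]
    return out

lam, mu, f = (3, 2, 1), (1, 1, 0), (3, 2, 1)
us = [(lam[i] - (i + 1), f[i]) for i in range(3)]   # u_i = (lambda_i - i, f_i)
vs = [(mu[i] - (i + 1), 0) for i in range(3)]       # v_i = (mu_i - i, 0)

# right hand side: det(GF[P(u_i, v_j)])
M = [[sum((w for w, _ in all_paths(u, v)), Fraction(0)) for v in vs] for u in us]
det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
       - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
       + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# left hand side: sum over non-intersecting (vertex-disjoint) path triples
gf0 = Fraction(0)
for trip in product(*[all_paths(u, v) for u, v in zip(us, vs)]):
    sets = [s for _, s in trip]
    if all(sets[i].isdisjoint(sets[j]) for i in range(3) for j in range(i + 1, 3)):
        w = Fraction(1)
        for wt, _ in trip:
            w *= wt
        gf0 += w

assert gf0 == det and gf0 > 0      # the two sides of the determinant identity agree
```

Since $\lambda_i-i-f_i = (-1,-2,-3)$ is weakly decreasing here, the compatibility hypothesis holds and the non-intersecting sum matches the determinant exactly.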
\section{Conclusion} \label{sec:conclusion} We developed a dynamic mapping algorithm based on Bayesian kernel inference that models the motion of dynamic objects with scene flow. Our map may be built from any 3D sensor and uses deep neural networks to obtain semantic labels and scene flow from raw point cloud data. We demonstrated the efficacy of the mapping system on simulated data and the SemanticKITTI data set, where at the time of writing this paper we achieved second place on the multi-scan leaderboard. \section{Introduction} Mapping, localization, and navigation are among the key capabilities for many autonomous systems. Some research streams employ end-to-end deep neural networks for mapless navigation via imitation~\cite{bojarski2016end, codevilla2018end}, reinforcement~\cite{tai2017virtual, chiang2019learning} or self-supervised learning~\cite{kahn2021badgr}. However, most works construct a map explicitly and localize the robot within that map for navigation and other tasks due to their reliability, interpretability, and predictability. Furthermore, mapping also plays irreplaceable roles in surveillance and monitoring, scene understanding, and augmented reality. Semantic mapping complements geometric modeling of a robot's surroundings with semantic concepts, thus enabling higher-level 3D scene understanding and more complex robotic tasks. The emergence of semantic mapping is attributed to the recognition of limitations of purely geometric maps and the advances of deep neural networks that allow semantic interpretation of raw sensory data~\cite{garg2021semantics}. Early semantic mapping works only added semantic labels on top of existing map representations, such as point cloud models~\cite{sunderhauf2017meaningful}, surfel-based maps~\cite{mccormac2017semanticfusion}, and voxel-based maps~\cite{yang2017semantic}, wherein the semantics and geometry are modeled independently.
As this field progresses, semantics and geometry have been modeled jointly and inferred in a unified framework~\cite{cherabier2018learning, gan2019bayesian}. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{media/FirstFigureV4.png} \caption{Dynamic Semantic Mapping Pipeline. Raw point clouds are inputs to scene flow and semantic segmentation neural networks, which compute the input to the mapping algorithm, $\mathcal{D}_t = \{\mathcal{X}_t, \mathcal{Y}_t, \mathcal{V}_t\}$. The dynamic map updates voxels parameterized by $\theta$ using scene flow aggregation and Bayesian inference. The dynamic map is capable of updating cells with dynamic objects, without leaving any residual traces.} \label{fig:first_fig} \end{figure} Gan et al.~\cite{gan2019bayesian} proposed a unified Bayesian semantic mapping framework that uses the Categorical distribution and its conjugate prior to model the semantic likelihood and probability. This approach leads to an efficient closed-form Bayesian solution for the semantic map posterior. However, the underlying \emph{static world} assumption limits its applications in real-world \emph{dynamic} environments. In scenarios with dynamic objects, static mapping may not provide enough detail and may even produce inconsistent reconstructions due to obscured views. This is most evident in self-driving cars, where robots must account not only for stationary cars and people, but also for moving vehicles and pedestrians. In this paper, we develop a scalable dynamic semantic mapping framework that combines 3D scene flow, semantic segmentation, and closed-form Bayesian inference in one pipeline, shown in Fig.~\ref{fig:first_fig}. We combine spatio-temporal motion data over multiple scans and neighbouring voxels. In particular, this work has the following contributions. {\small \begin{enumerate}[i.] \item We extend the BKI semantic mapping framework~\cite{gan2019bayesian} to dynamic scenes by incorporating scene flow models.
\item We propose an efficient autoregressive transition model for scene propagation in closed form. \item Our method's overall performance on 28 classes, including dynamic objects, ranks 2nd among 69 participants in the SemanticKITTI multi-scan semantic segmentation competition~\cite{behley2019semantickitti}. \item The code will be publicly available after receiving the final decision. \end{enumerate} } The remaining sections are organized as follows: Literature review is given next. Section~\ref{sec:preliminaries} presents the problem setup and preliminaries. The dynamic mapping methodology is discussed in Section~\ref{sec:method}. Results and discussion are given in Section~\ref{sec:label}. Finally, Section~\ref{sec:conclusion} concludes the paper and provides ideas for future work. \section{Related Work} In this section, we review works on semantic mapping, dynamic mapping, and scene flow estimation. A taxonomy of the state-of-the-art dynamic mapping works is also given in Table~\ref{tab:paper_table}, based on the presence ($\checkmark$) or absence ($\times$) of semantics and scene dynamics usage, the type of sensor they operate on, and the type of flow measurements incorporated. \noindent \textbf{Semantic Mapping.} Semantics are important to robot perception for better scene understanding and interaction~\cite{garg2021semantics}. We focus on works that incorporate semantic information into maps built with known poses and are evaluated in terms of 3D semantic classification accuracy. Among a large body of semantic mapping studies, SemanticFusion~\cite{mccormac2017semanticfusion} can be regarded as a classic approach where the semantic probabilities of single-frame 2D images are generated by a convolutional neural network (CNN) and re-projected into 3D, after which a Bayesian update scheme fuses multi-scan probabilities to build a surfel map.
Other works differ in the deep neural network used (e.g., recurrent neural networks on consecutive frames~\cite{xiang2017rnn, cheng2020robust}, 3D CNN for point clouds~\cite{dube2020segmap}), the map representation employed (point-cloud maps~\cite{sunderhauf2017meaningful, cheng2020robust} and voxel-based maps~\cite{xiang2017rnn, yang2017semantic, mccormac2018fusion++}), or the type of semantics (instance-level~\cite{grinvald2019volumetric}, object-level~\cite{sunderhauf2017meaningful, zeng2018semantic, maskfusion, detect} and place-level~\cite{sunderhauf2016place}). More recently, distributed semantic mapping for multi-robot systems~\cite{yue2020hierarchical, jamieson2021multi} and 3D scene graphs~\cite{rosinol20203d} are also trending research topics. Another line of research concerns \emph{continuous semantic mapping} with uncertainty, inspired by continuous occupancy mapping~\cite{wang2016fast, jadidi2017warped, doherty2019learning}. Kernel methods such as Gaussian processes (GPs) are well-established for predicting a continuous non-parametric function to represent the occupancy map, and are naturally extended to semantic mapping~\cite{jadidi2017gaussian, zobeidi2020dense, guerrero2021sparse}. Bayesian kernel inference is an efficient approximation of GPs, and is utilized in a semantic mapping framework which yields fast computation and accurate inference~\cite{gan2019bayesian}. This work extends~\cite{gan2019bayesian} to dynamic scenes. \begin{table}[t] \centering \scriptsize \begin{tabular}{|c|p{0.8cm}|m{1cm}|m{1.2cm}|m{0.6cm}|} \hline \centering Paper & Semantic & Scene Dynamics Retention & Sensor & Flow \\ \hline DynaSLAM \cite{dynaslam} & $\times$ & $\times$ & C & $\times$ \\ Alcantarilla et al.
\cite{alcantarilla2012combining} & $\times$ & $\times$ & stereo & Scene \\ DSOD \cite{ma2019dsod} & $\times$ & $\times$ & mono & $\times$ \\ SOF-SLAM \cite{cui2019sof} & $\checkmark$ & $\times$ & RGB-D & Optical \\ DS-SLAM \cite{yu2018ds} & $\checkmark$ & $\times$ & RGB-D & Optical \\ Brasch et al. \cite{brasch2018semantic} & $\checkmark$ & $\times$ & mono & $\times$ \\ SLAM++ \cite{salas2013slam++} & $\times$ & $\times$ & RGB-D & $\times$ \\ Suma++ \cite{suma++} & $\checkmark$ & $\times$ & LiDAR & $\times$ \\ DOS-SLAM \cite{xu2019slam} & $\checkmark$ & $\times$ & RGB-D & Scene \\ Detect-SLAM \cite{detect} & $\checkmark$ & $\checkmark$ & RGB-D & $\times$ \\ SLAMANTIC \cite{schorghuber2019slamantic} & $\checkmark$ & $\checkmark$ & mono,stereo & $\times$\\ Sun et al. \cite{sun2018recurrent} & $\checkmark$ & $\checkmark$ & LiDAR & $\times$ \\ Fusion++ \cite{mccormac2018fusion++} & $\checkmark$ & $\checkmark$ & RGB-D & $\times$ \\ MaskFusion \cite{maskfusion} & $\checkmark$ & $\checkmark$ & RGB-D & $\times$ \\ MID-Fusion \cite{xu2019mid} & $\checkmark$ & $\checkmark$ & RGB-D & $\times$ \\ ClusterSLAM \cite{huang2019clusterslam} & $\times$ & $\checkmark$ & stereo & $\times$ \\ DynSLAM \cite{barsan2018robust} & $\times$ & $\checkmark$ & stereo & Scene \\ EM-Fusion \cite{strecke2019fusion} & $\checkmark$ & $\checkmark$ & RGB-D & $\times$\\ Rosinol et al. \cite{rosinol20203d} & $\checkmark$ & $\checkmark$ & stereo & $\times$ \\ Vespa et al. \cite{vespa2018efficient} & $\times$ & $\checkmark$ & RGB-D & $\times$\\ Henein et al. \cite{henein2020dynamicslam} & $\times$ & $\checkmark$ & RGB-D & $\times$ \\ Kochanov et al. \cite{kochanov2016scene} & $\checkmark$ & $\checkmark$ & stereo & Scene\\ Ours & $\checkmark$ & $\checkmark$ & LiDAR,3DC & Scene\\ \hline \end{tabular} \caption{Comparison of properties of DynamicSemanticBKI with respect to other dynamic mapping systems.
In the table, C = (mono, stereo, RGB-D) and 3DC = (stereo, RGB-D).} \label{tab:paper_table} \end{table} \noindent \textbf{Mapping in Dynamic Environments.} Dynamic objects can break the assumption of scene rigidity in most SLAM works and cause failure. Thus, some SLAM systems treat dynamic objects in a scene as spurious data or outliers, and exclude them from pose estimation and mapping to achieve better accuracy and robustness~\cite{dynaslam, alcantarilla2012combining, ma2019dsod}. Discarding dynamic objects is done through probabilistic outlier rejection~\cite{brasch2018semantic}, moving consistency check~\cite{yu2018ds}, feature-based filtering~\cite{dynaslam}, semantic inconsistency check between an obtained scan and the world model~\cite{suma++}, culling out with object-camera relative pose estimation~\cite{salas2013slam++}, coupling of semantic and geometric information \cite{cui2019sof} or residual motion likelihood calculation~\cite{alcantarilla2012combining, xu2019slam}. However, discarding information based on semantic labels completely depends on the accuracy of semantic prediction. Moreover, the information obtained from motion estimation could be leveraged further to predict the scene dynamics. Following this paradigm, some work model scene dynamics by maintaining an object point cloud with a moving probability ~\cite{detect}, calculating a dynamics factor to find static classes that could be mis-classified as ``dynamic'' (e.g., parked cars) and incorporating those into pose estimation~\cite{schorghuber2019slamantic}, propagating feature points by sampling from scene flow measurements ~\cite{kochanov2016scene}, and fusing semantic features by average pooling observations recurrently in an OctoMap cell~\cite{sun2018recurrent}. Instead of analyzing the motion properties of map cell or feature from a single scan, we combine spatio-temporal motion data over multiple scans and neighbouring voxels. 
Object-oriented approaches sometimes track local objects using ICP and semantic segmentation-aided fusion~\cite{mccormac2018fusion++, maskfusion, xu2019mid}, or by clustering on the basis of motion estimation~\cite{huang2019clusterslam}. Object tracking is also done via scene flow estimation~\cite{barsan2018robust}, frame-to-model data association~\cite{rosinol2019kimera} or SDF-based alignment~\cite{strecke2019fusion}. In this area, deep learning-based instance segmentation is often the bottleneck for operating frequency~\cite{dynaslam, yu2018ds, xu2019slam, detect}. \\~\\ \noindent \textbf{Scene Flow Estimation.} Scene flow estimation from point clouds can be challenging due to the size and non-uniform density of unordered point sets. In the supervised learning literature, scene flow is usually estimated between two temporally consecutive point clouds. Most works on point cloud scene flow have built upon layers from the PointNet \cite{PointNet, PointNetPP} and FlowNet3D \cite{flownet3d} architectures. These models contain layers that learn local features, concatenate local features of point clouds, and upsample the learned features to compute flow. Spatial structure may be enforced upon the point cloud by organizing it into a voxel grid \cite{SCTN}, a $d$-dimensional lattice \cite{hplflownet}, or a BEV (bird's-eye view) map \cite{motionnet}. In other work, correspondences are found between point clouds using optimal transport \cite{FLOT} or transformers \cite{SCTN}. Most recent advances have explored self-supervised \cite{mittal2020just, egoflow, pointpwc, tlfpad} and temporal methods \cite{sdpnet, weng2020unsupervised} to calculate scene flow from point clouds, which is beneficial due to the high cost of labeling point clouds. Self-supervised methods seek to predict the following LiDAR scan as the sum of the previous LiDAR scan and per-point scene flow. Temporal approaches attempt to utilize temporal data by estimating scene flow from multiple scans.
\section{Method: Spatiotemporal Semantic Mapping} \label{sec:method} In this section, we present the proposed approach to extending semantic-BKI to dynamic environments. We first formulate an auto-regressive temporal transition model which propagates the map posterior according to the scene dynamics. Next, we show how we aggregate motion information from the training data for incorporation into the map voxels. Finally, we consolidate and summarize the algorithm for dynamic semantic mapping. \subsection{Temporal Transition Model} \label{sec:transition} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{media/timevarying2.jpg} \caption{We illustrate the observation of a moving object through the middle voxel in this map and display how that voxel's semantics differ at every time step. For every time step, we plot the posterior Dirichlet probability density function (PDF) of the voxel on a 2-simplex. The shift in the rainbow gradient demonstrates how the belief about a semantic category (robot, free, or other) must evolve from $t=0$ to $t=2$ to classify the voxel correctly at each time step. This shift can be influenced by changing the concentration parameters (hyperparameters) of the Dirichlet posterior as: $\boldsymbol{\alpha}_{j,0} \rightarrow \boldsymbol{\alpha}_{j,1} \rightarrow \boldsymbol{\alpha}_{j,2} $.} \label{fig:methodmotivation} \end{figure} When dynamic objects move in and out of a voxel $j$, the samples observed in it across time are not \emph{i.i.d.}. Samples drawn from the map posterior at different time steps will come from independent but not identically distributed Dirichlet distributions. In Figure \ref{fig:methodmotivation}, we illustrate the motion of an object and a corresponding visualization of the Dirichlet probability density function (PDF) over the 2-simplex when there are just three classes --- ``robot'', ``free space'', and ``other''.
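The count-based conjugate update that the static map performs on this three-class example can be sketched in a few lines; the class names and observation counts below are hypothetical:

```python
# Static-world Dirichlet-Categorical update for a single voxel with three
# classes ("robot", "free", "other"); counts below are hypothetical.
classes = ("robot", "free", "other")

def update(alpha, counts):
    """Conjugate posterior update: alpha_k <- alpha_k + n_k."""
    return tuple(a + n for a, n in zip(alpha, counts))

def mean(alpha):
    """Posterior mean of the Dirichlet distribution on the simplex."""
    s = sum(alpha)
    return tuple(a / s for a in alpha)

alpha = (0.1, 0.1, 0.1)              # weak symmetric prior
alpha = update(alpha, (30, 0, 0))    # a robot occupies the voxel at t = 0, 1
alpha = update(alpha, (0, 10, 0))    # the robot has left by t = 2
# The mean is still dominated by "robot" even though the voxel is now free:
print(mean(alpha))
```

Because every past observation keeps its full weight, the posterior mean stays pinned to the class with the most cumulative counts, regardless of when those counts arrived.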
The static world assumption in \eqref{eq:static_assumption} solely relies on the frequency of observations seen in a voxel. This property makes the Dirichlet distribution ignore scene dynamics and become overconfident about classes that contribute more observations over \emph{all time steps} rather than the \emph{current} time step. Therefore, for correct classification, the hyperparameters of the Dirichlet distribution under a static world assumption have to evolve with the scene dynamics. We use a time-series model to account for these discrepancies in the Dirichlet distribution caused by the static world assumption. To forecast $\overline{\boldsymbol{\alpha}}_{j,t}$ of the voxel $j$ when a moving object passes through at timestamp $t-1$, we introduce an auto-regressive (AR) model that leverages the 3D motion information captured from the environment and applies it to the map prior. The AR model is as follows: \begin{equation} \overline{\alpha}_{j,t}^k = e^{-(v_{j,t-1}^k)^2} \alpha_{j,t-1}^k, \label{eq:trans_model} \end{equation} where $e^{-(v_{j,t-1}^k)^2}$ is the AR model's parameter, $\alpha_{j,t-1}^k$ is the prior concentration parameter for class $k$, and $v_{j,t-1}^k$ is the 3D motion field that influences the hyperparameter for each class $k$. We depict a graphical model for this temporal transition model in Figure \ref{fig:temporal_model}. \begin{SCfigure} \centering \includegraphics[width=0.45\linewidth]{media/GraphicalModel.png} \caption{A graphical model for hyperparameter propagation. For each voxel $j$ updated at time $t$ and for each class $k$, the hyperparameter $\alpha_t$ is a deterministic function of the flow $v_{t-1}$ at the previously observed time stamp and the prior $\alpha_{t-1}$.} \label{fig:temporal_model} \end{SCfigure} Before we elaborate further, we introduce some notation to enhance readability.
Let the set of all classes be $\mathcal{L}$, the set of moving classes $\mathcal{M}$ with any class $m \in \mathcal{M}$ and the free voxel category be denoted as ``free.'' Additionally, let the set of all classes excluding any one class $q$ be $\mathcal{L} \setminus q$. The concept behind the transition model is to redistribute the probability mass of the concentration parameters when there is motion observed in the environment. Therefore, we warp the concentration parameters according to the effect the motion of a dynamic object has on \textbf{(i)} its corresponding class and \textbf{(ii)} on other classes. Keeping these two effects in mind, we introduce two modules to predict the concentration parameters $\overline{\boldsymbol{\alpha}}_{j,t}$ for voxel $j$ at timestamp $t$:\par \noindent \textbf{\emph{Backward (exit) correction [BACC]}}: This is required when a moving object of category $m$ is detected in voxel $j$ at timestamp $t-1$, but \emph{exits} at timestamp $t$. We reduce the influence of prior observations $\alpha_{j,t-1}^m$ of $m$ on $\overline{\alpha}_{j, t}^m$ as the object could move out. To do so, we need 3D motion field information from observations of only $m$, i.e., $v_{j,t-1}^m$. The map prior (observations) about other classes $\alpha_{j,t-1}^{k \in (\mathcal{L} \setminus m)}$ is not required. \noindent \textbf{\emph{Forward (entry) correction [FORC]}}: This is essential when we have a significant number of observations indicating a voxel is ``free'' and a moving object of category $m \in \mathcal{M}$ \emph{enters it} at the next time step. The future presence of this object can be represented better by reducing the effect $\alpha_{j,t-1}^{\text{free}}$ has on $\overline{\alpha}_{j,t}^{\text{free}}$. For this operation, we aggregate the 3D motion fields of all $m \in \mathcal{M}$, i.e., movable classes in the neighborhood of voxel $j$, to compute $v_{j,t-1}^{\text{free}}$ in \eqref{eq:trans_model}. 
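To make the transition concrete, the following is a minimal numerical sketch of \eqref{eq:trans_model} with the BACC and FORC effects; the class layout, flow norms, and concentration values are hypothetical and chosen only for illustration:

```python
import numpy as np

# Hypothetical class layout: 0 = "car" (dynamic), 1 = "road" (static), 2 = "free".
def propagate_concentration(alpha_prev, flow_norm):
    """Temporal transition: alpha_bar_t^k = exp(-(v^k)^2) * alpha_{t-1}^k."""
    return np.exp(-flow_norm ** 2) * alpha_prev

alpha_prev = np.array([8.0, 3.0, 1.0])  # strong prior evidence for "car"
# BACC: the car is exiting, so v^car is large; FORC uses the aggregated
# dynamic-object flow for "free" (and the static classes), here small
# because no object is entering.
flow_norm = np.array([1.5, 0.1, 0.1])
alpha_bar = propagate_concentration(alpha_prev, flow_norm)
# Evidence for "car" decays sharply, while "road" is barely affected.
assert alpha_bar[0] < 1.0 and alpha_bar[1] > 2.9
```

A large flow norm thus discounts a class's accumulated evidence almost entirely, while near-zero flow leaves it essentially untouched.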
Additionally, we also utilize $v_{j,t-1}^{\text{free}}$ for all static classes. \subsection{3D Motion Aggregation from Point Clouds}\label{sec:3dflow} Given a voxel $j$, we wish to get a low-level understanding of how an object is moving in and out of it to model $v_{j,t-1}^k$ in \eqref{eq:trans_model}. Scene flow provides us with the underlying 3D motion field of the points in the scene. Given two incoming point clouds $\mathcal{X}_{t-1}$ and $\mathcal{X}_t$, recorded at timestamps $t-1$ and $t$, respectively, we require a translational motion vector $u_i \in \mathbb{R}^3$ that conveys how much a point $x_i \in \mathcal{X}_{t-1}$ has displaced to a new location $x'_i \in \mathcal{X}_t$. In practice, we take the 2-norm of the flow to obtain $v_i = \lVert u_i \rVert_2$. To capture the 3D flow norm $\boldsymbol v_j = (v_j^1, \ldots, v_j^K)$ pertaining to any semantic category around $x_j$, we aggregate the scene flow from training points around it. Thus, given training points $\mathcal{D} = \{(x_i, y_i, \boldsymbol v_i)\}_{i=1}^N$, a kernel \mbox{$\mathcal{K}_v: \mathbb{R}^3 \times \mathbb{R}^3 \rightarrow[0, 1]$} is used to weight the influence of each point $x_i$ on $x_j$ so that the closer a dynamic object of class $k$ is to the voxel center, the more influence it has on $v_j^k$. In particular, this becomes a kernel density estimation problem and the per-class scene flow is calculated as: \begin{equation} v_j^k =\frac{1}{N} \sum_{i=1}^N \mathcal{K}_v(x_i, x_j) v_i^{[y_i = k]}. 
\label{eq:velocity} \end{equation} \begin{algorithm}[t] \caption{Scene Flow Aggregation} \label{al:sceneflow} \small \begin{algorithmic}[1] \State \textbf{Input:} Training data $\mathcal{D}_t: (\mathcal{X}_t, \mathcal{Y}_t, \mathcal{V}_t)$ with N points; $\textbf{Query point}: x_j$; \textbf{Previous flow estimate} $\boldsymbol{v}_{j,t-1}$; \textbf{Flow weight}: $\gamma\in [0,1]$ \Procedure{AggregateFlow}{$\mathcal{D}_t, x_j, \boldsymbol{v}_{j,t-1}$} \For{each $(x_i, y_i, v_i) \in (\mathcal{X}_{t}, \mathcal{Y}_{t}, \mathcal{V}_{t})$} \State $w_v \gets \mathcal{K}_v(x_i, x_j)$ \Comment{$x_i$'s influence on BACC}\label{alg:vback} \State $w_v^{\text{free}} \gets \mathcal{K}_v^{\text{free}}(x_i, x_j)$ \Comment{$x_i$'s influence on FORC}\label{alg:vfor} \For{$m \in \mathcal{M}$} \Comment{for all dynamic classes} \State $v_{j, t}^m \gets v_{j, t}^m + w_v v_i^{[y_i=m]}$ \Comment{\eqref{eq:velocity}} \State $v_{j, t}^{\text{free}} \gets v_{j, t}^{\text{free}} + w_v^{\text{free}} v_i^{[y_i=m]}$ \Comment{\eqref{eq:velfree}} \EndFor \EndFor \For{$m \in \mathcal{M}$} \Comment{Normalize, weighted average $v_{j,t-1}$ } \State $v_{j, t}^m \gets \frac{\gamma}{N}v_{j, t}^m + (1-\gamma) v_{j, t-1}^m$ \EndFor \State $v_{j, t}^{\text{free}} \gets \frac{\gamma}{N}v_{j, t}^{\text{free}} + (1-\gamma) v_{j, t-1}^{\text{free}}$ \For{$k \in \mathcal{L} \setminus \mathcal{M}$} \Comment{Assign free velocity to static classes} \State $v_{j, t}^k \gets v_{j,t}^{\text{free}}$ \EndFor \State \textbf{return} $\boldsymbol{v}_{j, t}$ \EndProcedure \end{algorithmic} \end{algorithm} In Section \ref{sec:transition}, we introduced FORC for free and other static classes. 
As these classes would never have associated flow, we calculate their per-class flow with those of the dynamic objects around them as: \begin{equation} v_j^{\text{free}} =\frac{1}{N} \sum_{i=1}^N \mathcal{K}_v^{\text{free}}(x_i, x_j) v_i^{[y_i \in \mathcal{M}]}, \label{eq:velfree} \end{equation} where $\mathcal{M}$ is the set of dynamic classes and \mbox{$\mathcal{K}_v^{\text{free}}: \mathbb{R}^3 \times \mathbb{R}^3 \rightarrow [0, 1]$} is a spatial kernel that weights the influence of dynamic training points $x_i$ on $x_j$. Specific details about $\mathcal{K}_v$ and $\mathcal{K}_v^{\text{free}}$ will be discussed in Section~\ref{sec:experiments}. In Algorithm \ref{al:sceneflow}, we detail how the per-class scene flow for a query point $x_j$ is estimated using the positional $\mathcal{X}_t$, semantic $\mathcal{Y}_t$, and egomotion-compensated scene flow $\mathcal{V}_t$ information of each point in a point cloud. For \textbf{BACC}, we aggregate the scene flows of the training points encountered around $x_j$ in line \algref{al:sceneflow}{alg:vback}. For \textbf{FORC}, we aggregate the flows of the training points while weighting the ones in neighbouring voxels more (than in BACC) in line \algref{al:sceneflow}{alg:vfor}. \subsection{Map Posterior Update for Scene Propagation}\label{sec:consolidate} Section \ref{sec:transition} describes how we account for the change in concentration parameters of the Dirichlet distribution due to the motion of objects. Using this result and following a Bayesian approach, $\overline{\alpha}_{j,t}^k$ in \eqref{eq:trans_model} can be substituted as the prior in \eqref{eq:spatialupdate}, i.e., \begin{equation} \label{eq:spatiotemporal} \alpha_{j, t}^{k} = \overline{\alpha}_{j, t}^k + \sum_{i=1}^N \mathcal{K}_s(x_i, x_j) [y_i = k] . \end{equation} Algorithm \ref{al:dynamic_semantic_mapping} consists of prediction and update steps as in recursive Bayes filtering.
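As a sketch of this predict-update recursion, the following toy example applies the temporal prediction of \eqref{eq:trans_model} followed by the kernel-weighted update; all values are hypothetical, and an illustrative Gaussian radial kernel stands in for the spatial kernel, whose actual form is specified in Section~\ref{sec:experiments}:

```python
import numpy as np

def spatial_kernel(x_i, x_j, length_scale=0.5):
    # Illustrative stand-in for K_s; the kernel choice here is an
    # assumption for the sketch, not the paper's actual kernel.
    return float(np.exp(-np.sum((x_i - x_j) ** 2) / (2 * length_scale ** 2)))

def posterior_update(alpha_prev, flow_norm, points, labels, x_j):
    # Prediction step: temporal transition on the prior (BACC/FORC).
    alpha = np.exp(-flow_norm ** 2) * alpha_prev
    # Update step: kernel-weighted accumulation of semantic observations.
    for x_i, y_i in zip(points, labels):
        alpha[y_i] += spatial_kernel(x_i, x_j)
    return alpha

# Hypothetical voxel with classes 0 = "car", 1 = "road": the car has left
# (large flow norm) and new points near the voxel centre observe "road".
alpha_prev = np.array([6.0, 2.0])
flow_norm = np.array([1.2, 0.05])
points = np.array([[0.1, 0.0, 0.0], [0.0, 0.2, 0.0]])
labels = [1, 1]
alpha_t = posterior_update(alpha_prev, flow_norm, points, labels, np.zeros(3))
# The MAP estimate now favours "road" over "car".
theta_hat = (alpha_t - 1) / (alpha_t.sum() - len(alpha_t))
assert theta_hat[1] > theta_hat[0]
```

Without the prediction step, the six units of stale ``car'' evidence would dominate the roughly two units of new ``road'' evidence.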
For the prediction step in line \algref{al:dynamic_semantic_mapping}{alg:pred}, we apply the temporal transition model with the query point's flow estimate $\boldsymbol{v}_{j, t-1}$. The prediction step with \textbf{BACC} enables removal of tracks left by moving objects ``exiting'' voxels. For example, if a car was in motion at timestamp $t-1$ and moving out of a voxel $j$, calculating $v_{j, t-1}^{\text{car}}$ ensures that the map maintains confidence about a static class such as ``road'' $\overline{\alpha}_{j, t}^{\text{road}}$ and decreases confidence about the car $\overline{\alpha}_{j, t}^{\text{car}}$. Additionally, our algorithm, with \textbf{FORC}, facilitates the ``entry'' of dynamic objects into previously encountered areas in the map by reducing overconfidence in ``static'' and ``free'' classes. With the update step in line \algref{al:dynamic_semantic_mapping}{alg:correct}, incoming spatial and semantic training data $(\mathcal{X}_t, \mathcal{Y}_t)$ is incorporated. \begin{algorithm}[t] \caption{Dynamic Semantic Mapping} \label{al:dynamic_semantic_mapping} \small \begin{algorithmic}[1] \State \textbf{Input:} Training data: $(\mathcal{X}_{t}, \mathcal{Y}_{t}, \mathcal{V}_{t})$; \textbf{Query point}: $x_j$; \textbf{Previous state estimate} $\boldsymbol{\alpha}_{j,t-1}$; \textbf{Flow estimate} $\boldsymbol{v}_{j, t-1}$ \Procedure{Combined Posterior Update}{}\label{alg:proc2} \For{$k=1,...,K$} \Comment{Prediction step} \State $\alpha_{j, t}^k \gets \exp( - (v_{j,t-1}^k)^2 )\alpha_{j, t-1}^k$ \Comment{Incorporate motion}\label{alg:pred} \For{each $(x_i, y_i) \in (\mathcal{X}_{t}, \mathcal{Y}_{t})$} \Comment{Update Step} \State $w_s \gets \mathcal{K}_s(x_i, x_j)$ \Comment{Weight of $x_i$ on observation} \State $\alpha_{j, t}^k \gets \alpha_{j, t}^k + w_s [y_i = k]$ \label{alg:correct} \EndFor \EndFor \State \textbf{return} $\boldsymbol{\alpha}_{j, t}$ \EndProcedure \end{algorithmic} \end{algorithm} \section{Background on
BKI Semantic Mapping} \label{sec:preliminaries} Bayesian kernel inference (BKI)~\cite{gan2019bayesian}, for static semantic mapping, assumes map cells are indexed by \mbox{$j \in \mathbb{Z}^+$}. The $j$-th map cell with semantic probability \mbox{$\boldsymbol{\theta}_j = (\theta_j^1, ..., \theta_j^K)$}, where $\theta_j^k$ is the probability of the $j$-th cell belonging to the $k$-th category, has the Categorical likelihood \mbox{$p(y_i \mid \boldsymbol{\theta}_j ) = \prod_{k=1}^K (\theta_j^k)^{[y_i = k]}$}. Here, \mbox{$y_i \in \{1, ..., K\}$} is the semantic measurement at position \mbox{$x_i \in \mathbb{R}^3$} recorded in or around the $j$-th cell, and \mbox{$[y_i = k]$} evaluates to 1 if \mbox{$y_i = k$}, 0 otherwise. The semantic measurement $y_i$ is usually the semantic label output by a neural network. Given training data with $N$ measurement points \mbox{$\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$}, semantic mapping seeks the \emph{posterior} distribution \mbox{$p(\boldsymbol{\theta}_j \mid \mathcal{D})$} for each map cell $j$. For a closed-form solution, BKI semantic mapping adopts a conjugate \emph{prior} over $\boldsymbol{\theta}_j$, given by a Dirichlet distribution $\textnormal{Dir}(K, \boldsymbol{\alpha}_0)$, where \mbox{$K \geq 2$} is the number of categories, and \mbox{$\boldsymbol{\alpha}_0 = (\alpha_0^1, ..., \alpha_0^K)$}, \mbox{$\alpha_0^k \in \mathbb{R}^+$} are concentration parameters for each category. 
Applying Bayes' rule and Bayesian kernel inference, the posterior is another Dirichlet distribution, given by \mbox{$\textnormal{Dir}(K, \boldsymbol{\alpha}_j)$}, \mbox{$\boldsymbol{\alpha}_j = (\alpha_j^1, ..., \alpha_j^K)$}, and: \begin{equation} \label{eq:spatialupdate} \alpha_j^k = \alpha_0^k + \sum_{i=1}^N \mathcal{K}_s(x_i, x_j) [y_i = k], \end{equation} where \mbox{$\mathcal{K}_s: \mathbb{R}^3 \times \mathbb{R}^3 \rightarrow [0, 1]$} is a \emph{spatial} kernel function defined on 3D Euclidean spaces to capture the spatial correlation of two 3D positions, and $\alpha_j^k$ is the $k$-th concentration parameter of the \emph{query} voxel $j$ with center $x_j \in \mathbb{R}^3$. Given $\boldsymbol{\alpha}_j$, the maximum a posteriori (MAP) estimate of $\boldsymbol{\theta}_j$ can then be computed in closed-form: \begin{equation} \label{eq:mode} \hat{\theta}_j^k = \frac{\alpha_j^k - 1}{\sum_{k=1}^K \alpha_j^k - K}, \quad \alpha_j^k > 1. \end{equation} In BKI semantic mapping, the prior distribution of the map at time stamp $t$ is directly set to the posterior at time stamp \mbox{$t - 1$} (assuming that the map does not change between two time stamps), i.e., \mbox{$p(\boldsymbol{\theta}_{j, t} \mid \mathcal{D}_{1:t-1}) = p(\boldsymbol{\theta}_{j, t-1} \mid \mathcal{D}_{1:t-1})$}, to allow recursive Bayesian updates using sequential training data: \begin{align} \label{eq:static_assumption} p(\boldsymbol \theta_{j,t} \mid \mathcal{D}_{1:t}) &\propto p(\mathcal{D}_t \mid \boldsymbol{\theta}_{j, t}, \mathcal{D}_{1:t-1}) p(\boldsymbol{\theta}_{j, t} \mid \mathcal{D}_{1:t-1}) \nonumber \\ & = p(\mathcal{D}_t \mid \boldsymbol{\theta}_{j,t}) p(\boldsymbol{\theta}_{j, t-1} \mid \mathcal{D}_{1:t-1}). \end{align} However, this assumption is easily violated by moving objects in the environment or environmental changes.
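To make this failure mode concrete, consider a toy numerical sketch (two classes, unit kernel weights, hypothetical observation counts): under the static update, evidence accumulated while an object occupied a voxel continues to dominate long after the object has left:

```python
import numpy as np

# Classes: 0 = "car", 1 = "free"; symmetric Dirichlet prior.
alpha = np.array([1.0, 1.0])
for _ in range(20):      # a car occupies the voxel for 20 scans
    alpha[0] += 1.0      # each scan adds unit-kernel-weighted evidence
for _ in range(5):       # the car leaves; 5 "free" observations follow
    alpha[1] += 1.0
# MAP estimate (alpha_k - 1) / (sum(alpha) - K) still labels the voxel "car".
theta_hat = (alpha - 1) / (alpha.sum() - len(alpha))
assert theta_hat[0] > theta_hat[1]
```

Here the stale ``car'' belief remains at roughly $0.8$ even though the voxel has been observed free for five consecutive scans, which is precisely the overconfidence the temporal transition model is designed to correct.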
As such, we model the transition from $p(\boldsymbol{\theta}_{j, t-1} \mid \mathcal{D}_{1:t-1})$ to $p(\boldsymbol{\theta}_{j, t} \mid \mathcal{D}_{1:t-1})$ using spatial and temporal information. \section{Related work: Dynamic Semantic Mapping} \subsection{Filtering out Dynamic Objects during Map Building} \subsubsection{Geometric \& Deep Learning Approaches on Features} Cheng et al. \cite{cheng2020robust} proposed a method that generates a point cloud map of the environment based on the localization result and then combines it with CRF-RNN-based semantic segmentation to produce a semantic map. Results were compared to ORB-SLAM \cite{orbslam} on the TUM RGB-D dataset and showed much greater improvement on the more dynamic sequences. Brasch et al. \cite{brasch2018semantic} combined semantic information with a probabilistic model that rejects dynamic objects as outliers. Furthermore, they argued for utilizing descriptive ORB features over direct methods, which are sensitive to illumination changes. In the event that a map point cannot be initialized with ORB features, their system falls back on a resolution pyramid to do so. Each map point maintains a depth estimate \emph{d}, an inlier ratio $\phi$ that reflects the stability of the map point, and a {\textcolor{red}{matching accuracy parameter}}, which are then used to evaluate whether a map point is static or dynamic. These results are then combined with semantic probabilities predicted by a convolutional neural network (CNN) to determine the semantics of a map point. The authors present results in dynamic environments on Virtual KITTI, Cityscapes and Synthia against ORB-SLAM* as a benchmark.
{\textcolor{purple}{While the authors' approach of fusing the observations with the matching accuracy parameter lends some confidence in tracking a feature correctly, the lack of instance-level segmentation could lead to points being erroneously tracked when two objects of the same semantic class ``cross over''. Also, instead of tracking multiple features separately inside a semantic segmentation instance, wouldn't it be less computationally intensive to treat them as one abstraction? Essentially, all the features of one ``semantic segmentation'' instance may have similar depth information because they belong to the same object.}} DynaSLAM \cite{dynaslam} is another such method where the authors combine a priori information about \emph{potentially dynamic} semantic classes with keypoint tracking and matching. Moreover, to deal with patches around keypoints being classified as dynamic, the variance in the depth image is factored into the labelling. The authors evaluated their results on the TUM RGB-D and KITTI datasets as well, comparing their method to DVO-SLAM, DSLAM and ORB-SLAM2. On TUM RGB-D, they outperform DVO-SLAM and DSLAM and perform similarly to ORB-SLAM2. On the KITTI dataset, better results are obtained in sequences which {\textcolor{purple}{contain more structural objects. The authors could track features on the basis of direct methods as a fail-safe if they are not able to extract keypoints from objects lacking texture or at a far away distance (as done by Brasch et al. \cite{brasch2018semantic}). The advantage of the approach is the distinction provided to objects that are ``movable,'' so as to filter them on the basis of depth information and inpaint the background occluded by dynamic objects by leveraging multiple views. However, an obvious disadvantage is that a priori dynamic classes are neither tracked nor mapped despite having potentially useful information that could aid in pose estimation (e.g.
a person being static)}.} DS-SLAM \cite{yu2018ds} combines semantic segmentation (using SegNet \cite{badrinarayanan2017segnet}) with a moving consistency check to filter out dynamic objects. The moving consistency check involves a pipeline for tracking features with optical flow, computing the fundamental matrix with RANSAC inliers to match points in subsequent keyframes and, finally, rejecting outliers on the basis of the distance from a matched point to its epipolar line. The work is evaluated on the TUM RGB-D dataset and compared to ORB-SLAM2 for ATE and RMSE. There is significant improvement observed in the sequences that are more dynamic in comparison to the static ones. More recently, Cui and Ma proposed SOF-SLAM \cite{cui2019sof}, wherein optical flow, coupled with semantic segmentation, is used for detection and removal of dynamic features. Results were evaluated on the TUM RGB-D dataset and comparisons were made with the RGB-D ORB-SLAM2 system as well as a version of ORB-SLAM2 that only uses semantic information to filter dynamic objects. They also compare results to DynaSLAM \cite{dynaslam}, DS-SLAM \cite{yu2018ds} and Detect-SLAM \cite{detect} for the $w_{halfsphere}, w_{rpy}, w_{static}$ and $w_{xyz}$ sequences. {\textcolor{purple}{Like other work in this section, DS-SLAM and SOF-SLAM completely eliminate dynamic objects from their semantic map-building frameworks, and neither work utilizes optical flow for tasks other than discarding points that are mobile with respect to the sensor.}} Another recent work, by Ma et al. \cite{ma2019dsod}, builds a framework that employs a depth prediction network and semantic segmentation over a DSO \cite{dso} base. The depth prediction network is motivated by the uncertainty surrounding the initialized candidate points from the current key frame. High uncertainty could lead to false projection pairs being generated, and the network narrows down the depth search interval for this candidate point.
Semantic segmentation with Mask-RCNN is post-processed with a scalar representing the semantic label and the image is converted to a grayscale image. The points in the image are finally classified as static or dynamic after a motion consistency check, and the dynamic points are discarded for pose estimation. Results are evaluated on the KITTI dataset and compared to DSO, with significant improvements in the translational root mean square error. Chen et al. \cite{chen2019suma++} proposed a method to leverage semantic information to filter out dynamic objects. The map representation is in the form of surfels, and the incoming LiDAR point cloud is processed using a spherical projection in the form of a vertex map. Using the vertex map, a normal map is also generated. RangeNet++ \cite{milioto2019rangenet++} is used to classify each point with a label and the classification results are back-projected onto the 3D point cloud. To filter out dynamic points, the authors check the semantic consistency between the new observation (where labels are taken into consideration) and the world model. If the labels are not compatible, the surfels are assumed to be dynamic and a stability log-odds ratio is updated to mark them so. The ``unstable'' surfels are not added to the map. The algorithm is evaluated on the KITTI Road Sequences and the KITTI Odometry benchmark, comparing relative errors averaged over trajectories against SuMa \cite{behley2018efficient} and LOAM \cite{zhang2014loam}. \textcolor{purple}{Despite not tracking objects and removing surfels in the globally consistent map that are inconsistent with new semantic observations, the approach does not eliminate potentially dynamic semantic classes, which is advantageous and unlike approaches delineated previously in this review.
However, the removal of surfels from the map is \emph{completely dependent} on the assumption that the semantic segmentation is foolproof and that any inconsistency between a new observation and the global map must mean the point is dynamic. When this assumption fails, static objects that would otherwise provide useful features for aligning point cloud scans will be removed, deteriorating performance. } \subsubsection{Object-Oriented Approaches} SLAM++ \cite{salas2013slam++} is another such method, where each node in the map is a homogeneous transformation of an object pose or a historical pose of a handheld RGB-D camera. The authors use camera-model ICP to determine the pose and use individual object-camera relative pose estimates to cull out ``moved'' or ``moving'' objects from the pose graph later. Their work is demonstrated qualitatively. {\textcolor{purple}{The main disadvantage of this method is that the full set of object instances with their geometric shapes has to be known and initialized beforehand.}} \subsubsection{Scene Flow Approaches} Alcantarilla et al. \cite{alcantarilla2012combining} discuss taking advantage of dense scene flow to increase the accuracy of SLAM in the presence of dynamic objects, using a stereo camera rig. Their approach involves computing dense scene flow from the right and left disparity maps, 2D optical flow and 3D scene flow \cite{vedula1999three}. First, the motion vector for each of the pixels between frames is calculated. Second, the uncertainty (covariance matrix $\Sigma$) surrounding the scene flow vector is taken into account with the Jacobian of the scene flow with respect to the measurements and the measurement noise matrix. The residual motion likelihood is then computed with the motion vector $M$ and the scene flow uncertainty $\Sigma$. This motion likelihood is applied in their visual odometry pipeline to filter out RANSAC inliers that could belong to dynamic objects.
Their method is evaluated in scenes of railway stations and on crowded city streets. Building on the work by Alcantarilla et al. \cite{alcantarilla2012combining}, DOS-SLAM \cite{xu2019slam} uses dynamic instance-based object segmentation for building a static semantic octo-map tree. After performing instance segmentation on RGB images with YOLACT, the authors use multi-view geometry to judge ORB feature points that do not satisfy geometric constraints as dynamic. This classification is based on epipolar geometry and the flow vector bound introduced in \cite{alcantarilla2012combining}. If the number of dynamic feature points on an object exceeds a certain threshold, the object is categorized as dynamic. These results are then filtered out when building the semantic octo-tree map. Results are evaluated on the TUM RGB-D dataset and compared to ORB-SLAM2 \cite{orbslam2}. \textcolor{purple}{A global critique of the work in this section: dynamic objects are neither tracked nor added to the map, and are simply ignored for the purpose of pose estimation. A promising alternative is a SLAM framework where each scene has an object-level representation tracked with rigid scene flow. Since SuMa \cite{behley2018efficient} is a base framework wherein a globally consistent map is built by computing rigid transformations between frames/scenes, using rigid scene flow to estimate the velocity and transformation of each new object measurement with respect to a previous frame, and then retroactively inserting it into a scene, may prove helpful.} \subsection{Leveraging or Propagating Dynamic Object Information} \subsubsection{Geometric \& Deep Learning Approaches on Features} Zhong et al. \cite{detect} proposed Detect-SLAM, wherein each feature point is assigned a moving probability and propagated frame-by-frame while simultaneously being updated in the local map. Features extracted on moving objects are then removed before camera pose estimation.
An object ID is predicted for every detected region in image space, and an object point cloud is generated and transformed using the camera pose to be inserted into the object map. Another interesting aspect of the work is that the same object point cloud inserted into the map is projected into the 2D image plane to propose candidate regions containing the same object. \textcolor{purple}{This particular step prevents frame-to-frame object initialization and tracking. The existing object point cloud aids in generating proposals in the detection thread, which are then assigned the same object ID.} The algorithm was evaluated on the TUM RGB-D dataset and the results were compared to ORB-SLAM \cite{orbslam}. {\color{purple} Although dynamic objects are tracked with the association of a moving probability, they are discarded for the purpose of pose estimation. } Schorghuber et al. \cite{schorghuber2019slamantic} propose a dynamics factor that combines information about \emph{how dynamic} a semantic class is with its detection consistency. This factor is then used to categorize 2D (image) feature-to-3D point matches as static, dynamic or static-dynamic. Static points are used to initialize a camera pose estimate, which is then used for geometric validation of the static-dynamic points. Valid 2D point matches in the ``static'' group and 3D point matches in the ``static-dynamic'' group are finally used to compute the final pose estimate. The method is motivated by being able to utilize classes that are \emph{expected to be dynamic} but are actually static to improve localization (e.g. a parked car). If the parked car, which was incorporated into localization, starts moving through the course of the SLAM process, it does not cause issues because it becomes ``dynamic'' in the dynamics factor classification. Moreover, not rejecting the semantic class ``car'' for localization increases the number of points that can be used for pose estimation.
Their method is evaluated on the Virtual KITTI dataset, where they outperform an ORB-SLAM2 \cite{orbslam2} baseline. Their results on Cityscapes demonstrated success in situations where the baseline fails. Additionally, results are compared to DynaSLAM \cite{dynaslam} on the KITTI and TUM datasets. {\color{purple} {While the work effectively utilizes features that would otherwise be considered dynamic, an obvious downside is that, due to their dynamic nature, one needs more data for pose estimation when there are fewer features and only \emph{potentially dynamic} classes. Moreover, the ``dynamics factor'' is highly dependent on knowledge of semantic classes and parameter tuning.}} Recurrent OctoMap is another semantic mapping approach that operates on 3D LiDAR scans. Initially employing PointNet \cite{qi2017pointnet} (pre-trained on the KITTI dataset) to extract semantic features, the approach then targets long-term localization on the map. To do this, a recurrent OctoMap inherits functions from OctoMap \cite{octomap}, where the semantic observations and states are stored and with which dynamic changes can be observed. New LiDAR scans and LiDAR odometry obtained from LOAM \cite{zhang2014loam} are used to transform the point cloud to a map frame. Each recurrent-OctoMap cell is assigned a semantic feature created by average pooling of these points. Dynamic objects are only kept in the map for five minutes in their experiments. Lastly, the authors propose to model the fusion of semantic observations with an LSTM \cite{lstm}. Results are validated on the ETH Parking Lot dataset with four categories of objects, and compared to a Bayesian update baseline. \subsubsection{Object-Oriented Approaches} Fusion++ \cite{mccormac2018fusion++} is another method where Mask-RCNN-generated instance segmentation is used to initialize reconstructions of objects, after which \textcolor{red}{they are fused with voxel foreground masks.
Each voxel without an object has an existence probability associated with it.} Although the work does not focus on working with dynamic objects, local object tracking is done after initialization. The method is demonstrated on the TUM RGB-D dataset. MaskFusion \cite{maskfusion} is another method where the SLAM system maintains a 3D representation of objects in a scene, which are tracked and fused over time by using object labels to associate surfels with the right model. Segmentation is done on an instance level with Mask R-CNN \cite{maskrcnn} and the extracted masks are matched with geometric segmentation labels from an edginess-based depth map. Upon mask-label matching, a new object is initialized. \textcolor{red} {The pose of the model is tracked by jointly minimizing a geometric and photometric error function and the remaining surfel variables are updated by projective data association.} Results on ATE and RPE are compared, most importantly, to VO-SF and Co-Fusion on slightly dynamic and highly dynamic sequences in the TUM RGB-D dataset. Their work performs better in highly dynamic scenarios. {\textcolor{purple}{The work depends on the known object models of its 80 semantic classes to do data association and is limited to tracking rigid objects.}} Huang et al. developed ClusterSLAM \cite{huang2019clusterslam}, where rigid bodies are identified along with their rigid motion estimates. For a long-term assignment of a landmark, the authors first describe the motion inconsistencies of multiple landmarks with a motion distance matrix and then perform hierarchical clustering to merge landmarks into the final static or dynamic rigid bodies. The shape and trajectory of landmarks are updated by a noise-aware point cloud registration and subsequent factor-graph optimization. Lastly, the approach chunks frames for pose graph optimization for the sake of efficiency; however, while the quality of clustering increases with larger chunk sizes, the accuracy decreases.
The authors demonstrate their algorithm on KITTI and two synthetic datasets, SUNCG and CARLA, and evaluate the ATE and RPE with, most importantly, tracking-by-semantic-segmentation. They also compare their real-time SLAM performance with DynSLAM \cite{barsan2018robust}. {\textcolor{purple}{ The authors provide a framework comparable to leveraging a semantic segmentation framework and eliminate semantic-geometric-object data association from the tracking pipeline, which may be advantageous. However, their method seems to rely on the quality of landmark extraction and data association and there being sufficient landmarks to detect an object. }} Xu et al. \cite{xu2019mid} introduced MID-Fusion, where the transform of the camera with respect to the world coordinates $T_{C_{L}W}$ is tracked with geometric and photometric error using ICP. From each live frame, objects are retrieved after ray-casting from the camera's estimated pose and then, using a modified error formulation for $T_{C_{L}W}$, dynamic objects are marked (at an inlier ratio less than 0.9). Data association \textcolor{purple}{(Why this can't be real-time)} of objects in the map with the objects in the live frame is done by rendering the masks of existing objects and comparing them to Mask-RCNN \cite{maskrcnn}-generated masks via IoU. Dynamic objects are tracked in every live frame with respect to the camera pose by estimating the associated transform ($T_{C_LO_L}$), but alignment is done for the vertex map in the live frame with that of the vertex map in a reference frame. The segmentation masks are further pruned after ICP is done again using photometric error, now aligning the reference map with the live frame instead. Using Vespa et al.'s approach \cite{vespa2018efficient}, they leverage colour and geometric information to generate a TSDF and store this information in an associated voxel (semantics are averaged instead of fused via Bayesian updates).
New objects are initialized with their own coordinate frame with respect to the world frame and an SDF by projecting pixels to world coordinates. Lastly, to spend less time on ray-casting, a foreground probability mask previously ascertained with Mask-RCNN is utilized. The authors compare their results with, most notably, DynaSLAM \cite{dynaslam} and MaskFusion \cite{maskfusion} on the TUM RGB-D dataset. While they \textcolor{red}{outperform work focusing on dense approaches, they do not outperform DynaSLAM (feature-based)}. \textcolor{red}{Because the authors have a separate coordinate system for each object, the reconstructions do not collide with each other. The authors claim the quality of their reconstructions is superior to that of MaskFusion, which relies on a surfel-based approach. Using octrees lends memory efficiency, as discussed later in this literature review.} \textcolor{purple}{Some notable advantages of their approach are pruning the mask by incorporating the motion of the object and using an object-centric coordinate system instead of virtual camera tracking. The \textbf{quality of their reconstructions} was better than that of their counterparts and comparable to segmentation masks initialized with ground truth. However, the premise of their object tracking algorithm was to mask out human beings and not track them. Moreover, their results are limited to indoor/table-top environments.} \subsubsection{Scene Flow Approaches} DynSLAM \cite{barsan2018robust} uses scene flow both for pose estimation and to track objects and make accurate reconstructions. The authors use a stereo setup to obtain instance segmentations, and compute visual odometry from scene flow and the depth map using ELAS or DispNet. The results of these three are then combined and separated into virtual frames, one for the background and one for each of the potentially dynamic objects.
Using visual odometry, each new detection's motion is classified as static, dynamic or uncertain and then initialized as a "virtual frame" containing only that object's data. The volumetric reconstruction of each object is then done with InfiniTAM using this "virtual frame" and the object's estimated 3D motion. The artifacts arising during reconstruction in outdoor environments cause the authors to use a garbage collection method to remove every voxel block that has a minimum TSDF value above a certain threshold. The authors evaluate the accuracy and completeness of their reconstruction method on the KITTI odometry and tracking datasets and demonstrate results using ELAS and DispNet (DispNet performs far better on reconstruction completeness while ELAS does on accuracy). {\textcolor{purple}{The obvious advantages of this approach are how the authors leveraged scene flow both for odometry and for classifying object motion. However, objects are tracked frame-by-frame on the basis of IoU (i.e., some sort of data association) only and this can lead to erroneous tracking and fusing of results (even with information about the depth map). This approach is also not robust to deformable objects like pedestrians, despite depth/scene flow information in their pipeline. Moreover, the garbage collection procedure removes voxels after a certain step size has elapsed - their representation could be unsuitable for mapping applications? According to \cite{huang2019clusterslam}, DynSLAM also suffers from cumulative drift.}} Ushani et al. \cite{aushani-2018a} propose training a neural network pipeline for learning features from occupancy grid maps to aid in scene flow computation for LiDAR scans. The loss function is designed to increase the distance between non-matched features and assign correct semantic class labels to all features.
In the evaluation phase, an energy function serves as a metric for scene flow computation from a position $x_1$ in an occupancy grid map $G_t$ to another position $x_2$ in the map $G_{t+1}$ at the next time step $(t+1)$. Therefore, it takes into account the learned distance between the features and the residual error for scene flow computation. Scene flow is computed by minimizing this energy function via the EM algorithm. The E-step finds the latent variable $x_2$ in the map $G_{t+1}$ for every $x_1$ and the M-step updates the estimate of $x_1$ that led to scene flow to $x_2$. The authors evaluated their results on the KITTI dataset and compared their method to their previous approach of using occupancy constancy. Results on scene flow estimates of dynamic semantic classes were within an error margin of 30cm more than 88\% of the time. Dewan et al. \cite{dewan2016rigid} assume local geometric constancy to estimate a dense rigid motion field that explains the motion between two LiDAR scans. This is formulated as an energy minimization problem where the energy potentials for a data term (based on SHOT descriptors) and a regularization term (based on creating a neighbourhood from LiDAR scans) are taken into consideration. The authors evaluate their method on the KITTI dataset for rigid objects and four kinds of human body-arm motions for non-rigid objects and compare it to ICP and RANSAC-preprocessed ICP. Quantitative results are significant in the case of non-rigid objects. They also observe motion estimates of points farther away from the sensor to be different from those closer due to the higher rotational motion of the former; however, the estimates found are still accurate. FlowNet3D: FlowNet3D can process two 3D point clouds scanned at two successive time steps and produce the scene flow for each point in the first point cloud. It consists of three layers that perform functions specific to capturing spatio-temporal relationships.
The first layer, i.e., the point feature learning layer (also called the set conv layer) relies on farthest point sampling to obtain a smaller subset of region centres. These centres are used to learn a filter that extracts new local features in the region surrounding these centroids. The second type of layer, i.e., the flow embedding layer, takes both the point clouds $P$ and $Q$ into consideration and learns a filter similar to the set conv filter in the previous layers. The objective of this filter, however, is to compute, for a point in the first point cloud, the flow votes of the neighbouring points in the second point cloud. The flow embeddings are extracted using the Euclidean and feature distance between the points in the clouds. As the objective of the DNN architecture is to estimate the flow of each point in the input point cloud, target points (computed from the flow embedding in the previous layer), together with the original point cloud, are fed into an up-convolution layer. The method upsamples point features according to their nearby points’ features rather than interpolating. A Huber loss and a cycle-consistency loss based on shifting the second cloud back to the first one are utilised in the loss function. The authors also re-sample the input point clouds during inference and average the predicted flow. Their method is evaluated on the FlyingThings3D dataset and the KITTI scene flow dataset, and the results are compared to PRSM, LDOF and OSF (which all used RGB-D or stereo input). The authors do not discuss any differences between using dense or sparse point clouds but outperform Dewan et al.’s work on rigid scene flow estimation. The main drawback of this work is that the complexity is linear in the number of points in the input point cloud. The convolution operations suggest high dependence on searching the neighbourhood of every point encountered within a specific radius.
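The radius-bounded neighbourhood search underlying the flow embedding layer (and the cost concern noted above) can be illustrated with a simple non-learned stand-in that averages displacements instead of applying a learned filter:

```python
import numpy as np

def naive_flow_embedding(P, Q, radius=1.0):
    """For each point in P, average the displacements to its neighbours
    in Q within `radius`. This is a non-learned stand-in for FlowNet3D's
    flow embedding layer, illustrating the per-point neighbourhood
    search that dominates its cost."""
    flows = np.zeros_like(P)
    for i, p in enumerate(P):
        d = Q - p                              # (M, 3) displacements
        mask = np.linalg.norm(d, axis=1) < radius
        if mask.any():
            flows[i] = d[mask].mean(axis=0)    # aggregate "flow votes"
    return flows
```

The brute-force loop makes the dependence explicit: every point in $P$ must be compared against candidates in $Q$, which FlowNet3D accelerates with sampling but cannot avoid entirely.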
HPLFlowNet: The idea behind HPLFlowNet is to take an input unstructured point cloud and enforce structure by storing it into a d-dimensional lattice, instead of grappling with performing a computation on each point individually. As the number of vertices of the Delaunay cell that contains a lattice point grows exponentially with dimension, the authors compute a permutohedral grid by barycentric interpolation. Operation on the lattice is achieved by using a three-step convolution BCL (Bilateral Convolutional Layer) – the steps are \emph{splat – convolve – slice}, which is akin to gathering points into a d-dimensional lattice, performing sparse convolutions on them and then interpolating the processed signals back to the same unstructured point cloud. This process replaces the vertices with d-simplices so that BCL has a performance comparable to one on a low-dimensional integer lattice. As the time and space complexity in BCL is linear in the number of points, the authors split BCL into two steps – a down-convolution (DownBCL) and an up-convolution (UpBCL). A series of DownBCLs creates a sparse, volumetric representation of the unstructured point cloud, which becomes coarser at every layer in a manner that is independent of the input size. Similarly, an UpBCL acts on making the spatial representation finer until slicing is done on the last layer. Using multiple convolutional layers in this way instead of a single BCL layer allows a "deeper" architecture and reduces discrepancies arising from the asymmetry of barycentric interpolation. This pre-processing step is exploited to “splat” the two point clouds observed at consecutive time steps together into the same permutohedral lattice.
To compare the two point clouds and calculate the scene flow, the authors introduce CorrBCL, which takes into account the “patch” correlation between the neighbourhoods of the two point clouds and leverages the maximum distance movable between scans to create these neighbourhoods in the first place. As the point cloud is sparse and of variable density, density normalisation is done when the input point cloud is “splatted” for the first time, and they later report that it helps the network generalise better under variable point densities. The network architecture is such that the two consecutive point clouds are individually down-sampled with DownBCL, mixed together with subsequent CorrBCL layers and then sliced with the UpBCL layer. As the output at each layer is a lattice at a coarser or finer scale than the preceding layer, skip connections are added between the DownBCL and the UpBCL at the same scales. To ensure translational invariance, the relative pose between a point and its enclosing simplex is concatenated with the input signal at each layer. Lastly, the loss function used is the End Point Error loss. The authors compare their results to FlowNet3D, ICP, SPLATFlowNet and the original BCL on the FlyingThings3D dataset and the KITTI scene flow dataset and outperform their counterparts, arguing that it may be due to their approach towards enforcing a structured representation and pipeline for processing both the unstructured point clouds. MotionNet: In this work, similar to HPLFlowNet, the input point cloud is pre-processed into a pre-defined structure before being fed into the neural network architecture. For a given frame, all the past frames are transformed to the current coordinate system, not only to take into account past frames as input but also to aggregate more points to disambiguate static background from moving objects. The BEV-map representation involves the creation of a 2D pseudo-image from a 3D voxel lattice.
Each cell is associated with a binary vector along the vertical axis, over which a 2D convolution can be applied. Each frame is stored as a BEV map, and a sequence of these 2D pseudo-images is fed into a spatio-temporal pyramid network. The STPN is composed of STC (spatio-temporal convolution) blocks, each of which is a sequence of two 2D convolutions and a degenerate 3D convolution that works on the temporal domain. In order to learn at multiple resolutions, both the spatial and temporal convolutions are applied at multiple scales and then fused using global temporal pooling. The result is fed into the up-sampling layers for the decoder output. The objective of this work, contrary to the previous two, is firstly, to classify each BEV map cell's category, secondly, to posit whether it is dynamic or not and thirdly, to estimate where the cell will be moving to in the future. To effectively deal with jitter introduced in flow estimation by regressing future positions of cells (via an L1 loss), the authors threshold the motion of cells that are considered background or static. The authors claim that their method can perceive “unobserved” objects better than by tracking a bounding box prediction. For the task of classification and state-estimation, the authors calculate a cross-entropy loss, weighted differently to mitigate class imbalance. Lastly, another composition of loss functions to capture spatio-temporal dependencies is used. It consists of a spatial consistency loss, a foreground temporal consistency loss (both of which are dependent on the maximum motion possible between scans) and a background temporal consistency loss (that acts on the transformation of the background across frames). The authors evaluate their method on the nuScenes dataset and use the five previous frames for each frame for which flow is computed.
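The construction of the binary BEV pseudo-image described above can be sketched as follows; the ranges, cell size and number of vertical bins are illustrative assumptions rather than MotionNet's actual configuration:

```python
import numpy as np

def point_cloud_to_bev(points, x_range=(-32.0, 32.0), y_range=(-32.0, 32.0),
                       z_range=(-3.0, 2.0), cell=0.25, z_bins=13):
    """Discretize a point cloud into a binary BEV pseudo-image: each
    ground-plane cell holds a binary occupancy vector along the vertical
    axis, so ordinary 2D convolutions can be applied over the grid."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    dz = (z_range[1] - z_range[0]) / z_bins
    bev = np.zeros((nx, ny, z_bins), dtype=bool)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    iz = ((points[:, 2] - z_range[0]) / dz).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny) & (iz >= 0) & (iz < z_bins)
    bev[ix[ok], iy[ok], iz[ok]] = True   # binary vector along the vertical axis
    return bev
```

Points from past frames, once transformed into the current coordinate system, would simply be concatenated before this discretization step.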
The motion prediction performance is compared across 3 groups – static, slow ($<$ 5 m/s) and fast ($>$ 5 m/s) – and the average L2 displacements across scans are compared to the ground-truth displacements. For classification, they report a cell classification accuracy. Results are benchmarked against HPLFlowNet, FlowNet3D and a static model of the environment, although the models of the first two are trained on the FlyingThings3D dataset and not nuScenes. \subsection{Map Representation} Surfels vs. TSDFs: A surfel surface, unlike a TSDF, can be moved without needing to update the truncation region around it. Surfels are similar to point clouds except that they encode local surface properties like the radius and normal map. TSDF fusion methods apparently present some overhead due to having to switch representations between tracking and mapping \cite{maskfusion}. \subsubsection{Truncated Signed Distance Functions} Nie{\ss}ner et al. \cite{neissner2013} argue that traversing hierarchical data structures like octrees leads to thread divergence in GPUs and that the main failings of KinectFusion \cite{newcombe2011kinectfusion} are the computational burden and memory consumption. As an improvement, they introduce a new approach wherein voxels are projected onto incoming depth maps to construct corresponding TSDFs. Each voxel is swept to update the colour, SDF and weight, and voxels outside the \textcolor{red}{truncation region of the observed surface} are marked as free space. The TSDFs can be accessed, updated and deleted using a voxel hashing scheme that associates buckets with a 3D point in space. Within a bucket, there are hash table entries that point to a \textcolor{red}{voxel block that contains $8^3$ voxels}. In case the bucket is full, an offset variable is used to point to an entry corresponding to the same 3D point. Before the next depth image is stored, raycasting is done from the current camera pose to estimate the iso-surface.
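The voxel-block hashing scheme described above can be sketched as follows, assuming the spatial-hashing primes of Teschner et al. commonly used for this purpose; the $8^3$ block dimension follows the description above, while the voxel size and bucket count are illustrative:

```python
# Primes widely used for spatial hashing (Teschner et al.); assumed here,
# not quoted from the paper.
P1, P2, P3 = 73856093, 19349669, 83492791

def block_coord(p, voxel_size=0.008, block_dim=8):
    """World point -> integer coordinates of its 8^3-voxel block."""
    return tuple(int(c // (voxel_size * block_dim)) for c in p)

def bucket_index(block, num_buckets=2**21):
    """Hash a voxel-block coordinate into a bucket of the hash table."""
    x, y, z = block
    return ((x * P1) ^ (y * P2) ^ (z * P3)) % num_buckets
```

A lookup maps a world point to its block coordinate, hashes it to a bucket, and scans the bucket's entries (following the offset pointer on overflow) for the matching block.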
Using the depth and colour information from this iso-surface, point-to-plane projective ICP is done with the new depth map to estimate the new 6DOF camera pose. What is notable about this approach is \textcolor{red}{that frame-to-model tracking is an improvement over the frame-to-frame tracking that was reported in Newcombe et al. \cite{newcombe2011kinectfusion} as suffering from drift.} \textcolor{red}{Another notable technique in the paper is how surfaces are updated in the voxels. If the same surface is observed again in the current frustum, it is updated by means of a \textcolor{red}{running average}. Otherwise, it is primed for garbage collection. Garbage collection can be invoked when, for a \textcolor{red}{voxel block}, the minimum absolute TSDF value exceeds a threshold or the maximum weight (which denotes "nearness" to the sensor) is 0}. The authors compare their frame rates to extended (or moving volume) regular grids and the work by Chen et al. \cite{chen2013scalable} that is based on a hierarchical approach. They also report the optimal bucket size (2), memory consumption (1G for voxels and 0.3G for hash tables) and resolution (8 mm) that gave them the best performance. K\"{a}hler et al. \cite{KahlerPVM16} improve over areas otherwise unaddressed in Nie{\ss}ner et al.'s work by aiming to \textcolor{red}{reduce memory consumption} and \textcolor{red}{increase speed}. Map-building is approached with the purpose of being multi-resolution, i.e., to save memory on areas where fine resolution is not required and to provide higher resolutions on reconstructions that require finer detail. To incorporate this hierarchical structure into voxel hashing, the authors suggested hash tables be built at L levels such that the higher the level, the coarser the resolution of the data a hash entry is pointing directly to. If the hash entry is not directly pointing to a voxel and is instead pointing to a lower level, the resolution of the data stored will be finer.
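The running-average surface update and the garbage-collection test described above can be sketched as below; the truncation threshold and maximum weight are illustrative values, not those of the original implementation:

```python
def fuse_voxel(tsdf, weight, sdf_new, w_new=1.0, w_max=128.0):
    """Weighted running-average update of a voxel's TSDF value when the
    same surface is observed again; the weight is capped at w_max."""
    fused = (tsdf * weight + sdf_new * w_new) / (weight + w_new)
    return fused, min(weight + w_new, w_max)

def is_garbage(block_tsdf_values, block_max_weight, tau=0.9):
    """A voxel block qualifies for garbage collection if its minimum
    absolute TSDF exceeds a threshold (no surface nearby) or its
    maximum weight is zero (never observed)."""
    return min(abs(v) for v in block_tsdf_values) > tau or block_max_weight == 0
```

The cap on the accumulated weight keeps the map responsive to new observations instead of freezing onto early ones.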
Block allocation is done similarly to Nie{\ss}ner et al.'s~\cite{neissner2013} approach but contingent on which block the existing hash table is on. The decision of whether a finer resolution is required for a block is based on whether a node exceeds a complexity criterion that is dependent on the roughness of the surface. Similarly, the decision on whether a level should be at a coarser representation is made according to whether the children of a particular voxel block all fall below a particular complexity criterion. Upon splitting, eight blocks for the children are allocated and the hash table entries are updated with an atomic compare-and-swap operation. Merging is done with subsampling and splitting with bilinear interpolation. To reconstruct the iso-surface via ray-casting, the pose of the camera needs to be known, and specific care is taken when reading in interpolated values, especially when 2 out of 8 of the surrounding voxel blocks have a different resolution from the block being compared. The reconstruction results show improvement over an 8mm-resolution uniform grid and closeness to a 2mm one. Moreover, 40-50\% memory savings are reported. The interpolation scheme affects the frame rate in comparison to K\"{a}hler et al. \cite{kahler2015very} but is still 80-100 Hz. Amongst other improvements, InfiniTAM v3 \cite{prisacariu2017infinitam} builds upon Nie{\ss}ner et al.'s work as the map representation but also \textcolor{red}{provides support for a surfel-based representation \cite{keller2013real}}, and a globally-consistent TSDF-based reconstruction that divides the scene into rigid submaps and optimizes the relative poses between them \cite{kahler2016real}. Vespa et al. \cite{vespa2018efficient} proposed an alternative oct-tree based map that can be rendered as a TSDF or occupancy map. To begin with, a representation is introduced wherein voxels are arranged sparsely \& non-contiguously and indexed with oct-trees.
Voxel hashing is done with Morton codes, in order to combine the ray-casting step of building a TSDF with tree traversal. The tree-traversal order for all possible configurations is pre-computed, and during a nearest-neighbour search or tri-linear interpolation the optimal configuration is chosen to select the optimal sampling point. Finally, the pipeline involves \begin{enumerate}\item a tracking phase for frame-to-model alignment, where ray-casting is done over a point cloud generated from the depth, vertex and normal map and, \item a fusion phase to integrate this data into the map. \end{enumerate} In the fusion phase, a ray is marched along the line-of-sight within a particular bandwidth of the depth measurement, with which new parts of the scene are allocated into the voxels introduced previously. (Note that the ray-marching is done from the root to the first intersected leaf node as in \cite{laine2010efficient}, performing tri-linear interpolation only near zero-crossings. Surface gradients are calculated via central differences on intersections.) Then, each voxel is projected into the current depth image with its known pose to compute the TSDF value. Finally, this TSDF is integrated into the global one using block averaging. \textcolor{purple}{ An interesting idea put forth by the authors is that both occupancy grids and TSDFs encode the surface implicitly as a zero-crossing.} The authors report the accuracy of their SLAM system on the RGB-D dataset and the ICL-NUIM dataset, achieving lower ATEs than InfiniTAM \cite{prisacariu2017infinitam}. They also report the total runtime of their fusion, raycasting and remaining steps. Strecke et al. \cite{strecke2019fusion} introduced EM-Fusion, an approach that deals with data association and occlusion handling probabilistically. The map and pose graph are updated by formulating an MLE problem, wherein the posterior of the pose and map is maximized given depth images.
Each pixel is associated with an object or, alternatively, the background via a latent variable. The posterior is optimized separately for the camera pose and then for the map. The map is stored by constructing SDFs that are stored in voxels and set up such that the SDF value at a point within the grid is found through trilinear interpolation. The probability of each point in 3D space in any voxel belonging to the foreground/background is updated with the Mask-RCNN \cite{maskrcnn} segmentation likelihood. To account for occlusions, the authors only update regions where the Mask-RCNN-generated segmentation mask matches the projected mask of the object volume (determined via IoU by reprojecting the object volume into the image with raycasting). New objects are dealt with by associating an existence probability with new objects as in Fusion++ \cite{mccormac2018fusion++}. The posterior over the latent variable is approximated through the likelihood function. This likelihood is modelled with a mixture distribution according to whether the pixel falls inside the map volume of an object. Lastly, the authors track objects by following the approach in Bylow et al. \cite{bylow2013real}, wherein only one trilinear interpolation lookup is done for the data association of an incoming depth map pixel to the surface. Finally, they perform estimation over the posterior with the Expectation-Maximization algorithm. The camera is tracked in the M-step with regard to the background TSDF (due to foreground masks being inaccurate), after which the association probabilities are recomputed and the object TSDF locations are also tracked. The map is updated in the M-step by integrating the new depth image into the SDF recursively with the measured depth difference, latent variable prior and weights. Most importantly, the authors compare their method with MaskFusion \cite{maskfusion}, MID-Fusion \cite{xu2019mid} and Co-Fusion. Their approach outperforms Co-Fusion and MaskFusion when evaluated on the RGB-D dataset.
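The single trilinear-interpolation SDF lookup used for associating a depth pixel with the stored surface can be sketched as (a generic sketch, with the query point given in voxel units):

```python
import numpy as np

def trilinear_sdf(grid, p):
    """Trilinearly interpolate an SDF stored on a regular voxel grid at
    a continuous query point p = (x, y, z) in voxel units."""
    x0, y0, z0 = (int(np.floor(c)) for c in p)
    dx, dy, dz = p[0] - x0, p[1] - y0, p[2] - z0
    s = 0.0
    # Blend the 8 corner values of the enclosing voxel cell.
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = ((dx if i else 1 - dx) * (dy if j else 1 - dy)
                     * (dz if k else 1 - dz))
                s += w * grid[x0 + i, y0 + j, z0 + k]
    return s
```

A single such lookup per depth pixel keeps the data-association cost constant per measurement, which is why it is attractive for tracking.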
They attribute their performance to tracking using direct SDF alignment in comparison to MaskFusion's ICP tracking with the static background. The authors highlight that \textcolor{red}{instead of masking out non-rigid dynamic objects such as human beings like their counterparts, they associate non-rigid objects with the object volumes rather than the background, enabling them to track the camera with respect to the background.} \subsubsection{Surfels} \subsubsection{Scene Graphs} Rosinol et al. \cite{rosinol20203d} introduced dynamic scene graphs. A node in a graph is modelled after a "spatial concept" and the edges as spatio-temporal relationships between them. The map is built on the basis of 5 layers of abstraction, which affords disambiguation between classes that could belong to the foreground or background, be static or dynamic and be traversable or non-traversable. The last two layers of abstraction help group these entities according to their spatial relationships. Edges can represent relationships between spatial entities that aren't otherwise captured - like contact, co-visibility, distance and relative size. A semantic mesh is constructed using Kimera \cite{rosinol2019kimera}, the robot pose is tracked with IMU-aware optical flow and all objects other than humans are extracted with their CAD model or with unknown shape. Human beings are extracted from a panoptic 2D image using a Graph-CNN \cite{zhao2019semantic} and tracked. Tracking is contingent on the bounding box encapsulating the human having a minimum number of pixels and on the feasible motion of joints in the meshes from one time step to the next. To exclude the human from the static mesh, they feed the "human-being mesh" back into the static mesh by using the free-space information available from ray-casting towards the depth of the human pixels.
\textcolor{purple}{It is \textbf{unclear} how they will track dynamic objects other than human beings but the concept of \textbf{dynamic masking} may be generalizable to other applications such as oct-trees as well.} Finally, for topological parsing of the map, the authors convert the mesh to a 3D ESDF using VoxBlox \cite{oleynikova2017voxblox} and develop the hierarchical relationships previously described. \section{Results and Discussion} \label{sec:label} \begin{figure*}[ht] \centering \begin{minipage}{\textwidth} \begin{minipage}[t]{.7\textwidth} \begin{tabular}{m{.5cm}|m{0.23\textwidth}|m{0.22\textwidth}|m{0.22\textwidth}} \hline & Gazebo Simulation Snapshot & Without & With \\ \hline \rotatebox{90}{\textbf{BACC}}& \includegraphics[width=0.23\textwidth,trim={1.2cm 0 0 0},clip]{media/back_base.jpg}& \includegraphics[width=0.23\textwidth,trim={0 1cm 0 0},clip]{media/BWO.jpg}& \includegraphics[width=0.227\textwidth,trim={0 1cm 0 0},clip]{media/BW.jpg} \\ \hline \rotatebox{90}{\textbf{FORC}}& \includegraphics[width=0.23\textwidth,trim={1.2cm 0 0 2cm},clip]{media/forward_base.jpg}& \includegraphics[width=0.23\textwidth,trim={0 0.5cm 0 1cm},clip]{media/FWO.jpg}& \includegraphics[width=0.23\textwidth,trim={0.5cm 1cm 0 1cm},clip]{media/FW.jpg}\\\hline \end{tabular} \end{minipage} \hspace{-1.5cm} \begin{minipage}{.35\textwidth} \caption{\textbf{Ablation Studies with Gazebo Simulation.} The images in the top row demonstrate the functionality of \textbf{BACC}, while the bottom row that of \textbf{FORC}.\\ \textbf{(Gazebo Simulation Snapshot)}: In this column, we show the top view of the gazebo simulation. The ego robot is demarcated within a white square, while other moving Turtlebots are highlighted and marked with their orientation. \textbf{(Without)}: The images in the middle column show the global map made without specific modules. Traces are left in the map where Turtlebots were present in previous time steps without BACC. 
Without FORC, the map fails to represent the other two Turtlebots in the map. \textbf{(With)}: The right-most column shows the global maps constructed with our approach. Minimal traces are left in the map, and the Turtlebots are in their correct locations.} \label{fig:ablation} \end{minipage} \end{minipage} \end{figure*} In this section, we present results evaluating the method on both simulated and real scenarios with dynamic objects. We first describe our experimental setup, and then present qualitative results on a synthetic data set generated in Gazebo and on the SemanticKITTI~\cite{behley2019semantickitti} data set. Quantitative results on the SemanticKITTI Semantic Segmentation competition~\footnote{\href{https://competitions.codalab.org/competitions/20331}{https://competitions.codalab.org/competitions/20331}}, in which our method placed second out of 69 participants at the time of this submission, are also presented. Our code will be made publicly available after receiving the final decision. \subsection{Experimental Setup} We first (i) describe our kernel design choices, (ii) elaborate on the data sets used and (iii) lastly, discuss the scene flow networks used to obtain scene flow from point cloud data. \subsubsection{Kernel Design Choices}\label{sec:experiments} In the mapping framework, every query voxel has 6 neighbours (one on each face). Weights from only the training points observed in each of the six neighbouring voxels are used in the calculation of both \eqref{eq:velocity} and \eqref{eq:velfree}. We choose a sparse kernel~\cite{melkumyan2009sparse} as $\mathcal{K}_v^{\text{free}} = $ \begin{align} \small \nonumber &\begin{cases} \sigma_1\left[\frac{1}{3}\left(2 + \cos\left(\frac{2\pi d}{l_1}\right)\right)\left(1 - \frac{d}{l_1}\right) + \frac{1}{2\pi}\sin\left(\frac{2\pi d}{l_1}\right)\right] & \text{if } d < l_1\\ 0 & \text{otherwise} \end{cases} \end{align} where \mbox{$d = \lVert x-x'\rVert$}, $l_1 > 0$ is the length scale and $\sigma_1$ the kernel scale parameter.
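As a sanity check, this sparse kernel can be evaluated numerically; the default values $l_1 = 0.2$ and $\sigma_1 = 50$ below match the ablation parameter table, but the snippet only illustrates the kernel's compact support and is not the mapping code:

```python
import numpy as np

def sparse_kernel(d, l1=0.2, sigma1=50.0):
    """Melkumyan-Ramos sparse kernel: smooth, equal to sigma1 at d = 0,
    and exactly zero for d >= l1 (compact support)."""
    d = np.asarray(d, dtype=float)
    k = sigma1 * ((2 + np.cos(2 * np.pi * d / l1)) / 3 * (1 - d / l1)
                  + np.sin(2 * np.pi * d / l1) / (2 * np.pi))
    return np.where(d < l1, k, 0.0)
```

The compact support is what restricts each query voxel's estimate to training points in its immediate neighbourhood.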
To compute $\mathcal{K}_v$ in \eqref{eq:velocity}, we use an isotropic Mat\'ern kernel~\cite{stein1999interpolation}, \mbox{$\mathcal{K}_v(x,x') = \sigma_2[(1 + \frac{\pi d}{2l_2}) e^{-\frac{\pi d}{2l_2}}]$}, where $d = \lVert x-x'\rVert$, $l_2 > 0$ the length scale and $\sigma_2$ the kernel scale parameter. Typically, the kernel length scales for both $\mathcal{K}_v$ and $\mathcal{K}_v^{\text{free}}$ are chosen with respect to the resolution of the map being built as it controls how much influence a point in a neighbouring cell has. In our experiments, $l_1$ is always twice the map resolution and $\sigma_1$ can be set once in the beginning according to the size of the point set and free-space sampling resolution. \subsubsection{Data sets and Benchmarks}\label{sec:gaz_env} We use two point cloud-based data sets that contain only positional information. However, our method is amenable to any point cloud data with intensity, colour or other fields. Additionally, it can also be applied to depth camera data, where larger data sets for training and evaluation exist. \begin{figure*}[ht] \centering \subfloat{ \includegraphics[width=0.5\textwidth,trim={0 0 0 0},clip]{media/1_1.jpg} \label{fig:se1_1_qualitative} }\vline \subfloat{ \includegraphics[width=0.5\textwidth,trim={0 0 0 0},clip]{media/4_1.jpg} \label{fig:seq_4_qualitative} } \caption{Qualitative results on Sequences 01 (left half) and 04 (right half) of SemanticKITTI. Four images are shown for both frames, which include (Top Left:) Right stereo image corresponding to one of the scans (not used in our pipeline). (Top Right:) S-BKI mapping without any free space sampling. Trails are left where cars passed over. (Bottom Right:) S-BKI with free space sampling. After a few scans, the map becomes overconfident about the presence of free cells and fails to incorporate dynamic objects in the map. (Bottom Left:) D-BKI with free space sampling. 
The cars are tracked with minimal traces.} \label{fig:qualitative_kitti} \end{figure*} \noindent \textbf{Gazebo Simulation Environment:} To create a synthetic data set, a Gazebo simulation environment was set up with multiple Turtlebots exploring a house. We mounted one robot (the ego-robot) with an omni-directional block laser scanner for data collection in the form of point clouds with \emph{positional information only}. To simulate dynamic objects in the environment, we have three other Turtlebots exploring the same house. Using a reactive planner, the robots avoid each other and obstacles in the environment. The collected data is processed using the Point Cloud Library (PCL) and annotated based on height into three labels - floor, robot and miscellaneous objects such as walls or cabinets. The scene flow for each scan is computed with FlowNet3D.\par \noindent \textbf{SemanticKITTI Data Set:} The SemanticKITTI data set~\cite{behley2019semantickitti} is a large-scale real driving data set based on the KITTI Vision Benchmark, where semantic annotations and camera poses are provided for all sequences. Camera poses are estimated with SuMa \cite{behley2018efficient}, and semantic annotations are generated with RangeNet++\cite{milioto2019rangenet++}. Additional labels are provided to distinguish static objects from dynamic objects, such as person and moving-person. There are 22 sequences, out of which 11 sequences are provided with ground truth labels for training (00-07), validation (08) and testing (09-10). Sequences 11-21 do not come with ground truth semantic labels, but can be evaluated on a public leaderboard using the mean intersection-over-union (mIoU) metric.
1) FlowNet3D \cite{flownet3d}: a supervised method based on PointNet++ \cite{qi2017pointnet} which estimates scene flow between two successive point clouds $\mathcal{X}_{t-1}$ and $\mathcal{X}_t$; 2) TLFPAD \cite{tlfpad}: a self-supervised method based on PointNet++ \cite{PointNetPP} which incorporates temporal information from four point clouds $\mathcal{X}_{t-3:t}$. The predicted scene flow $\mathcal{V}_t$ is validated by comparing the warped cloud $\mathcal{X}_t + \mathcal{V}_t$ with the next point cloud $\mathcal{X}_{t+1}$. Typically, implementations for scene flow estimation train on the XYZRGB fields, i.e., the point cloud includes both position and colour information. We trained an adapted version of FlowNet3D on the KITTI 2015 Scene Flow data set \cite{menze2015object} and the FlyingThings driving data set \cite{mayer2016large} by only including position information for training. After obtaining $\mathcal{V}_t$ from the networks, we perform egomotion compensation on the point cloud by subtracting the mean flow of static classes. \subsection{Qualitative Results} The goal of this section is to (i) demonstrate improvements of the temporal transition model qualitatively through ablation studies, and (ii) compare the real-time map construction by semantic and dynamic BKI on the data sets described in Section \ref{sec:gaz_env}.
\begin{table*}[t] \centering \caption{Mean IoU on SemanticKITTI data set sequences 00-10 (Training) and 11-21 (Testing)~\cite{behley2019semantickitti} for 25 semantic classes.} \label{table:semantickitti} \resizebox{\textwidth}{!}{ \begin{tabular}{llllcccccccccccccccccccccccc} {\bf Seq.} & \multicolumn{1}{l}{\bf Method}& \cellcolor{scarColor}\rotatebox{90}{\color{white}Car} & \cellcolor{sbicycleColor}\rotatebox{90}{\color{white}Bicycle} & \cellcolor{smotorcycleColor}\rotatebox{90}{\color{white}Motorcycle} & \cellcolor{struckColor}\rotatebox{90}{\color{white}Truck} & \cellcolor{sothervehicleColor}\rotatebox{90}{\color{white}Other Vehicle} & \cellcolor{spersonColor}\rotatebox{90}{\color{white}Person} & \cellcolor{sbicyclistColor}\rotatebox{90}{\color{white}Bicyclist} & \cellcolor{smotorcyclistColor}\rotatebox{90}{\color{white}Motorcyclist} & \cellcolor{sroadColor}\rotatebox{90}{\color{white}Road} & \cellcolor{sparkingColor}\rotatebox{90}{\color{white}Parking} & \cellcolor{ssidewalkColor}\rotatebox{90}{\color{white}Sidewalk} & \cellcolor{sothergroundColor}\rotatebox{90}{\color{white}Other Ground} & \cellcolor{sbuildingColor}\rotatebox{90}{\color{white}Building} & \cellcolor{sfenceColor}\rotatebox{90}{\color{white}Fence} & \cellcolor{svegetationColor}\rotatebox{90}{\color{white}Vegetation} & \cellcolor{strunkColor}\rotatebox{90}{\color{white}Trunk} & \cellcolor{sterrainColor}\rotatebox{90}{\color{white}Terrain} & \cellcolor{spoleColor}\rotatebox{90}{\color{white}Pole} & \cellcolor{strafficsignColor}\rotatebox{90}{\color{white}Traffic Sign} & \cellcolor{scarColor}\rotatebox{90}{\color{white} Car-Moving} & \cellcolor{sbicyclistColor}\rotatebox{90}{\color{white} Bicyclist-Moving} & \cellcolor{spersonColor}\rotatebox{90}{\color{white} Person-Moving} & \cellcolor{smotorcyclistColor}\rotatebox{90}{\color{white} Motorcyclist-Moving} & \cellcolor{sothervehicleColor}\rotatebox{90}{\color{white} Other Vehicle-Moving} & \cellcolor{struckColor}\rotatebox{90}{\color{white}
Truck-Moving} & \rotatebox{90}{\bf Average}\\ \hline \vspace{-2mm} \\ \multirow{2}{*}{Training} & Cylinder3D & 0.950 & 0.604 & 0.824 & 0.927 & 0.820 & 0.629 & n/a & n/a & 0.954 & 0.744 & 0.863 & 0.423 & 0.895 & 0.776 & 0.893 & 0.714 & 0.792 & 0.751 & 0.812 & 0.918 & 0.917 & 0.683 & 0.663 & 0.912 & 0.496 & 0.781 \\ & S-BKI & 0.954 & 0.681 & 0.875 & 0.940 & 0.865 & 0.714 & n/a & n/a & 0.959 & 0.788 & 0.871 & 0.451 & 0.912 & 0.803 & 0.895 & 0.723 & 0.803 & 0.754 & 0.823 & 0.929 & 0.906 & 0.772 & 0.613 & 0.793 & 0.585 & 0.800 \\ & D-BKI (Ours) & 0.954 & 0.648 & 0.883 & 0.953 & 0.865 & 0.690 & n/a & n/a & 0.958 & 0.772 & 0.867 & 0.453 & 0.913 & 0.790 & 0.896 & 0.716 & 0.794 & 0.753 & 0.827 & 0.913 & 0.913 & 0.770 & 0.612 & 0.788 & 0.578 & 0.796 \\ \bottomrule \multirow{2}{*}{Testing} & Cylinder3D & 0.946 & 0.676 & 0.638 & 0.413 & 0.388 & 0.125 & 0.017 & 0.002 & 0.907 & 0.65 & 0.745 & 0.323 & 0.926 & 0.66 & 0.858 & 0.72 & 0.689 & 0.631 & 0.614 & 0.749 & 0.683 & 0.657 & 0.119 & 0.001 & 0.0 & 0.525 \\ & D-BKI (Ours) & 0.946 & 0.593 & 0.411 & 0.495 & 0.461 & 0.273 & 0.0 & 0.0 & 0.907 & 0.663 & 0.748 & 0.26 & 0.906 & 0.656 & 0.857 & 0.727 & 0.711 & 0.637 & 0.694 & 0.756 & 0.64 & 0.656 & 0.329 & 0.221 & 0.012 & 0.542 \\ \bottomrule \end{tabular} } \label{tab:all_data} \end{table*} \begin{minipage}{\columnwidth} \begin{minipage}[t]{0.48\columnwidth} \makeatletter\def\@captype{table} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{l|c} \toprule Map resolution & 0.05 \\ Downsampling resolution & 0.1 \\ Free space sampling resolution & 0.5 \\ $l_s$ & 0.15 \\ $\sigma_s$ & 0.2 \\ $l_1$ & 0.2 \\ $\sigma_1$ & 50 \\ \bottomrule \end{tabular}} \caption{Parameters for Ablation Studies.} \label{tab:parameter_table} \end{minipage} \hfill\vline\hfill \begin{minipage}[t]{0.48\columnwidth} \makeatletter\def\@captype{table} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{l|c} \toprule Map resolution & 0.1 \\ Downsampling resolution & 0.1 \\ Free space sampling resolution & 100 
\\ $l_s$ & 0.5 \\ $\sigma_s$ & 0.2 \\ $l_1$ & 2.5 \\ $\sigma_1$ & 1 \\ \bottomrule \end{tabular}} \caption{Parameters for SemanticKITTI results.} \label{tab:quantitative_parameters} \end{minipage} \end{minipage} \subsubsection{Ablation Studies} We perform two ablation studies to demonstrate the function and efficacy of each component of the method. These studies aim to show qualitatively how the global map inference changes without backward (BACC) or forward (FORC) correction. Note that our global map is colored with a gray floor, mustard walls, and red Turtlebots. We annotate the ego-robot building the map with a white box around it. Holes along the floor are typically spaces that the sensor has not yet scanned. We tune only the parameters for $\mathcal{K}_v$ (Table \ref{tab:parameter_table}). $l_s$ and $\sigma_s$ are the spatial kernel length scale and scale parameters, respectively, from~\cite{gan2019bayesian}. \textbf{Without BACC.} To conduct this study, we remove scene flow aggregation for all dynamic classes by setting $v_{j,t-1}^m = 0$ and observe the map as it is built. Results are shown in the top row of Fig.~\ref{fig:ablation}. In the \emph{simulation snapshot}, we highlight the three Turtlebots in the environment that are in motion. \emph{Without BACC}, trails are visible behind each Turtlebot because their motion is not considered during map building. \emph{With BACC}, no trails are left behind and each of the robots has the same (consistent) size due to the incorporation of the temporal transition model. \textbf{Without FORC.} Results for this experiment are shown in the bottom row of Fig.~\ref{fig:ablation}. The \emph{simulation snapshot} shows two moving Turtlebots in the environment. \emph{Without FORC}, the motion of these Turtlebots around a free cell is not considered when computing $v_{j,t-1}^{\text{free}}$ in \eqref{eq:velfree}.
As a result, the prediction step for $\overline{\alpha}_{j, t}^{\text{free}}$ in \algref{al:dynamic_semantic_mapping}{alg:pred} has no effect. If $\alpha_{j, t}^{\text{free}} > \alpha_{j, t}^{\text{robot}}$, then voxel $j$ will be (incorrectly) classified as a free cell. Note that the other two robots are not incorporated into the map, as $\alpha_{j, t}^{\text{free}} > \alpha_{j, t}^{\text{robot}}$ holds for the corresponding voxels. \emph{With FORC}, the map successfully represents the two Turtlebots. \subsubsection{SemanticKITTI data set}\label{sec:kittiqual} We include images from sequences 1 and 4 of the SemanticKITTI data set to highlight the differences between static (S-BKI) and dynamic (D-BKI) mapping, as this is not easily captured in the semantic segmentation competition. These results are shown in Figure \ref{fig:qualitative_kitti}, where, depending on the parameter choice, S-BKI either completely discards dynamic objects over time or leaves them in the map. In contrast, D-BKI is able to accurately represent the moving objects without leaving long trails. \subsection{Quantitative Results} The mapping algorithm is compared quantitatively using the SemanticKITTI benchmark, whose goal is to accurately assign semantic labels to point clouds in a dynamic environment. After building the dynamic map, we query the map at each timestamp to label the incoming point cloud $\mathcal{X}_t$. The experimental parameters are shown in Table \ref{tab:quantitative_parameters}. Semantic labels $\mathcal{Y}_t$ for training are obtained from the Cylinder3D-multiscan model \cite{cylinder3d}, and the data set is divided into training (sequences 00-10) and testing (sequences 11-21). For each point cloud $\mathcal{X}_t$, we compute the per-class mean Intersection over Union (Jaccard index) against the ground-truth labels provided in the SemanticKITTI data set. Our evaluation is limited to the occupied voxels in the map; we do not query unoccupied voxels.
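The per-class IoU described above is the Jaccard index computed independently for each semantic class over the labeled points. A minimal sketch (the function name and array layout are illustrative, not the benchmark's official implementation):

```python
import numpy as np


def per_class_iou(pred, gt, num_classes):
    """Per-class Intersection over Union (Jaccard index) over point labels.

    pred, gt: (N,) integer class labels for each point in the cloud
    Returns an array of length num_classes; classes absent from both the
    prediction and the ground truth are reported as NaN so they can be
    excluded from the mean.
    """
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))  # points labeled c in both
        union = np.sum((pred == c) | (gt == c))  # points labeled c in either
        if union > 0:
            ious[c] = inter / union
    return ious
```

The reported mean IoU is then `np.nanmean(per_class_iou(pred, gt, num_classes))`, averaging only over classes that actually occur.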
Lastly, we compare D-BKI mapping with the static mapping framework (S-BKI) to verify equivalent performance on the occupied voxels. For a fair comparison with S-BKI, we do not sample free space, as S-BKI would become overconfident about free space over multiple scans and fail to map occupied regions accurately. In the absence of free space sampling, we remove FORC from D-BKI and set the prior on $\alpha_0^{\text{free}}$ slightly higher than for the other classes. Table \ref{tab:all_data} shows the performance of our Dynamic-BKI (D-BKI) mapping on the SemanticKITTI data set. Our training results show that BKI mapping improves upon Cylinder3D in nearly every category. While this demonstrates that spatio-temporal smoothing is beneficial for segmentation accuracy, it does not showcase the full capabilities of the map, as a ``full scan'' only captures occupied space. The D-BKI map improves over S-BKI mapping by removing tails from dynamic objects, as demonstrated qualitatively in Section~\ref{sec:kittiqual}. \subsection{Discussion} We showed that a simple auto-regressive transition model enables dynamic scene propagation and rectifies the pitfalls of the static world assumption in the Semantic-BKI mapping algorithm, either by reducing traces in the map or by preventing overconfidence in free space. The work can be applied to any sensor data that can be represented as an XYZ point cloud. Given that acquiring scene flow for a full point cloud is more challenging than acquiring it from camera data, we anticipate that the performance is transferable to other 3D sensors. We also found that self-supervised scene flow achieves comparable performance to supervised scene flow and is therefore potentially more suitable for less structured environments. Building the map at finer resolutions yields significantly better performance, but at a much higher memory and computation cost.
This also limited our ability to tune hyperparameters, as running the longest KITTI sequences took several hours. Additionally, an alternative data set (for multi-scan semantic scene completion) that includes free-space labels to check for tails left by dynamic objects would provide a better basis for comparison. Future work includes investigating methods to compress and streamline data acquisition, working on data sets in unstructured environments, and investigating memory-based alternatives to the auto-regressive model.
{\small
\bibliographystyle{ieee}
\begin{table} \begin{center} \begin{tabular}{|l|c|} \hline Method & Frobnability \\ \hline\hline Theirs & Frumpy \\ Yours & Frobbly \\ Ours & Makes one's heart Frob\\ \hline \end{tabular} \end{center} \caption{Results. Ours is better.} \end{table} \subsection{Illustrations, graphs, and photographs} All graphics should be centered. Please ensure that any point you wish to make is resolvable in a printed copy of the paper. Resize fonts in figures to match the font in the body text, and choose line widths which render effectively in print. Many readers (and reviewers), even of an electronic copy, will choose to print your paper in order to read it. You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic. When placing figures in \LaTeX, it's almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below {\small\begin{verbatim} \usepackage[dvips]{graphicx} ... \includegraphics[width=0.8\linewidth] {myfile.eps} \end{verbatim} } \subsection{Color} Color is valuable, and will be visible to readers of the electronic copy. However ensure that, when printed on a monochrome printer, no important information is lost by the conversion to grayscale. \section{Final copy} You must include your signed IEEE copyright release form when you submit your finished paper. We MUST have this form before your paper can be published in the proceedings. {\small \bibliographystyle{ieee}
\section{Introduction} \label{sec:intro} Whether thermalization occurs in isolated quantum many-body systems has attracted much attention since the birth of quantum mechanics~\cite{Neumann_29}. A recent impetus for exploring this question comes from experimental advances in ultracold quantum gases, in which nearly isolated quantum systems are realized and used routinely as quantum simulators~\cite{Lewenstein_Sanpera_2007, Bloch_Dalibard_08, Bloch_Dalibard_2012, Langen_Geiger_15, eisert_friesdorf_review_15}. Thermalization has been observed experimentally in nonintegrable (quantum-chaotic interacting) systems~\cite{Trotzky2012, Kaufman2016, clos_porras_16, tang_kao_18}, as it had been observed earlier in numerical simulations~\cite{rigol_dunjko_08, Rigol_09_Breakdown, Rigol_09_Quantum} (see Ref.~\cite{dalessio_kafri_16} for a review). On the other hand, lack of thermalization has been observed experimentally in near-integrable systems~\cite{Kinoshita2006, gring_kuhnert_12, langen15a, wilson_malvania_20, Malvania_Zhang_21}, as well as in early numerical simulations of integrable quantum dynamics~\cite{rigol_dunjko_07, rigol_muramatsu_06} (see Ref.~\cite{vidmar16} for a review). Integrable systems are the focus of this work. Thermalization in nonintegrable systems is understood in terms of the eigenstate thermalization hypothesis (ETH)~\cite{deutsch_91, srednicki_94, srednicki_99, rigol_dunjko_08, dalessio_kafri_16}. 
The ETH can be written as an ansatz for the matrix elements of few-body observables $O_{\alpha\beta}\equiv\langle\alpha|\hat O|\beta\rangle$ in the energy eigenstates $\{|\alpha\rangle\}$~\cite{srednicki_99, dalessio_kafri_16}, \begin{equation}\label{eq:ETH} O_{\alpha\beta}=O(\bar E)\delta_{\alpha\beta}+e^{-S(\bar E)/2} f_O (\bar E, \omega)R_{\alpha\beta}\,, \end{equation} where the average energy of pairs of eigenstates is $\bar E=(E_\alpha+E_\beta)/2$, the difference is $\omega=E_\alpha-E_\beta$, $S(\bar E)$ is the thermodynamic entropy at energy $\bar E$, $R_{\alpha\beta}$ is a random (in general, normally distributed) variable with zero mean and unit variance, and $O(\bar E)$ and $f_O (\bar E, \omega)$ are smooth functions of their arguments. Since the thermodynamic entropy is an extensive quantity away from the edges of the spectrum, $e^{-S(\bar E)/2}$ is exponentially small in the system size. For $\bar E$ close to the center of the energy spectrum, $e^{-S(\bar E)/2}\simeq 1/\sqrt{D}$, where $D$ is the size of Hilbert space. The smoothness of the diagonal matrix elements as functions of the energy $\bar E$ makes the agreement between the observable after equilibration and statistical mechanics possible, while the smallness of the off-diagonal matrix elements ensures the smallness of the temporal fluctuations after equilibration. Thanks to many computational studies, over the last fifteen years we have sharpened our understanding of the differences between integrable systems (which do not exhibit eigenstate thermalization) and nonintegrable ones (which do), see, e.g., Refs.~\cite{rigol_dunjko_08, Rigol_09_Breakdown, Rigol_09_Quantum, Santos_Rigol_10, Biroli_Kollath_10, Khatami_Pupillo_13, Ikeda_Watanabe_13, Beugeling_Moessner_14, beugeling_moessner_15, Alba_15, LeBlond_Mallayya_19, Mierzejewski_Vidmar_20, LeBlond_Rigol_Eigenstate_20}, and Ref.~\cite{dalessio_kafri_16} for a review. 
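The suppression $e^{-S(\bar E)/2}\simeq 1/\sqrt{D}$ of the off-diagonal matrix elements can be illustrated numerically. The following Python sketch (not part of the calculations reported here; the GOE-like Hamiltonian and the observable are illustrative choices) shows that the off-diagonal elements of a simple observable in the eigenbasis of a random matrix are $O(1/\sqrt{D})$:

```python
import numpy as np

# Illustrative sketch: for a GOE-like random Hamiltonian, the off-diagonal
# matrix elements of a simple traceless observable in the energy eigenbasis
# are suppressed as e^{-S/2} ~ 1/sqrt(D), as in the ETH ansatz above.
rng = np.random.default_rng(0)
D = 400
H = rng.normal(size=(D, D))
H = (H + H.T) / np.sqrt(2 * D)                      # GOE-like Hamiltonian
_, V = np.linalg.eigh(H)                            # columns = energy eigenstates

O = np.diag(np.sign(np.arange(D) - D / 2 + 0.5))    # a simple traceless "observable"
O_eig = V.T @ O @ V                                 # matrix elements O_{alpha beta}

off = O_eig[~np.eye(D, dtype=bool)]                 # off-diagonal elements only
print(np.std(off) * np.sqrt(D))                     # O(1): |O_ab| ~ 1/sqrt(D)
```

The rescaled standard deviation is $O(1)$, consistent with $|O_{\alpha\beta}|\sim e^{-S(\bar E)/2}$ near the center of the spectrum.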
Within integrable systems, we have also learned about the crucial effect of interactions, and that noninteracting systems are very special (as we will discuss later). The presence of interactions, even in models that can be mapped onto noninteracting ones (such as hard-core boson models), results in integrable dynamics that is fundamentally different from that in noninteracting systems~\cite{wright_rigol_14}. For a paradigmatic integrable interacting model, the spin-1/2 XXZ chain, two important observations have been made recently about the matrix elements of observables in energy eigenstates~\cite{LeBlond_Mallayya_19}. The first one is that the off-diagonal matrix elements are {\it dense} (the overwhelming majority of them do not vanish, in contrast to noninteracting systems, in which they are {\it sparse}). One can therefore define a meaningful function $V_O(\bar E,\omega)=e^{S(\bar E)}\,\overline{|O_{\alpha\beta}|^2}$, which we refer to as the scaled variance. It can be seen as the analog of the $|f_O(\bar E,\omega)|^2$ function in Eq.~\eqref{eq:ETH}. $V_O(\bar E,\omega)$ has been shown to be a smooth function of $\omega$, fixing $\bar E$ to be at the center of the spectrum, for various observables~\cite{LeBlond_Mallayya_19, LeBlond_Rigol_Eigenstate_20, Brenes_LeBlond_20, brenes_goold_20}. We note that $|f_O(\bar E,\omega)|^2$ for nonintegrable models, and $V_O(\bar E,\omega)$ for integrable ones, control (together with the initial state) the dynamics of the specific observable. Those functions can be probed experimentally, e.g., by measuring heating rates~\cite{Mallayya_Rigol_19}. The second observation is about the distribution of the off-diagonal matrix elements, and it is the focus of this work. 
In contrast to the Gaussian distributions of matrix elements that are generic for nonintegrable systems~\cite{beugeling_moessner_15, luitz_barlev_16, khaymovich_haque_19, LeBlond_Mallayya_19, Brenes_LeBlond_20, brenes_goold_20, LeBlond_Rigol_Eigenstate_20, santos_perezbernal_20, noh_21, brenes_pappalardi_21}, the distributions of matrix elements in the spin-1/2 XXZ chain were found to be close to skewed log-normal-like distributions~\cite{LeBlond_Mallayya_19, brenes_goold_20, LeBlond_Rigol_Eigenstate_20}. The distributions of matrix elements of observables in the spin-1/2 XXZ chain were studied using full exact diagonalization in the presence of translational invariance in Refs.~\cite{LeBlond_Mallayya_19, LeBlond_Rigol_Eigenstate_20}, and for chains with open boundary conditions in Ref.~\cite{brenes_goold_20}. Because of the exponential increase in complexity of those calculations with the chain size, the largest chains studied had $L=26$ sites. This prevented an accurate characterization of the distributions and of their scaling with the chain size. The main focus of this work is models of hard-core bosons in one-dimensional lattices, i.e., bosons that exhibit an infinite on-site repulsion, with particle-number conservation and no inter-site interactions. Such models are mappable onto noninteracting spinless fermion models. Our goal is to use them to gain a more accurate understanding of the distributions of off-diagonal matrix elements of observables in integrable models in the presence of interactions, and of their scalings with the system size. We study the occupation of quasimomentum modes (nonlocal one-body observables), which can be measured in experiments with ultracold quantum gases. We consider both the translationally invariant model and the Aubry-Andr\'e model. 
The dynamics of various observables in the latter model were studied in Ref.~\cite{rigol_fitzpatrick_11}, where equilibration to the predictions of a generalized Gibbs ensemble was shown to occur in the delocalized regime. The dynamics of the same observables in the noninteracting spinless fermion model were studied in Ref.~\cite{He_Santos_13}, along with the diagonal matrix elements of the occupation of the zero quasimomentum mode in the hard-core boson model. The latter study revealed the expected lack of compliance with the ETH due to the integrability of the model. Here we discuss the differences between the behavior of the off-diagonal matrix elements of observables in the hard-core boson model and in the noninteracting spinless fermion model to which the former can be mapped. We then show that, in the delocalized regime of the hard-core boson model, the distributions of off-diagonal matrix elements of the occupation of the quasimomentum modes are well described by generalized Gamma distributions~\cite{gamma_distribution}. We also show that results reported in Ref.~\cite{LeBlond_Rigol_Eigenstate_20} for the distribution of the off-diagonal matrix elements of a local observable in the spin-1/2 XXZ chain are well described by a generalized Gamma distribution, suggesting that such distributions are generic in integrable interacting models. The paper is organized as follows. In Sec.~\ref{sec:general}, we discuss the general differences between the off-diagonal matrix elements of few-body observables in systems consisting of noninteracting spinless fermions (which are sparse) and of hard-core bosons (which, for nonlocal observables, need not be sparse). In Sec.~\ref{sec:HCBstranslation}, we study the properties of the off-diagonal matrix elements of the occupation of the zero quasimomentum mode of hard-core bosons in the presence of translational invariance. 
Sections~\ref{sec:fermionsAA} and~\ref{sec:HCBsAA} are devoted to studying the effect of breaking translational invariance, as well as of localization, in the context of the Aubry-Andr\'e model. In Sec.~\ref{sec:fermionsAA} we discuss results for noninteracting fermions, while in Sec.~\ref{sec:HCBsAA} we discuss results for the corresponding model of hard-core bosons. A discussion of the relevance of our results beyond hard-core boson models is presented in Sec.~\ref{sec:xxz}. Specifically, we show that a generalized Gamma distribution describes the distribution of the off-diagonal matrix elements of an observable studied in Ref.~\cite{LeBlond_Rigol_Eigenstate_20} in the integrable spin-1/2 XXZ chain. We summarize our results in Sec.~\ref{sec:summary}. \section{Noninteracting spinless fermions vs hard-core bosons} \label{sec:general} We begin with a general discussion of the properties of the matrix elements of observables in noninteracting spinless fermion models and in hard-core boson models. Having the quasimomentum occupation in mind, we identify important differences between the off-diagonal matrix elements of nonlocal few-body observables in both models. \subsection{General results for \\ noninteracting spinless fermions} \label{sec:fermionsgeneral} Let us begin by discussing properties of the off-diagonal matrix elements of observables in a general model of noninteracting spinless fermions with particle number conservation in a lattice with $L$ sites. The Hamiltonian can be written as \begin{equation}\label{eq:Hsf} \hat H^{\rm SF}=-\sum_{\substack{i,j=1\\i\neq j}}^{L}(A_{ij}\hat f^\dagger_i\hat f^{}_j+\text{H.c.})+\sum_{i=1}^LV_i\hat f^\dagger_i\hat f^{}_i\,, \end{equation} where $\hat f^\dagger_i$ ($\hat f^{}_i$) creates (annihilates) a spinless fermion at site $i$, $A_{ij}$ is the hopping amplitude between sites $i$ and $j$, and $V_i$ is the magnitude of a local potential at site $i$. 
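Because $\hat H^{\rm SF}$ is quadratic, its full many-body spectrum follows from the single-particle one. A minimal numerical sketch (the hopping and potential values are illustrative choices, not the ones used later in the paper):

```python
import itertools
import numpy as np

# Sketch: diagonalize the L x L one-body matrix of H^SF and assemble the
# many-body eigenenergies as sums over N occupied single-particle orbitals.
L, N = 8, 2
rng = np.random.default_rng(4)
h = np.diag(rng.uniform(-1.0, 1.0, L))       # local potentials V_i (illustrative)
for i in range(L - 1):
    h[i, i + 1] = h[i + 1, i] = -1.0         # nearest-neighbor hopping, A_{i,i+1} = 1
eps, d = np.linalg.eigh(h)                   # orbital energies and coefficients
E_many = sorted(sum(c) for c in itertools.combinations(eps, N))
print(len(E_many))                           # C(8, 2) = 28 many-body eigenstates
```

The ground-state energy is simply the sum of the $N$ lowest orbital energies, which is what makes these models solvable at polynomial cost.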
All many-body energy eigenstates $|\alpha\rangle$ of $\hat H^{\rm SF}$, for $N$ fermions, can be written as Slater determinants \begin{equation}\label{eq:slater} |\alpha\rangle=\prod_{m=1}^{N}\hat c^{\dagger}_{\alpha_m}|0\rangle\,, \end{equation} where \begin{equation}\label{eq:crea} \hat c^\dagger_{\alpha_m}=\sum_{i=1}^L d_{\alpha_m}^i \hat f^\dagger_i \end{equation} creates a spinless fermion with eigenenergy $E_{\alpha_m}$ (the coefficients $d_{\alpha_m}^i$ implement the change of basis). We are interested in the off-diagonal matrix elements of particle-number conserving observables $\hat O$ between energy eigenstates $|\alpha\rangle$ and $|\beta\rangle$ that have the same number of particles, namely, in $O_{\alpha\beta}=\langle\alpha|\hat O|\beta\rangle$. Let us assume that $\hat O$ can be expressed using at most $M$ pairs of creation and annihilation operators (say, in the site basis), with $M\leq {\rm min}(N,L-N)$, \begin{align}\label{eq:operator} \hat O=&\sum_{i_1i'_1}\sigma_{i_1i'_1}\hat f^\dagger_{i_1}\hat f^{}_{i'_1}\nonumber\\ &+\sum_{i_1i_2i'_1i'_2}\sigma_{i_1i_2i'_1i'_2}\hat f^\dagger_{i_1}\hat f^\dagger_{i_2}\hat f^{}_{i'_1}\hat f^{}_{i'_2}+\cdots\nonumber\\ &+\sum_{i_1\cdots i_Mi_1'\cdots i_M'}\sigma_{i_1\cdots i_Mi_1'\cdots i_M'}\hat f^\dagger_{i_1}\cdots\hat f^\dagger_{i_M}\hat f^{}_{i_1'}\cdots\hat f^{}_{i_M'}\,, \end{align} where $\sigma_{...}$ are constants. Then, a necessary criterion for $O_{\alpha\beta}$ to be nonzero is that the analog of Eq.~(\ref{eq:slater}) for $|\beta\rangle$ contains at most $M$ single-particle operators $\hat c^\dagger_{\beta_m}$ that are not contained among the $N$ operators $\hat c^\dagger_{\alpha_m}$ in $|\alpha\rangle$. This follows after noticing that one can rewrite Eq.~\eqref{eq:operator} in terms of the creation (annihilation) operators $\hat c^\dagger_m$ ($\hat c^{}_m$), and this does not change the form of $\hat O$ in terms of the new operators (only the coefficients change). 
Using this, one can find an {\it upper bound} for the number of nonzero off-diagonal matrix elements $O_{\alpha\beta}$, \begin{equation}\label{eq:Nnonzero} \bar{N}_{\rm nonzero}= {L \choose {N}}\sum_{j=1}^{M} {N \choose {j}}{L-N \choose{j}}\,, \end{equation} where ${L \choose {N}}$ is the number of many-body energy eigenstates, and $\sum_{j=1}^{j'}{N \choose {j}}{L-N \choose{j}}$ bounds the number of nonzero matrix elements that the terms in $\hat O$ with up to $j'$ pairs of creation and annihilation operators can generate for any given many-body energy eigenstate. Comparing $\bar{N}_{\rm nonzero}$ to the total number of $O_{\alpha\beta}$, which is $N_{\rm tot}= {L \choose {N}}\big[{L \choose {N}}-1\big]$, the fraction of nonzero off-diagonal matrix elements must be smaller than or equal to \begin{equation}\label{eq:rnonzero} r_{\rm nonzero}=\frac{\sum_{j=1}^{M} {N \choose {j}}{L-N \choose{j}}}{{L \choose {N}}-1}\,. \end{equation} Taking the thermodynamic limit, $N\rightarrow\infty$ and $L\rightarrow\infty$ with $N/L={\rm const}$ and a fixed $M$, results in a vanishing $r_{\rm nonzero}$. One usually refers to the operators $\hat O$ in Eq.~\eqref{eq:operator} as nonlocal few-body operators when $M$ is $O(1)$, namely, when $M$ is independent of $N$ and $L$. In this work, we focus on the occupation of quasimomentum modes \begin{equation}\label{eq:mk} \hat{\mathsf{m}}_k=\frac{1}{L}\sum_{j,l=1}^{L}e^{ik(j-l)}\hat f^\dagger_j \hat f^{}_l\,, \end{equation} which can be considered as a special case of Eq.~(\ref{eq:operator}) with $M=1$. $\hat{\mathsf{m}}_k$ is a nonlocal one-body operator, and it can be measured in experiments with ultracold quantum gases in optical lattices~\cite{Bloch_Dalibard_08}. 
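The vanishing of $r_{\rm nonzero}$ at fixed filling $n=N/L$ and fixed $M$ can be checked directly (a short sketch; the system sizes are illustrative):

```python
from math import comb

# Evaluate the upper bound r_nonzero above on the fraction of nonzero
# off-diagonal matrix elements of an (at most) M-body observable for
# N noninteracting spinless fermions on L sites.
def r_nonzero(L, N, M):
    return sum(comb(N, j) * comb(L - N, j) for j in range(1, M + 1)) / (comb(L, N) - 1)

# At fixed filling n = 1/4 and fixed M = 1, the bound vanishes as L grows:
for L in (16, 32, 64):
    print(L, r_nonzero(L, L // 4, M=1))
```

The fraction drops by orders of magnitude with each doubling of $L$, reflecting the exponential growth of ${L\choose N}$ against the polynomial growth of the numerator.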
For a system with $L$ sites and $N$ particles, the square of the (properly normalized) Hilbert-Schmidt norm of $\hat{\mathsf{m}}_k$ is \begin{equation}\label{eq:hsnorm} ||\hat{\mathsf{m}}_k||^2\equiv\frac{1}{D}\Tr\{\hat{\mathsf{m}}_k^2\} = \frac{1}{D} \sum_{\alpha,\beta=1}^D |\langle\alpha|\hat{\mathsf{m}}_k|\beta\rangle|^2 = \frac{N}{L}\,, \end{equation} where $D = {L\choose N}$ is the size of the Hilbert space, at a given $N$ and $L$, over which the trace is computed. It follows from Eq.~(\ref{eq:rnonzero}) that for $M=1$ the fraction of nonzero matrix elements is \begin{equation} \label{def_rm0} r_{\mathsf{m}_0} = \frac{N (L-N)}{(D-1)} = \frac{L^2}{D}\frac{ n (1-n)}{(1-1/D)}\;, \end{equation} where we introduced the ``filling'' $n=N/L$. Since, as per Eq.~\eqref{eq:Nnonzero}, there are at most $DN(L-N)$ nonzero off-diagonal matrix elements $(\mathsf{m}_0)_{\alpha\beta}$, we can use the Hilbert-Schmidt norm from Eq.~\eqref{eq:hsnorm} to estimate their typical magnitude (assuming that all nonzero matrix elements are similar in magnitude). One finds that the typical nonzero matrix elements scale as \begin{equation} \label{m0_typical} |(\mathsf{m}_0)_{\alpha\beta}|^2 \approx \frac{1}{L(L-N)} = \frac{1}{L^2} \frac{1}{(1-n)}\;. \end{equation} Summarizing our discussion so far, the fraction of nonzero off-diagonal matrix elements of few-body operators in many-body energy eigenstates of models of noninteracting spinless fermions vanishes in the thermodynamic limit~\cite{Khatami_Pupillo_13, haque_mcclarty_19}. The specific results obtained here for $(\mathsf{m}_0)_{\alpha\beta}$ will be used in our discussion in Sec.~\ref{sec:fermionsAA}. \subsection{General results for hard-core bosons} \label{sec:HCBsgeneral} Next we turn our attention to the most general (particle-number conserving) model of hard-core bosons in one dimension that can be mapped onto a model of noninteracting spinless fermions. 
The Hamiltonian has the form \begin{equation}\label{eq:Hhcb} \hat H^{\rm HCB} = -\sum_{i=1}^{L}(A_{i,i+1}\hat b^\dagger_i\hat b^{}_{i+1}+\text{H.c.})+\sum_{i=1}^L V_i \hat b^\dagger_i\hat b_i\,, \end{equation} where $\hat b^\dagger_i$ ($\hat b_i$) creates (annihilates) a hard-core boson at site $i$, $A_{i,i+1}$ is the hopping amplitude between nearest-neighbor sites $i$ and $i+1$, and $V_i$ is the magnitude of a local potential at site $i$. Periodic boundary conditions are assumed in Eq.~(\ref{eq:Hhcb}), i.e., $\hat b_{L+1} \equiv \hat b_1$ and $A_{L,L+1} \equiv A_{L,1}$. The hard-core constraint $\hat b^{\dagger2}_i=\hat b^{2}_i=0$ prevents two (or more) bosons from occupying the same lattice site. The hard-core boson Hamiltonian $\hat H^{\rm HCB}$ in Eq.~(\ref{eq:Hhcb}) can be mapped onto a noninteracting spinless fermion Hamiltonian, specifically, onto $\hat H^{\rm SF}$ in Eq.~(\ref{eq:Hsf}) in one dimension when $A_{ij} = 0$ for $|i-j|>1$~\cite{Cazalilla_Citro_review_11}. The mapping is carried out first using a Holstein-Primakoff transformation~\cite{Holstein_Primakoff_40}, followed by a Jordan-Wigner transformation~\cite{Jordan_Wigner_28}, \begin{equation}\label{eq:mapping} \hat b_j^\dagger=\hat f^\dagger_j\prod_{m=1}^{j-1}e^{-i\pi \hat f_m^\dagger \hat f^{}_m}\,, \quad \hat b^{}_j=\prod_{m=1}^{j-1}e^{i\pi \hat f_m^\dagger \hat f^{}_m}\hat f^{}_j\,. \end{equation} Using properties of Slater determinants, one can calculate (in polynomial time) the matrix elements of the one-body operators $\hat b^{\dagger}_i \hat b^{}_j$ in the many-body eigenstates $\{|\alpha^{\rm HCB}\rangle\}$ of Eq.~\eqref{eq:Hhcb}, $\langle\alpha^{\rm HCB}|\hat b^{\dagger}_i \hat b^{}_j|\beta^{\rm HCB}\rangle$~\cite{Rigol_Muramatsu_04, Rigol_Muramatsu_05}. This allows one to also compute the matrix elements of the occupation of quasimomentum modes \begin{equation}\label{eq:mkhcb} \hat m_k = \frac{1}{L} \sum_{j,l=1}^L e^{ik(j-l)} \hat b_j^\dagger \hat b^{}_l \;. 
\end{equation} We note that, in order to avoid confusion, we denote the hard-core boson occupation of quasimomentum modes as $\hat m_k$, and the noninteracting spinless fermion occupation of quasimomentum modes as $\hat{\mathsf{m}}_k$. Because of the hard-core interactions, which are encoded in the nonlocal nature of the mapping between hard-core bosons and noninteracting fermions, the one-body sector of the former system is fundamentally different from that of the latter; see, e.g., Ref.~\cite{wright_rigol_14} for a comparison of their dynamics. In particular, the occupation of quasimomentum modes is in general different for hard-core bosons and noninteracting fermions, both in equilibrium and out of equilibrium~\cite{Cazalilla_Citro_review_11}. More importantly for the purpose of this study, the off-diagonal matrix elements $\langle\alpha^{\rm HCB}|\hat b^{\dagger}_i \hat b^{}_j|\beta^{\rm HCB}\rangle$ need not be sparse as they are for noninteracting fermions. To show this, let us rewrite $\hat m_k$ in Eq.~\eqref{eq:mkhcb} in terms of spinless fermion operators \begin{equation}\label{eq:mkhcbfer} \hat m_k = \frac{1}{L} \sum_{j,l=1}^L e^{ik(j-l)} \hat f^\dagger_j \left(\prod_{m=j}^{l-1}e^{i\pi \hat f_m^\dagger \hat f^{}_m}\right) \hat f^{}_{l} \,. \end{equation} Equation~\eqref{eq:mkhcbfer} shows that $\hat m_k$ is a many-body operator in the spinless fermion representation. As a result, it can connect exponentially many many-body eigenstates of the noninteracting spinless fermion Hamiltonian to which the hard-core bosons are mapped. Our goal is to gain an accurate understanding of the properties of matrix elements of few-body observables in energy eigenstates of integrable interacting models via the computational study of the properties of the matrix elements of $\hat m_k$. The latter can be done efficiently using the mapping onto noninteracting fermions. 
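The contrast between the sparse fermionic $\hat{\mathsf{m}}_0$ and the dense bosonic $\hat m_0$ can be made concrete in a small system. The sketch below uses $L=6$, $N=2$, and an illustrative incommensurate potential (these parameters are ours, far from the sizes studied in this work); with open boundaries the Jordan-Wigner string is trivial on nearest-neighbor bonds, so both models share the same Hamiltonian matrix and eigenvectors:

```python
import itertools
import numpy as np

L, N = 6, 2
basis = list(itertools.combinations(range(L), N))   # occupied-site tuples
index = {s: i for i, s in enumerate(basis)}
D = len(basis)

def hop(j, l, fermionic):
    """Many-body matrix of c_j^dagger c_l, with Jordan-Wigner signs if fermionic."""
    M = np.zeros((D, D))
    for i, s in enumerate(basis):
        if l not in s or (j != l and j in s):
            continue
        occ, sign = set(s), 1
        if fermionic:
            sign *= (-1) ** sum(m < l for m in occ)   # string of f_l
            occ.discard(l)
            sign *= (-1) ** sum(m < j for m in occ)   # string of f_j^dagger
            occ.add(j)
        else:
            occ.discard(l); occ.add(j)                # hard-core bosons: no sign
        M[index[tuple(sorted(occ))], i] += sign
    return M

# Open-boundary nearest-neighbor hopping plus an incommensurate potential;
# fermions and hard-core bosons share this matrix (and its eigenvectors).
H = sum(-(hop(i, i + 1, True) + hop(i + 1, i, True)) for i in range(L - 1))
for i, s in enumerate(basis):
    H[i, i] += sum(np.cos(2 * np.pi * 0.618 * m) for m in s)
_, V = np.linalg.eigh(H)

# m_0 for fermions (True) and hard-core bosons (False), Eq. for k = 0.
m0 = {f: sum(hop(j, l, f) for j in range(L) for l in range(L)) / L for f in (True, False)}
off = ~np.eye(D, dtype=bool)
nnz = {f: int(np.sum(np.abs(V.T @ m0[f] @ V)[off] > 1e-10)) for f in (True, False)}
print("fermions:", nnz[True], "of", D * (D - 1), "bound:", D * N * (L - N))
print("hard-core bosons:", nnz[False], "of", D * (D - 1))
print("norm^2:", np.trace(m0[False] @ m0[False]) / D, "= N/L =", N / L)
```

The fermionic count respects the bound $DN(L-N)$, the bosonic count exceeds it, and the normalized Hilbert-Schmidt norm equals $N/L$ for the bosonic operator as well, consistent with Eq.~\eqref{eq:hsnorm}.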
\section{Translationally invariant hard-core bosons} \label{sec:HCBstranslation} We first consider the case in which the hard-core boson Hamiltonian is translationally invariant (no inhomogeneity and periodic boundary conditions): \begin{equation} \label{eq:Hhcb_pw} \hat H^{\rm HCB}_{\rm TI}=-\sum_{i=1}^{L-1}( \hat b^\dagger_i\hat b_{i+1}+{\rm H.c.})-(\hat b^\dagger_1\hat b_{L}+{\rm H.c.})\,, \end{equation} for which the corresponding spinless-fermion Hamiltonian after the mapping in Eq.~(\ref{eq:mapping}) is \begin{equation}\label{eq:Hsf_pw} \hat H^{\rm SF}_{\rm TI}=-\sum_{i=1}^{L-1}( \hat f^\dagger_i\hat f_{i+1}+{\rm H.c.})+(-1)^{N}(\hat f^\dagger_1\hat f_{L}+{\rm H.c.})\,. \end{equation} In the latter model, periodic (anti-periodic) boundary conditions are needed for an odd (even) number $N$ of particles. We study systems at quarter filling $N=L/4$ in this section, and consider energy eigenstates with total quasimomentum $\kappa = \sum_{\alpha=1}^{N}\kappa_\alpha=2\pi/L$, where $\kappa_\alpha$ is the quasimomentum of the single-particle eigenstates that are part of the Slater determinant of the many-body eigenstates. We note that, as $L\rightarrow\infty$, $\kappa\rightarrow 0$. We focus on this sector, as opposed to the one with $\kappa=0$, to avoid the parity symmetry present in the latter. We also note that we do not study the half-filled case as it has an additional particle-hole symmetry. For the system sizes considered here, the Hilbert space dimension of the quasimomentum sectors is $D\simeq{L \choose{N}}/L$. This is the Hilbert space dimension that we use in our calculations. \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{Plotnew/Fig1_PWDiagonal.png} \caption{\label{fig:PWDiagonal}Diagonal matrix elements $(m_0)_{\alpha\alpha}$ in the energy eigenstates of translationally invariant hard-core bosons in the sector with total quasimomentum $\kappa=2\pi/L$. 
We show results for systems with $L=20$ (black circles, all matrix elements), $L=28$ (orange hexagons, all matrix elements), and $L=36$ (blue squares; only 1 of every 25 matrix elements) at quarter filling $N=L/4$. The solid (dashed) line shows the average of $(m_0)_{\alpha\alpha}$ within energy windows with $\Delta E_\alpha/L=0.05$ for $L=28$ ($L=36$).} \end{center} \end{figure} In Fig.~\ref{fig:PWDiagonal}, we show the diagonal matrix elements $(m_0)_{\alpha\alpha}$ of the zero quasimomentum occupation operator $\hat m_0\equiv \hat m_{k=0}$ in Eq.~(\ref{eq:mkhcb}) as a function of the eigenenergy density $E_\alpha/L$ for three different system sizes. They exhibit a well-known property of the diagonal matrix elements of integrable models~\cite{rigol_dunjko_08, cassidy_clark_11, vidmar16, Mierzejewski_Vidmar_20}, namely, the support of the matrix elements at any given value of $E_\alpha/L$ does not shrink with increasing system size. The solid and dashed lines in Fig.~\ref{fig:PWDiagonal} show results for the averages over energy windows with $\Delta E_\alpha/L=0.05$ in the two largest system sizes. The averages overlap (are well converged) for those system sizes, for which we are able to compute all the matrix elements. Even though the support of the matrix elements does not decrease with increasing system size, the variance does decrease~\cite{Biroli_Kollath_10, Ikeda_Watanabe_13, Alba_15}. In order to study the scaling of the variance with increasing system size, and the distribution of the diagonal matrix elements, we carry out calculations in much larger system sizes than the ones shown in Fig.~\ref{fig:PWDiagonal}. For those system sizes ($L>36$), we cannot compute all the matrix elements, so we sample them. 
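Before turning to statistics, the boundary-condition bookkeeping in Eqs.~\eqref{eq:Hhcb_pw} and \eqref{eq:Hsf_pw} can be verified numerically in a small system (a sketch with $L=6$ and even $N=2$, so the mapped fermions obey antiperiodic boundary conditions):

```python
import itertools
import numpy as np

# Check: the periodic hard-core boson ring with L = 6, N = 2 (even) has the
# same spectrum as free fermions with antiperiodic boundary conditions.
L, N = 6, 2
basis = list(itertools.combinations(range(L), N))
index = {s: i for i, s in enumerate(basis)}
D = len(basis)

H = np.zeros((D, D))
for i, s in enumerate(basis):
    for l in s:
        j = (l + 1) % L                       # hop right, including the boundary bond
        if j not in s:
            t = index[tuple(sorted(set(s) - {l} | {j}))]
            H[t, i] -= 1.0                    # hard-core bosons: no sign in this basis
            H[i, t] -= 1.0                    # Hermitian-conjugate hop
E_hcb = np.sort(np.linalg.eigvalsh(H))

q = 2 * np.pi * (np.arange(L) + 0.5) / L      # antiperiodic quasimomenta (N even)
E_sf = np.sort([sum(c) for c in itertools.combinations(-2 * np.cos(q), N)])
print(np.allclose(E_hcb, E_sf))               # True
```

The two sorted spectra coincide, which is the content of the $(-1)^N$ boundary term in Eq.~\eqref{eq:Hsf_pw}.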
In the inset of Fig.~\ref{fig:DiagPW}, we show the variance \begin{equation} \label{def_variance_diag} {\rm Var}[(m_0)_{\alpha\alpha}] = \frac{1}{|{\cal M}|} \sum_{\alpha\in {\cal M}} [(m_0)_{\alpha\alpha}- \overline{(m_0)}_\alpha]^2 \;, \end{equation} where the sum is computed over a set ${\cal M}$ of states at the center of the energy spectrum sampled within an energy window in which $|E_\alpha|/L\leq10^{-4}$. We stress that the variance in Eq.~\eqref{def_variance_diag} is computed with respect to a moving average $\overline{(m_0)}_\alpha$, not with respect to the average in the entire energy window. This is done in order to remove the structure of $(m_0)_{\alpha\alpha}$ as a function of the energy~\cite{lydzba_zhang_21}. Our moving averages $\overline{(m_0)}_\alpha$ are computed over the 2000 states obtained in the sampling process whose energy is closest to $E_\alpha$. A power-law fit to those numerical results shows that the variance decreases $\propto L^{-1}$, i.e., it vanishes in the thermodynamic limit~\cite{Biroli_Kollath_10, Ikeda_Watanabe_13, Alba_15}. This is to be contrasted with the much faster scaling in nonintegrable (quantum-chaotic interacting) systems, in which the variance vanishes exponentially fast in the system size (see, e.g., Ref.~\cite{LeBlond_Mallayya_19} for a recent comparison between numerical results obtained in integrable and nonintegrable spin-1/2 XXZ chains). \begin{figure}[!t] \begin{center} \includegraphics[width=0.98\columnwidth]{Plotnew/Fig2_DiagPW.pdf} \caption{\label{fig:DiagPW}Probability density function $P$ of the scaled diagonal matrix elements $|(m_0)_{\alpha\alpha} - \overline{(m_0)}_\alpha|L^{1/2}$, for $L=68$ (dashed line), $L=84$ (dashed-dotted line), and $L=100$ (double dashed-dotted line). In axes labels we simplify $\overline{(m_0)}_\alpha \to \overline{m}_0$. 
The solid line is a Gaussian probability density function $P(x) = \frac{2}{\sigma} \sqrt{\frac{1}{2\pi}} e^{-x^2/2\sigma^2}$, where $\sigma=0.36$ is the square root of the variance obtained for $L=100$ [see Eq.~\eqref{def_variance_diag}]. (Inset) The variance [see Eq.~\eqref{def_variance_diag}] plotted as a function of the system size. The solid line is a power-law fit $\propto L^{-\alpha_0}$, where $\alpha_0=1.02$. The numerical results were obtained using $10^6$ eigenstates randomly sampled with $|E_\alpha|/L\leq 10^{-4}$. The moving average $\overline{(m_0)}_\alpha$ is computed by averaging $(m_0)_{\alpha\alpha}$ over the 2000 states obtained in the sampling process whose energy is closest to $E_\alpha$ (see text).} \end{center} \end{figure} In Fig.~\ref{fig:DiagPW} we show the probability density function (PDF) of the scaled matrix elements $|(m_0)_{\alpha\alpha}- \overline{(m_0)}_\alpha| L^{1/2}$. We define the PDF, $P$, of a variable $x$ in an interval $[x,x+\Delta x]$ as \begin{equation} P(x) = \frac{1}{\cal N} \frac{\Delta \cal N}{\Delta x} \;, \end{equation} where $\cal N$ is the total number of elements ($\Delta \cal N$ is the number of elements in $[x,x+\Delta x]$). Figure~\ref{fig:DiagPW} shows that $P(|(m_0)_{\alpha\alpha}- \overline{(m_0)}_\alpha| L^{1/2})$ is a system-size-independent Gaussian. The same Gaussian behavior was found in Ref.~\cite{Alba_15} for the diagonal matrix elements of the elements of reduced density matrices in eigenstates of the integrable spin-1/2 isotropic Heisenberg chain. Having shown that the properties of the diagonal matrix elements of $\hat m_0$ are qualitatively similar to those observed in integrable interacting systems that are not mappable onto noninteracting models, we turn our attention to the properties of the off-diagonal matrix elements $(m_0)_{\alpha\beta}$. As for the diagonal matrix elements, we consider only matrix elements between eigenstates within the total quasimomentum sector $\kappa = 2\pi/L$. 
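The PDF defined above is the usual normalized histogram. A sketch on synthetic data (the folded Gaussian with $\sigma=0.36$ is used only as an example, standing in for the sampled matrix elements):

```python
import numpy as np

# Sketch of the estimator P(x) = (1/N)(dN/dx): a normalized histogram that
# integrates to one and, here, reproduces the folded Gaussian it samples.
rng = np.random.default_rng(1)
sigma = 0.36
samples = np.abs(rng.normal(0.0, sigma, size=100_000))  # |x| of a Gaussian
edges = np.linspace(0.0, 2.0, 81)
counts, _ = np.histogram(samples, bins=edges)
P = counts / (samples.size * np.diff(edges))            # P(x) on each bin
total = np.sum(P * np.diff(edges))
print(total)                                            # ~ 1 (tiny tail beyond 2.0)
```

The first bin of the estimate is close to the folded-Gaussian peak value $\frac{2}{\sigma}\sqrt{\frac{1}{2\pi}}\approx 2.22$, the same functional form used for the solid line in the figure.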
The variance of the off-diagonal matrix elements (whose average is negligibly small) is \begin{equation} \label{def_variance} {\rm Var}[(m_0)_{\alpha\beta}] = \frac{1}{|{\cal M'}|} \sum_{\alpha,\beta \in {\cal M'}} |(m_0)_{\alpha\beta}|^2 \;. \end{equation} We carry out our calculations over a set ${\cal M'}$ of pairs of eigenstates $|\alpha\rangle,\, |\beta\rangle$ with $\bar E_{\alpha\beta} = (E_\alpha+E_\beta)/2$ at the center of the energy spectrum, namely, in a small window of energy $|\bar E_{\alpha\beta} - \bar E_0| \leq \Delta E/2$, where $\bar E_0 = {\rm Tr}\{\hat H\}/D$ ($\bar E_0 = 0$ in the translationally invariant model considered in this section). In addition to their average energy $\bar E_{\alpha\beta}$, pairs of eigenstates can be labeled by their energy difference $\omega_{\alpha\beta} = E_\alpha-E_\beta$. We coarse grain ${\rm Var}[(m_0)_{\alpha\beta}]$ by averaging over pairs with $|\omega_{\alpha\beta} - \omega| \leq \Delta \omega/2$. We quote the specific widths $\Delta E$ and $\Delta \omega$ used in the calculations in the caption of each figure. Finally, we report in our plots the scaled variance \begin{equation}\label{eq:scaledvar} V_{m_0}(0, \omega)=D \,{\rm Var}[(m_0)_{\alpha\beta}] \,, \end{equation} which, given the fact that observables have a fixed (properly normalized) Hilbert-Schmidt norm, is the quantity that is expected to remain finite in the thermodynamic limit ($D \to \infty$). \begin{figure}[!t] \begin{center} \includegraphics[width=0.98\columnwidth]{Plotnew/Fig3_PWvariance.pdf} \caption{\label{fig:PWvariance}Scaled variance $V_{m_0}(0,\omega)$ [Eq.~(\ref{eq:scaledvar})] of the off-diagonal matrix elements $(m_0)_{\alpha\beta}$ in the translationally invariant hard-core boson model at the center of the energy spectrum. The energy eigenstates are from the $\kappa=2\pi/L$ total quasimomentum sector of systems at quarter filling ($N=L/4$). (a) $V_{m_0}(0,\omega)$ plotted as a function of $\omega$ at low and intermediate frequencies ($\omega \in [0,9]$). 
We show results for systems with sizes $L=28$ (dashed line), 36 (solid line), and 44 (dashed-dotted line). (Inset) Rescaled $V_{m_0}(0,\omega)/L$ plotted as a function of $\omega L$ at low frequencies. (b) $V_{m_0}(0,\omega)$ plotted as a function of $\omega^2$ ($\omega \lesssim 25$) for systems with sizes $L=52$ (dashed line), 60 (dashed-dotted line), and 68 (solid line). The straight dashed line is a Gaussian fit $\propto e^{-a \omega^2}$ to the $L=68$ results for $\omega^2\in[300,600]$, with the fitting parameter $a=0.12$. (Inset) $V_{m_0}(0,\omega=7)$ vs the system size. For all the results shown in this figure, $\Delta E/L=2\times10^{-4}$. We compute all pairs of eigenstates in this interval for $L\leq 36$, while for $L\geq 44$ we randomly select at least $6\times10^7$ pairs (see Appendix~\ref{app:sampling}). The variances in the main panels are coarse grained using a $\Delta\omega=0.05$ (except for $L=28$ for which $\Delta\omega=0.2$). We use a finer coarse graining in the inset in (a), $\Delta\omega=0.02$ (except for $L=28$ for which $\Delta\omega=0.1$), and the results are plotted as a running average.} \end{center} \end{figure} The main panel of Fig.~\ref{fig:PWvariance}(a) shows the scaled variance $V_{m_0}(0,\omega)$ as a function of $\omega$ at small and intermediate frequencies. The results for three different system sizes collapse at intermediate frequencies, thereby justifying the use of the scaled variance in Eq.~(\ref{eq:scaledvar}) as a meaningful quantity in the thermodynamic limit. The scaled variance in a wider frequency interval ($\omega \lesssim 25$), for three system sizes larger than those in Fig.~\ref{fig:PWvariance}(a), is shown in Fig.~\ref{fig:PWvariance}(b) as a function of $\omega^2$. The results for the three system sizes collapse at high frequencies, and they are consistent with the Gaussian functional form \begin{equation} \label{eq:gaussian} V_{m_0}(0,\omega)=A e^{-a\omega^2} \;, \end{equation} where $A$ and $a$ are constants. 
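The construction of $V_{m_0}(0,\omega)$, selecting pairs of eigenstates in an energy window and coarse graining in frequency bins, can be sketched as follows. The random-matrix input below is a stand-in for the actual matrix elements (a GOE-like matrix, for which the scaled variance is flat in $\omega$, in contrast to our results):

```python
import numpy as np

def scaled_offdiag_variance(E, M, delta_E, delta_omega, omega_max):
    # V(0, omega) = D * mean |M_ab|^2 over pairs with |(E_a + E_b)/2| <= delta_E/2,
    # coarse grained in frequency bins of width delta_omega.
    D = len(E)
    a, b = np.triu_indices(D, k=1)
    keep = np.abs(0.5 * (E[a] + E[b])) <= 0.5 * delta_E
    a, b = a[keep], b[keep]
    omega = np.abs(E[a] - E[b])
    vals = np.abs(M[a, b]) ** 2
    bins = np.arange(0.0, omega_max + delta_omega, delta_omega)
    idx = np.digitize(omega, bins) - 1
    V = np.full(len(bins) - 1, np.nan)
    for k in range(len(bins) - 1):
        sel = idx == k
        if sel.any():
            V[k] = D * vals[sel].mean()
    return 0.5 * (bins[1:] + bins[:-1]), V

# Toy data: a random symmetric "observable" in the basis of random energies.
rng = np.random.default_rng(1)
D = 400
E = np.sort(rng.uniform(-10, 10, D))
A = rng.normal(size=(D, D)) / np.sqrt(D)
M = (A + A.T) / 2    # off-diagonal variance 1/(2D), so the scaled V is ~ 0.5
w, V = scaled_offdiag_variance(E, M, delta_E=20.0, delta_omega=1.0, omega_max=10.0)
```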
The variance of the off-diagonal matrix elements of observables at high frequency was also found to exhibit a Gaussian decay in the integrable spin-1/2 XXZ chain~\cite{LeBlond_Mallayya_19}, and in quantum-chaotic interacting models in which integrability is broken by perturbations that are not extensive in the system size~\cite{jansen_stolpp_19, schoenle_jansen_21}. The behavior of the scaled variance $V_{m_0}(0,\omega)$ is qualitatively different at low frequencies $\omega \propto 1/L$. The inset in Fig.~\ref{fig:PWvariance}(a) shows that, in this regime, the results for the variance collapse only when plotting $V_{m_0}(0,\omega)/L$ vs $\omega L$. Similar behaviors have been observed in some quantum-chaotic interacting models~\cite{dalessio_kafri_16, Brenes_LeBlond_20, brenes_goold_20} and in the integrable spin-1/2 XXZ chain~\cite{LeBlond_Rigol_Eigenstate_20}, and can be attributed to the presence of ballistic transport. In what follows we study the distributions of the off-diagonal matrix elements $(m_0)_{\alpha\beta}$ at a fixed frequency $\omega=7$. This frequency is in the intermediate frequency regime in Fig.~\ref{fig:PWvariance}(a), and is sufficiently high so that the matrix elements are not affected by the low-frequency ``ballistic'' scaling seen in the inset in Fig.~\ref{fig:PWvariance}(a). (We report results for the distribution of $(m_0)_{\alpha\beta}$ at low frequencies in Sec.~\ref{sec:xxz}.) The inset in Fig.~\ref{fig:PWvariance}(b) shows that the variance at $\omega=7$ is, up to small fluctuations, independent of the system size. In Appendix~\ref{app:mkpi}, we show that the occupations of other quasimomentum modes (specifically, of $k=\pi/2$ and $\pi$) exhibit the same qualitative behavior as the one discussed here for $k=0$. 
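The quality of the low-frequency collapse of $V_{m_0}(0,\omega)/L$ vs $\omega L$ can be quantified with a simple rescaling check. The sketch below uses synthetic curves that satisfy the ballistic form $V = L\, g(\omega L)$ by construction (the scaling function $g$ is an arbitrary stand-in, not the actual one):

```python
import numpy as np

def ballistic_collapse_error(curves):
    # Rescale each curve {L: (omega, V)} to (omega * L, V / L) and return the
    # maximum spread between the interpolated curves on a common grid; a small
    # value indicates the low-frequency scaling V(0, omega) ~ L f(omega L).
    x_common = np.linspace(1.0, 5.0, 50)
    rescaled = np.array([np.interp(x_common, omega * L, V / L)
                         for L, (omega, V) in curves.items()])
    return np.max(rescaled.max(axis=0) - rescaled.min(axis=0))

# Synthetic curves obeying the ballistic form V = L * g(omega * L) exactly.
g = lambda x: x / (1.0 + x ** 2)   # arbitrary stand-in scaling function
omegas = np.linspace(0.01, 0.5, 400)
curves = {L: (omegas, L * g(omegas * L)) for L in (28, 36, 44)}
err = ballistic_collapse_error(curves)
```

For these synthetic curves the residual spread comes from interpolation alone; for actual data, a small spread signals the same collapse seen in the inset in Fig.~\ref{fig:PWvariance}(a).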
We study the PDFs of the squared absolute value of the scaled matrix elements (which enter response functions, among other quantities~\cite{dalessio_kafri_16, Mallayya_Rigol_19}) \begin{equation} \label{def_tilde_m0} |(\tilde{m}_0)_{\alpha\beta}|^2=|(m_0)_{\alpha\beta}\sqrt{D}|^2 \;. \end{equation} To be able to study large systems (with up to $L=100$) so that we can unveil the scaling of the PDFs with the system size, we randomly sample the matrix elements in the targeted $\omega$ window (see Appendix~\ref{app:sampling}). The PDFs $P(|(\tilde{m}_0)_{\alpha\beta}|^2)$ for $L=68,\, 84\,$ and 100 are shown in Fig.~\ref{fig:PWdistribution}(a). They exhibit sharp peaks as $|(\tilde{m}_0)_{\alpha\beta}|^2\rightarrow0$, and long tails for large matrix elements, like those found for local observables in the integrable spin-1/2 XXZ chain~\cite{LeBlond_Mallayya_19, brenes_goold_20, LeBlond_Rigol_Eigenstate_20}. In Fig.~\ref{fig:PWdistribution}(b), we plot the corresponding PDFs $P(\ln |(\tilde{m}_0)_{\alpha\beta}|^2)$. They exhibit the skewed log-normal like shape observed in Refs.~\cite{LeBlond_Mallayya_19, brenes_goold_20, LeBlond_Rigol_Eigenstate_20}, as well as tails for small matrix elements that were discernible only in some instances in the much smaller system sizes studied in Refs.~\cite{LeBlond_Mallayya_19, brenes_goold_20, LeBlond_Rigol_Eigenstate_20}. Both plots make it apparent that those distributions are not independent of the system size (as they would be for a Gaussian, for which the mean and the variance fix all the higher moments). In particular, the peak in $P(|(\tilde{m}_0)_{\alpha\beta}|^2)$ as $|(\tilde{m}_0)_{\alpha\beta}|^2\rightarrow0$ sharpens, while $P(\ln |(\tilde{m}_0)_{\alpha\beta}|^2)$ exhibits a maximum that drifts to lower values of $|(\tilde{m}_0)_{\alpha\beta}|^2$ with increasing system size. 
\begin{figure}[!t] \begin{center} \includegraphics[width=0.99\columnwidth]{Plotnew/Fig4_PWdistribution.pdf} \caption{\label{fig:PWdistribution}(a) Probability density function $P$ of $|(\tilde{m}_0)_{\alpha\beta}|^2$ [see Eq.~(\ref{def_tilde_m0})] in the translationally invariant hard-core boson model. The thin (cyan) lines overlapping with the results show the prediction of the generalized Gamma distribution (GGD) in Eq.~\eqref{eq:prob1}, with the fitting parameters from Fig.~\ref{fig:PWscale}(b). (b) The same results as in panel (a), but plotted as the probability density function of $\ln{|(\tilde{m}_0)_{\alpha\beta}|^2}$. We study eigenstates in the quasimomentum sector $\kappa = 2\pi/L$ for systems at quarter filling $N = L/4$. We show results for systems with sizes $L=68$ (dashed line), 84 (dashed-dotted line), and 100 (solid line). We randomly select at least $5\times 10^6$ pairs of eigenstates with $\Delta E/L= 2\times 10^{-4}$ and $\Delta\omega=0.05$ about $\omega=7$.} \end{center} \end{figure} These results suggest that further rescaling of the matrix elements as a function of $D$ is needed if one is to find a PDF that is meaningful in the thermodynamic limit. We rescale \begin{equation}\label{eq:rescm0} \ln |(\tilde{m}_0)_{\alpha\beta}|^2 \to \frac{\ln |(\tilde{m}_0)_{\alpha\beta}|^2}{\ln D^2}= \frac{\ln |(\tilde{m}_0)_{\alpha\beta}|}{\ln D}\;, \end{equation} and, consequently (to ensure the new distribution is normalized), \begin{eqnarray} P(\ln |(\tilde{m}_0)_{\alpha\beta}|^2) &\to& P(\ln |(\tilde{m}_0)_{\alpha\beta}|^2) \; \ln D^2\nonumber\\&&= P(\ln |(\tilde{m}_0)_{\alpha\beta}|) \; \ln D \;. \label{def_Pln_rescaling} \end{eqnarray} Figure~\ref{fig:PWscale}(a) shows that this yields a very good collapse of the results for different values of $L$, especially at and below the maximum of $P(\ln |(\tilde{m}_0)_{\alpha\beta}|)$. 
The collapse degrades at the highest values of $|(\tilde{m}_0)_{\alpha\beta}|$, for which $P(\ln |(\tilde{m}_0)_{\alpha\beta}|)$ exhibits a sharp decrease. Properly sampling that part of the distribution becomes increasingly challenging with increasing system size. \begin{figure}[!t] \begin{center} \includegraphics[width=0.99\columnwidth]{Plotnew/Fig5_PWscale.pdf} \caption{\label{fig:PWscale}(a) Rescaled probability density function $P(\ln|(\tilde{m}_0)_{\alpha\beta}|) \ln D$ as a function of $\ln |(\tilde{m}_0)_{\alpha\beta}|/\ln D$ in the translationally invariant hard-core boson model. The numerical results are the same as in Fig.~\ref{fig:PWdistribution}. (b) The symbols show the logarithm of the results for $L=100$ in (a), and the solid line is a fit to the points above the dotted line [$P(\ln|(\tilde{m}_0)_{\alpha\beta}|) \ln D \geq 0.1$] using the function in Eq.~(\ref{eq:fit1}). The fitting parameters are $A_0=4.30$, $B_0=7.29$, $k_0=7.11$, and $x_0=-0.33$.} \end{center} \end{figure} The behavior in Fig.~\ref{fig:PWscale}(a) is consistent with the logarithm of the PDF [plotted in Fig.~\ref{fig:PWscale}(b) for $L=100$] being linear for small values of $\ln |(\tilde{m}_0)_{\alpha\beta}|/\ln D$, and exponential for large values of $\ln| (\tilde{m}_0)_{\alpha\beta}|/\ln D$. We therefore fit the results in Fig.~\ref{fig:PWscale}(b) to the function \begin{align}\label{eq:fit1} \ln[P(&\ln{|(\tilde{m}_0)_{\alpha\beta}}|)\ln{D}]=\\\nonumber&A_0+k_0 \frac{\ln|(\tilde{m}_0)_{\alpha\beta}|}{\ln D} -\exp{\left[B_0\left( \frac{\ln|(\tilde{m}_0)_{\alpha\beta}|}{\ln D}-x_0\right)\right]}\,, \end{align} with $A_0$, $B_0$, $k_0$, and $x_0$ being fitting parameters. The fit provides an excellent description of the data in the regime in which the results for different system sizes exhibit a collapse in Fig.~\ref{fig:PWscale}(a). 
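Changing variables from $\ln|(\tilde{m}_0)_{\alpha\beta}|$ to $|(\tilde{m}_0)_{\alpha\beta}|^2$ turns the fitted form of Eq.~(\ref{eq:fit1}) into a generalized Gamma distribution, written explicitly in the next paragraph. The change of variables can be checked numerically; the sketch below uses the quoted $L=100$ fit parameters and a placeholder value of $\ln D$ (the actual $D$ is exponentially large in $L$):

```python
import numpy as np

def ln_pdf_fit(x, A0, k0, B0, x0):
    # Fitted form: ln[P(ln|m|) ln D] as a function of x = ln|m| / ln D.
    return A0 + k0 * x - np.exp(B0 * (x - x0))

def generalized_gamma(u, P_D, k_D, alpha_D, B_D):
    # Generalized Gamma distribution for u = |m|^2.
    return P_D * u ** (k_D - 1.0) * np.exp(-alpha_D * u ** B_D)

# Fit parameters quoted for L = 100; lnD is a placeholder stand-in for ln D.
A0, B0, k0, x0 = 4.30, 7.29, 7.11, -0.33
lnD = 50.0

# Parameter mapping between the two forms.
P_D, k_D = np.exp(A0) / (2.0 * lnD), k0 / (2.0 * lnD)
alpha_D, B_D = np.exp(-B0 * x0), B0 / (2.0 * lnD)

# Change of variables: P(u) = exp[fit(ln u / (2 ln D))] / (2 u ln D).
u = np.logspace(-20, 2, 200)
lhs = np.exp(ln_pdf_fit(np.log(u) / (2.0 * lnD), A0, k0, B0, x0)) / (2.0 * u * lnD)
rhs = generalized_gamma(u, P_D, k_D, alpha_D, B_D)
```

The two expressions agree to machine precision for any value of $\ln D$, so the mapping between the fitting parameters and the distribution parameters is an exact identity.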
The corresponding distribution for $|(\tilde{m}_0)_{\alpha\beta}|^2$ is \begin{equation}\label{eq:prob1} P(|(\tilde{m}_0)_{\alpha\beta}|^2)=P_D |(\tilde{m}_0)_{\alpha\beta}|^{2(k_D-1)}\exp\left[-\alpha_D |(\tilde{m}_0)_{\alpha\beta}|^{2B_D}\right]\,, \end{equation} with $P_D=\exp[A_0]/(2\ln D)$, $k_D=k_0/(2\ln D)$, $\alpha_D=\exp(-B_0x_0)$, and $B_D=B_0/(2\ln D)$. The distribution in Eq.~(\ref{eq:prob1}) is known as the generalized Gamma distribution~\cite{gamma_distribution}. In Fig.~\ref{fig:PWdistribution}(a), we show that it describes well the results for $P(|(\tilde{m}_0)_{\alpha\beta}|^2)$ for different system sizes. The results in this section open two important questions that we address in the remainder of this paper. The first one is whether perturbing the hard-core boson model considered, e.g., by breaking translational invariance, still results in PDFs of the off-diagonal matrix elements that are described by generalized Gamma distributions. If so, we need to understand whether the parameters of the distributions depend on the Hamiltonian parameters. The second question is what happens if the hard-core boson model undergoes a localization transition. In order to address these questions, we consider next the Aubry-Andr\'e model. \section{Aubry-Andr\'e model for\\ spinless fermions} \label{sec:fermionsAA} The Aubry-Andr\'e model is a paradigmatic model of a delocalization-localization transition in one-dimensional lattices~\cite{aubry1980analyticity}. 
For open boundary conditions, the Aubry-Andr\'e model Hamiltonian for noninteracting spinless fermions can be written as \begin{equation}\label{eq:HsfAA} \hat H^{\rm SF}_{\rm AA}=-J\sum_{i=1}^{L-1}( \hat f^\dagger_i\hat f^{}_{i+1}+{\rm H.c.})+\lambda J \sum_{i=1}^L \cos(2\pi\beta i+\phi_0) \hat f^\dagger_i\hat f^{}_i\,, \end{equation} where $J$ is the hopping energy between nearest neighbor sites, and the on-site potential has a quasiperiodic functional form with a magnitude $\lambda J$, incommensurate period $1/\beta$ (we choose $\beta$ to be the inverse golden mean $\beta=(\sqrt{5}-1)/2$, considered to be the most irrational number~\cite{Sokoloff_85}), and a global phase shift $\phi_0$. We set $J=1$ in what follows. The single-particle eigenstates of the Aubry-Andr\'e model have a delocalization-localization transition at $\lambda_c=2$~\cite{aubry1980analyticity}. For $\lambda<\lambda_c$, all single-particle eigenstates are extended, while for $\lambda>\lambda_c$ they are localized. At the transition point $\lambda_c$, the energy spectrum exhibits the well known Hofstadter butterfly fractal structure~\cite{Hofstadter_76}. \begin{figure}[!t] \begin{center} \includegraphics[width=0.99\columnwidth]{Plotnew/Fig6_Fermiondistribution.pdf} \caption{\label{fig:Fermiondistribution}Probability density function $P_{\rm nz}$ of the scaled nonzero off-diagonal matrix elements $|(\mathsf{m}_0)_{\alpha\beta}L|^2$ in the spinless fermion Aubry-Andr\'e model, with a phase shift $\phi_0=0$. The systems studied have $L=22$ (black circles), 28 (red squares), and 34 (blue diamonds), and are at half filling ($N=L/2$). Results are shown for (a) $\lambda=1$ (delocalized regime), (b) $\lambda=2$ (transition point), and (c) $\lambda=10$ (localized regime). 
We consider all pairs of eigenstates with $\Delta E/L= 2\times 10^{-4}$.} \end{center} \end{figure} In contrast to translationally invariant systems in which all the off-diagonal matrix elements of $\hat{\mathsf{m}}_0$ vanish in the many-body eigenstates of the Hamiltonian (because all the single-particle eigenstates are quasimomentum eigenstates), this is not the case in the Aubry-Andr\'e model. As follows from the discussion in Sec.~\ref{sec:fermionsgeneral}, the off-diagonal matrix elements $(\mathsf{m}_0)_{\alpha\beta}$ in the Aubry-Andr\'e model must still be sparse, i.e., the overwhelming majority of them vanish. The magnitude of those that are nonzero is expected to scale with the system size according to Eq.~(\ref{m0_typical}), i.e., $|(\mathsf{m}_0)_{\alpha\beta}|^2 \propto 1/L^2$. Therefore, here we study the PDF of the nonzero matrix elements $P_{\rm nz}$ as a function of scaled matrix elements $|(\mathsf{m}_0)_{\alpha\beta} L|^2$. \begin{figure}[!t] \begin{center} \includegraphics[width=0.99\columnwidth]{Plotnew/Fig7_Fermionvariance.pdf} \caption{\label{fig:Fermionvariance} Scaled variance $V_{\mathsf{m}_0}(E_0,\omega)$ of the off-diagonal matrix elements $(\mathsf{m}_0)_{\alpha\beta}$ in the spinless fermion Aubry-Andr\'e model, with a phase shift $\phi_0=0$. The systems studied have $L=22$ (black circles), 28 (red squares), and 34 (blue diamonds), and are at half filling ($N=L/2$). Results are shown for (a) $\lambda=1$ (delocalized regime), (b) $\lambda=2$ (transition point), and (c) $\lambda=10$ (localized regime). 
We consider all pairs of eigenstates with $\Delta E/L= 2\times 10^{-4}$, and average the results over frequency windows $\Delta\omega=0.2$ in (a) and (b), and $\Delta\omega=1.0$ in (c).} \end{center} \end{figure} Results for $P_{\rm nz}(|(\mathsf{m}_0)_{\alpha\beta} L|^2)$ are shown in Fig.~\ref{fig:Fermiondistribution}, at $\lambda=1$ in the delocalized regime [Fig.~\ref{fig:Fermiondistribution}(a)], at the transition point $\lambda = \lambda_c$ [Fig.~\ref{fig:Fermiondistribution}(b)], and at $\lambda=10$ in the localized regime [Fig.~\ref{fig:Fermiondistribution}(c)]. In all cases one can see that, up to fluctuations, the scaled distributions collapse for different system sizes. The PDFs exhibit a sharp peak as $(\mathsf{m}_0)_{\alpha\beta} L \rightarrow 0$ for $\lambda\leq\lambda_c$ (all of them vanish for $\lambda=0$), which broadens into a wide distribution as $\lambda$ increases beyond $\lambda_c$. Next we compute the scaled variance $V_{\mathsf{m}_0}(\bar E_0, \omega)$, defined for the fermions as in Eq.~(\ref{eq:scaledvar}) for the hard-core bosons. This is the quantity that is expected to remain finite in the thermodynamic limit. In Fig.~\ref{fig:Fermionvariance}, we plot $V_{\mathsf{m}_0}(\bar E_0, \omega)$ for the same values of $L$ and $\lambda$ as in Fig.~\ref{fig:Fermiondistribution}. The results for $V_{\mathsf{m}_0}(\bar E_0, \omega)$ in different system sizes collapse (up to fluctuations), which suggests that $V_{\mathsf{m}_0}(\bar E_0, \omega)$ is a well-defined function in the thermodynamic limit. Its functional form depends strongly on whether $\lambda$ is below or above the localization transition. One can also see in Fig.~\ref{fig:Fermionvariance} that $V_{\mathsf{m}_0}(\bar E_0, \omega)$ as a function of $\omega$ is qualitatively different from $V_{{m}_0}(0, \omega)$ as a function of $\omega$ for hard-core bosons (see Fig.~\ref{fig:PWvariance}). 
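The underlying single-particle transition at $\lambda_c=2$ is easy to reproduce numerically. The sketch below builds the single-particle Hamiltonian of Eq.~(\ref{eq:HsfAA}) and uses the mean inverse participation ratio, a standard localization diagnostic that we do not use in our figures, to distinguish the two regimes:

```python
import numpy as np

def aubry_andre_single_particle(L, lam, phi0=0.0, J=1.0):
    # Single-particle Aubry-Andre Hamiltonian with open boundary conditions.
    beta = (np.sqrt(5.0) - 1.0) / 2.0       # inverse golden mean
    sites = np.arange(1, L + 1)
    H = np.diag(lam * J * np.cos(2.0 * np.pi * beta * sites + phi0))
    H -= J * (np.eye(L, k=1) + np.eye(L, k=-1))   # nearest-neighbor hopping
    return H

def mean_ipr(H):
    # Inverse participation ratio averaged over all eigenstates:
    # ~ 1/L for extended states, O(1) for localized ones.
    _, V = np.linalg.eigh(H)
    return float(np.mean(np.sum(np.abs(V) ** 4, axis=0)))

L = 200
ipr_deloc = mean_ipr(aubry_andre_single_particle(L, lam=1.0))   # lambda < 2
ipr_loc = mean_ipr(aubry_andre_single_particle(L, lam=10.0))    # lambda > 2
```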
For noninteracting spinless fermions the variance is nonzero (and so are the off-diagonal matrix elements) for an $\omega$ range that is determined by the bandwidth of the single-particle spectrum, and no Gaussian decay occurs for large values of $\omega$. \section{Aubry-Andr\'e model for\\ hard-core bosons} \label{sec:HCBsAA} The Aubry-Andr\'e model for hard-core bosons, with open boundary conditions, can be written as \begin{equation}\label{eq:HhcbAA} \hat H^{\rm HCB}_{\rm AA}=-\sum_{i=1}^{L-1}( \hat b^\dagger_i\hat b^{}_{i+1}+{\rm H.c.})+\lambda \sum_{i=1}^L \cos(2\pi\beta i+\phi_0) \hat b^\dagger_i\hat b^{}_i\,, \end{equation} and can be mapped onto the spinless fermion Aubry-Andr\'e model in Eq.~(\ref{eq:HsfAA}). As in the previous sections, we focus on the matrix elements of the occupation of the zero quasimomentum mode $\hat m_{0}$. \begin{figure}[!t] \begin{center} \includegraphics[width=0.99\columnwidth]{Plotnew/Fig8_AANonzero.pdf} \caption{\label{fig:Bosondistribution} Comparison between the matrix elements of hard-core bosons and spinless fermions in the Aubry-Andr\'e model at $\phi_0=0$. (a) Relative difference between the off-diagonal matrix elements $\Delta_{\rm nz}$ [see Eq.~(\ref{def_delta_nz})] vs $\lambda$ for three different system sizes. (b, c) PDFs of the scaled matrix elements of spinless fermions $|L(\mathsf{m}_0)_{\alpha\beta}|^2$ (diamonds) and of hard-core bosons $|L(m_0)_{\alpha\beta}|^2$ (circles) at $\lambda=1$ and $10$, respectively, for $L=28$. We consider all pairs of eigenstates for which the matrix elements of the spinless fermions are nonzero, with $\Delta E/L= 2\times 10^{-4}$, in systems at half filling $N=L/2$. Inset in (a): $r_{\rm zero}$ [see Eq.~(\ref{def_rzero})] vs $\lambda$ for $L=20$. 
To compute this quantity we use only off-diagonal matrix elements between pairs of eigenstates for which the corresponding matrix elements of the spinless fermions are zero.} \end{center} \end{figure} As advanced in Sec.~\ref{sec:general}, we have seen that the main difference between the off-diagonal matrix elements $(m_0)_{\alpha\beta}$ of hard-core bosons in the translationally invariant model and the matrix elements $(\mathsf{m}_0)_{\alpha\beta}$ of spinless fermions in the Aubry-Andr\'e model is that the overwhelming majority of the former are nonzero. We begin our study of the off-diagonal matrix elements $(m_0)_{\alpha\beta}$ of hard-core bosons in the Aubry-Andr\'e model by computing the relative difference between those that are nonzero for spinless fermions (whose number grows polynomially in the system size) and the same matrix elements for hard-core bosons \begin{equation} \label{def_delta_nz} \Delta_{\rm nz} = \frac{\sum_{\alpha,\beta \in {\rm nz}}||(m_0)_{\alpha\beta}|^2-|(\mathsf{m}_0)_{\alpha\beta}|^2|} {\sum_{\alpha,\beta \in {\rm nz}}|(m_0)_{\alpha\beta}|^2+\sum_{\alpha,\beta \in {\rm nz}}|(\mathsf{m}_0)_{\alpha\beta}|^2} \;. \end{equation} Again, the sum over $\alpha$ and $\beta$ runs over the pairs of eigenstates for which $(\mathsf{m}_0)_{\alpha\beta}$ are nonzero ($\alpha,\beta \in {\rm nz}$). Results for $\Delta_{\rm nz}$ vs $\lambda$, for pairs of eigenstates whose average energy is at the center of spectrum, are shown in the main panel of Fig.~\ref{fig:Bosondistribution}(a) for different system sizes (for which we compute all pairs of eigenstates in the selected window). $\Delta_{\rm nz}$ can be seen to be approximately one for $\lambda< 2$, a regime in which (as for the translationally invariant case) we expect the off-diagonal matrix elements of the hard-core bosons to be dense, while the off-diagonal matrix elements of the spinless fermions are sparse. 
Because of the fixed Hilbert-Schmidt norm, their magnitude must scale differently with the system size (for the former it should be negligible when compared to the latter), and that results in $\Delta_{\rm nz} \approx 1$. For $\lambda< 2$ the off-diagonal matrix elements of hard-core bosons and spinless fermions also exhibit very different PDFs. This can be seen in Fig.~\ref{fig:Bosondistribution}(b), where we show the PDFs for $\lambda=1$ for pairs of eigenstates $\alpha$ and $\beta$ for which $(\mathsf{m}_0)_{\alpha\beta}$ are nonzero. Figure~\ref{fig:Bosondistribution}(a) also shows that for $\lambda> 2$, in the localized regime, $\Delta_{\rm nz} \to 0$ as $\lambda$ increases. Namely, the off-diagonal matrix elements of the hard-core bosons approach the values of the off-diagonal matrix elements of the fermions. As a result, one concludes that the off-diagonal matrix elements of the hard-core bosons become sparse. In this regime localization precludes $\hat m_0$ from connecting an exponentially large number of eigenstates. Figure~\ref{fig:Bosondistribution}(c) shows that in this regime, specifically for $\lambda=10$, the PDFs for hard-core bosons and spinless fermions are similar, again plotted there for pairs of eigenstates $\alpha$ and $\beta$ for which $(\mathsf{m}_0)_{\alpha\beta}$ is nonzero. \begin{figure}[!t] \begin{center} \includegraphics[width=0.99\columnwidth]{Plotnew/Fig9_AABosonvariance.pdf} \caption{\label{fig:Bosonvariance}Scaled variance $V_{m_0}(E_0,\omega)$ of the off-diagonal matrix elements $(m_0)_{\alpha\beta}$ in the hard-core boson Aubry-Andr\'e model plotted vs $\omega^2$. (a) $\lambda=1$ (delocalized regime) and (b) $\lambda=2$ (transition point). For these values of $\lambda$, we show results for systems with sizes $L=40$ (dashed lines), 50 (dashed-dotted lines), and 60 (solid lines). 
The long dashed lines are Gaussian fits to the $L=60$ results for $\omega^2\in[300,600]$, with a fitting parameter [see Eq.~(\ref{eq:gaussian})] $a=0.11$ in (a) and $a=0.08$ in (b). We randomly sample at least $10^8$ pairs of eigenstates with $\Delta E/L= 2\times 10^{-4}$. The average is carried out over at least 1000 Hamiltonian realizations with randomly selected phases $\phi_0$. (c) $\lambda=10$ (localized regime). Results are shown for systems with sizes $L=16$ (solid line), 18 (dashed line), and 20 (dashed-dotted line). For this value of $\lambda$, we consider all pairs of eigenstates with $\Delta E/L= 2\times 10^{-4}$, and average over 40 Hamiltonian realizations with randomly selected phases $\phi_0$. All calculations are carried out at half filling $N=L/2$, and the results are coarse grained using $\Delta\omega=0.05$.} \end{center} \end{figure} A complementary understanding of what happens to the off-diagonal matrix elements of the hard-core bosons as $\lambda$ increases can be gained by studying, for the hard-core bosons, the matrix elements that vanish in the spinless fermion model. To quantify their magnitude in the hard-core boson system, we calculate \begin{equation} \label{def_rzero} r_{\rm zero} = \frac{1}{||\hat m_0||^2} \sum_{\alpha,\beta \in {\rm zero}} |(m_0)_{\alpha\beta}|^2 \;, \end{equation} where the sum is carried out over pairs of eigenstates for which the corresponding spinless-fermion matrix elements vanish ($\alpha,\beta \in {\rm zero}$, the overwhelming majority of pairs of eigenstates). Results for $r_{\rm zero}$ vs $\lambda$ are shown in the inset of Fig.~\ref{fig:Bosondistribution}(a) for $L=20$. In the delocalized regime, $r_{\rm zero}$ is close to one. This is consistent with the off-diagonal matrix elements being dense. 
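Both diagnostics, $\Delta_{\rm nz}$ of Eq.~(\ref{def_delta_nz}) and $r_{\rm zero}$ of Eq.~(\ref{def_rzero}), can be sketched in a few lines, assuming the matrix elements of the two operators in the shared eigenbasis are given as square arrays. For simplicity, the sketch omits the energy-window restriction and uses illustrative toy inputs in place of actual model data:

```python
import numpy as np

def sparsity_diagnostics(m_b, m_f, tol=1e-12):
    # Delta_nz and r_zero comparing hard-core boson (m_b) and spinless
    # fermion (m_f) matrix elements in the shared eigenbasis.
    offdiag = ~np.eye(m_b.shape[0], dtype=bool)
    nz = offdiag & (np.abs(m_f) > tol)      # fermion-nonzero pairs
    zero = offdiag & (np.abs(m_f) <= tol)   # fermion-zero pairs
    b2, f2 = np.abs(m_b) ** 2, np.abs(m_f) ** 2
    delta_nz = np.sum(np.abs(b2[nz] - f2[nz])) / (np.sum(b2[nz]) + np.sum(f2[nz]))
    r_zero = np.sum(b2[zero]) / np.sum(b2)  # Hilbert-Schmidt norm in denominator
    return float(delta_nz), float(r_zero)

# Toy inputs: dense boson-like elements vs sparse fermion-like elements.
rng = np.random.default_rng(2)
D = 50
m_f = np.zeros((D, D))
rows = rng.choice(D, size=10, replace=False)
m_f[rows, (rows + 1) % D] = 0.3             # a few nonzero fermion elements
m_b = rng.normal(0.0, 0.05, size=(D, D))    # dense, individually small elements
d_nz, r_z = sparsity_diagnostics(m_b, m_f)
```

For such dense-vs-sparse toy inputs both diagnostics are close to one, as found in the delocalized regime.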
In the localized regime $r_{\rm zero} \to 0$ as $\lambda$ increases, which shows that in this regime the magnitude of those matrix elements decreases as the others (the ``nonzero'' ones) become similar to the ones of the fermions. \begin{figure}[!t] \begin{center} \includegraphics[width=0.99\columnwidth]{Plotnew/Fig10_AAvariancesmall.pdf} \caption{\label{fig:AAvariancesmall}$V_{m_0}(E_0,\omega)$ in the hard-core boson Aubry-Andr\'e model at low and intermediate frequencies $\omega$, for (a) $\lambda=1$ and (b) $\lambda=2$. Results are shown for systems with sizes $L=22$ (solid lines), 26 (dashed lines), and 30 (dashed-dotted lines). We randomly select at least $5\times 10^8$ pairs of eigenstates at $\Delta E/L= 2\times 10^{-4}$, and average over at least 5000 Hamiltonian realizations with randomly selected phases $\phi_0$. The variance is coarse grained using a frequency window $\Delta\omega=0.05$. (Insets) The same results as in the main panels but rescaled to show $V_{m_0}(E_0,\omega)/L$ vs $\omega L$. The variance is coarse grained using a frequency window $\Delta\omega=0.02$, and plotted as a running average.} \end{center} \end{figure} In Fig.~\ref{fig:Bosonvariance}, we show results for the scaled variance $V_{m_0}(E_0,\omega)$ as a function of $\omega^2$ for different system sizes and values of $\lambda$. (In order to reduce finite-size effects, in these and in the calculations that follow we carry out an average over results obtained for Aubry-Andr\'e Hamiltonians with randomly selected phases $\phi_0$.) As one might have anticipated given the results in Fig.~\ref{fig:Bosondistribution}, the results for the variance are very different in the delocalized and localized regimes. In the delocalized regime [Fig.~\ref{fig:Bosonvariance}(a)] and at the transition point [Fig.~\ref{fig:Bosonvariance}(b)], $V_{m_0}$ exhibits a Gaussian decay at high frequencies [similar to the one observed for translationally invariant hard-core bosons in Fig.~\ref{fig:PWvariance}(b)]. 
On the other hand, Fig.~\ref{fig:Bosonvariance}(c) shows that no such Gaussian decay occurs in the localized regime, similar to what happens for noninteracting fermions. For $\lambda=10$ in Fig.~\ref{fig:Bosonvariance}(c), one can see a plateau-like feature in the variance for $\omega \lesssim 20$ (the bandwidth of the single-particle spectrum is $\omega \sim 20$). This result is similar to the one for spinless fermions at the same $\lambda=10$ in Fig.~\ref{fig:Fermionvariance}(c). For $\omega \gtrsim 20$ in Fig.~\ref{fig:Bosonvariance}(c), $V_{m_0}$ exhibits a sharp drop. In contrast to the fermions, however, $V_{m_0}(\omega \gtrsim 20)$ for hard-core bosons is small but nonzero. \begin{figure}[!t] \begin{center} \includegraphics[width=0.99\columnwidth]{Plotnew/Fig11_AAscale.pdf} \caption{\label{fig:AAscale}Scaled probability density function $P(\ln|(\tilde{m}_0)_{\alpha\beta}|) \ln D$ vs $\ln |(\tilde{m}_0)_{\alpha\beta}|/\ln D$ in the hard-core boson Aubry-Andr\'e model. (a, c) Results for $\lambda=1$ and $\lambda=2$, respectively, for systems with sizes $L=80$ (solid lines), 100 (dashed lines), and 120 (dashed-dotted lines) at half filling $N=L/2$. (b, d) The symbols show the results for $L=120$ from panels (a) and (c), respectively. The solid line is a fit to the results above the horizontal dotted line [$P(\ln|(\tilde{m}_0)_{\alpha\beta}|) \ln D \geq 0.1$] using the function in Eq.~(\ref{eq:fit1}). The fitting parameters are: $A_0=5.06$, $B_0=10.08$, $k_0=11.62$, $x_0=-0.23$ for $\lambda=1$, and $A_0=9.40$, $B_0=3.50$, $k_0=10.02$, $x_0=-0.87$ for $\lambda=2$. Pairs of eigenstates are sampled randomly for $\Delta E/L= 2\times 10^{-4}$ and $\omega=7$ with $\Delta\omega=0.05$. 
We average over at least $3\times 10^6$ pairs of eigenstates, and over at least 600 Hamiltonian realizations for randomly selected phases $\phi_0$.} \end{center} \end{figure} We emphasize that we use a different numerical protocol in the calculations of the off-diagonal matrix elements in the delocalized regime and at the transition point ($\lambda \leq 2$), compared to the one in the localized regime ($\lambda > 2$). Given the dense nature of the matrix elements in the delocalized regime, for $\lambda \leq 2$ we can carry out calculations for large system sizes by randomly sampling matrix elements that belong to the target energy window. On the other hand, in the localized regime for hard-core bosons (as in any regime in the spinless fermion case), the variance is dominated by a vanishingly small fraction of the matrix elements. In those cases, we need to compute all pairs of eigenstates in the target energy window, thereby limiting the calculations to systems with sizes $L \leq 22$ for hard-core bosons. In the remainder of this section we focus on values of $\lambda \leq 2$ because those are the ones for which we expect the properties of hard-core boson matrix elements to resemble those in integrable interacting systems not mappable onto noninteracting models, such as the spin-1/2 XXZ chain. In Fig.~\ref{fig:AAvariancesmall} we show the scaled variance $V_{m_0}(E_0,\omega)$ vs $\omega$ at low and intermediate frequencies for $\lambda=1$ [Fig.~\ref{fig:AAvariancesmall}(a)] and $\lambda=2$ [Fig.~\ref{fig:AAvariancesmall}(b)]. The results at intermediate frequencies collapse for different system sizes $L$. At low frequencies $\omega\propto 1/L$, the insets show that the results collapse when plotting $V_{m_0}/L$ vs $\omega L$, as discussed before for the translationally invariant case. 
Overall, up to additional structure in the variances of the Aubry-Andr\'e case, the results in Fig.~\ref{fig:AAvariancesmall} are qualitatively similar to those reported in Fig.~\ref{fig:PWvariance}(a). The corresponding scaled PDFs are shown in Figs.~\ref{fig:AAscale}(a) and~\ref{fig:AAscale}(c) for $\lambda=1$ and 2, respectively, for different system sizes and $\omega=7$. As in the translationally invariant case, the curves collapse in the delocalized regime [$\lambda=1$ in Fig.~\ref{fig:AAscale}(a)]. The collapse worsens at the transition point [$\lambda=2$ in Fig.~\ref{fig:AAscale}(c)]. The latter finding suggests that further rescaling may be needed at $\lambda_c$, a point whose detailed investigation is postponed to future studies. In Figs.~\ref{fig:AAscale}(b) and~\ref{fig:AAscale}(d) we show that the scaled PDFs, both for $\lambda=1$ and 2, are well described by the ansatz in Eq.~(\ref{eq:fit1}) with parameters that depend on the Hamiltonian parameters. Hence, the corresponding distributions $P(|(\tilde{m}_0)_{\alpha\beta}|^2)$ are well described by generalized Gamma distributions, see Eq.~\eqref{eq:prob1}. \section{Beyond hard-core boson models} \label{sec:xxz} The main goal of this work has been the study of the PDFs of the off-diagonal matrix elements of a specific few-body operator in models of hard-core bosons that can be mapped onto noninteracting spinless fermions, for which large system sizes ($L\sim 100$) are numerically accessible. We expect those PDFs to describe the off-diagonal matrix elements of operators in integrable interacting models that are not mappable onto noninteracting models, for which full exact diagonalization studies are limited to sizes $L\sim 20$. The goal of this section is to provide evidence to support our expectation that the main result for the PDFs in the previous sections applies beyond hard-core boson models. 
Specifically, we show that the same generalized Gamma distributions that describe the distributions of off-diagonal matrix elements of the occupation of the zero quasimomentum mode of hard-core bosons describe the distribution of off-diagonal matrix elements of a local operator in the spin-1/2 XXZ model~\cite{LeBlond_Rigol_Eigenstate_20}. In Ref.~\cite{LeBlond_Rigol_Eigenstate_20}, the distributions of the off-diagonal matrix elements were reported for $\omega\rightarrow0$. In what follows, we first discuss results for hard-core bosons in the limit $\omega\rightarrow0$ before discussing the results from Ref.~\cite{LeBlond_Rigol_Eigenstate_20}. \subsection{Hard-core boson distributions for $\omega\rightarrow0$} In the previous sections, we focused on the distribution of the off-diagonal matrix elements in the intermediate frequency regime ($\omega=7$). We did this in order to avoid the low-frequency ``ballistic'' scaling of the off-diagonal matrix elements with $L$. To study the distribution of the off-diagonal matrix elements for $\omega\rightarrow 0$, we need to consider the following scaled matrix elements \begin{equation}\label{def_tilde_m02} |(\tilde{m}^*_0)_{\alpha\beta}|^2=|(m_0)_{\alpha\beta}\sqrt{D}/\sqrt{L}|^2 \;, \end{equation} which have an extra $1/\sqrt{L}$ factor when compared to $|(\tilde{m}_0)_{\alpha\beta}|$ in Eq.~(\ref{def_tilde_m0}). The scaled matrix elements $(\tilde{m}^*_0)_{\alpha\beta}$ are the ones that are $O(1)$ in the thermodynamic limit. \begin{figure}[!b] \begin{center} \includegraphics[width=0.99\columnwidth]{Plotnew/Fig12_PWomega0.pdf} \caption{\label{fig:omega0} (a) Scaled probability density function $P(\ln|(\tilde{m}^*_0)_{\alpha\beta}|) \ln D$ [see Eq.~(\ref{def_tilde_m02})] vs $\ln |(\tilde{m}^*_0)_{\alpha\beta}|/\ln D$ in the translationally invariant hard-core boson model considered in Sec.~\ref{sec:HCBstranslation}. We study eigenstates in the quasimomentum sector $\kappa = 2\pi/L$ for systems at quarter filling $N = L/4$. 
We show results for systems with sizes $L=68$ (dashed line), 84 (dashed-dotted line), and 100 (solid line). We randomly select at least $3\times 10^6$ pairs of eigenstates with $\Delta E/L= 2\times 10^{-4}$ and $\omega\in[0,0.05]$. (b) The symbols show the logarithm of the results for $L=100$ in (a), and the solid line is a fit to the points above the dotted line [$P(\ln|(\tilde{m}^*_0)_{\alpha\beta}|) \ln D \geq 0.1$] using the function in Eq.~(\ref{eq:fit1}). The fitting parameters are $A_0=4.22$, $B_0=7.25$, $k_0=7.10$, and $x_0=-0.32$.} \end{center} \end{figure} In Fig.~\ref{fig:omega0}, we show the scaled probability density function $P(\ln|(\tilde{m}^*_0)_{\alpha\beta}|) \ln D$ as a function of $\ln |(\tilde{m}^*_0)_{\alpha\beta}|/\ln D$ in the translationally invariant hard-core boson model discussed in Sec.~\ref{sec:HCBstranslation}. All the parameters used in the calculations are the same as the ones used for Fig.~\ref{fig:PWscale}, except for the frequency range $\omega\in[0,0.05]$. In Fig.~\ref{fig:omega0}(a), one can see that the curves collapse for different system sizes $L$. In Fig.~\ref{fig:omega0}(b), we fit the scaled PDF with the ansatz function from Eq.~(\ref{eq:fit1}). The outcome of the fitting agrees well with the numerical results, with fitting parameters similar to the ones obtained in Fig.~\ref{fig:PWscale}. Thus, the PDF of $|(\tilde{m}^*_0)_{\alpha\beta}|^2$ is well described by a generalized Gamma distribution, \begin{equation}\label{eq:prob2} P(|(\tilde{m}^*_0)_{\alpha\beta}|^2)=P_D |(\tilde{m}^*_0)_{\alpha\beta}|^{2(k_D-1)}\exp\left[-\alpha_D |(\tilde{m}^*_0)_{\alpha\beta}|^{2B_D}\right]\,, \end{equation} which is nothing but Eq.~(\ref{eq:prob1}) after changing $|(\tilde{m}_0)_{\alpha\beta}|^2\rightarrow|(\tilde{m}^*_0)_{\alpha\beta}|^2$.
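As a sanity check on the generalized Gamma form, the short Python sketch below (with illustrative parameter values, not the fitted ones quoted above) verifies numerically that the closed-form normalization constant $P_D=B\,\alpha^{k/B}/\Gamma(k/B)$ makes a density of the form $P(x)=P_D\, x^{k-1}e^{-\alpha x^{B}}$ integrate to one:

```python
import numpy as np
from math import gamma

def gen_gamma_pdf(x, k, B, alpha):
    """Generalized Gamma PDF P(x) = P_D * x**(k-1) * exp(-alpha * x**B),
    with the closed-form normalization P_D = B * alpha**(k/B) / Gamma(k/B)."""
    P_D = B * alpha ** (k / B) / gamma(k / B)
    return P_D * x ** (k - 1) * np.exp(-alpha * x ** B)

# illustrative parameters -- not the fitted values quoted in the text
k, B, alpha = 2.5, 0.8, 1.3
x = np.linspace(1e-8, 200.0, 1_000_000)
total = gen_gamma_pdf(x, k, B, alpha).sum() * (x[1] - x[0])
print(total)  # close to 1, confirming the normalization constant
```

Setting $B=1$ recovers the ordinary Gamma distribution and $k=B=1$ the exponential distribution, the special cases mentioned in the summary.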
An interesting property of $P(|(\tilde{m}^*_0)_{\alpha\beta}|^2)$ is that, for $|(\tilde{m}^*_0)_{\alpha\beta}|^2\rightarrow 0$ in large system sizes, $P(|(\tilde{m}^*_0)_{\alpha\beta}|^2)\propto 1/(\ln D|(\tilde{m}^*_0)_{\alpha\beta}|^2)\simeq 1/(D|(m_0)_{\alpha\beta}|^2)$, where in the last step we used that $\ln D\simeq L$. A similar result, for a fixed system size, was recently reported in Ref.~\cite{sarang21} for a nonlocal Jordan-Wigner string in the spin-1/2 XX chain. The low-frequency behavior of the matrix elements of integrability-breaking perturbations in integrable models can be used to gain an analytic understanding of the system size dependence of the onset of many-body quantum chaos~\cite{sarang21}. Our results for the full PDFs in finite system sizes, and their scaling with system size, can be used in such calculations to improve our understanding of the onset of quantum chaos. \subsection{Spin-1/2 XXZ model} \begin{figure}[!b] \begin{center} \includegraphics[width=0.99\columnwidth]{Plotnew/Fig13_XXZrescale.pdf} \caption{\label{fig:XXZ}Scaled probability density function $P(\ln|\tilde{K}^*_{\alpha\beta}|) \ln D$ vs $\ln|\tilde{K}^*_{\alpha\beta}|/\ln D$ [see Eq.~\eqref{eq:Krescale}] in the integrable spin-1/2 XXZ model. (a) $P(\ln|\tilde{K}^*_{\alpha\beta}|) \ln D$ for systems with sizes $L=20$ (dashed line), 22 (dashed-dotted line), and 24 (solid line). (b) The symbols are the results for $\ln[P(\ln|\tilde{K}^*_{\alpha\beta}|) \ln D]$ in the system with $L=24$ from (a), while the solid line is a fit to the results using the function in Eq.~(\ref{eq:fit1}). We use all the data points in the fitting, and obtain the following fitting parameters: $A_0=12.2$, $B_0=1.65$, $k_0=11.6$, and $x_0=-1.56$.
The data used for this figure are from Fig.~16 in Ref.~\cite{LeBlond_Rigol_Eigenstate_20}.} \end{center} \end{figure} With the knowledge gained so far, we are ready to revisit the results in Ref.~\cite{LeBlond_Rigol_Eigenstate_20} for the translationally invariant spin-1/2 XXZ chain, whose Hamiltonian has the form \begin{equation} \hat H_{\rm XXZ}=\sum_{j=1}^{L}\left[\frac{1}{2}(\hat S^+_{j} \hat S^-_{j+1}+{\rm H.c.}) +\Delta \hat S^z_{j}\hat S^z_{j+1}\right]\,, \end{equation} where $\hat S^{x\,(y,z)}_j$ are spin-1/2 operators in the $x$ ($y$, $z$) directions on site $j$, and $\hat S^\pm_j=\hat S^x_j\pm i\hat S^y_j$ are the corresponding ladder operators. One of the operators studied in Ref.~\cite{LeBlond_Rigol_Eigenstate_20} is the next-nearest-neighbor flip-flop operator \begin{equation} \hat K=\hat S^+_{1} \hat S^-_{3}+\hat S^+_{3} \hat S^-_{1}\,, \end{equation} and we are interested in results reported there for the matrix elements of $\hat K$ in pairs of eigenstates within the same quasimomentum sectors, specifically, in the results reported in Fig.~16 of Ref.~\cite{LeBlond_Rigol_Eigenstate_20} for $\Delta=0.55$, where the pairs of eigenstates were taken to have an average energy $|\bar E|\leq 0.025L$, and 40\,000 off-diagonal matrix elements were selected that correspond to the lowest values of $\omega$. Following Eq.~\eqref{def_tilde_m02}, we study the PDF of the scaled matrix elements \begin{equation} \label{eq:Krescale} |\tilde K^*_{\alpha\beta}|^2 = |K_{\alpha\beta}\sqrt{D}/\sqrt{L}|^2 \;, \end{equation} when reanalyzing the data from Ref.~\cite{LeBlond_Rigol_Eigenstate_20}. In Fig.~\ref{fig:XXZ}(a), we replot the data in Fig.~16 of Ref.~\cite{LeBlond_Rigol_Eigenstate_20} using the additional rescalings in Eqs.~\eqref{eq:rescm0} and~\eqref{def_Pln_rescaling}. The results for different system sizes in Fig.~\ref{fig:XXZ}(a) exhibit a good collapse (note that the system sizes are much smaller than those in Figs.~\ref{fig:PWscale} and~\ref{fig:AAscale}).
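For readers who wish to generate qualitatively similar data at small sizes, a minimal dense exact-diagonalization sketch of the XXZ Hamiltonian above might look as follows (the chain length is a toy value, far below the $L=24$ used in Ref.~\cite{LeBlond_Rigol_Eigenstate_20}, and no quasimomentum-sector resolution is implemented):

```python
import numpy as np

def xxz_hamiltonian(L, Delta):
    """Dense H_XXZ = sum_j [ (S+_j S-_{j+1} + h.c.)/2 + Delta * Sz_j Sz_{j+1} ]
    for a periodic spin-1/2 chain of L sites, in the computational (Sz) basis."""
    dim = 2 ** L
    H = np.zeros((dim, dim))
    for s in range(dim):
        for j in range(L):
            k = (j + 1) % L
            bj, bk = (s >> j) & 1, (s >> k) & 1
            H[s, s] += Delta * (bj - 0.5) * (bk - 0.5)   # Sz Sz diagonal term
            if bj != bk:                                  # flip-flop term
                H[s ^ (1 << j) ^ (1 << k), s] += 0.5
    return H

H = xxz_hamiltonian(8, Delta=0.55)   # toy size, for illustration only
energies = np.linalg.eigvalsh(H)
```

The flip-flop term connects only configurations with equal magnetization, so the matrix is block diagonal in total $S^z$; resolving translation symmetry on top of this is what makes the larger sizes in the figure reachable.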
In Fig.~\ref{fig:XXZ}(b) we compare the results for the largest system size to a fit to the ansatz in Eq.~(\ref{eq:fit1}). There is also a good agreement between the numerical results (symbols) and the fit (solid line). This suggests that the off-diagonal matrix elements of observables in integrable interacting models are generically described by generalized Gamma distributions. \section{Summary} \label{sec:summary} We studied the statistical properties of the matrix elements of few-body operators in hard-core boson models, and of noninteracting spinless fermions to which hard-core bosons can be mapped, in one-dimensional lattices. We showed, first analytically and then in numerical calculations of the model of interest, that the off-diagonal matrix elements of few-body operators in the eigenstates of noninteracting fermionic Hamiltonians are {\it sparse}, i.e., the overwhelming majority of the matrix elements vanish (the number of nonzero matrix elements scales polynomially with the system size). For hard-core bosons, on the other hand, we showed that there are few-body operators, such as the occupation of quasimomentum modes that are of experimental relevance, for which the off-diagonal matrix elements are {\it dense}, i.e., the overwhelming majority of the matrix elements are nonzero. We considered two hard-core boson Hamiltonians that can be mapped onto noninteracting spinless fermion Hamiltonians, translationally invariant hard-core bosons with nearest neighbor hoppings [Eq.~\eqref{eq:Hhcb_pw}] and the hard-core boson Aubry-Andr\'e model [Eq.~\eqref{eq:HhcbAA}].
For translationally invariant hard-core bosons and for the hard-core boson Aubry-Andr\'e model in the delocalized regime, we showed that the scaled variances of the off-diagonal matrix elements of the occupation of the zero quasimomentum mode behave as those of local operators in integrable interacting models that are not mappable onto noninteracting models, such as the spin-1/2 XXZ chain~\cite{LeBlond_Mallayya_19, Brenes_LeBlond_20, brenes_goold_20, LeBlond_Rigol_Eigenstate_20}. Namely, they exhibit a regime with a Gaussian decay in $\omega$ at high $\omega$, and a regime with a ballistic scaling when $\omega\propto1/L$. On the other hand, we found the behavior of the off-diagonal matrix elements to be completely different in the localized regime, in which the variance is strongly suppressed at frequencies beyond the single-particle bandwidth and no Gaussian decay occurs at high frequencies. The off-diagonal matrix elements also become sparse as $\lambda$ increases in that regime, and become similar to those of the noninteracting spinless fermions to which hard-core bosons can be mapped. Our first main result is the rescaling of the off-diagonal matrix elements of hard-core bosons in delocalized regimes, involving the logarithm of the Hilbert space dimension [see Eq.~\eqref{eq:rescm0}], and the corresponding rescaling of the PDFs [see Eq.~\eqref{def_Pln_rescaling}], which produce meaningful PDFs in the thermodynamic limit. The second main result is the finding that the PDFs after rescaling are well described by generalized Gamma distributions [see Eq.~\eqref{eq:prob1}]. Studying translationally invariant hard-core bosons and the hard-core boson Aubry-Andr\'e model, we showed that these distributions are robust against the breaking of translational symmetry, so long as the system does not localize. We also found that the values of the parameters in the generalized Gamma distributions depend on the Hamiltonian considered and its parameters.
Furthermore, a reanalysis of the results for the translationally invariant spin-1/2 XXZ chain in Ref.~\cite{LeBlond_Rigol_Eigenstate_20} suggests that generalized Gamma distributions generically describe the PDFs of the off-diagonal matrix elements of observables in integrable interacting models. Further studies are needed to support this conjecture and to understand why such distributions describe the off-diagonal matrix elements of observables in integrable systems. Well-known distributions that are special cases of the generalized Gamma distribution in Eq.~\eqref{eq:prob1} include the Weibull distribution ($k^*_D=B^*_D$), the Gamma distribution ($B^*_D=1$), and the exponential distribution ($k^*_D=B^*_D=1$). \section{Acknowledgments} This work was supported by the National Science Foundation, Grant No.~2012145 (Y.Z. and M.R.), and by the Slovenian Research Agency (ARRS), research core funding grants No.~P1-0044 and No.~J1-1696 (L.V.). We are grateful to Sarang Gopalakrishnan for stimulating discussions.
\section{Introduction}\label{sec:intro} Chemical vapour deposition (CVD) has become one of the most popular methods for epitaxial growth of single crystal diamond (SCD) and polycrystalline diamond (PCD) for a wide variety of applications. The basis of this technique requires dissociation of a gaseous hydrocarbon pre-cursor (\meth{}) at low pressures using a reactive \hydro{} species that is excited either using hot filament (HFCVD) \cite{Sachan2019,Narayan2021,Amaral2006,Ali2011,Barber1997,Tabakoya2019,Liang2007} or microwave plasma (MPCVD)\cite{Mandal2021,Tallaire2020,Weng2018,Cuenca2020b,Achatz2006,Benedic2001,Sedov2019,Fendrych2010}. Both HFCVD and MPCVD have their advantages and disadvantages for diamond growth. HFCVD offers easier scalability since the activation region simply depends upon the area covered by the tungsten or tantalum filaments. However, metal incorporation is possible\cite{Ohmagari2018,MehtaMenon1999}, the filament stability is challenging\cite{Okoli1991} and growth rates are moderately low compared to other methods ($\sim$\SI{1.6}{\micro\metre\per\hour}). MPCVD does not have filament issues since the plasma is formed using electromagnetic (EM) standing waves and the growth rates are much higher (>\SI{10}{\micro\metre\per\hour})\cite{Bolshakov2016}, although only over small areas making scalability much harder. This makes MPCVD particularly useful for small, millimetre scale sample growth such as SCD for quantum applications\cite{Achard2020,Mallik2016}. In MPCVD, understanding the size of the plasma activation region is of huge importance as this directly influences the deposition area and the diamond growth rate. The plasma activation region is affected by several process parameters including forward microwave power, pressure, gas flow rate and temperature in addition to physical parameters such as the sample size, sample holder geometry and of course the reactor topology. 
The effect of each may be understood empirically or through modelling approaches. Experimental data offers the greatest insight as no CVD reactor is the same as another, especially for bespoke builds. A notable example comes from Asmussen et al., who showed empirically for SCD that a recessed or `pocket' type sample holder results in less unwanted PCD growth at the sample edges \cite{Nad2016,Wu2016,Charris2017}. Significant material and machining costs are required in order to experimentally iterate geometrical adjustments. Modelling becomes extremely useful at this point and a viable approach for growth optimisation for various reactors, offering faster and cheaper insights into how modified stages, sample holders and reactor walls affect the plasma. Numerous reactor modelling studies exist to this end; the various topologies include the cylindrical TM$_{01p}$ type cavity such as the ASTEX PDS-18, Seki Diamond SDS 5200 series reactors\cite{Funer1999,Gorbachev2001,Shivkumar2016,Silva2010,Yamada2006}, the \clamshell{} type cavity such as the ARDIS-100, Carat Systems CTS6U, Seki Diamond SDS 6K style clamshell \cite{Yamada2012,Yamada2015,Li2014d,Weng2018,Sedov2020a}, the TM$_{02}$ dome style cavity as developed by Su et al.\cite{Su2014} and the ellipsoidal egg-shaped cavity such as the AIXTRON reactor\cite{Funer1999,Li2011a,Li2015}. For a comprehensive review of modelling different microwave reactor topologies, we refer the reader to Silva et al. \cite{Silva2009}. Plasma modelling is not trivial and requires significant development and experimental validation. Fortunately, in this decade a number of commercial packages exist, making this avenue more accessible. One such area which requires more attention from the modelling perspective is the sample holder design. Shivkumar et al. have contributed significant understanding of pillar type models to focus the plasma, corroborated with optical emission spectroscopy\cite{Shivkumar2016}.
A notable recent study by Sedov et al. combines modelling and experimental growths to examine the geometrical effect of recessed and pedestal type sample holders for 2" Si wafers using an E-field model, demonstrating that pedestal holders yielded higher quality diamond films when compared to a recessed holder\cite{Sedov2020}. In this work, we demonstrate a simple microwave plasma model of the \clamshell{} reactor (Seki Diamond 6K style) that can be implemented in COMSOL Multiphysics\textregistered{} for the purpose of sample holder or `puck' design. The model presented here uses a simplified \hydro{} reaction cross-section set currently accessible from the Itikawa database on lxcat.net \cite{Itikawa2009}. A simple experimental validation is achieved using a sample puck of varying height, positioning the sample closer or further away from the plasma. The model is compared with experimental characterisation of thin film diamond growths over small Si wafers ($\diameter=$ 1", $t=$ \SI{0.5}{\milli\metre}). In Section \ref{sec:theory} the EM theory is briefly discussed for \clamshell{} style reactors along with the plasma and heat transfer continuity equations for the finite element model (FEM). In Section \ref{sec:model} the modelling implementation is shown, including the boundary conditions and the modelling results. In Section \ref{sec:exp} the experimental data is presented, including plasma images from the viewports, Raman spectra and scanning electron microscopy (SEM) images of the films. \section{Theory}\label{sec:theory} \subsection{Electromagnetic field} The microwave plasma is sustained by the electric (E) field which accelerates the seed electrons to interact with the \hydro{} gas molecules. The shape and location of the microwave plasma activation region are therefore dependent upon the spatial EM field within the resonant chamber.
One of the quickest methods of modelling the plasma location is to simply calculate the EM field distribution of the resonant mode through eigenfrequency analysis\cite{Silva2010}. Analytically, these standing wave distributions are determined by deriving the Helmholtz resonator solution from the time-harmonic Maxwell’s equations: \begin{align} \nabla\cdot\vectE = & \rho_c / \varepsilon_0 \label{eq:max1} \\ \nabla\cdot\vectB = & 0 \label{eq:max2} \\ \nabla\times\vectE = & -j\mu_0\mu_r\omega \vectH \label{eq:max3}\\ \nabla\times\vectH = & j\varepsilon_0\omega\varepsilon_r \vectE \label{eq:max4} \end{align} where $\vectE$ and $\vectH$ are the electric and magnetic fields, respectively, $\varepsilon_r$ and $\mu_r$ are the relative permittivity and permeability of the medium, respectively, $\rho_c$ is the charge density, $\varepsilon_0$ and $\mu_0$ are the permittivity and permeability of vacuum, respectively, $\vectB=\mu_r \mu_0 \vectH$ and $\omega$ is the angular frequency. A Helmholtz resonator solution is obtained using vector identities. For transverse magnetic (TM) modes where $\vectH_{\it{z}}=0$ and $\vectEz$ is finite: \begin{equation} \nabla^2\vectEz + k^2\vectEz = 0 \end{equation} where $k^2=\omega^2\varepsilon_0\varepsilon_r\mu_0\mu_r$ defines the wavenumber $k$. Analytical solutions can be derived based on the coordinate system, or solved for using FEM. Microwave reactor topologies are typically cylindrical or elliptical since the standing wave patterns are based upon Bessel functions which inherently focus the $E$-field, and therefore the plasma, into the central regions of the cavity\cite{Silva2010}.
For example, in cylindrical coordinates the Helmholtz equation becomes: \begin{equation} \partB{r}{\vectEz} + \frac{1}{r}\partA{r}{\vectEz} +\frac{1}{r^2} \partB{\theta}{\vectEz} + \partB{z}{\vectEz} + k^2 \vectEz = 0 \end{equation} The radial component has solutions dependent on Bessel functions and the azimuthal and axial components have solutions based on sinusoidal functions. By imposing boundary conditions, the cylindrical TM $\vectEz$ field can be obtained: \begin{equation} \vectEz(r,\theta,z) = J_m\left(\frac{\alpha_{mn}}{a}r\right) \textrm{ cos}\left(m\theta\right) \textrm{ cos}\left(\frac{p\pi}{l}z\right) \label{eq-TMmain} \end{equation} where $J_m$ is the $m^{\rm{th}}$ order Bessel function, $a$ and $l$ are the radius and height of the cylinder, respectively, $\alpha_{mn}$ is the $n^{\rm{th}}$ root of the $m^{\rm{th}}$ order Bessel function and $p$ is the integer number of axial standing waves. One can then derive all other components of the field distribution using (\ref{eq:max1}) to (\ref{eq:max4}), however, from this equation it is clear which TM modes will be useful for an MPCVD reactor. From a practical point of view, the substrate should be placed in the centre of the reactor so as to be as far from the walls as possible to avoid any etch or re-deposition of wall contaminants. Firstly, this means that $p>0$ otherwise the absence of an axial standing wave would directly connect the plasma to the top and bottom of the cylindrical cavity walls. Secondly, in order to achieve a centralised plasma, only TM modes with $m=0$ can be used, as $J_0$ is the only Bessel function that is non-zero and finite at $r=0$. It has been shown that modes where $m>0$ can actually be used to monitor the temperature of a cavity resonator as they are less sensitive to the centre\cite{Cuenca2017,Cuenca2017a}.
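To make the mode structure concrete, for a vacuum-filled ideal cylinder the wavenumber quantisation gives $f_{0np}=\frac{c}{2\pi}\sqrt{(\alpha_{0n}/a)^2+(p\pi/l)^2}$ for the TM$_{0np}$ modes. A short Python sketch (the cavity dimensions below are illustrative assumptions, not the actual reactor geometry):

```python
import math

C0 = 299_792_458.0            # speed of light in vacuum, m/s
ALPHA_01 = 2.404825557695773  # first root of J0, i.e. alpha_{01}

def f_tm0np(a, l, alpha_0n=ALPHA_01, p=1):
    """Resonant frequency (Hz) of a TM_{0np} mode of an ideal, vacuum-filled
    cylindrical cavity of radius a and height l (both in metres)."""
    return (C0 / (2 * math.pi)) * math.sqrt((alpha_0n / a) ** 2
                                            + (p * math.pi / l) ** 2)

# illustrative dimensions only
print(f"TM011: {f_tm0np(a=0.09, l=0.10) / 1e9:.2f} GHz")
```

Increasing either the radius or the height lowers the resonance, which is the same qualitative trend exploited later when the puck geometry perturbs the cavity frequency.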
Bessel functions roughly decay proportional to $r^{-1/2}$ and with increasing root and frequency, the E-field is further compressed and concentrated into the centre of the cavity. Thus, the ideal case is to use the highest $n$ possible, although this would make a cavity with a very large radius. This defines the conventional use of cylindrical reactors based on TM$_{0np}$ modes as it places the E-field central and at either the top or bottom of the cavity. A sample holder becomes crucial to break this degeneracy by disrupting the E-field, encouraging the plasma to be localised to only one of these regions. While the E-field distribution provides a general idea as to where the plasma is to be localised, the simple EM field eigenfrequency approximation does not take into account the perturbation of the diffuse but conductive gas. The relative permittivity of the conductive gas is complex and can be modelled using the Drude-Lorentz model: \begin{align} \varepsilon_r = 1 - \frac{\omega_p^2}{\omega^2+\nu_m^2} - j\left( \frac{\omega_p^2\nu_m}{\omega(\omega^2+\nu_m^2)} \right)\\ \omega_p = \sqrt{\frac{e^2n_e}{\varepsilon_0m_e}} \end{align} where $\omega_p$ is the plasma frequency, $\nu_m$ is the electron-species collision frequency, $n_e$ is the electron density and $e$ and $m_e$ are the electron charge and mass, respectively. To introduce the metallic gas, the electron density, the microwave frequency and information on how frequently the electrons interact with the pre-cursor gas are needed. Notable methods to incorporate this are the early F\"{u}ner models, where at a defined threshold E-field value, $n_e$ would be finite and zero otherwise\cite{Shivkumar2016,Funer1999,Funer1995}.
While this method is suitable at high pressure such that the plasma ball is confined within the E-field region, it never allows the plasma to be situated in the nodal regions of the standing wave, which contradicts the large pancake plasma shapes typically found in \clamshell{} style reactors at low pressures. \subsection{Plasma fluid} To fully incorporate the nuance of pressure in larger area reactors, the fluid model is introduced which allows modelling of the collective behaviour of the electrons, ions and neutral species\cite{Yamada2011,Yamada2007,Yamada2006,Hassouni1999}. These gaseous species are initially distributed homogeneously within the cavity and several energy dependent electron-impact reactions are defined with a rate constant or reaction cross section. An example of a simplified reaction set is shown in Fig. \ref{fig-xsec}. The accelerated electrons may result in elastic scattering (e + \hydro{} $\rightarrow$ e + \hydro{}), excitation of a species (e + \hydro{} $\rightarrow$ e + \hydros{}), ionisation (e + \hydro{} $\rightarrow$ 2e + \hydrop{}) or attachment or detachment of atoms (e + \hydro{} $\rightarrow$ e + 2H). Additional reactions can also occur with these products, producing a soup of various charged and neutral species. As electrons and ions are produced, electrostatic forces and concentration gradients then result in a plasma fluid diffusing around the high E-field regions. The key advantage of the fluid approach is that the plasma has a finite density, allowing the fluid to be modelled as a function of pressure, temperature and even the gas flow velocity.
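The Drude-Lorentz permittivity introduced above is straightforward to evaluate numerically; the sketch below uses an assumed electron density and collision frequency chosen purely for illustration:

```python
import math

E_CHARGE = 1.602176634e-19     # electron charge, C
M_E = 9.1093837015e-31         # electron mass, kg
EPS0 = 8.8541878128e-12        # vacuum permittivity, F/m

def plasma_permittivity(n_e, nu_m, f):
    """Complex relative permittivity of a collisional plasma (Drude-Lorentz),
    in the e^{+j omega t} convention of the text: eps = eps' - j eps''."""
    omega = 2 * math.pi * f
    omega_p2 = E_CHARGE ** 2 * n_e / (EPS0 * M_E)   # plasma frequency squared
    real = 1.0 - omega_p2 / (omega ** 2 + nu_m ** 2)
    imag = -omega_p2 * nu_m / (omega * (omega ** 2 + nu_m ** 2))
    return complex(real, imag)

# assumed values, for illustration only
eps = plasma_permittivity(n_e=1e18, nu_m=1e10, f=2.45e9)
```

For these assumed values the real part is negative, i.e. the gas responds metallically at 2.45 GHz, while the negative imaginary part encodes the collisional microwave absorption that heats the plasma.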
Data used in this study (denoted '$\circ$') is from the Itikawa database with parameters given in the legend.} \label{fig-xsec} \end{figure} For the electrons, the fluid is modelled using continuity equations of motion and energy conservation equations. The equation of motion for the number density is: \begin{align} \partA{t}{n_e} + \nabla\cdot{\bf\Gamma}_e = R_e \label{eq:cont1a}\\ {\bf{\Gamma}}_e = -\mu_e n_e {\bf{E}}_{\rm{a}} - \nabla \left( D_e n_e \right) \label{eq:cont1b} \end{align} \JACB{where $n_e$ is the electron density, ${\bf\Gamma}_e$ is the electron flux vector, $\mu_e$ is the electron mobility, $D_e$ is the electron diffusivity, ${\bf{E}}_a$ is the ambipolar field and $R_e$ represents the electrons that are either produced or consumed during impact reactions. The first term in ${\bf\Gamma}_e$ is the drift driven by the ambipolar E-field, which is the field generated by the separation of the ions and the electrons.} The second term is associated with diffusion contributions from concentration gradients. Coupled with (\ref{eq:cont1a}) is an electron energy conservation equation of a similar form.
Energy is either lost or gained from elastic/inelastic reactions with the gaseous species, absorbed from microwave heating in the E-field or accelerated in the electrostatic ambipolar fields: \begin{align} \partA{t}{n_{\varepsilon}} + \nabla\cdot{\bf{\Gamma}}_{\varepsilon} +{\bf{E}}_{\rm{a}}\cdot{\bf{\Gamma}}_e = S_{\varepsilon} +\frac{Q_{\rm{mw}}}{e} \\ {\bf{\Gamma}}_{\varepsilon} = -\mu_{\varepsilon} n_{\varepsilon} {\bf{E}}_{\rm{a}} - \nabla \left( D_{\varepsilon} n_{\varepsilon} \right) \label{eq:cont2} \end{align} \JACB{where $n_{\varepsilon}$ is the electron energy density, ${\bf{\Gamma}}_\varepsilon$ is the electron energy flux vector, $\mu_\varepsilon$ is the electron energy mobility, $D_\varepsilon$ is the electron energy diffusivity, $S_\varepsilon$ is the energy gain or loss from impact reactions and $Q_{\rm{mw}}$ is the microwave heating of the electrons.} For the heavier gas species such as ions and neutral molecules, the continuity equations are similar to those for the electrons; however, they may include inertial terms from the background gas flow velocity (omitted in this model). For multiple reaction species (\hydro{}, \hydrop{} and \hydros{}), the continuity relation for the $i^{\rm{th}}$ species is: \begin{align} \rho_i\partA{t}{w_i} = \nabla\cdot{\bf{\Gamma}}_i + R_i \\ {\bf{\Gamma}}_i = \rho_i w_i v_d \label{eq:cont3} \end{align} where $\rho_i$ is the density, $w_i$ is the mass fraction, ${\bf\Gamma}_i$ is the species flux vector, $R_i$ represents the species that are either produced or consumed in reactions and $v_d$ is the \JAC{species diffusion velocity}.
The continuity equations for this calculation are: \begin{align} \rho_n C_p \partA{t}{T}= \nabla\cdot(k\nabla T) + Q_{\rm{mw}} \label{eq:cont4} \end{align} where $\rho_n=pM_n/RT$ is the gas density, $p$ is the pressure, $M_n$ is the mean molar mass, $R$ is the gas constant, $T$ is the temperature, $C_p$ is the heat capacity and $k$ is the thermal conductivity. \section{Modelling}\label{sec:model} \subsection{Method} \begin{figure}[t!] \centering \tikzstyle{block0} = [rectangle, fill=none, text width=16em, text centered, rounded corners, minimum height=2em] \tikzstyle{block1} = [rectangle, draw, fill=blue!20, text width=7em, text centered, rounded corners, minimum height=3em] \tikzstyle{block2} = [rectangle, draw, fill=red!20, text width=7em, text centered, rounded corners, minimum height=3em] \tikzstyle{block3} = [rectangle, draw, fill=green!20, text width=7em, text centered, rounded corners, minimum height=3em] \tikzstyle{line} = [draw, -latex'] \tikzstyle{dline} = [draw, latex'-latex'] \begin{tikzpicture}[node distance = 2cm, auto] \renewcommand{\baselinestretch}{1} \node [block0] (title) {\textbf{Finite element modelling process}}; \node [block1, below of=title, node distance=1cm] (eme) {\textbf{EM} \\ Eigenfrequency}; \node [block2, below left of=eme, node distance=2.5cm] (emf) {\textbf{EM} \\ Frequency \\ Transient}; \node [block2, below right of=eme, node distance=2.5cm] (plas) {\textbf{Plasma} \\ Frequency \\ Transient}; \node [block3, below right of=emf, node distance=2.5cm] (ht) {\textbf{Heat Transfer} \\ Transient}; \path [line] (eme) -- (emf); \path [line] (eme) -- (plas); \path [dline] (emf) -- (plas); \path [line] (emf) -- (ht); \path [line] (plas) -- (ht); \end{tikzpicture} \caption{Finite element modelling process flow using COMSOL Multiphysics\textregistered.} \label{fig-flow} \end{figure} The FEM process is split into three separate studies as shown in Fig. \ref{fig-flow}. 
First, the EM model is run where the eigenfrequencies of the geometry are calculated to ensure that the correct mode is identified (the \clamshell{} type mode where a TM$_{011}$ distribution is present in the active region of the reactor). This step is crucial for determining if any reactor modifications or the introduced sample holders shift the frequency away from the source generator frequency range. Additionally, higher order modes which are not necessary for diamond growth can also be identified. Secondly, the frequency-transient electromagnetic/plasma model is calculated at the eigenfrequency. The reactor port power is varied and provides a continuous wave to set up the EM standing wave. In this way the E-field intensity, and therefore the plasma, is power dependent. The electrons and gaseous species are distributed within the cavity and the transient response is modelled from 0 to 10 s to allow the plasma fluid to evolve to a steady state at an initial ignition power and pressure (1.5 kW at 20 mbar) or low microwave power density (MWPD). Subsequently, the MWPD is \JACB{ramped up} to growth conditions or high MWPD (5 kW at 160 mbar) over the simulated time of 1 hour to keep the solution stable. Finally, the third step calculates a heat transfer solution to obtain the gas temperature using the microwave power dissipated in the plasma. \begin{figure}[t!] \centering \includegraphics[width=\figWH\textwidth]{Figures/Fig-Model-BC} \caption{Simplified schematic of the clamshell reactor (Carat Systems CTS6U). Wall surfaces are modelled as impedance boundary conditions. The dotted line denotes axial symmetry about $r=0$. $P_i$ denotes a lumped coaxial port boundary condition. The red outflow lines denote the limits of the fluid simulation as electron and heat outlets for the plasma and heat transfer solutions, respectively. P1 denotes the location of the plasma region.} \label{fig-model} \end{figure} The cavity boundary conditions are also shown in Fig. 
\ref{fig-model}. The mesh is kept consistent in all steps, utilising a free quad mesh with boundary layers at the extremities for the plasma solution. The mesh also forces a distribution of 50 nodes across the Si sample surface to ensure a high resolution for the spatially dependent electron densities. For the first EM eigenfrequency model, the chamber domain is assumed to be vacuum and the walls are all modelled as metallic impedance boundary conditions ($\sigma>10^7$ S/m). To reduce computation time in the plasma model, the domain is confined to the centre of the reactor, marked by the outflow/electron outlet conditions in Fig. \ref{fig-model}. The gas pressure of this domain is varied from 20 to 160 mbar. For the plasma model, all walls are defined as grounds with additional surface reactions for excited species to relax to neutral species (\hydros{} $\rightarrow$ \hydro{}, \hydrop{} $\rightarrow$ \hydro{}). The cross-section reactions for the gaseous species adopt a simplified hydrogen plasma chemistry similar to that of Yamada et al. \cite{Yamada2006} to reduce computation time, with a dataset obtained from the Itikawa database\cite{Itikawa2009} (available on lxcat.net). The reactor port $P_i$ is defined as a lumped port with excitation varying from 1.5 to 5 kW. For the heat transfer model, heat flux boundary conditions are imposed at the extremities to ensure that the sample holder stage is at the correct temperature of approximately \SI{800}{\celsius}. The heat flux boundaries simulate the cooling of the reactor using a fixed heat transfer coefficient for all sample holder heights (a modelled cooling coefficient of \SI{850}{\watt\per\square\metre\per\kelvin} for all external boundaries with an ambient temperature of \SI{25}{\celsius}). The sample is modelled as a Si wafer ($\diameter=$ 1", $t=$ \SI{0.5}{\milli\metre}) and is positioned on top of the Mo sample holder puck ($\diameter_{\rm{puck}}=40$ to \SI{60}{\milli\metre}, $h_{\rm{puck}}=$ 1 to \SI{20}{\milli\metre}).
The corners of the sample are rounded with a radius of \SI{0.1}{\milli\metre} and the Mo holder rounded with a radius of \SI{0.2}{\milli\metre} to ensure a high mesh density at the anticipated high E-field regions and avoid convergence errors. \subsection{Electromagnetic model} \begin{figure}[t!] \centering \includegraphics[width=\figWH\textwidth]{Figures/Fig-Model-EM} \caption{The base EM eigenfrequency model without the Mo sample holder puck. The E-field is normalised to the maximum value. For comparison and identification of the correct mode, the ideal cylindrical case of the TM$_{011}$ mode (left side) and the realistic \clamshell{} design (right side) are shown.} \label{fig-em} \end{figure} Figure \ref{fig-em} shows the EM eigenfrequency model with the E-field distribution of an ideal TM$_{011}$ mode and the \clamshell{} reactor without the Mo sample puck. The \clamshell{} reactor mode shows a reasonably similar E-field distribution to the ideal case, demonstrating that the correct mode has been identified. In this mode, there is an E-field node separating the central region and a side lobe. High intensity E-field regions at the edge of the stage are found where a secondary plasma can also be sustained. Figure \ref{fig-freq} shows how the calculated resonant frequency of the EM model is perturbed to lower frequencies as a Mo sample puck is introduced. With the current dimensions, the initial unperturbed resonant frequency is calculated at approximately 2.45 GHz \JACB{with a -3 dB bandwidth of $\sim$\SI{87}{\mega\hertz}}. The results show that wider pucks have less of a frequency shift, while taller pucks alter the resonant frequency by as much as $\Delta f\approx$ -66 MHz at \hpuck{} = 20 mm. This demonstrates that shorter sample pucks are more favourable for stable operation with a magnetron with a fixed frequency output. 
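The direction of this shift is consistent with standard cavity perturbation theory. As a sketch (Slater's perturbation theorem, which is not invoked explicitly in the modelling here), an inward deformation $\Delta V$ of a conducting boundary shifts the resonance by

```latex
\begin{equation}
\frac{\Delta f}{f_0} \simeq
\frac{\int_{\Delta V}\left(\mu_0\lvert\mathbf{H}\rvert^2
      - \varepsilon_0\lvert\mathbf{E}\rvert^2\right)\mathrm{d}V}
     {\int_{V}\left(\mu_0\lvert\mathbf{H}\rvert^2
      + \varepsilon_0\lvert\mathbf{E}\rvert^2\right)\mathrm{d}V},
\end{equation}
```

so a conducting puck occupying a region where the electric field dominates lowers $f_0$, and a taller puck, which excludes more high-$E$ volume near the stage, produces a larger downward shift, in line with the $-66$~MHz calculated at \hpuck{} = 20 mm.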
\JACB{Interestingly, this is less of a problem for solid state sources, since the signal generator frequency can be varied.} After introducing the puck, the E-field distribution of the EM model is shown in Fig. \ref{fig-main}(a). The E-field is perturbed and high intensity regions occur at the corners of the puck. This follows from Maxwell's equations (\ref{eq:max1}) and (\ref{eq-TMmain}): the $E_z$ field lines should be perpendicular to the Cu stage, and the introduction of a metal object creates parallel surfaces that disrupt this condition, resulting in a reconfiguration of surface currents. With increasing sample holder puck height, the E-field, and therefore the plasma activation region, is concentrated towards the edges of the holder. The nodal regions of the E-field are also clearly visible on either side of the puck where, based on the threshold modelling approach, the plasma could not exist; or, if it did, the plasma could exist in multiple regions of the cavity. It is also noted that with increasing sample holder height, the E-field hot spots at the top of the chamber reduce in intensity, decreasing the likelihood of a secondary plasma igniting in these regions. \begin{figure}[t!] \centering \includegraphics[width=\figWH\textwidth]{Figures/Fig-Model-FREQ} \caption{Modelled shift in resonant frequency caused by the Mo sample holder puck of varying dimensions; \hpuck{} and \dpuck{} denote the height and diameter, respectively. The sample holder has a Si wafer sample on top ($\diameter=$ 1", $t=$ \SI{0.5}{\milli\metre}). } \label{fig-freq} \end{figure} \subsection{Plasma Model} The plasma fluid models are shown in Fig. \ref{fig-main}(b) and \ref{fig-main}(c) and clearly demonstrate a focused electron density in the sample region. The calculated electron densities are as high as \SI{2.2e17}{\per\metre\cubed} and \SI{10e17}{\per\metre\cubed} in the low and high MWPD models, respectively.
For a drive frequency of \SI{2.45}{\giga\hertz}, the critical density at which the plasma frequency $\omega_p$ equals the drive frequency is $n_c =$ \SI{7.45e16}{\per\metre\cubed}; at the modelled electron densities the microwaves are therefore unable to propagate freely through the plasma and are attenuated, depositing microwave power into the plasma\cite{Silva2009}. Note that in the low MWPD solution the electron density distribution is much wider than the E-field result, demonstrating that simple EM solutions are potentially less appropriate for modelling low pressure plasmas. \JACB{Although PCD diamond growth with low non-diamond carbon impurities typically occurs at high MWPD, the lower pressure solutions are an imperative result for several purposes. The first is that some applications involve nano-crystalline diamond (NCD) and ultra-nanocrystalline diamond (UNCD), which are typically grown at lower power densities\cite{Cuenca2019,Williams2007,Sankaran2018}, as well as hybrid graphene-diamond films\cite{Carvalho2016}. The second is to ensure that the plasma can actually be ignited at the right place in the chamber.} Finally, investigating large area growth using lower pressures would be challenging using an EM solution alone. At low MWPD and at large heights of 15 to 20 mm, the sample is pushed further into the plasma and the fluid moves towards the electron outlets at the side of the stage. This is not favourable, as the risk of the plasma being pushed below the stage, towards the quartz ring region where the microwaves enter, is much greater. At high MWPD, the plasma becomes the familiar elliptical shape situated over the sample with a smaller area and a much higher electron density ($n_e\sim\SI{10e17}{\per\metre\cubed}$), similar to the electron densities found in previous models of different reactors at growth conditions\cite{Yamada2007,Kelly2012,Hassouni1999}.
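The critical density quoted above follows from equating the plasma frequency to the drive frequency, $n_c = \varepsilon_0 m_e \omega^2 / e^2$; a quick numerical check (standard constants, our own sketch):

```python
import math

EPS0 = 8.8541878128e-12   # F/m, vacuum permittivity
M_E = 9.1093837015e-31    # kg, electron mass
Q_E = 1.602176634e-19     # C, elementary charge

def critical_density(f_hz):
    """Electron density at which the plasma frequency equals f_hz.

    Above n_c the microwaves can no longer propagate freely through
    the plasma and are attenuated, depositing their power in it.
    """
    omega = 2.0 * math.pi * f_hz
    return EPS0 * M_E * omega**2 / Q_E**2

n_c = critical_density(2.45e9)   # ~7.45e16 m^-3, as quoted in the text
```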
At high MWPD, taller Mo sample pucks result in the plasma focusing further towards the edges, which will inherently affect the spatial CVD diamond growth rate across the sample. \subsection{Heat Transfer Model} \begin{figure*}[t!] \centering \includegraphics[width=\figW\textwidth]{Figures/Fig-Model-EIGEN} \includegraphics[width=\figW\textwidth]{Figures/Fig-Model-MWPD-L} \includegraphics[width=\figW\textwidth]{Figures/Fig-Model-MWPD-H} \includegraphics[width=\figW\textwidth]{Figures/Fig-Model-TEMP-H} \caption{Microwave hydrogen plasma model using the simplified eigenfrequency, plasma coupled and heat transfer solution approach for varying sample holder heights (\hpuck{} = 5 to 20 mm) with a Si wafer on top ($\diameter=$ 1", $t=$ \SI{0.5}{\milli\metre}). Model uses cross-section data from the Itikawa database, available on lxcat.net\cite{Itikawa2009}.} \label{fig-main} \end{figure*} The heat transfer solution at high MWPD shows that the temperature of the gas reaches several tens of thousands of Kelvin. Although these values are much higher than those reported by Shivkumar et al. in cylindrical TM$_{01p}$ type reactors ($\sim$2,500 K) \cite{Shivkumar2016}, in that study the modelled MWPD was lower (700 W at 30 Torr), which cannot be easily sustained in the \clamshell{} reactor. The gas temperature is higher at shallower puck heights and decreases significantly as the puck is pushed into the plasma. This is likely because the sample puck is in direct contact with the heavily cooled Cu stage underneath, and an increasing volume of Mo increases the thermal mass of the puck, thereby reducing the temperature. Figure \ref{fig-TevsT} shows the averaged gas and electron temperatures over the ellipse drawn over the plasma region (defined as P1 in Fig. \ref{fig-model}). Here, it becomes clear that at low MWPD the plasma is not at thermodynamic equilibrium, as the electron temperature ($T_e$) is much higher than the background gas temperature ($T_g$).
Increasing the gas pressure increases the number of electron-hydrogen collisions, which increases $T_g$ and reduces $T_e$. At high MWPD, for a modelled central substrate temperature of approximately \SI{800}{\celsius}, the plasma tends towards a collisional plasma condition. \begin{figure}[t!] \centering \includegraphics[width=\figWH\textwidth]{Figures/Fig-Model-TevsT} \caption{Average gas temperature (solid) and electron temperature (dotted) over the plasma fluid domain region (P1 in Fig. \ref{fig-model}) at different sample holder heights.} \label{fig-TevsT} \end{figure} \section{Experiment\label{sec:exp}} \subsection{Experimental method} To demonstrate the overall effect of the sample holder height on the CVD diamond process, three Mo pucks were machined ($\diameter=$ \SI{40}{\milli\metre}, $h_{\rm{puck}}=$ 5, 10 and \SI{15}{\milli\metre}) and used for thin film growths (approximately \SI{1}{\micro\metre} thick) on small Si wafers ($\diameter=$ 1'', $t=$ \SI{0.5}{\milli\metre}). It is worth noting that a fourth puck ($h_{\rm{puck}} = 20$ mm) was also machined; however, stable plasma ignition was not possible. This is likely due to the holder significantly perturbing the resonant frequency of the chamber. The Si wafers were seeded using the ultrasonic seeding process\cite{Williams2011, Mandal2021a}. Briefly, the wafers were solvent cleaned and placed in a nanodiamond colloidal solution with particles of positive zeta potential whilst under ultrasonic agitation for 10 minutes. The samples were then rinsed in deionised water, dried using an air gun and placed on top of the sample holder inside \JACB{a \clamshell{} style reactor for CVD diamond growth}. Samples were grown using a \meth{}/\hydro{} gas mixture with a \meth{} concentration of 3\% in a total flow rate of 300 sccm at a forward microwave power of 5 kW at 160 mbar. The total growth time for all samples was fixed at 30 minutes, followed by a 5 minute cool down ramp to 1.5 kW at 27 mbar.
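The gas-mixture bookkeeping for the recipe above is straightforward; a trivial sketch, assuming the 3\% \meth{} concentration is a fraction of the total flow:

```python
# Gas mixture used for growth: 3% CH4 in a total flow of 300 sccm.
def mix_flows(total_sccm, ch4_fraction):
    """Return (CH4, H2) flow rates in sccm for a two-gas CH4/H2 mixture."""
    ch4 = total_sccm * ch4_fraction
    return ch4, total_sccm - ch4

ch4_sccm, h2_sccm = mix_flows(300.0, 0.03)   # 9 sccm CH4, 291 sccm H2
```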
The temperature at growth MWPD was monitored using a Williamson dual wavelength pyrometer (DWF-24-36C), giving initial measured readings of 760, 790 and \SI{780}{\celsius} for the 5, 10 and 15 mm growths, respectively. Two samples for each molybdenum puck height were grown at different reactor usage times. After growth, the samples were examined using Raman spectroscopy and scanning electron microscopy (SEM). Raman spectroscopy was conducted using a Horiba LabRAM HR Evolution with a green laser ($\lambda=$ \SI{532}{\nano\metre}) and a grating of 600 lines per millimetre, scanning from 200 to \SI{2000}{\per\centi\metre} to allow for sensitivity to both the diamond and non-diamond carbon content. Line scans were taken at points across the samples ($N$ = 20 points over \SI{22}{\milli\metre}). SEM images were obtained using a Hitachi SU8200 (\SI{10}{\kilo\volt} at \SI{10}{\micro\ampere}) with a working distance of \SI{8}{\milli\metre}. \subsection{Microwave Plasma} \begin{figure}[t!] \centering \includegraphics[height=2.5cm]{Figures/Fig-Photo-05-L} \includegraphics[height=2.5cm]{Figures/Fig-Photo-05-H}\\ \includegraphics[height=2.5cm]{Figures/Fig-Photo-15-L} \includegraphics[height=2.5cm]{Figures/Fig-Photo-15-H} \caption{Photographs of the microwave \hydro{}/\meth{} plasma at low (left column) and high (right column) MWPD for sample puck heights of 5 mm (top row) and 15 mm (bottom row), respectively. Low MWPD plasma is at 1.5 kW at 27 mbar, while high MWPD is 5 kW at 160 mbar with 3\% \meth{} in a total flow rate of 300 sccm. Images were taken from the side viewport of the Carat Systems CTS6U using an Apple iPhone 7.} \label{fig:photo} \end{figure} Figure \ref{fig:photo} shows typical images of a microwave \hydro{}/\meth{} plasma at ignition and at the point of reaching diamond growth conditions with different puck heights in the \clamshell{} reactor.
At low power density the plasma emits a purple glow associated with the combined emissions of the H$_\alpha$ ($\sim$657 nm), H$_\beta$ ($\sim$486 nm) and H$_\gamma$ ($\sim$437 nm) lines\cite{Hemawan2015}. As the MWPD is increased, the plasma becomes the familiar green ellipsoid over the sample holder, characteristic of C$_2$ emission from the small \meth{} concentration. The images show that the low pressure plasma extends well beyond the extent of the puck, into the regions where the EM E-field nodes are calculated to be. The edges of the 15 mm puck are also much brighter compared to the 5 mm puck, owing to the E-field focusing. The increase in MWPD results in a smaller, focused ellipsoid above the puck, with the taller 15 mm puck distorting the shape of the plasma's bottom edge. \begin{figure*}[t!] \setlength{\tabcolsep}{2pt} \renewcommand{\arraystretch}{1} \centering \begin{tabular}{>{\centering}m{0.5in} c c c c c c} {} & \bf{centre} & & & & & \bf{edge}\\ {} & 0 mm & 2.5 mm & 5 mm & 7.5 mm & 10 mm & 12.5 mm \\ 5 mm & \imSEM{Fig-SEM-05-000.png} & \imSEM{Fig-SEM-05-025.png} & \imSEM{Fig-SEM-05-050.png} & \imSEM{Fig-SEM-05-075.png} & \imSEM{Fig-SEM-05-100.png} & \imSEM{Fig-SEM-05-125.png} \\ 10 mm & \imSEM{Fig-SEM-10-000.png} & \imSEM{Fig-SEM-10-025.png} & \imSEM{Fig-SEM-10-050.png} & \imSEM{Fig-SEM-10-075.png} & \imSEM{Fig-SEM-10-100.png} & \imSEM{Fig-SEM-10-125.png} \\ 15 mm & \imSEM{Fig-SEM-15-000.png} & \imSEM{Fig-SEM-15-025.png} & \imSEM{Fig-SEM-15-050.png} & \imSEM{Fig-SEM-15-075.png} & \imSEM{Fig-SEM-15-100.png} & \imSEM{Fig-SEM-15-125bar.png} \\ \end{tabular} \caption{SEM images of the CVD diamond films grown at varying Mo sample holder heights (5, 10 and \SI{15}{\milli\metre}). Images are taken at fixed radial distances from the centre of the wafer to the edge. Scale bar represents a \SI{5}{\micro\metre} length.} \label{fig-sem} \end{figure*} \subsection{Scanning Electron Microscopy} SEM images of the films are shown in Fig.
\ref{fig-sem} at incremental regions from the centre towards the edge of the 1" Si wafer. These images show clear radial variations in growth rate depending on the height of the sample puck. Starting with the film grown at a height of 5 mm, the centre of the film (radial position 0 mm) shows a broad grain size distribution, with grains as large as $\sim$\SI{1}{\micro\metre} down to the nanoscale. At 2.5 mm, these large micron-sized grains are still apparent, though far fewer in number, and a larger fraction of nanoscale grains is found. Moving further outwards, the micron-sized grains begin to disappear and only nanoscale grain texturing is observed. Finally, reaching the edges of the film, the grain size suddenly increases, showing significant growth of a microcrystalline film. Next, for the film grown at a height of 10 mm, there is minimal radial grain size variation from 0 up to 10 mm, showing similarly large micron-sized grains to the 5 mm sample, except that the film has coalesced with fewer nanoscale grains. However, at the edge of the film there is again a clear jump in growth rate, with evidence of much larger grains. Finally, for the film grown at a height of 15 mm, the centre of the film also shows minimal radial variation from 0 up to 10 mm, with the exception of a much smaller average grain size. In a similar fashion to the previous samples, the growth rate towards the edges of the film is much faster and much larger grains are found. \subsection{Raman Spectroscopy} Raman spectra at the centre of the wafers of the thin diamond films grown at different Mo puck heights are shown in Fig. \ref{fig-ram} (a). The high intensity peak at \SI{520}{\per\centi\metre} and the band at approximately \SI{950}{\per\centi\metre} are attributed to the first and second order bands of Si\cite{Prawer2004,Sedov2019}. Since the growth time is fairly short, the diamond films are fairly thin; the contribution of the Si substrate is therefore large.
The sharp peak at \SI{1332}{\per\centi\metre} is attributed to the first order \dia{} carbon peak, a signature characteristic of diamond\cite{Ramaswamy1930,Bhagavantam1930,Prawer2004,Knight1989a,Ayres2017}. Alongside this diamond peak are several broad bands associated with various non-diamond carbon impurities. The weak band at approximately \SI{1420}{\per\centi\metre} and the even weaker band at approximately \SI{1120}{\per\centi\metre} are both attributed to trans-polyacetylene (tPA), commonly found in CVD diamond Raman spectra, although they are only dominant at particularly small grain sizes, such as in nanocrystalline diamond (NCD)\cite{Sankaran2012a,Sankaran2018,Ferrari2000}. The broad band at approximately 1310 to 1340 cm$^{-1}$ is attributed to the $A_{1g}$ breathing mode of aromatic \spt{} carbon rings, while the peak at around 1580 to 1610 cm$^{-1}$ is attributed to the $E_{2g}$ bond stretching mode in \spt{} carbon\cite{Ferrari2000}. The low intensity of these non-diamond carbon signatures compared to the \dia{} peak at a laser excitation wavelength of 532 nm implies a low non-diamond carbon impurity concentration; in heavily \spt{} incorporated films, these bands often dominate the \dia{} peak\cite{Cuenca2019,Williams2011}. The line scans of the d/G ratio in Fig. \ref{fig-ram} (b) show that there is a significant variation in the diamond growth across the 1" Si wafer. For all samples, the \dia{}/\spt{} peak ratio is much higher at the edges of the sample compared to the centre. This implies a much faster growth rate at the edges of the sample, which is corroborated in the plasma model by the focusing effect at the sample's edges, and therefore an increased plasma and reaction density. At a radial position of approximately \SI{10}{\milli\metre} from the centre, the \dia{}/\spt{} peak ratio rapidly decreases for all sample heights.
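The d/G intensity ratio used for these line scans can be extracted with a short script; a minimal sketch, assuming background-subtracted spectra and the band windows quoted above (the exact windows used in the analysis are our assumption):

```python
def band_intensity(shift_cm, counts, lo, hi):
    """Maximum counts within a Raman-shift window (background-subtracted)."""
    return max(c for s, c in zip(shift_cm, counts) if lo <= s <= hi)

def d_over_g(shift_cm, counts):
    """Ratio of the sp3 diamond peak (~1332 cm^-1) to the sp2 G band (~1580-1610 cm^-1)."""
    d = band_intensity(shift_cm, counts, 1325, 1340)
    g = band_intensity(shift_cm, counts, 1570, 1620)
    return d / g
```

Applying this point-by-point along a line scan yields the radial \dia{}/\spt{} profile plotted in Fig. \ref{fig-ram}(b).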
For the film grown at a height of 5 mm, almost no diamond peak is found whereas for the films grown at heights of 10 and 15 mm, the \dia{}/\spt{} ratio is fairly similar towards the centre of the film. At the centre, a noticeable hump in the \dia{}/\spt{} ratio is found, which gradually disappears with increasing puck height. This variation is again caused by the variation in the plasma density across the sample; as shown in Fig. \ref{fig-model} the plasma tends towards the classic central ellipsoidal shape at shallow heights and pushes towards the edges for taller pucks at high MWPD. \begin{figure}[t!] \centering \includegraphics[width=\figWHH\textwidth]{Figures/Fig-Raman-lines} \caption{Raman spectroscopy of CVD diamond on 1'' Si wafers grown at holder heights of 5 mm, 10 mm and 15 mm for 30 minutes (5 kW 160 mbar). (a) shows the spectra at the centre of the wafer. The labels `d', `D', `G' and `tPA' denote the contributions from the \dia{} carbon peak in diamond, the D and G bands of \spt{} carbon and trans-polyacetylene, respectively. (b) shows a line scan of the intensity ratio of 'd' / 'G', or the implied \dia{} / \spt{} or non-diamond carbon ratio. \JACB{(c) shows the temperature recorded by the pyrometer, where the inset shows a zoom in at the start point. Pyrometer lower limit is 315 $^\circ$C.}} \label{fig-ram} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\figWH\textwidth]{Figures/Fig-Model-Lines} \caption{Model line scans over the Si sample surface of (a) the E-field magnitude in the EM model, (b) the plasma electron density and (c) the substrate temperature.} \label{fig-model-lines} \end{figure} \section{Discussion} It is clear from the modelling and experimental results that the Mo sample puck height has a considerable effect on the plasma shape and therefore the spatial diamond growth rate across a 1" Si wafer. For comparison between the model and the experiment, Fig. 
\ref{fig-model-lines} shows line scans of the E-field magnitude, plasma electron density and substrate temperature across the sample from the EM, plasma and heat transfer solutions, respectively. Clear correlations and limitations in the model are identified when compared with the SEM and Raman data. For the EM model, the E-field distribution only shows the high intensity regions at the edges of the sample and almost no sign of the broad hump in the centre. This highlights a limitation of the E-field modelling approach in predicting growth rate variations across small samples. However, the plasma fluid model shows that there is a clear bump at the centre of the film for the 5 mm puck height, which becomes increasingly flat as the puck height is increased. The electron density alone, however, implies that the growth rate would be much faster for a shallower puck than for a taller one, which does not appear to be the case from the Raman line scan spectra and the SEM images. The reason is likely the temperature at the sample surface during growth. The experimentally measured temperature is shown in Fig. \ref{fig-ram} (c), where the temperature is marginally lower for the 5 mm puck ($\sim$\SI{760}{\celsius}) when compared to the 10 mm ($\sim$\SI{790}{\celsius}) and 15 mm ($\sim$\SI{780}{\celsius}) pucks. In fact, the films with the most consistent quality are those grown on the 10 mm puck, with growth temperatures closer to the typical CVD diamond growth temperature of $\sim$\SI{800}{\celsius}. Thus, the electron density plasma model is only representative of some spatial growth variation but is less accurate when used to compare between holders. The heat transfer solution provides better insight, as shown in Fig. \ref{fig-model-lines}, where the temperature line scans show both the radial variation in temperature and that the 10 mm puck has a higher growth temperature than the others, thereby producing a better quality film.
The result from this experiment is that shallower pucks are heavily cooled by the stage, which reduces the substrate temperature for a given process growth condition. As the height of the puck increases, the sample is pushed into the plasma and the substrate temperature increases. However, if the puck is too tall, the plasma is strongly perturbed and focuses towards the edges, away from the sample substrate. Additionally, since a larger volume of Mo is in contact with the cooled stage, $T_g$ is reduced, and therefore so is the substrate temperature. This model demonstrates that if the user's goal is to optimise spatial homogeneity across a sample, then monitoring the spatial temperature across the sample during growth takes precedence. \section{Conclusion} Modelling of microwave hydrogen plasmas can offer simple and cost effective insights into how sample holder designs affect MPCVD processes. This work demonstrates that EM eigenfrequency models show how sample holder pucks of different height and diameter can perturb the reactor frequency; in this instance, taller pucks have a much larger effect on frequency. Additionally, strict EM modelling approaches have some limitations at both low and high MWPD: they are generally useful for identifying possible regions for the plasma to spark, but do not necessarily describe the plasma shape. Fully coupled EM/plasma models offer a better description, capturing the power- and pressure-dependent plasma size and electron density. Using multi-physics coupling of EM, plasma and heat transfer solutions, the spatial variation in diamond growth can be estimated through variations in the substrate temperature. \section{Acknowledgements} This project has been supported by the Engineering and Physical Sciences Research Council (EPSRC) under the GaN-DaME program grant (EP/P00945X/1) and the European Research Council (ERC) Consolidator Grant under the SUPERNEMS Project (647471).
SEM was carried out in the cleanroom of the ERDF-funded Institute for Compound Semiconductors (ICS) at Cardiff University. \bibliographystyle{elsarticle-num-nourl}
\section{Introduction} The continuous miniaturization of the field-effect transistor (FET) has enabled the fabrication of increasingly powerful circuits on a single microchip. The performance of traditional planar FETs drops significantly as the source-drain separation is pushed below 50~nm due to short channel effects~\cite{Ferain_Nat_2011}. These short channel effects can be mitigated by improving the gate coupling~\cite{Ferain_Nat_2011, Park_IEEE_2002, Okano_IEEE_2005, Cho_CD_2004}. This led to the development of fin-FET devices with gates interfacing the channel from three sides~\cite{Ferain_Nat_2011, Park_IEEE_2002, Okano_IEEE_2005, Cho_CD_2004, Chau_NM_2007}. Optimal gate coupling is obtained from gate structures that enclose the transistor channel from all sides~\cite{Ferain_Nat_2011, Leobandung_Vac_1997, Colinge_IEEE_1990}. However, these structures can be difficult to make using the conventional top-down fabrication techniques employed for planar devices.~\cite{Leobandung_Vac_1997, Colinge_IEEE_1990, Singh_IEEE_2006}. A bottom-up approach exploiting self-assembled nanowires offers a simpler pathway to fully conformal `gate-all-around' structures~\cite{Samuelson_MT_2003}. These nanowires stand vertically, enabling a conformal coating of gate oxide and gate metal to be applied in a straightforward way~\cite{Tanaka_APE_2010, Bryllert_IEEE_2006, Ng_NL_2004,Storm_NL_2011, Burke_NL_2015}. The nanowires can be processed into vertical FET arrays on the growth substrate~\cite{Tanaka_APE_2010, Bryllert_IEEE_2006, Ng_NL_2004} or can be transferred to a separate device substrate to create horizontal devices~\cite{Storm_NL_2011, Burke_NL_2015}. Vertical nanowire arrays have achieved near thermal-limit subthreshold swings~\cite{Bryllert_DRCD_2005}, integration of III-Vs on Si~\cite{Tomioka_Nat_2012}, and continue down a road towards practical applications~\cite{Riel_MRS_2014}. 
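The benefit of a fully enclosing gate can be made concrete with the textbook coaxial estimate of the gate capacitance (our notation, not from the text: $r_{\mathrm{nw}}$ is the nanowire radius, $t_{\mathrm{ox}}$ the gate-oxide thickness and $L_{\mathrm{g}}$ the gate length):

```latex
\begin{equation}
C_{\mathrm{g}} \approx
\frac{2\pi \varepsilon_{\mathrm{ox}} \varepsilon_0 L_{\mathrm{g}}}
     {\ln\!\left[(r_{\mathrm{nw}} + t_{\mathrm{ox}})/r_{\mathrm{nw}}\right]},
\end{equation}
```

which is the maximum capacitance per unit gate length attainable for a cylindrical channel; planar and fin geometries couple to only part of the channel circumference and fall below this limit.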
The vertical orientation has also seen work to incorporate heterostructured nanowires towards high-performance tunnel-FETs~\cite{Memisevic_IEEE_2017, Memisevic_IEEE_2018} and to produce multiple independent `wrap gates'~\cite{Li_IEEE_2011}. Turning to horizontal wrap-gated nanowire transistors, these are of interest for basic research devices, e.g., quantum electronics, but also as a possible complement to vertical transistors in 3D-integrated circuits~\cite{Ferry_Sci_2008}. Fabrication of multiple independent wrap-gates has also been achieved in the horizontal orientation, giving a significant advantage in scalability over the vertical orientation~\cite{Burke_NL_2015}. A major limitation of horizontal wrap-gate nanowire transistors~\cite{Storm_NL_2011, Burke_NL_2015} is that the gate length is defined by wet-etching. This limits control and restricts the minimum achievable gate length~\cite{Burke_NL_2015}. Shorter gates are also intrinsically of lower quality in this instance because unintentional overetching introduces `mouse-bite' defects---small holes in the gate metal and oxide that compromise both performance and yield~\cite{Burke_NL_2015}. Burke \textit{et al.}~\cite{Burke_NL_2015} found that the shortest gate length that could be reliably achieved was $\sim$300~nm. The minimum gate length is important for electronics applications and basic research. For industrial applications, the gate length governs the density of devices on a microchip. For research studies, sub-200~nm gates are desirable for nanowire quantum devices, e.g., gate-defined quantum dots~\cite{Pfund_APL_2006, Fasth_NL_2007} and nanowire quantum-point contacts~\cite{Abay_NL_2013, Heedt_NL_2016a, Heedt_NL_2016b, Saldana_NL_2018}. \begin{figure*} \centering \includegraphics[width=1\textwidth]{Figure_1_trenches} \caption{Illustration of the nanowire alignment procedure. (a) Arrays of 30~nm thick bottom gates were patterned by electron-beam lithography (EBL).
(b) and (c) 300~nm wide trenches were defined in $\sim$300~nm thick EBL resist. Nanowires were deposited on top of the resist, covered with a drop of isopropyl alcohol and then brushed into the trenches~\cite{Lim_Small_2010, Lard_NL_2014}. (d) The nanowires remain aligned perpendicular to the bottom gates after removing the resist with acetone. Scanning-electron microscopy image (e) of a nanowire inside a resist trench and (f) a different nanowire after the resist is removed. (g) Dark-field optical microscopy image of nanowires aligned to bottom gates. \label{fig:F1_align}} \end{figure*} Here, we introduce a fabrication process for horizontal nanowire FETs with multiple gate-all-around structures that maintain the advantages of the horizontal orientation while overcoming the limitations arising from wet etching. The gate length is defined instead by electron-beam lithography (EBL) patterned metal deposition. This presents the challenge of obtaining the portion of the gate directly underneath the nanowire. We achieve this by depositing nanowires on pre-fabricated bottom gates before completing the gate-all-around structure by depositing a top gate aligned with the bottom gate. A crucial aspect of the fabrication process is that the nanowire is aligned perpendicular to the bottom gate. This alignment is achieved with high accuracy using a resist-trench technique (see figure~1)~\cite{Lim_Small_2010, Lard_NL_2014}. As a result, the minimal achievable gate length, control over gate length, and gate-metal quality are limited by the EBL process rather than wet-etch steps. At this point we note a nomenclature distinction regarding wrap-gates and gate-all-around structures. The wrap-gate devices~\cite{Storm_NL_2011, Burke_NL_2015} have unambiguously conformal gate metallisation. In contrast, our process has the possibility of small voids under the nanowire edges (see figure~2(e)). 
For clarity, in distinguishing between them, we refer to our devices as gate-all-around structures rather than wrap gates. We demonstrate the full capacity of our method with two devices. The first is a single nanowire with independently controllable bottom gate, top gate, and gate-all-around structures. It highlights that the strongest gating is obtained with a gate-all-around structure, but that an $\Omega$-shaped top gate gives comparable performance in a side-by-side comparison. This is consistent with modelling predictions~\cite{Tang_IEEE_2004, Li_IEEE_2005}. The second device is a single nanowire with three independently controllable gate-all-around structures with different gate lengths: 300~nm, 200~nm, and 150~nm. This is a significant improvement in minimal gate length compared to earlier horizontal wrap gates~\cite{Storm_NL_2011, Burke_NL_2015}. \section{Experimental details} \subsection{Nanowire alignment} Our devices are made on an n$^{+}$-Si substrate capped with 100~nm of SiO$_{2}$ and 10~nm of HfO$_{2}$. The HfO$_{2}$ layer serves as an etch-stop layer when the Al$_{2}$O$_{3}$ gate insulator is etched during contact formation. Arrays of bottom gates were patterned by EBL and evaporation of Ni/Au (5/25~nm), as shown in figure~1(a). The bottom gates are 150-300~nm wide, 10~$\mu$m long and have variable inter-gate spacing. There are 20 bottom-gate arrays per 100$\times$100~$\mu$m$^{2}$ device field (figure~1(g)) to enable satisfactory device yield. The nanowires are positioned using a resist-trench method~\cite{Lim_Small_2010, Lard_NL_2014}, as follows: The substrate was spin-coated with $\sim$300~nm of \textit{MicroChem} polymethyl methacrylate (PMMA) 950k A5 EBL resist. Trenches with length 10~$\mu$m and width 200-400~nm were defined by EBL. These trenches are perpendicular to the underlying bottom gates (see figure~1(e)). Any resist residue in the trenches was removed with a 30~s oxygen-plasma etch after development (50~W, 340~mTorr). 
Wurtzite InAs nanowires approximately 50~nm in diameter and 3 to 10~$\mu$m long were grown by chemical beam epitaxy (CBE)~\cite{Jensen_NL_2004} or metal organic vapour phase epitaxy (MOVPE)~\cite{Lehmann_NL_2013}. They were conformally coated with a 16~nm Al$_{2}$O$_{3}$ gate dielectric by atomic layer deposition (ALD). The oxide coating of the nanowire removes the need to cover the bottom gates with an insulator as done previously~\cite{Fasth_NL_2007, Saldana_NL_2018}. The nanowires were picked up from the growth substrate with the tip of a triangular piece of clean-room tissue and deposited on top of the patterned resist. Approximately 20-50 nanowires were transferred to each of the 24 100$\times$100~$\mu$m$^2$ regions, each containing 20 bottom-gate arrays (figure~\ref{fig:F1_align}(g)). The substrate was then covered in a single droplet of isopropyl alcohol. A piece of clean-room tissue was used to brush the nanowires into the trenches until the isopropyl alcohol evaporated completely (see figure~\ref{fig:F1_align}(b) and (c)). This process was repeated 2-4 times until no nanowires were visible under a dark-field microscope on top of the resist near the trench-patterned area. Finally, the PMMA resist was removed in an acetone bath, leaving the aligned nanowires adhered to the bottom-gate array (see figure~\ref{fig:F1_align}(d)). Any nanowires left on top of the resist were washed away, leaving only aligned nanowires. Empty trenches cannot be distinguished from trenches with nanowires using a 1000$\times$ optical microscope prior to the removal of the resist. We estimated the yield of the alignment procedure by counting the nanowires in seven 100$\times$100~$\mu$m$^{2}$ regions, each with 100 trenches, after the initial deposition and again after removal of the resist. Typically, 5-30\% of 20-50 distributed nanowires were aligned successfully (see figure~\ref{fig:F1_align}(g)).
This yield is sufficient as a complete device only requires one nanowire per 100$\times$100~$\mu$m$^{2}$ region. Lard \textit{et al.}~\cite{Lard_NL_2014} have demonstrated the assembly of highly-ordered nanowire arrays with this method using higher nanowire density and fine-tuning of the trench dimensions. This demonstrates the scalability of this approach, which could be used, e.g., to prototype integrated nanowire circuits with multiple nanowires on the same chip. In this study, the trench width (200~nm, 300~nm, or 400~nm) had no significant impact on the yield of captured nanowires. We rarely observed multiple nanowires captured in the same trench. This is likely due to the large supply of trenches relative to the number of available nanowires. An unexpected finding was that the accuracy of angular nanowire alignment was independent of trench width. We attribute this to the nanowires sticking to the trench side-walls during capture, resulting in optimal angular alignment for nearly all nanowires (see figure~\ref{fig:F1_align}(e)). The orientation is generally maintained upon removal of the resist as shown in figure~\ref{fig:F1_align}(f). \subsection{Nanowire contacts} \label{sub:Contacts} \begin{figure} \centering \includegraphics[width=1\columnwidth]{Figure_2_fabrication} \caption{Schematic of the fabrication of a nanowire field-effect transistor with three different gate types. (a) An Al$_{2}$O$_{3}$ coated nanowire was placed orthogonally on an array of bottom gates (figure~\ref{fig:F1_align}). (b) Top gates were defined using electron-beam lithography (EBL) and metal evaporation. A gate-all-around structure was formed where top and bottom gates are aligned. (c) The source and drain contacts were exposed in EBL resist and the Al$_{2}$O$_{3}$ coating was removed with a HF etch. (d) The source and drain contacts were passivated with (NH$_{4}$)$_{2}$S$_{x}$ immediately prior to metallization.
(e) The finished device has an independent gate-all-around structure, top gate and bottom gate on the same nanowire. \label{fig:F2_fab}} \end{figure} The fabrication process for the source, drain, and gate electrodes for a device with a gate-all-around structure, a top gate, and a bottom gate is shown in figure~\ref{fig:F2_fab}. Figure~\ref{fig:F3_3gate}(a) shows the finished device. The same processing steps can be applied to create devices with multiple gate-all-around structures such as the nanowire FET shown in figure~\ref{fig:F4_length}(a). The substrate with the aligned nanowires (figure~\ref{fig:F2_fab}(a)) was once more coated with EBL resist. Top gates were exposed and metallized (Ni/Au, 6/134 nm) after a 30~s oxygen-plasma treatment to remove any resist residue (see figure~\ref{fig:F2_fab}(b)). Excess metal was removed together with the EBL resist in an acetone lift-off at 60$^{\circ}$C. Source and drain contacts were exposed in a final EBL step. The Al$_{2}$O$_{3}$ coating was removed from the exposed nanowire ends by a 15~s buffered HF etch (1:7 HF:NH$_{4}$F) as shown in figure~\ref{fig:F2_fab}(c). Wet-etching can be eliminated by substituting the gate oxide with the organic gate insulator parylene, which can be removed by oxygen-plasma etching~\cite{Gluschke_NL_2018}. The sample was treated with (NH$_{4}$)$_{2}$S$_{x}$ solution immediately prior to the metal deposition by thermal evaporation (Ni/Au, 6/134 nm) to ensure ohmic contacts~\cite{Suyatin_NT_2007}. Excess metal was removed in an acetone lift-off at 60~$^{\circ}$C giving the completed device shown in figure~\ref{fig:F2_fab}(e). \subsection{Electrical measurements} All electrical measurements were performed in liquid nitrogen (temperature $T$~=~77~K) to improve gate stability and reduce hysteresis due to charge trapping at the Al$_{2}$O$_{3}$-InAs interface~\cite{Burke_NL_2015}. 
A dc source-drain voltage $V_{sd}$~=~50~mV was applied at the source contact to drive a drain current $I_{d}$ measured using a \textit{Keithley} 6517A electrometer at the drain. The gate voltage $V_{g}$ was applied using the dc auxiliary ports of a \textit{Stanford Research} SR830 lock-in amplifier after confirming negligible gate leakage ($<100$~pA) for the individual gates with a \textit{Keithley} K2401 source-measure unit. $I_{d}$ was recorded for decreasing $V_{g}$. Only one gate was swept at a time with all other gates kept grounded. \section{Results and discussion} \subsection{Bottom, top, and gate-all-around structure} \begin{figure} \centering \includegraphics[width=1\columnwidth]{Figure_3_gateTypes.pdf} \caption{(a) False-coloured scanning-electron microscopy image of a device with three different gate types: a top gate, a gate-all-around structure and a bottom gate. (b) Drain current $I_{d}$ vs gate voltage $V_{g}$ for the three different gate types on the same nanowire. \label{fig:F3_3gate}} \end{figure} Figure~\ref{fig:F3_3gate}(a) shows a scanning-electron microscopy image of a device with three different gate types: a top gate, a gate-all-around structure, and a bottom gate. All three gates are approximately 250~nm in length. The EBL process yielded smooth, conformal metal gates without the `mouse-bite' defects found in short horizontal wrap gates~\cite{Burke_NL_2015}. Overlapping top and bottom gates are aligned to within 10~nm. The top gates are up to 20~nm wider than the bottom gates because top gate evaporation was carried out at an angle of 15-20$^{\circ}$ under rotation to ensure gate continuity across the nanowire. If required, this can be compensated for by reducing the width of the top gates in the EBL pattern. We estimate the coverages as 100\% for the gate-all-around structure, 73\% for the top gate, and 17\% for the bottom gate based on geometrical considerations.
Figure~\ref{fig:F3_3gate}(b) shows the electrical performance of the three individual gates from a nominally identical device. The gate-all-around structure gives the steepest subthreshold swing $S$~=~33~mV/dec. This is approximately twice the thermal limit of 15.3~mV/dec at 77~K and competitive with the $S$~=~25 to 43~mV/dec reported by Burke \textit{et al.}~\cite{Burke_NL_2015} for InAs nanowire FETs with longer, etch-defined wrap-gates at 77~K. The $\Omega$-shaped top gate performs almost as well giving $S$~=~38~mV/dec. This is consistent with modelling predictions~\cite{Tang_IEEE_2004, Li_IEEE_2005}. The bottom gate performs significantly worse with $S$~=~256~mV/dec due to reduced gate coupling resulting from the limited gate coverage of the nanowire circumference. The reduced gate coupling means that higher gate voltages are required to deplete the nanowire. This causes a shift in threshold voltage $V_{th}$ to more negative values. The shift is small for the top gate but larger for the bottom gate. \subsection{Different gate lengths} \begin{figure} \centering \includegraphics[width=1\columnwidth]{Figure_4_gateLengths.pdf} \caption{(a) False-coloured scanning-electron microscopy image of a device with three gate-all-around structures with different gate lengths. (b) Drain current $I_{d}$ vs gate-all-around voltage $V_{g}$ for a nominally identical device.\label{fig:F4_length}} \end{figure} A device with three different length gate-all-around structures demonstrates the enhanced control over gate length and ability to make shorter gates. Figure~\ref{fig:F4_length}(a) shows a false-coloured SEM image of a device with three gate-all-around structures 300~nm, 200~nm, and 150~nm long. Data from a nominally identical device is displayed in figure~\ref{fig:F4_length}(b). 
The threshold voltage $V_{th}$~=~$-1.10$~V for the 300~nm gate-all-around structure is similar to the $V_{th}$~=~$-1.15$~V obtained for the 250~nm gate-all-around structure in figure~\ref{fig:F3_3gate}. $V_{th}$ shifts to significantly more negative values for the 200~nm and 150~nm gates, indicating decreased effective gate coupling per unit gate length. Interestingly, no degradation in subthreshold swing is observed with decreased gate length. In fact, $S$ improves slightly for the shorter gates with $S$~=~52~mV/dec for the 300~nm long gate, 45~mV/dec for the 200~nm gate, and 37~mV/dec for the 150~nm gate. A similar behaviour was observed by Burke \textit{et al.}~\cite{Burke_NL_2015} in wrap-gated InAs nanowire transistors at 77~K (see supplementary information). Three-dimensional electrostatic-modelling studies have shown that the effective gate capacitance per unit length decreases for shorter gates due to fringing effects~\cite{Ji_ME_2008, Song_IEEE_2006, Zou_IEEE_2011, Gupta_JS_2013}. This leads to a shift in threshold voltage to more negative values as the gate length decreases. This shift is non-linear and increases with reduced length~\cite{Park_IEEE_2002, Ji_ME_2008, Song_IEEE_2006}. Generally, drain-induced barrier lowering also contributes, driving an associated degradation in subthreshold swing~\cite{Park_IEEE_2002, Song_IEEE_2006}. This is clearly not observed in figure~\ref{fig:F4_length}(b) and we speculate that the small source-drain bias and the strong gate coupling~\cite{Ferain_Nat_2011, Park_IEEE_2002, Leobandung_Vac_1997, Lee_SSE_2007} make this effect insignificant for the gate lengths studied here. This effect may become significant for gate lengths $<$100~nm. Electrostatic simulations~\cite{Heedt_NS_2015} for this aspect of these devices could generate further insight and are encouraged.
Single nanowires were positioned perpendicularly on top of pre-defined bottom gates using a resist-trench alignment technique~\cite{Lim_Small_2010, Lard_NL_2014}. Top gates were then created in alignment with bottom gates to form gate-all-around structures. This approach overcomes a key limitation of established wrap-gate methods~\cite{Storm_NL_2011, Burke_NL_2015} where a metal etch is used to define gate segments; namely the limitation of gate-length control and minimal gate length due to over-etching. Gate length and quality in our approach are only limited by the resolution of the EBL process. We demonstrated the length control by fabricating a device with independent 300~nm, 200~nm, and 150~nm long gate-all-around structures with a subthreshold swing of 37~mV/dec at 77~K for the 150~nm gate. We expect process optimization will yield further significant reduction in minimal gate length as sub-20~nm features can be achieved with commercial EBL systems~\cite{Cord_NSPMP_2009,Mohammad_book_2010}. This platform may be of interest for systematic studies of gate-length-dependent transistor performance. Our process also enables the fabrication of multiple gate types such as gate-all-around structures, top gates and bottom gates on the same nanowire. The gate-all-around structure performed best followed by the top gate and then the bottom gate. This is expected due to the different electrostatic couplings of the gate geometries~\cite{Tang_IEEE_2004, Heedt_NS_2015}. \ack This work was funded by the Australian Research Council (ARC) under DP170102552 and DP170104024, UNSW Goldstar Scheme, NanoLund at Lund University, Swedish Research Council, Swedish Energy Agency (Grant No. 38331-1) and Knut and Alice Wallenberg Foundation (KAW). APM acknowledges an ARC Future Fellowship (FT0990285). This work was performed in part using the NSW node of the Australian National Fabrication Facility (ANFF). \section*{References}
\section{Introduction} In this paper we consider optimization problems where the objective is a sum of two terms: The first term is separable in the variable blocks, and the second term is separable in the difference between consecutive variable blocks. One example is the Fused Lasso method in statistical learning, \cite{Tibshirani-Saunders-Rosset-Zhu-Knight-05}, where the objective includes an $\ell_1$-norm penalty on the parameters, as well as an $\ell_1$-norm penalty on the difference between consecutive parameters. The first penalty encourages a sparse solution, \emph{i.e.~}, one with few nonzero entries, while the second penalty enhances block partitions in the parameter space. The same ideas have been applied in many other areas, such as Total Variation (TV) denoising, \cite{Rudin:1992:NTV:142273.142312}, and segmentation of ARX models, \cite{OhlssonLB:10} (where it is called sum-of-norms regularization). Another example is multi-period portfolio optimization, where the variable blocks give the portfolio in different time periods, the first term is the portfolio objective (such as risk-adjusted return), and the second term accounts for transaction costs. In many applications, the optimization problem involves a large number of variables, and cannot be efficiently handled by generic optimization solvers. In this paper, our main contribution is to derive an efficient and scalable optimization algorithm, by exploiting the structure of the optimization problem. To do this, we use a distributed optimization method called Alternating Direction Method of Multipliers (ADMM). ADMM was developed in the 1970s, and is closely related to many other optimization algorithms including Bregman iterative algorithms for $\ell_1$ problems, Douglas-Rachford splitting, and proximal point methods; see \cite{Eckstein92onthe, 4407760}. 
ADMM has been applied in many areas, including image and signal processing, \cite{DBLP:journals/ijcv/Setzer11}, as well as large-scale problems in statistics and machine learning, \cite{DBLP:journals/ftml/BoydPCPE11}. We will apply ADMM to $\ell_1$ mean filtering and $\ell_1$ variance filtering (\cite{BW_Asilomar}), which are important problems in signal processing with many applications, for example in financial or biological data analysis. In some applications, mean and variance filtering are used to pre-process data before fitting a parametric model. For non-stationary data it is also important for segmenting the data into stationary subsets. The approach we present is inspired by the $\ell_1$ trend filtering method described in \cite{Kim-Koh-Boyd-Gorinevsky-09}, which tracks changes in the mean value of the data. (An example in this paper also tracks changes in the variance of the underlying stochastic process.) These problems are closely related to the covariance selection problem, \cite{Dempster-72}, which is a convex optimization problem when the inverse covariance is used as the optimization variable, \cite{Banerjee-ElGhaoui-dAspremont-08}. The same ideas can also be found in \cite{Kim-Koh-Boyd-Gorinevsky-09} and \cite{Friedman-Hastie-Tibshirani-08}. This paper is organized as follows. In Section \ref{sec:admm} we review the ADMM method. In Section \ref{sec:sep}, we apply ADMM to our optimization problem to derive an efficient optimization algorithm. In Section \ref{sec:l1mean} we apply our method to $\ell_1$ mean filtering, while in Section \ref{sec:l1var} we consider $\ell_1$ variance filtering. Section \ref{sec:num} contains some numerical examples, and Section \ref{sec:con} concludes the paper. \section{Alternating Direction Method of Multipliers (ADMM) } \label{sec:admm} In this section we give an overview of ADMM. We follow closely the development in Section 5 of \cite{DBLP:journals/ftml/BoydPCPE11}. 
Consider the following optimization problem \begin{equation}\label{e-constrained-problem} \begin{array}{ll} \mbox{minimize} & f(x)\\ \mbox{subject to} & x \in {\mathcal C} \end{array} \end{equation} with variable $x\in \mathbb{R}^n$, and where $f$ and $\mathcal{C}$ are convex. We let $p^\star$ denote the optimal value of (\ref{e-constrained-problem}). We first re-write the problem as \begin{equation}\label{e-admm-problem} \begin{array}{ll} \mbox{minimize} & f(x) + I_\mathcal{C}(z)\\ \mbox{subject to} & x = z, \end{array} \end{equation} where $I_\mathcal{C}(z)$ is the indicator function on $\mathcal{C}$ (\emph{i.e.~}, $I_\mathcal{C}(z) = 0$ for $z\in\mathcal{C}$, and $I_\mathcal{C}(z) = \infty$ for $z\notin\mathcal{C}$). The augmented Lagrangian for this problem is \[ L_\rho (x, z, u) = f(x) + I_\mathcal{C}(z) + (\rho/2)\|x-z+u\|_2^2, \] where $u$ is a scaled dual variable associated with the constraint $x = z$, \emph{i.e.~}, $u = (1/\rho)y$, where $y$ is the dual variable for $x = z$. Here, $\rho > 0$ is a penalty parameter. In each iteration of ADMM, we perform alternating minimization of the augmented Lagrangian over $x$ and $z$. At iteration $k$ we carry out the following steps \begin{align} x^{k+1} &:= \mathop{\rm argmin}_x\{f(x) +(\rho/2)\|x - z^k + u^k\|_2^2\} \label{eq:admm1}\\ z^{k+1} &:= \Pi_{\mathcal{C}}(x^{k+1} + u^k) \label{eq:admm2}\\ u^{k+1} &:= u^k + (x^{k+1} - z^{k+1}) \label{eq:admm3}, \end{align} where $\Pi_{\mathcal C}$ denotes Euclidean projection onto $\mathcal{C}$. In the first step of ADMM, we fix $z$ and $u$ and minimize the augmented Lagrangian over $x$; next, we fix $x$ and $u$ and minimize over $z$; finally, we update the dual variable $u$. \subsection{Convergence} Under mild assumptions on $f$ and $\mathcal{C}$, we can show that the iterates of ADMM converge to a solution; specifically, we have \[ f(x^k) \rightarrow p^\star, \quad x^k-z^k\rightarrow 0, \] as $k\rightarrow\infty$. 
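As a concrete illustration (ours, not part of the original development), the three updates (\ref{eq:admm1})--(\ref{eq:admm3}) can be sketched in a few lines for a toy instance in which both steps have closed forms: $f(x) = \|x-a\|_2^2$ and $\mathcal{C} = \{x \mid \mathbf 1^T x = 0\}$, whose solution is the Euclidean projection of $a$ onto $\mathcal{C}$.

```python
import numpy as np

def admm_toy(a, rho=1.0, iters=500):
    """The three ADMM updates for: minimize ||x - a||_2^2 s.t. sum(x) = 0."""
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)  # scaled dual variable
    for _ in range(iters):
        # x-update: argmin_x ||x - a||^2 + (rho/2)||x - z + u||^2
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)
        # z-update: Euclidean projection of x + u onto {z : sum(z) = 0}
        z = (x + u) - np.mean(x + u)
        # scaled dual update
        u = u + x - z
    return x, z

a = np.array([3.0, -1.0, 2.0, 4.0])
x, z = admm_toy(a)
# the iterates x^k and z^k approach the projection of a onto C, i.e. a - mean(a)
```

For this instance the iterates contract geometrically with $\rho = 1$, so a few hundred iterations recover the projection $a - \mathbf 1(\mathbf 1^Ta)/n$ essentially to machine precision.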
The rate of convergence, and hence the number of iterations required to achieve a specified accuracy, can depend strongly on the choice of the parameter $\rho$. When $\rho$ is well chosen, this method can converge to a fairly accurate solution (good enough for many applications), within a few tens of iterations. However, if the choice of $\rho$ is poor, many iterations can be needed for convergence. These issues, including heuristics for choosing $\rho$, are discussed in more detail in \cite{DBLP:journals/ftml/BoydPCPE11}. \subsection{Stopping criterion} The primal and dual residuals at iteration $k$ are given by \[ e_p^k = (x^k-z^k), \quad e_d^k = -\rho (z^k-z^{k-1}). \] We terminate the algorithm when the primal and dual residuals satisfy a stopping criterion (which can vary depending on the requirements of the application). A typical criterion is to stop when \[ \|e_p^k\|_2 \leq \epsilon^\mathrm{pri},\quad \|e_d^k\|_2 \leq \epsilon^\mathrm{dual}. \] Here, the tolerances $\epsilon^\mathrm{pri} > 0$ and $\epsilon^\mathrm{dual} > 0$ can be set via an absolute plus relative criterion, \begin{align*} &\epsilon^\mathrm{pri} = \sqrt{n} \epsilon^\mathrm{abs} + \epsilon^\mathrm{rel} \max\{\|x^k\|_2, \|z^k\|_2\}, \\ &\epsilon^\mathrm{dual} = \sqrt{n} \epsilon^\mathrm{abs} + \epsilon^\mathrm{rel} \rho \|u^k\|_2, \end{align*} where $\epsilon^\mathrm{abs} > 0$ and $\epsilon^\mathrm{rel} > 0$ are absolute and relative tolerances (see \cite{DBLP:journals/ftml/BoydPCPE11} for details). \section{Problem formulation and method} \label{sec:sep} In this section we formulate our problem and derive an efficient distributed optimization algorithm via ADMM. 
\subsection{Optimization problem} We consider the problem \begin{equation}\label{e-our-problem} \begin{array}{ll} \mbox{minimize} & \sum_{i=1}^N \Phi_i(x_i)+\sum_{i=1}^{N-1} \Psi_i(r_i)\\ \mbox{subject to} & r_i=x_{i+1}-x_i,\quad i = 1,\ldots,N-1 \end{array} \end{equation} with variables $x_1, \ldots, x_N,r_1, \ldots, r_{N-1}\in\mathbf R^n$, and where $\Phi_i:\mathbf R^n\rightarrow\mathbf R\cup\{\infty\}$ and $\Psi_i:\mathbf R^n\rightarrow\mathbf R\cup\{\infty\}$ are convex functions. This problem has the form (\ref{e-constrained-problem}), with variables $x = (x_1,\ldots,x_N)$, $r = (r_1,\ldots,r_{N-1})$, objective function \[ f(x,r) = \sum_{i=1}^N \Phi_i(x_i)+\sum_{i=1}^{N-1} \Psi_i(r_i) \] and constraint set \begin{equation}\label{e-constraint-set} \mathcal{C} = \{ (x, r) \mid r_i = x_{i+1}-x_i, \;i=1,\ldots,N-1\}. \end{equation} The ADMM form for problem (\ref{e-our-problem}) is \begin{equation}\label{e-our-problem-admm} \begin{array}{ll} \mbox{minimize} & \sum_{i=1}^N \Phi_i(x_i)+\sum_{i=1}^{N-1} \Psi_i(r_i) + I_\mathcal{C}(z,s) \\ \mbox{subject to} & r_i = s_i, \quad i = 1,\ldots,N-1 \\ & x_i = z_i, \quad i = 1,\ldots,N, \end{array} \end{equation} with variables $x = (x_1,\ldots,x_N)$, $r = (r_1,\ldots,r_{N-1})$, $z = (z_1,\ldots,z_N)$, and $s = (s_1,\ldots,s_{N-1})$. Furthermore, we let $u = (u_1,\ldots,u_N)$ and $t = (t_1,\ldots,t_{N-1})$ be vectors of scaled dual variables associated with the constraints $x_i = z_i$, $i = 1,\ldots,N$, and $r_i = s_i$, $i = 1,\ldots,N-1$ (\emph{i.e.~}, $u_i = (1/\rho)y_i$, where $y_i$ is the dual variable associated with $x_i = z_i$). \subsection{Distributed optimization method} Applying ADMM to problem (\ref{e-our-problem-admm}), we carry out the following steps in each iteration. 
\paragraph*{Step 1.} Since the objective function $f$ is separable in $x_i$ and $r_i$, the first step (\ref{eq:admm1}) of the ADMM algorithm consists of $2N-1$ separate minimizations \begin{equation}\label{e-admm-11} x_i^{k+1} := \mathop{\rm argmin}_{x_i} \{\Phi_i(x_i) +(\rho/2)\|x_i - z_i^k + u_i^k\|_2^2\}, \end{equation} $i = 1,\ldots,N$, and \begin{equation}\label{e-admm-12} r_i^{k+1} := \mathop{\rm argmin}_{r_i} \{\Psi_i(r_i) +(\rho/2)\|r_i - s_i^k + t_i^k\|_2^2\}, \end{equation} $i = 1,\ldots,N-1$. These updates can all be carried out in parallel. For many applications, we will see that we can often solve (\ref{e-admm-11}) and (\ref{e-admm-12}) analytically. \paragraph*{Step 2.} In the second step of ADMM, we project $(x^{k+1} + u^k, r^{k+1} + t^k)$ onto the constraint set $\mathcal{C}$, \emph{i.e.~}, \[ (z^{k+1}, s^{k+1}) := \Pi_\mathcal{C}((x^{k+1}, r^{k+1}) + (u^k, t^k)). \] For the particular constraint set (\ref{e-constraint-set}), we will show in Section \ref{s-projection} that the projection can be performed extremely efficiently. \paragraph*{Step 3.} Finally, we update the dual variables: \[ u_i^{k+1} := u_i^k + (x_i^{k+1}-z_i^{k+1}), \quad i = 1,\ldots,N \] and \[ t_i^{k+1} := t_i^k + (r_i^{k+1}-s_i^{k+1}), \quad i = 1,\ldots,N-1. \] These updates can also be carried out independently in parallel, for each variable block. \subsection{Projection}\label{s-projection} In this section we work out an efficient formula for projection onto the constraint set $\mathcal{C}$ (\ref{e-constraint-set}). 
To perform the projection \[ (z, s) = \Pi_\mathcal{C}((w, v)), \] we solve the optimization problem \[ \begin{array}{ll} \mbox{minimize} & \|z - w\|_2^2 + \|s - v\|_2^2 \\ \mbox{subject to} & s = Dz, \end{array} \] with variables $z = (z_1,\ldots,z_N)$ and $s = (s_1,\ldots,s_{N-1})$, and where $D\in\mathbf R^{(N-1)n\times Nn}$ is the forward difference operator, \emph{i.e.~}, \[ D = \left[ \begin{array}{lllll} -I & I & & & \\ & -I & I & & \\ & & \ddots & \ddots & \\ & & & -I & I \\ \end{array} \right]. \] This problem is equivalent to \[ \begin{array}{ll} \mbox{minimize} & \|z - w\|_2^2 + \|Dz - v\|_2^2, \end{array} \] with variable $z = (z_1,\ldots,z_N)$. Thus, to perform the projection, we first solve the optimality condition \begin{equation}\label{e-opt-cond} (I + D^TD)z = w + D^Tv, \end{equation} for $z$, and then set $s = Dz$. The matrix $I + D^TD$ is block tridiagonal, with diagonal blocks equal to multiples of $I$, and sub/super-diagonal blocks equal to $-I$. Let $LL^T$ be the Cholesky factorization of $I + D^TD$. It is easy to show that $L$ is block banded with the form \[ L = \left[ \begin{array}{lllll} l_{1,1} & & & & \\ l_{2,1} & l_{2,2} & & & \\ & l_{3,2} & l_{3,3} & & \\ & & \ddots & \ddots & \\ & & & l_{N,N-1} & l_{N,N} \end{array} \right] \otimes I, \] where $\otimes$ denotes the Kronecker product. The coefficients $l_{i,j}$ can be computed explicitly via the recursion \[ \begin{array}{l} l_{1,1} = \sqrt{2}, \\ l_{i+1,i} = -1/l_{i,i}, \;\; l_{i+1,i+1} = \sqrt{3-l_{i+1,i}^2}, \;\; i = 1,\ldots,N-2, \\ l_{N,N-1} = -1/l_{N-1,N-1}, \;\; l_{N,N} = \sqrt{2-l_{N,N-1}^2}. \end{array} \] The coefficients only need to be computed once, before the projection operator is applied. The projection therefore consists of the following steps: \begin{enumerate} \item Form $b := w + D^Tv$: \[ \begin{array}{l} b_1 := w_1 - v_1, \quad b_N := w_N + v_{N-1}, \\ b_i := w_i + (v_{i-1} - v_i),\quad i = 2,\ldots,N-1.
\end{array} \] \item Solve $Ly = b$: \begin{align*} y_1 &:= (1/l_{1,1})b_1, \\ y_i &:= (1/l_{i,i})(b_i - l_{i,i-1}y_{i-1}),\quad i = 2,\ldots,N. \end{align*} \item Solve $L^Tz = y$: \begin{align*} z_N &:= (1/l_{N,N})y_N, \\ z_i &:= (1/l_{i,i})(y_i-l_{i+1,i}z_{i+1}),\quad i = N-1,\ldots,1. \end{align*} \item Set $s = Dz$: \[ s_i := z_{i+1}-z_i,\quad i = 1,\ldots,N-1. \] \end{enumerate} Thus, we see that we can perform the projection very efficiently, in $\mathcal{O}(Nn)$ flops (floating-point operations). In fact, if we pre-compute the inverses $1/l_{i,i}$, $i = 1,\ldots,N$, the only operations that are required are multiplication, addition, and subtraction. We do not need to perform division, which can be expensive on some hardware platforms. \section{Examples} \subsection{$\ell_1$ Mean filtering} \label{sec:l1mean} Consider a sequence of vector random variables \[ Y_i\sim {\mathcal{N}}(\bar y_i, \Sigma), \quad i = 1,\ldots,N, \] where $\bar y_i\in\mathbf R^n$ is the mean, and $\Sigma\in{\mbox{\bf S}}^n_+$ is the covariance matrix. We assume that the covariance matrix is known, but the mean of the process is unknown. Given a sequence of observations $y_1,\ldots,y_N$, our goal is to estimate the mean under the assumption that it is piecewise constant, \emph{i.e.~}, $\bar y_{i+1} = \bar y_i$ for many values of $i$. In the Fused Group Lasso method, we obtain our estimates by solving \[ \begin{array}{ll} \mbox{minimize} & \sum_{i=1}^N\frac 1 2 (y_i-x_i)^T\Sigma^{-1}(y_i-x_i)+\lambda \sum_{i=1}^{N-1} \|r_i\|_2\\ \mbox{subject to} & r_i=x_{i+1}-x_i, \quad i = 1,\ldots,N-1, \end{array} \] with variables $x_1,\ldots,x_N$, $r_1,\ldots,r_{N-1}$. Let $x_1^\star,\ldots,x_N^\star$, $r_1^\star,\ldots,r_{N-1}^\star$ denote an optimal point; our estimates of $\bar y_1,\ldots,\bar y_N$ are then $x_1^\star,\ldots,x_N^\star$. This problem is clearly in the form (\ref{e-our-problem}), with \[ \Phi_i(x_i) = \frac{1}{2} (y_i-x_i)^T\Sigma^{-1}(y_i-x_i),\quad \Psi_i(r_i) = \lambda \|r_i\|_2.
\] \newpage \paragraph*{ADMM steps.} For this problem, steps (\ref{e-admm-11}) and (\ref{e-admm-12}) of ADMM can be further simplified. Step (\ref{e-admm-11}) involves minimizing an unconstrained quadratic function in the variable $x_i$, and can be written as \[ x_i^{k+1} = (\Sigma^{-1}+\rho I)^{-1} (\Sigma^{-1} y_i + \rho(z_i^k-u_i^k)). \] Step (\ref{e-admm-12}) is \[ r_i^{k+1} := \mathop{\rm argmin}_{r_i} \{\lambda \|r_i\|_2+(\rho/2)\|r_i - s_i^k + t_i^k\|_2^2\}, \] which simplifies to \begin{equation}\label{eq:thresh} r_i^{k+1} ={\mathcal{S}}_{\lambda/\rho}(s_i^k -t_i^k), \end{equation} where $\mathcal{S}_\kappa$ is the vector soft-thresholding operator, defined as \[ {\mathcal{S}}_\kappa({a})=(1-\kappa/\|a\|_2)_+ {a},\quad {\mathcal{S}}_\kappa({0})=0. \] Here the notation $(v)_+ = \max\{0, v\}$ denotes the positive part of the vector $v$. (For details see \cite{DBLP:journals/ftml/BoydPCPE11}.) \paragraph*{Variations.} In some problems, we might expect that individual components of $x_i$ will be piecewise constant, in which case we can instead use the standard Fused Lasso method. In the standard Fused Lasso method we solve \[ \begin{array}{ll} \mbox{minimize} & \sum_{i=1}^N\frac 1 2 (y_i-x_i)^T\Sigma^{-1}(y_i-x_i)+\lambda \sum_{i=1}^{N-1} \|r_i\|_1 \\ \mbox{subject to} & r_i=x_{i+1}-x_i,\quad i = 1,\ldots,N-1, \end{array} \] with variables $x_1,\ldots,x_N$, $r_1,\ldots,r_{N-1}$. The ADMM updates are the same, except that instead of doing vector soft thresholding for step (\ref{e-admm-12}), we perform scalar componentwise soft thresholding, \emph{i.e.~}, \[ (r_i^{k+1})_j ={\mathcal{S}}_{\lambda/\rho}((s_i^k -t_i^k)_j),\quad j=1,\ldots,n. \] \subsection{$\ell_1$ Variance filtering} \label{sec:l1var} Consider a sequence of vector random variables (of dimension $n$) \[ Y_i\sim {\mathcal{N}}(0,\Sigma_i), \quad i = 1,\ldots,N, \] where $\Sigma_i\in{\mbox{\bf S}}^n_+$ is the covariance matrix for $Y_i$ (which we assume is fixed but unknown).
Given observations $y_1,\ldots,y_N$, our goal is to estimate the sequence of covariance matrices $\Sigma_1,\ldots,\Sigma_N$, under the assumption that it is piecewise constant, \emph{i.e.~}, it is often the case that $\Sigma_{i+1} = \Sigma_i$. In order to obtain a convex problem, we use the inverse covariances $X_i = \Sigma_i^{-1}$ as our variables. The Fused Group Lasso method for this problem involves solving \[ \begin{array}{ll} \mbox{minimize} & \sum_{i=1}^N \mathop{\bf Tr}(X_iy_iy_i^T)-\log\det X_i +\lambda \sum_{i=1}^{N-1}\|R_i\|_F \\ \mbox{subject to} & R_i = X_{i+1}-X_i, \quad i = 1,\ldots,N-1, \end{array} \] where our variables are $R_i\in{\mbox{\bf S}}^n$, $i = 1,\ldots,N-1$, and $X_i\in{\mbox{\bf S}}^n_+$, $i = 1,\ldots,N$. Here, \[ \|R_i\|_F = \sqrt{\mathop{\bf Tr}(R_i^TR_i)} \] is the Frobenius norm of $R_i$. Let $X_1^\star, \ldots,X_N^\star$, $R_1^\star,\ldots,R_{N-1}^\star$ denote an optimal point; our estimates of $\Sigma_1,\ldots,\Sigma_N$ are then $(X_1^\star)^{-1},\ldots,(X_N^\star)^{-1}$. \paragraph*{ADMM steps.} It is easy to see that steps (\ref{e-admm-11}) and (\ref{e-admm-12}) simplify for this problem. Step (\ref{e-admm-11}) requires solving \[ X_i^{k+1} := \mathop{\rm argmin}_{X_i\succ 0} \{ \Phi_i(X_i) +(\rho/2)\|X_i - Z_i^k + U_i^k\|_F^2\}, \] where \[ \Phi_i(X_i) = \mathop{\bf Tr}(X_iy_iy_i^T)-\log\det X_i. \] This update can be solved analytically, as follows. \begin{enumerate} \item Compute the eigenvalue decomposition \[ \rho\left(Z_i^k -U_i^k\right)-y_iy_i^T=Q\Lambda Q^T, \] where $\Lambda=\mathop{\bf diag}(\lambda_1,\ldots , \lambda_n)$. \item Now let \[ \mu_j := \frac{\lambda_j+\sqrt{\lambda_j^2+4\rho}}{2\rho},\quad j = 1,\ldots,n. \] \item Finally, we set \[ X_i^{k+1} = Q \mathop{\bf diag}(\mu_1,\ldots,\mu_n) Q^T. \] \end{enumerate} For details of this derivation, see Section 6.5 in \cite{DBLP:journals/ftml/BoydPCPE11}.
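As a sanity check (ours, not from the reference), the three-step update above can be written out directly in NumPy. The sketch below builds $X_i^{k+1}$ for an arbitrary random instance and verifies the first-order optimality condition $y_iy_i^T - X^{-1} + \rho(X - Z_i^k + U_i^k) = 0$, together with positive definiteness of the result.

```python
import numpy as np

def x_update(y, Z, U, rho):
    """Analytic X_i-update for l1 variance filtering:
       minimize  Tr(X y y^T) - log det X + (rho/2) ||X - Z + U||_F^2."""
    # step 1: eigendecomposition of rho (Z - U) - y y^T
    lam, Q = np.linalg.eigh(rho * (Z - U) - np.outer(y, y))
    # step 2: eigenvalues of the minimizer (always positive)
    mu = (lam + np.sqrt(lam**2 + 4.0 * rho)) / (2.0 * rho)
    # step 3: reassemble
    return Q @ np.diag(mu) @ Q.T

# arbitrary random symmetric test instance (illustrative only)
rng = np.random.default_rng(0)
n, rho = 4, 2.0
y = rng.standard_normal(n)
Z = rng.standard_normal((n, n)); Z = (Z + Z.T) / 2
U = rng.standard_normal((n, n)); U = (U + U.T) / 2
X = x_update(y, Z, U, rho)
# optimality condition: y y^T - X^{-1} + rho (X - Z + U) = 0
grad = np.outer(y, y) - np.linalg.inv(X) + rho * (X - Z + U)
```

Since $\mu_j > 0$ for every $\lambda_j$, the update stays in the positive definite cone regardless of the iterate it is given.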
Step (\ref{e-admm-12}) is \[ R_i^{k+1} := \mathop{\rm argmin}_{R_i} \{\lambda \|R_i\|_F+(\rho/2)\|R_i - S_i^k + T_i^k\|_F^2\}, \] which simplifies to \[ R_i^{k+1} ={\mathcal{S}}_{\lambda/\rho}(S_i^k - T_i^k), \] where $\mathcal{S}_\kappa$ is the matrix soft-thresholding operator, defined as \[ {\mathcal{S}}_\kappa(A)=(1-\kappa/\|A\|_F)_+ A,\quad {\mathcal{S}}_\kappa({0})=0. \] \paragraph*{Variations.} As with $\ell_1$ mean filtering, we can replace the Frobenius norm penalty with a componentwise vector $\ell_1$-norm penalty on $R_i$ to get the problem \[ \begin{array}{ll} \mbox{minimize} & \sum_{i=1}^N \mathop{\bf Tr}(X_iy_iy_i^T)-\log\det X_i + \lambda \sum_{i=1}^{N-1}\|R_i\|_1 \\ \mbox{subject to} & R_i=X_{i+1}-X_i, \quad i = 1,\ldots,N-1, \end{array} \] with variables $R_1,\ldots,R_{N-1}\in{\mbox{\bf S}}^n$, and $X_1,\ldots,X_N\in{\mbox{\bf S}}^n_+$, and where \[ \|R\|_1 = \sum_{j,k} |R_{jk}|. \] Again, the ADMM updates are the same; the only difference is that in step (\ref{e-admm-12}) we replace matrix soft thresholding with componentwise soft thresholding, \emph{i.e.~}, \[ (R_i^{k+1})_{l,m} = \mathcal{S}_{\lambda/\rho}((S_i^k -T_i^k)_{l,m}), \] for $l = 1,\ldots,n$, $m = 1,\ldots,n$. \subsection{$\ell_1$ Mean and variance filtering} \label{sec:l1mean_var} Consider a sequence of vector random variables \[ Y_i\sim {\mathcal{N}}(\bar y_i, \Sigma_i), \quad i = 1,\ldots,N, \] where $\bar y_i\in\mathbf R^n$ is the mean, and $\Sigma_i\in{\mbox{\bf S}}^n_+$ is the covariance matrix for $Y_i$. We assume that the mean and covariance matrix of the process are unknown. Given observations $y_1,\ldots,y_N$, our goal is to estimate the sequences of means $\bar y_1,\ldots,\bar y_N$ and covariance matrices $\Sigma_1,\ldots,\Sigma_N$, under the assumption that they are piecewise constant, \emph{i.e.~}, it is often the case that $\bar y_{i+1} = \bar y_i$ and $\Sigma_{i+1} = \Sigma_i$.
To obtain a convex optimization problem, we use \[ X_i=-\frac{1}{2}\Sigma_i^{-1},\quad m_i=\Sigma_i^{-1}\bar y_i, \] as our variables. In the Fused Group Lasso method, we obtain our estimates by solving \[ \begin{array}{ll} \mbox{minimize} & \sum_{i=1}^N -(1/2)\log\det(-X_i)-\mathop{\bf Tr}(X_iy_iy_i^T)\\ & \quad\qquad - m_i^T y_i -(1/4)\mathop{\bf Tr}(X^{-1}_im_im_i^T)\\ & \quad\qquad + \lambda_1 \sum_{i=1}^{N-1} \|r_i\|_2+\lambda_2 \sum_{i=1}^{N-1} \|R_i\|_F\\ \mbox{subject to} & r_i=m_{i+1}-m_i, \quad i = 1,\ldots,N-1, \\ & R_i=X_{i+1}-X_i, \quad i = 1,\ldots,N-1, \end{array} \] with variables $r_1,\ldots,r_{N-1} \in \mathbf R^n$, $m_1,\ldots,m_{N}\in \mathbf R^n$, $R_1,\ldots,R_{N-1}\in{\mbox{\bf S}}^n$, and $X_1,\ldots,X_N\in{\mbox{\bf S}}^n_+$. \paragraph*{ADMM steps.} This problem is also in the form (\ref{e-our-problem}); however, as far as we are aware, there is no analytical formula for steps (\ref{e-admm-11}) and (\ref{e-admm-12}). To carry out these updates, we must solve semidefinite programs (SDPs), for which there are a number of efficient and reliable software packages (\cite{TTT:99,Stu:99}). \section{Numerical Example} \label{sec:num} In this section we solve an instance of $\ell_1$ mean filtering with $n = 1$, $\Sigma = 1$, and $N = 400$, using the standard Fused Lasso method. To improve convergence of the ADMM algorithm, we use over-relaxation with $\alpha=1.8$; see \cite{DBLP:journals/ftml/BoydPCPE11}. The parameter $\lambda$ is chosen as approximately 10\% of $\lambda_\mathrm{max}$, where $\lambda_\mathrm{max}$ is the largest value that results in a non-constant mean estimate. Here, $\lambda_\mathrm{max} \approx 108$ and so $\lambda=10$. We use an absolute plus relative error stopping criterion, with $\epsilon^\mathrm{abs} = 10^{-4}$ and $\epsilon^\mathrm{rel} = 10^{-3}$. Figure \ref{ex1} shows convergence of the primal and dual residuals. The resulting estimates of the means are shown in Figure~\ref{ex2}.
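One common form of this stopping criterion, following Section 3.3 of \cite{DBLP:journals/ftml/BoydPCPE11}, is sketched below; the function is our illustration, and the exact residual definitions depend on the problem splitting:

```python
import numpy as np

def stopping_tolerances(x, z, u, rho, eps_abs=1e-4, eps_rel=1e-3):
    """Absolute-plus-relative ADMM tolerances for the primal residual
    r = x - z and the dual residual s = rho * (z_prev - z), using the
    scaled dual variable u (so the dual variable itself is rho * u)."""
    eps_pri = np.sqrt(z.size) * eps_abs + \
        eps_rel * max(np.linalg.norm(x), np.linalg.norm(z))
    eps_dual = np.sqrt(x.size) * eps_abs + eps_rel * np.linalg.norm(rho * u)
    return eps_pri, eps_dual
```

The iteration stops once $\|r^k\|_2 \le \epsilon^\mathrm{pri}$ and $\|s^k\|_2 \le \epsilon^\mathrm{dual}$ hold simultaneously.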
\begin{figure}[ht] \begin{center} \includegraphics[width = \columnwidth]{residuals.pdf} \caption{Residual convergence: Primal residual $e_p$ (solid line), and dual residual $e_d$ (dashed line).}\label{ex1}\end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width = \columnwidth]{estimates.pdf} \caption{Estimated means (solid line), true means (dashed line) and measurements (crosses).}\label{ex2}\end{center} \end{figure} We solved the same $\ell_1$ mean filtering problem using CVX, a package for specifying and solving convex optimization problems (\cite{cvxProg}). CVX calls the generic SDP solvers SeDuMi (\cite{Stu:99}) or SDPT3 (\cite{TTT:99}) to solve the problem. While these solvers are reliable for wide classes of optimization problems, and exploit sparsity in the problem formulation, they are not customized for particular problem families, such as ours. The computation time for CVX is approximately 20 seconds. Our ADMM algorithm (implemented in C) took $2.2$ \textit{milliseconds} to produce the same estimates. Thus, our algorithm is approximately 10000 times faster than generic optimization packages. Moreover, our implementation does \textit{not} yet exploit the fact that steps 1 and 3 of ADMM can be carried out independently in parallel for each measurement; parallelizing these steps can lead to further speedups. For example, simple multi-threading on a quad-core CPU would yield a further $4 \times$ speed-up. \section{Conclusions} \label{sec:con} In this paper we derived an efficient and scalable method for an optimization problem (\ref{e-our-problem}) that has a variety of applications in control and estimation. Our custom method exploits the structure of the problem via a distributed optimization framework.
In many applications, each step of the method is a simple update that typically involves solving a set of linear equations, matrix multiplication, or thresholding, for which there are exceedingly efficient libraries. In numerical examples we have shown that we can solve problems such as $\ell_1$ mean and variance filtering many orders of magnitude faster than generic optimization solvers such as SeDuMi or SDPT3. The only tuning parameter for our method is the ADMM penalty parameter $\rho$. Finding an optimal $\rho$ is not a straightforward problem, but \cite{DBLP:journals/ftml/BoydPCPE11} contains many heuristics that work well in practice. For the $\ell_1$ mean filtering example, we find that setting $\rho \approx \lambda$ works well, but we do not have a formal justification.
\section{Introduction} Fault currents in power grids are increasing due to the ongoing expansion in energy generation and demand, as well as the trend toward parallelizing network sections. As a result, there is a global effort underway to find solutions to the fault current problem, in order to avoid replacing existing circuit breakers or at least to postpone their replacement \cite{naphade2021experimental,jia2017numerical,zhao2014performance,zhou2020hybrid,yuan2020optimized,zhou2019performance,liu2021design,wei2020limiting,ouali2020integration,alam2018fault}. Using various types of fault current limiters (FCLs) could be an option. An FCL is essentially a variable impedance connected in series with the power grid. Under normal operation its impedance is low; when a fault arises, however, its impedance increases dramatically, preventing the fault current from reaching higher amplitudes. Since failures in the converters and rotor circuit can be dangerous, FCLs may also be employed to maintain the generator's grid connection for as long as possible: the short-circuit current is limited by such circuits, but the generator is not disconnected from the grid \cite{zhang2019viable,li2019current,shen2018three}. Despite being advantageous to utilities, these new developments have created a significant problem in terms of short circuits. Symmetrical or asymmetrical short circuits are unavoidable in power networks and result in short-circuit currents of large magnitude, particularly in high-voltage power networks. These currents have negative thermal and mechanical consequences \cite{safaei2020survey}. The different types of these devices can be classified by their functionality, materials, structure, cost, etc.
They may be classified into two general groups based on their most significant functional characteristics \cite{shen2018study,wei2018performance,yuan2018novel,baferani2018novel}: 1) limiters that have a built-in (inherent) reaction to a fault, and 2) limiters with a delayed (controlled) fault response. SCFCLs offer numerous benefits over other types; however, they necessitate a significant quantity of magnetic material, resulting in a high upfront cost and a considerable size. FCLs can be non-superconducting or superconducting; our attention in this chapter is on non-superconducting FCLs. Superconducting FCLs have at least one superconductor tape or coil in their configuration, which can be effective in limiting fault currents; however, there are some problems in using these types of limiters, such as the cooling system volume and the superconductor costs, which pose barriers for researchers and engineers attempting to design power grids that include superconducting FCLs \cite{jia2017numerical,ruiz2014resistive,tan2015resistive,chen2016parameter,commins2012three}. Nowadays these FCLs are constructed at small scale in laboratories and tested in small sections of power grids \cite{tan2015resistive}. There are different types of FCLs, including the saturated core FCL, inductive FCL, magnetic FCL, open core FCL, etc.; each configuration has its own structure and materials, but the same basic functionality.\\ The saturated core FCL is made up of three parts: copper dc and ac coils, and an iron core, each of which has its own structure. The ac coils are linked in series with the power supply, while the dc coils are connected to a separate direct current source. These sorts of limiters, on the other hand, face problems such as the large amount of magnetic material needed, which may be partially addressed by introducing novel structures and arrangements.
To address the issue of large volume and high cost, many structures have been offered \cite{shen2018three,shen2018study,linden2019design,linden2020phase,commins2012analytical}. By introducing an air gap into the core, the volume of magnetic material may be decreased; this approach is applicable to nonlinear inductors in general. The air gap is then not just a component of the alternating current circuit, but also a component of the direct current magnetic circuit. In this situation, core saturation is difficult, and the FCL's normal impedance is decreased. As a result, designs with three legs are possible, and the arrangement of ac and dc windings may be modified such that there is no dc flux passing through the middle leg of the core. An air gap in the middle leg may therefore be created without raising the dc excitation needed to saturate the core. This air gap also ensures that the core size is reduced for the same voltage level. The configuration of the ac and dc windings allows one of the two outer legs to come out of saturation during each half cycle of the fault, limiting the fault current. Compared to induction-type designs, the weight of the FCL may be decreased by utilizing a suitable construction, which depends on how the ac and dc coils are positioned on distinct legs. The ratio of the faulted phase voltage to the overall fault current determines the effective fault impedance \cite{cvoric2009new,cvoric20163d,cvoric2008comparison}. \section{Literature Review} There are many FCL configurations tested or implemented all over the world to relieve the risks of these hazardous currents in a power grid; they can help the power system remain safe, reliable, and stable \cite{safaei2020survey}. Ruiz et al. \cite{ruiz2014resistive} investigated resistive-type superconducting FCLs in terms of their working principles, numerical modeling, and superconducting materials, together with experimental concepts. Tan et al.
presented a resistive SFCL with different current flowing times using a YBCO tape as the superconducting material \cite{tan2015resistive}. An assessment of a thermoelectric model of the RSFCL by a root-mean-squared method was conducted by Branco et al. in \cite{branco2010proposal}. Regarding inductive SFCLs, Yamaguchi et al. presented a transformer-type SFCL and analyzed the relation between the current limiting characteristics and the transformer ratings in \cite{yamaguchi2004characteristics}. Recently, a transformer-type SFCL with an external circuit has been developed that may minimize the fault current utilizing two quench operations; the impact of current limitation on the winding path of two isolated windings has been addressed \cite{han2018fault}. A comprehensive review of flux-locked type FCLs (FLSFCLs) was recently published in \cite{badakhshan2018flux}, which included a detailed assessment of research on, and applications of, FLSFCLs in power grids. A FLSFCL was used to improve power quality indices in a distribution system \cite{lim2011analysis}; a voltage-sourced inverter was used to integrate the proposed FLSFCL. The coordination of overcurrent relays with a FLSFCL is discussed in \cite{kim2010study}. In \cite{zhao2014performance}, the impacts of these windings on the FLSFCL's performance in an iron closed-core were investigated. Tripathi et al. \cite{tripathi2021real} proposed a configuration with two ac windings and a dc winding SCFCL for improving a doubly fed induction generator (DFIG) system. \section{Principles of Fault Current Limiting} FCLs have been widely studied during recent years, and researchers have introduced many configurations with different materials, performance types, arrangements, etc.
Two basic fault current limiting principles are as follows: adding a resistive impedance, or adding a reactive impedance (an inductor, or a combination of an inductor and a capacitor) \cite{safaei2020survey}.\\ A sample transmission line with an FCL is represented in Fig.~\ref{fig:Fig. 2.1}, in which the device is in series with the other power grid components. FCLs usually limit fault currents to an acceptable level, which depends on the system requirements. They do not act during normal conditions of a power grid, and they do not apply any changes, damage, or restrictions to the system, so it remains safe and reliable.\\ \begin{figure} \centering \includegraphics[height=1.5cm]{figures/Fig.2.1.jpg} \caption{Sample power system circuit} \label{fig:Fig. 2.1} \end{figure} Eq. (\ref{1}) describes the fault current in the system for a fault such as a single-phase fault to ground (see Fig.~\ref{fig:Fig. 2.1}), in which $i_{fault}(t)$ represents the fault current, $U_{L}$ the line voltage amplitude, $\phi_{L}$ the phase angle of the transmission line impedance, $Z_{L}$ the impedance of the transmission line, $I_{L, t_{f}}$ the value of the transmission line current at the start of the fault, $t_{f}$ the fault inception time, and $\tau$ the line time constant \cite{yuan2018novel}: \begin{equation}\label{1} {i_{fault}(t) = \frac{U_{L} \times \sin(\omega t - \phi_{L})}{|Z_{L}|} + (I_{L,t_{f}}-\frac{U_{L} \times \sin(\omega t - \phi_{L})}{|Z_{L}|}) \times e^{-\frac{t-t_{f}}{\tau}}} \end{equation} When resistive or inductive types of fault current limiters are utilized, the difference in the system's fault performance lies in the time constant of the system, resulting in a difference in the value of the fault current's dc component. For the same FCL impedance, the fault time constant of the transmission line with a resistive FCL is smaller than with an inductive FCL, as demonstrated by Eqs.
(\ref{2}) and (\ref{3}), where $L_{L}$ is the line leakage inductance, $R_{L}$ the line resistance, and $R_{FCL}$ and $L_{FCL}$ the resistive and inductive FCL impedances, respectively. \begin{equation}\label{2} {\tau_{RFCL} = \frac{L_{L}}{R_{L} + R_{FCL}}} \end{equation} \begin{equation}\label{3} {\tau_{XFCL} = \frac{L_{L} + L_{FCL}}{R_{L}}} \end{equation} As a result, given the same FCL impedance, the initial fault current peak for a resistive limiting impedance is smaller than for an inductive one. Another distinction is that the resistive FCL dissipates energy during the fault, whereas the inductive FCL accumulates energy in the magnetic field during the fault and restores it to the system at the end of each cycle. Hence, inductive FCLs do not result in power loss for the system if normal operation is resumed without interruption of the current flow (neglecting the inductor's resistance). When the transmission line is disconnected, the energy accumulated in the previous cycle merely becomes power loss in the circuit breaker \cite{tan2015resistive,zhou2020inductive}.\\ To limit the fault current, an inductive FCL can also employ an inductor and a capacitor simultaneously. In this case, the FCL can be replaced by a parallel inductor and capacitor arranged to resonate at the network frequency. The $C_{FCL}$ capacitor is connected in series in the transmission line, and its value is chosen to compensate the leakage inductance of the transmission line. When a fault occurs, the $L_{FCL}$ inductance is connected in parallel to the capacitor, and the parallel LC circuit's resonant impedance restricts the fault current as follows: \begin{equation}\label{4} {Z_{res,FCL} =j \times \frac{\omega L_{FCL}}{1-\omega^2 L_{FCL}C_{FCL}}} \end{equation} This section briefly discussed the limitation of fault currents. A comprehensive comparison assessing the different configurations by numerical methods is given in the next sections.
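As a numerical illustration of Eqs. (\ref{2}) and (\ref{3}), the sketch below uses the line parameters of Table \ref{table 2.1}; the FCL impedance magnitude and the 50 Hz system frequency are our own assumptions:

```python
import math

# Line parameters from Table 2.1
R_L = 0.1095                 # line resistance [ohm]
L_L = 5.63419e-4             # line leakage inductance [H]

Z_FCL = 0.5                  # FCL impedance magnitude [ohm] -- assumed
omega = 2 * math.pi * 50.0   # 50 Hz system -- assumed

# Eq. (2): a resistive FCL shortens the fault time constant
tau_rfcl = L_L / (R_L + Z_FCL)

# Eq. (3): an inductive FCL of the same impedance magnitude lengthens it
L_FCL = Z_FCL / omega
tau_xfcl = (L_L + L_FCL) / R_L
```

With these numbers the resistive FCL yields a time constant below 1 ms, while the inductive FCL gives roughly 20 ms, so the dc offset of the fault current decays far faster in the resistive case.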
\section{Basis of SCFCLs} The nonlinear inductor impedance is determined by the average length of the flux path, $l_{mean}$, the cross-sectional area of the core, $A_{core}$, the winding's number of turns, $N_{ind}$, and the core's relative permeability, ${\mu}_{r}$ (Fig.~\ref{fig:Fig. 2.2}). By driving the core into saturation, the inductance becomes much smaller than when the operating point on the hysteresis curve lies outside the saturation zone (Fig.~\ref{fig:Fig. 2.3}) \cite{5415715}. \begin{figure} \centering \includegraphics[height=6cm]{figures/Fig.2.2.jpg} \caption{Nonlinear inductor} \label{fig:Fig. 2.2} \end{figure} \begin{figure} \centering \includegraphics[height=5cm]{figures/Fig.2.3.jpg} \caption{Nonlinear B-H (hysteresis) curve in an iron core} \label{fig:Fig. 2.3} \end{figure} The estimated inductances for the unsaturated and saturated situations are given in the following equations: \begin{equation}\label{5} {L_{x} = \mu_{0}\mu_{r} \times \frac{{N^2_{ind}} A_{core}}{l_{mean}}} \end{equation} \begin{equation}\label{6} {L_{y} = \mu_{0}\mu_{r,sat} \times \frac{{N^2_{ind}} A_{core}}{l_{mean}}} \end{equation} where $L_{x}$ and $L_{y}$ are the FCL inductances during fault and normal conditions, and $\mu_{0}$ and $\mu_{r,sat}$ are the permeability of air and the relative permeability of the core at saturation, respectively. The core must be saturated under normal system conditions, but not over-saturated, and it should come out of saturation under fault conditions. An additional dc current-carrying coil supplied by an auxiliary source, or a permanent magnet (PM), can be used to saturate the core, as illustrated in Fig.~\ref{fig:Fig. 2.4}.\\ Generally, a dc coil, a copper ac coil, and an iron core make up a saturated core fault current limiter, whose arrangement changes depending on the construction and the materials used. The ac coils are connected to the power system in series, and their resulting magnetic fields are directed in opposite directions.
The dc coils are connected to a separate direct current source, as demonstrated in Fig.~\ref{fig:Fig. 2.5}. \begin{figure} \centering \includegraphics[height=9cm]{figures/Fig.2.4.jpg} \caption{Saturation methods of FCL core. (a) by direct current (b) by PM} \label{fig:Fig. 2.4} \end{figure} The SCFCL works in its usual non-limiting state under normal power system circumstances, which is the mode of operation for most of the time and has no impact on the power grid. In this case, the rated current flows through the ac windings, and the large current in the dc winding biases the core with a strong dc magnetic field. Deep saturation and low magnetic permeability are maintained in the iron cores during each ac cycle. Because the inductance is proportional to the permeability, the SCFCL reactance is low and has only a slight impact on the rest of the power system \cite{jia2017numerical,cvoric2009new,gunawardana2016transient,nikulshin2016saturated}. \begin{figure} \centering \includegraphics[height=5cm]{figures/Fig.2.5.jpg} \caption{Schematic of main components of SCFCLs} \label{fig:Fig. 2.5} \end{figure} When short circuits occur, a substantial fault current flows through the ac windings, and the limiter enters its current limiting mode. For active-type SCFCLs the dc current is instantly shut off, while for inactive-type SCFCLs the same dc bias is maintained regardless of the short circuit fault. In both types, the high fault currents drive the iron cores out of deep saturation for part of each cycle. While the cores operate in the linear region of the hysteresis characteristic, the permeability rises by a factor of several thousand, and therefore the average limiter reactance increases. This reactance is what allows the limiter to restrict the fault current \cite{chen2016parameter,baimel2021new}. Soft iron, whose nonlinear magnetic behavior is dictated by the hysteresis characteristic of the material, is used to represent the core in this case.
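A back-of-the-envelope evaluation of Eqs. (\ref{5}) and (\ref{6}) shows why the reactance collapses in saturation. All numbers below are illustrative assumptions, not the design values of the simulated FCLs:

```python
import math

mu0 = 4e-7 * math.pi   # permeability of air [H/m]
mu_r = 5000.0          # unsaturated relative permeability -- assumed
mu_r_sat = 5.0         # deep-saturation relative permeability -- assumed
N_ind = 60             # winding turns -- assumed
A_core = 0.01          # core cross-section [m^2] -- assumed
l_mean = 1.0           # mean flux path length [m] -- assumed

L_x = mu0 * mu_r * N_ind ** 2 * A_core / l_mean      # Eq. (5): fault (unsaturated)
L_y = mu0 * mu_r_sat * N_ind ** 2 * A_core / l_mean  # Eq. (6): normal (saturated)
```

The inductance ratio equals $\mu_r/\mu_{r,sat}$, three orders of magnitude in this sketch, which is precisely the impedance jump exploited by the SCFCL.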
The graph in Fig.~\ref{fig:Fig. 2.6} illustrates some advantages of employing this type of limiter. \begin{figure} \centering \includegraphics[height=8cm]{figures/Fig.2.6.jpg} \caption{SCFCL characteristics} \label{fig:Fig. 2.6} \end{figure} The cores of the limiters are saturated deeply enough that typical current flux changes cannot restore them to the linear region depicted in Fig.~\ref{fig:Fig. 2.7}. In the case of a fault, the operating point on the \textbf{B-H} curve is de-saturated, increasing the limiter impedance. The safe margin width indicated in the figure determines the fault current level at which the FCL commences to limit. The width of this region may be neglected due to the small fault current peak, so the FCL can act on the spot. \begin{figure} \centering \includegraphics[height=7cm]{figures/Fig.2.7.jpg} \caption{FCL performance during normal regime – flux changes} \label{fig:Fig. 2.7} \end{figure} The relationship between the dc current, $I_{dc}$, the line current, $I_{L}$, and the magnetic field in the core, $H$, is given below; to calculate the safe zone width, $H_{s}$, Eq. (\ref{7}) is rewritten as Eq. (\ref{8}) \cite{cvoric20163d}: \begin{equation}\label{7} {N_{dc} I_{dc} - N_{ac} I_{L,max} > H_{sat} l_{mean}} \end{equation} \begin{equation}\label{8} {N_{dc} I_{dc} - N_{ac} I_{L,max} = (H_{sat} + H_{s}) l_{mean}} \end{equation} After the fault has been cleared, the core returns to the saturated mode. Therefore, when the transmission current decreases, the ac winding's impedance decreases with no latency; as a result, the inductive FCL can limit an unlimited number of consecutive faults.\\ Because of the inductive nature of the fault current limiter's impedance, there are no energy losses during the limiting action, apart from a loss owing to the resistance of the winding. Compared to the transmission line's nominal power, this dissipated power is rather low.
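The bias condition of Eqs. (\ref{7})--(\ref{8}) can be checked numerically. $N_{dc}$, $I_{dc}$, and $N_{ac}$ follow Table \ref{table 2.2}, and the 1000 A peak line current matches the simulations reported later; the saturation field strength and path length are our assumptions:

```python
N_dc, I_dc = 500, 450.0   # dc turns and bias current [A] (Table 2.2)
N_ac = 60                 # ac winding turns (Table 2.2)
I_L_max = 1000.0          # peak line current [A]
H_sat = 150e3             # field strength at saturation [A/m] -- assumed
l_mean = 1.0              # mean flux path length [m] -- assumed

# Eq. (8), solved for the safe-zone width H_s
H_s = (N_dc * I_dc - N_ac * I_L_max) / l_mean - H_sat

# Eq. (7) holds, i.e. the core stays saturated over the full ac cycle,
# exactly when the margin is positive.
saturated_all_cycle = H_s > 0
```

A positive $H_s$ (15 kA/m with these numbers) means the dc bias keeps the core saturated over the entire ac cycle, with the stated margin before the FCL starts to limit.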
Unlike resistive fault current limiters, the limiting operation period is not constrained by the amount of heat the impedance can acceptably dissipate. Because inductive limiters cannot operate as a breaker, the fault current must be eliminated by other protection apparatus such as circuit breakers. However, owing to the following design issues, they have not yet been industrialized \cite{yuan2020saturated}: 1) the scale and expense of the materials are both significant; 2) an inductive overvoltage appears across the dc current source during fault operation. \section{Circuit Diagram of a Simple Power System} The test conditions for the FCLs and their impacts on current limiting and other network parameters are represented by a simplified power network test circuit with a resistance, an inductor, a power supply, and a load resistance, which operates in fault and normal conditions with the various SCFCL architectures. The sample circuit and its parameter values are illustrated in Fig.~\ref{fig:Fig. 2.8} and Table \ref{table 2.1}. Our purpose is to test the different types of saturated core FCLs on the sample network using a finite element model simulated in COMSOL Multiphysics (on a 64-bit computer with an Intel Core i7-6500U and 12 GB RAM) and to provide a comparison between them in terms of appropriate parameters. The various arrangements of these FCLs are given in the next section. \begin{figure} \centering \includegraphics[height=2cm]{figures/Fig.2.8.jpg} \caption{Power system equivalent circuit with single phase to ground fault} \label{fig:Fig. 2.8} \end{figure} \section{Different Configurations of SCFCLs} We propose several SCFCL configurations, such as 100\% magnetic separation, partial magnetic separation, and a short-circuited dc winding. The performance of these FCLs is tested on the sample power system, and comparisons are provided.
\begin{table} \centering \caption{Equivalent circuit parameters} \label{table 2.1} \setlength{\tabcolsep}{10pt} \renewcommand{\arraystretch}{1.5} \begin{tabular}{|c|c|} \hline \textbf{Parameter} & \textbf{Value} \\ \hline Power supply ($U_L$) (rms) & $10\sqrt{2}\ \mathrm{kV}$ \\ \hline $R_L$ & $0.1095\ \Omega$ \\ \hline $L_L$ & $5.63419 \times 10^{-4}\ \mathrm{H}$ \\ \hline $Z_{load}$ & $8.79\ \Omega$ \\ \hline \end{tabular} \end{table} \subsection{SCFCL with 100\% Magnetic Separation} Because of the resistive nature of the ac winding in SCFCLs, there is a voltage drop that is a small fraction of the line voltage during normal operation (up to 2\%). However, the phase voltage $U_{L}(t)$ is applied to the ac winding when a fault occurs. An overvoltage $U_{FCL,dc}(t)$ is produced at the dc source as a result of the transformer-like coupling of the windings, where the numbers of turns in the ac and dc windings are $N_{ac}$ and $N_{dc}$, respectively \cite{li2019current,wei2018performance}: \begin{equation}\label{9} {U_{FCL,dc}(t) = \frac{N_{dc}}{N_{ac}} \times U_{L} (t)} \end{equation} The induced overvoltage problem can be solved by magnetic separation of the ac and dc windings; Fig.~\ref{fig:Fig. 2.9} depicts this arrangement. \begin{figure} \centering \includegraphics[height=3.5cm]{figures/Fig.2.9.jpg} \caption{SCFCL configuration with 100\% magnetic separation of dc and ac circuit} \label{fig:Fig. 2.9} \end{figure} In this situation, the dc flux crosses only the core's left and center legs and reaches saturation there. A leg with a significant reluctance, possessing an air gap, deflects the passing flux away from this path, as expressed in Eq.
(\ref{10}): \begin{equation}\label{10} \left.\begin{aligned} \Re_{m} = \frac{l_{mm}}{\mu_{0}\mu_{r}A_{cm}} \\ \Re_{rr} = \frac{l_{mr} + \mu_{r} l_{g}}{\mu_{0}\mu_{r}A_{cr}} \end{aligned} \right\} \longrightarrow \Re_{m} \ll \Re_{rr}, \end{equation} where $\Re_{m}$ is the middle leg reluctance, $\Re_{rr}$ the right leg reluctance, $l_{mm}$ the average length of the middle leg, $l_{mr}$ the average length of the right leg, $l_{g}$ the air gap length, $A_{cm}$ the cross-section of the middle leg, and $A_{cr}$ the cross-section of the right leg. During a fault condition, the ac flux, which is induced by the fault currents, flows only through the center and right legs. The reluctance of the left leg is much higher than that of the right one (with the air gap) owing to its saturation state \cite{cvoric20163d}: \begin{equation}\label{11} \left.\begin{aligned} \Re_{lf} = \frac{l_{mf}}{\mu_{0}\mu_{r}A_{cl}} \\ \Re_{rr} = \frac{l_{mr} + \mu_{r} l_{g}}{\mu_{0}\mu_{r}A_{cr}} \end{aligned} \right\} \longrightarrow \Re_{lf} \gg \Re_{rr}, \end{equation} where $\Re_{lf}$ is the left leg reluctance, $l_{mf}$ the average length of the left leg, and $A_{cl}$ the cross-section of the left leg. The dc winding does not sense any flux change in this situation, since the ac flux does not pass through it; as a result, no overvoltage is induced. Fig.~\ref{fig:Fig. 2.10} depicts the flux distribution; two cores are utilized for each phase. \begin{figure} \centering \includegraphics[height=4cm]{figures/Fig.2.10.jpg} \caption{Flux distribution in SCFCL with 100\% magnetic separation} \label{fig:Fig. 2.10} \end{figure} This configuration, however, has a design flaw: during normal operation, the ac winding has a very high impedance. The high impedance results from the non-deep saturation of the middle leg, because the dc winding is on the opposite leg of the core and can only saturate that leg deeply.
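The reluctance relations of Eqs. (\ref{10}) and (\ref{11}) are easy to evaluate: even a short air gap dominates the magnetic circuit. The geometry values below are illustrative assumptions only:

```python
import math

mu0 = 4e-7 * math.pi   # permeability of air [H/m]
mu_r = 5000.0          # unsaturated relative permeability -- assumed
l_m = 0.5              # mean leg length [m] -- assumed
l_g = 3e-3             # air gap length [m] -- assumed
A_c = 0.01             # leg cross-section [m^2] -- assumed

R_m = l_m / (mu0 * mu_r * A_c)                   # ungapped, unsaturated leg
R_gap = (l_m + mu_r * l_g) / (mu0 * mu_r * A_c)  # leg with the air gap
```

Here the gapped leg's reluctance is 31 times that of the ungapped leg, so in normal operation virtually all of the dc flux avoids the gapped branch, which is exactly the behavior Eq. (\ref{10}) expresses.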
\subsection{SCFCL with Partial Magnetic Separation} The saturated core FCL with 100\% magnetic separation has the limitation, detailed in the preceding section, that a coil placed on the opposite leg of the core cannot thoroughly saturate the ac leg. The ac leg falls out of saturation even at the rated current, increasing the FCL impedance during normal operation. It may be inferred that the core size must be large enough to obtain an acceptably low value for the FCL's normal state impedance \cite{naphade2021experimental}.\\ An improved design of a SCFCL with partial magnetic separation is illustrated in Fig.~\ref{fig:Fig. 2.11}. An auxiliary dc coil is appended on the ac leg to drive it into a more strongly saturated mode. The effect of the auxiliary dc coil is depicted in Fig.~\ref{fig:Fig. 2.12}: it propels the operating point beyond the knee area, allowing the leg to saturate better. The SCFCL parameters are presented in Table \ref{table 2.2}. \begin{figure} \centering \includegraphics[height=5cm]{figures/Fig.2.11.jpg} \caption{SCFCL with partial magnetic separation} \label{fig:Fig. 2.11} \end{figure} \begin{figure} \centering \includegraphics[height=5.5cm]{figures/Fig.2.12.jpg} \caption{Deep saturated mode of an AC leg provided by a dc auxiliary coil} \label{fig:Fig. 2.12} \end{figure} \begin{table} \centering \caption{SCFCL parameters} \label{table 2.2} \setlength{\tabcolsep}{3.5pt} \renewcommand{\arraystretch}{1.1} \begin{tabular}{|c|c|c|c|} \hline \textbf{Parameter} & \textbf{Description} & \textbf{Value} & \textbf{Value}\\ \hline & & 100\% magnetic separation & Partial magnetic separation \\ \hline $N_{ac}$ & No. of turns in ac winding & 60 & 60 \\ \hline $N_{dc}$ & No. of turns in dc winding & 500 & 500 \\ \hline $I_{dc}$ & dc current (A) & 450 & 450 \\ \hline $l_{gap}$ & air gap length & 0.3 & 0.3 \\ \hline $N_{dc,aux}$ & No. of turns in auxiliary winding & - & 76 \\ \hline \end{tabular} \end{table} Eq.
(\ref{12}) gives the relationship between $N_{dc,aux}$, $N_{ac}$, and the dc and line currents: \begin{equation}\label{12} N_{dc,aux} I_{dc} = N_{ac} I_{L}, \end{equation} where $I_{dc}$ is the dc current and $I_{L}$ the line current. The design parameters are analogous to the previous case, and the auxiliary winding has 76 turns, obtained using Eq. (\ref{12}). The FCL otherwise has the same specifications as in the case with the auxiliary dc coil, except that the main dc coil is short-circuited. \subsection{SCFCL with DC Coil through Short-Circuited Terminals} Even though the primary dc coil has more turns than the auxiliary one, the primary coil's share of the ac leg saturation is smaller than that of the auxiliary dc coil illustrated in Fig.~\ref{fig:Fig. 2.13}. To raise the dc-side reluctance and keep the ac flux in the air-gap path, many turns of the dc primary coil are indispensable; otherwise, the induced voltage at the dc source rises due to the spreading of ac flux through the dc leg \cite{wei2020limiting,yuan2020saturated}.\\ If the primary winding is cut off from the rest of the circuit, the limiter functionality does not change, as shown in Fig.~\ref{fig:Fig. 2.13}. The ac flux flowing through the dc leg induces a current in the coil with short-circuited terminals, which opposes (and thus expels) the ac flux from the dc leg. The dc leg is not fully saturated in this scenario; a supplementary coil drives it to the knee area. The FCL impedance nevertheless does not increase during normal operation, because this is where the ac flux flows. \begin{figure} \centering \includegraphics[height=5cm]{figures/Fig.2.13.jpg} \caption{SCFCL with DC Coil through short-circuited terminals} \label{fig:Fig. 2.13} \end{figure} The level of flux variation on the dc side determines the induced current in the short-circuited coil, which is inversely proportional to $N_{dc}$: the induced current increases as the number of coil turns decreases.
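As a concrete check of Eq. (\ref{12}): with $N_{ac}$ and $I_{dc}$ taken from Table \ref{table 2.2}, a line current of 570 A (our assumption; the text does not state the design line current) reproduces the 76 auxiliary turns quoted above:

```python
N_ac = 60        # ac winding turns (Table 2.2)
I_dc = 450.0     # dc bias current [A] (Table 2.2)
I_L = 570.0      # line current [A] -- assumed for illustration only

# Eq. (12): N_dc,aux * I_dc = N_ac * I_L
N_dc_aux = N_ac * I_L / I_dc
```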
Because fewer turns are needed in the short-circuited coil, the total amount of material can be substantially reduced in this configuration: replacing the primary dc coil of the partial magnetic separation design with a short-circuited one lowers the overall amount of FCL coil material \cite{cvoric2009new}. \section{Simulation Results for Configurations with One AC Winding} In this section, we evaluate the simulation results of these models. The models are defined in Fig.~\ref{fig:Fig. 2.14} and the components of this type of limiter in Fig.~\ref{fig:Fig. 2.15}: \begin{figure} \centering \includegraphics[height=4cm]{figures/Fig.2.14.jpg} \caption{SCFCL structures including partial magnetic separation (Model A), 100\% magnetic separation (Model B) and a dc coil with short-circuited terminals (Model C)} \label{fig:Fig. 2.14} \end{figure} \begin{figure} \centering \includegraphics[height=5.5cm]{figures/Fig.2.15.jpg} \caption{SCFCL components} \label{fig:Fig. 2.15} \end{figure} The geometrical dimensions of the design are demonstrated in Fig.~\ref{fig:Fig. 2.16}: \begin{figure} \centering \includegraphics[height=6cm]{figures/Fig.2.16.jpg} \caption{Configuration dimensions} \label{fig:Fig. 2.16} \end{figure} Using the sample power system to assess the FCL performance under fault conditions, the curves of the current passing through the system and of the flux density and field strength can be obtained, as well as the distribution of flux density or field strength in different situations. A transient analysis versus time was performed for the configurations, and their performance in the presence of a fault is compared and assessed. Even though the limiter is present in the power system during normal operation, it should not affect the circuit. Therefore, the current passing through the grid without the FCL and with the FCL under normal operating conditions is compared, as shown in Fig.~\ref{fig:Fig. 2.17}.
The red line shows the electric current without the FCL, with an amplitude of 1000 A. The other models show some changes in the current, especially at the outset. These changes originate from the copper structure of the FCL and from the winding positions, which slightly reduce the current amplitude in those models. \begin{figure} \centering \includegraphics[height=6cm]{figures/Fig.2.17.jpg} \caption{Current passing through the system in normal operating conditions} \label{fig:Fig. 2.17} \end{figure} \begin{figure} \centering \includegraphics[height=6cm]{figures/Fig.2.18.jpg} \caption{Current passing through the system in normal and fault operation without SCFCL} \label{fig:Fig. 2.18} \end{figure} In Models A, B and C, the limiter is in the electric circuit. The maximum current passing through the system is about 1000 A; with the FCL present (connected in series with the circuit), the current is slightly reduced because of a small voltage drop across its very small input impedance. The general diagram of the current without the FCL under normal and fault operating conditions is illustrated in Fig.~\ref{fig:Fig. 2.18}, where the fault was applied at the start and the circuit was analyzed for 50 ms. As you can observe, the impact of the fault on the electric current is highly significant.\\ Also, as mentioned, only the dc leg goes into deep saturation in Model B; to further saturate the ac leg, a dc auxiliary winding is added to it, and the resulting flux-density diagram is shown in Fig.~\ref{fig:Fig. 2.19}. As you can observe, the flux density in Model A (partial mode) shows slightly deeper saturation due to the auxiliary winding. In Model C, the dc leg is not deeply saturated and is driven by the auxiliary winding to the knee area, which corresponds to low flux-density values, as depicted in Fig.~\ref{fig:Fig. 2.19}.
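The trend in Figs.~\ref{fig:Fig. 2.17} and \ref{fig:Fig. 2.18} can be illustrated with a simple steady-state phasor estimate: a small saturated inductance in series with the line barely perturbs the normal current, while the much larger unsaturated inductance during a fault clamps the fault peak. All circuit values below are hypothetical and serve only to show the trend.

```python
import math

F = 50.0                  # supply frequency, Hz (assumed)
W = 2 * math.pi * F       # angular frequency, rad/s

def peak_current(v_peak, r, l_series):
    """Peak of the steady-state current of a series R-L branch."""
    z = math.hypot(r, W * l_series)   # impedance magnitude
    return v_peak / z

V_PEAK = 10e3    # source peak voltage, V (hypothetical)
R_LOAD = 10.0    # normal load resistance, ohm
R_FAULT = 0.5    # residual resistance during the fault, ohm
L_SAT = 1e-4     # FCL inductance, core saturated (normal mode), H
L_LIN = 5e-2     # FCL inductance, core out of saturation (fault mode), H

i_normal_no_fcl = peak_current(V_PEAK, R_LOAD, 0.0)
i_normal_fcl = peak_current(V_PEAK, R_LOAD, L_SAT)
i_fault_no_fcl = peak_current(V_PEAK, R_FAULT, 0.0)
i_fault_fcl = peak_current(V_PEAK, R_FAULT, L_LIN)

# The FCL barely disturbs normal operation but clamps the fault current.
print(i_normal_no_fcl, i_normal_fcl, i_fault_no_fcl, i_fault_fcl)
```

With these numbers the normal-mode current is essentially unchanged, while the fault current is cut by more than an order of magnitude, mirroring the simulated behaviour.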
\begin{figure} \centering \includegraphics[height=6cm]{figures/Fig.2.19.jpg} \caption{Magnetic flux density changes versus time for different models} \label{fig:Fig. 2.19} \end{figure} Now, if we examine the flux density for Model A in normal operation and in fault mode, we see that the flux density in fault mode is generally higher than under normal conditions: the fault drives the core deeper into saturation and produces a larger impedance. The flux density is shown in Fig.~\ref{fig:Fig. 2.20}, which indicates that this type of SCFCL is effective; the maximum value of $B$ reaches almost 1.3 T. \begin{figure} \centering \includegraphics[height=6.5cm]{figures/Fig.2.20.jpg} \caption{Magnetic flux density passing through the core for both conditions in model A} \label{fig:Fig. 2.20} \end{figure} Fig.~\ref{fig:Fig. 2.21} depicts the magnetic field strength of the iron core for Model A as well, which is relatively higher in the fault state. \begin{figure} \centering \includegraphics[height=6.5cm]{figures/Fig.2.21.jpg} \caption{Magnetic field strength passing through the iron core for model A in normal and fault modes} \label{fig:Fig. 2.21} \end{figure} For a general comparison of the flux density in the models presented above, their diagrams in normal operation and fault modes are represented in Fig.~\ref{fig:Fig. 2.22}. The lowest flux-density value belongs to the short-circuited dc winding model in normal operation, which is in saturation mode. The highest belongs to the partial model in fault mode, which exhibits the deepest saturation. The significant difference between the normal and fault modes for Model C is due to the presence of the short-circuited dc winding, which makes this type of SCFCL very advantageous. The other structures show a smaller decline in flux density than Model C. \begin{figure} \centering \includegraphics[height=6.5cm]{figures/Fig.2.22.jpg} \caption{Comparison of flux density in normal and fault operations} \label{fig:Fig.
2.22} \end{figure} The current flow in this model during a fault has also been investigated in the presence of the SCFCL. In this case we have \[ B = \mu H \;\Rightarrow\; B \uparrow \,\Rightarrow\, H \uparrow, \qquad H = \frac{N I}{l} \;\Rightarrow\; I \uparrow \ (N \text{ fixed}), \] which can be analyzed using the flux-density diagram.\\ \begin{figure} \centering \includegraphics[height=6.5cm]{figures/Fig.2.23.jpg} \caption{Current passing through the system during a fault condition with SCFCL} \label{fig:Fig. 2.23} \end{figure} For example, a fault is applied at 23 ms and the FCL performance for Model A is investigated, as shown in Fig.~\ref{fig:Fig. 2.24}. As you can see, there is no fault until 23 ms; the FCL is in the circuit but, as expected, has little effect on it during normal operation. After the fault starts, the first current peak is almost 116 kA, after which the peak of the fault current is reduced, showing that the FCL limits it properly (the fault was applied at 23 ms in this section because the computational load is very high and time consuming; this instant is sufficient for analysis and comparison). \begin{figure} \centering \includegraphics[height=6.5cm]{figures/Fig.2.24.jpg} \caption{Current passing through the system by applying a fault in 23 ms} \label{fig:Fig. 2.24} \end{figure} We now examine the flux-density distribution at different points in these three models and compare them. First, we show how the flux density changes in Model A, as illustrated in Fig.~\ref{fig:Fig. 2.25} and Fig.~\ref{fig:Fig. 2.26} (for an air-gap length of 0.3 m). In Fig.~\ref{fig:Fig. 2.25}, there is a flux-density reduction in the middle leg that increases near the fault point. After the fault occurs, there is a large drop in flux density, with the density concentrated at the top of the middle leg at 5 ms.
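The chain of implications above ($B$ up, hence $H$ up, hence $I$ up for fixed $N$) can be checked numerically with a linearized $B$--$H$ characteristic in the saturated region; all parameter values below are hypothetical.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m

# Hypothetical linearized B-H parameters for the saturated region
B_SAT = 1.2      # flux density at the knee, T
H_SAT = 500.0    # field strength at the knee, A/m

def field_strength(b):
    """Invert B = B_sat + mu0*(H - H_sat) in the saturated region."""
    return H_SAT + (b - B_SAT) / MU0

def winding_current(b, n_turns, path_len):
    """Line current from H = N*I/l, i.e. I = H*l/N with N fixed."""
    return field_strength(b) * path_len / n_turns

N = 20          # ac turns (hypothetical)
L_PATH = 1.0    # mean magnetic path length, m (hypothetical)

i_normal = winding_current(1.25, N, L_PATH)
i_fault = winding_current(1.30, N, L_PATH)
# A higher flux density implies a higher field strength and, for a
# fixed number of turns, a higher line current.
print(i_normal, i_fault)
```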
Then, after 5 ms, the flux density spreads to the other legs, with the middle leg experiencing the highest flux density. Afterwards, the high density transfers to the right leg. \begin{figure} \centering \includegraphics[height=8cm]{figures/Fig.2.25.jpg} \caption{Flux density illustration during normal mode for Model A} \label{fig:Fig. 2.25} \end{figure} As you can see, the flux density gradually decreases in the central leg, which comes out of saturation until the density rises again at about 21 ms, but it never enters deep saturation in this case. The flux density is very high in some positions because the fault drives current through the ac windings. The middle leg is deeply saturated at 10 ms with the help of the dc auxiliary winding. The flux density is gradually directed from the middle leg to the leg with the air gap, creating slight saturation in those areas. The flux-density distribution at 5 ms, for which the diagram has already been given, is also shown for these three models as an example. \begin{figure} \centering \includegraphics[height=8cm]{figures/Fig.2.26.jpg} \caption{Flux density distribution for Model A during fault mode} \label{fig:Fig. 2.26} \end{figure} \begin{figure} \centering \includegraphics[height=8cm]{figures/Fig.2.27.jpg} \caption{Comparison of flux density distribution at 5 milliseconds} \label{fig:Fig. 2.27} \end{figure} \section{Inductive SCFCL} The configuration of a single-phase inductive SCFCL is shown in Fig.~\ref{fig:Fig. 2.28}. In this structure, the two outer legs, each carrying its own ac winding, alternate saturation from one half-cycle to the other, and the leg with the air gap directs the ac flux produced in that leg. There are therefore three magnetic legs in this architecture, with the dc and ac coils placed on the outer legs. The ac coils are connected in series with the transmission line, whilst the dc coils are connected in series with the direct-current source \cite{cvoric20163d,cvoric2008comparison,pirhadi2020design}.
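The alternating saturation of the two outer legs can be pictured by superposing the dc bias flux density with the ac component, which adds in one leg and subtracts in the other during each half-cycle. A minimal sketch with hypothetical amplitudes:

```python
import math

B_DC = 1.4    # dc bias flux density in each outer leg, T (hypothetical)
B_AC = 0.5    # ac flux-density amplitude, T (hypothetical)
B_KNEE = 1.2  # saturation knee, T (hypothetical)

def leg_flux_densities(phase):
    """Flux density in the left/right outer legs: dc bias plus or
    minus the instantaneous ac contribution."""
    b_ac = B_AC * math.sin(phase)
    return B_DC + b_ac, B_DC - b_ac

# Positive half-cycle: the left leg saturates deeper (above the knee),
# the right leg moves toward the knee; the roles swap every half-cycle.
b_left, b_right = leg_flux_densities(math.pi / 2)
print(b_left, b_right)   # roughly 1.9 and 0.9 with the values above
```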
\begin{figure} \centering \includegraphics[height=5.5cm]{figures/Fig.2.28.jpg} \caption{Single phase inductive SCFCL} \label{fig:Fig. 2.28} \end{figure} The dc windings are connected in such a way that the dc flux circulates around the outer loop of the core. As a result, the outer legs of the core are saturated, but the dc flux does not pass through the central leg. The dc saturation level therefore does not depend on the length of the air gap ($l_{gap}$), so we have \cite{linden2019design}: \begin{equation}\label{13} B_{mid}=0, \quad B_{o} = \mu_{sat} \times \frac{N_{dc} I_{dc}}{l_{mean,outer}} + (B_{sat} - \mu_{sat} H_{sat}) \end{equation} where $B_{o}$ and $B_{mid}$ are the dc flux densities in the outer and central legs, respectively, $l_{mean,outer}$ is the mean length of the outer leg, and $H_{sat}$ is the saturation value of the magnetic field. The central leg provides a parallel route for the ac flux, allowing the air gap to lie solely in the ac magnetic circuit. After the fault commences, the left and right legs of the core alternately come out of saturation. The ac windings are connected such that, during one half-cycle, the ac and dc fluxes are in the same direction in one outer leg and in opposite directions in the other. Since the reluctance of the saturated right leg is quite high, the ac flux of the left leg is blocked from that path and closes via the central leg. The inductance of the SCFCL during normal and fault conditions can be expressed as Eqs.
(\ref{14}) and (\ref{15}) \cite{commins2012three,commins2012analytical,zhou2020inductive,5415715,pellecchia2016development}: \begin{equation}\label{14} L_{FCL,sat} = \mu_{0} \times \frac{N^2_{ac} \times A_{core,outer}}{l_{mean,outer}+l_{gap}} = \mu_{0} \times \frac{N^2_{ac} \times A_{core,outer}}{({l_{mean,outer}}/{l_{gap}}+1) \times l_{gap}} \end{equation} \begin{equation}\label{15} L_{FCL,lin} = \mu_{0} \times \frac{N^2_{ac} \times 2A_{core,outer}}{l_{gap}} = f(2A_{core,outer}) \end{equation} where $L_{FCL,sat}$ is the FCL inductance in saturation mode, $A_{core,outer}$ is the cross-sectional area of the outer leg, and $L_{FCL,lin}$ is the FCL inductance out of saturation (in fault mode). \section{SCFCL with DC Single-Core Short-Circuited Windings} The SCFCL with dc short-circuited windings uses two cores in each phase. The two cores can be combined to further reduce the amount of magnetic material. This topology is illustrated in Fig.~\ref{fig:Fig. 2.29}. As you can observe, because the dc leg's main purpose was to provide a low-reluctance path for the dc flux, it has been eliminated; in this setup, an extra ac leg takes over the dc leg's role. The ac flux is directed from each ac winding to the air-gap legs using short-circuited windings \cite{baferani2018novel,commins2012analytical,pellecchia2016development,vilhena2018design}. \begin{figure} \centering \includegraphics[height=5.5cm]{figures/Fig.2.29.jpg} \caption{Single-core SCFCL configuration with short-circuited windings} \label{fig:Fig. 2.29} \end{figure} In this section, we evaluate the simulation results of Models D and E, which are defined in Fig.~\ref{fig:Fig. 2.30}. The current passing through the system in the presence of the FCL under normal operating conditions has been investigated for this case. Referring to Fig.~\ref{fig:Fig.
2.31}, it can be seen that these two models have larger impedances than Models A, B and C in normal operating mode, which is undesirable: their current reaches only about 400 A even at the first peak, although the rated current of the system has a peak of 1000 A. Model D is more desirable than Model E in this state (Model E has a larger number of windings and therefore more resistance and impedance in normal operation). The drop at the outset of the current of Model D is due to the presence of the air gap in the middle leg. \begin{figure} \centering \includegraphics[height=4.5cm]{figures/Fig.2.30.jpg} \caption{SCFCL configurations including an inductive saturated core FCL (Model D) and a single-core saturated core FCL with short-circuited windings (Model E)} \label{fig:Fig. 2.30} \end{figure} \begin{figure} \centering \includegraphics[height=6cm]{figures/Fig.2.31.jpg} \caption{Current passing through the system with SCFCL in a normal mode} \label{fig:Fig. 2.31} \end{figure} Model D saturates better than Model E because the outer legs in Model D carry dc windings and are well saturated, while in Model E the dc winding is short-circuited, which causes the iron core around it to saturate later, as shown in Fig.~\ref{fig:Fig. 2.32}, which plots the flux-density changes in fault mode for Models D and E. \begin{figure} \centering \includegraphics[height=6cm]{figures/Fig.2.32.jpg} \caption{Flux density of iron core during fault mode for Models D and E} \label{fig:Fig. 2.32} \end{figure} The fault-mode current for Models D and E can also be shown; because of their larger voltage drops, the limited current is even below the nominal current (Fig.~\ref{fig:Fig. 2.33}). Fig.~\ref{fig:Fig. 2.33} shows a higher current amplitude than Fig.~\ref{fig:Fig. 2.31}.
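Eqs.~(\ref{14}) and (\ref{15}) can be evaluated numerically to gauge how strongly the FCL inductance rises once the core leaves saturation; the geometry and turn counts below are hypothetical, not the dimensions of the simulated models.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m

def l_fcl_sat(n_ac, a_core_outer, l_mean_outer, l_gap):
    """FCL inductance with the core saturated, Eq. (14)."""
    return MU0 * n_ac**2 * a_core_outer / (l_mean_outer + l_gap)

def l_fcl_lin(n_ac, a_core_outer, l_gap):
    """FCL inductance out of saturation (fault mode), Eq. (15)."""
    return MU0 * n_ac**2 * 2 * a_core_outer / l_gap

# Hypothetical design numbers
N_AC = 100            # ac turns
A_CORE = 0.01         # outer-leg cross-section, m^2
L_MEAN_OUTER = 1.5    # mean outer-leg path length, m
L_GAP = 0.003         # air-gap length, m

l_sat = l_fcl_sat(N_AC, A_CORE, L_MEAN_OUTER, L_GAP)
l_lin = l_fcl_lin(N_AC, A_CORE, L_GAP)
# The fault-mode inductance is orders of magnitude above the
# saturated-mode value, which is what limits the fault current.
print(l_sat, l_lin, l_lin / l_sat)
```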
These structures have only a slight effect on reducing the fault current compared with the configurations introduced in the previous section, and they consume more material. \begin{figure} \centering \includegraphics[height=6cm]{figures/Fig.2.33.jpg} \caption{Current passing through during fault condition with SCFCL} \label{fig:Fig. 2.33} \end{figure} Now we want to evaluate the flux-density distribution in Model D, which is shown in Fig.~\ref{fig:Fig. 2.34} and Fig.~\ref{fig:Fig. 2.35} for different times in the normal and fault states. Comparing these two flux-density distributions, the two outer legs gradually deviate from saturation because of the fault, so the flux density in normal mode is greater than in the fault state. It should also be noted that a large dc current must be applied to the dc winding at the design stage to drive the core fully into the saturation zone; the comparative diagram of the normal and fault modes for Model D is shown for 100 milliseconds in Fig.~\ref{fig:Fig. 2.36}. \begin{figure} \centering \includegraphics[height=8cm]{figures/Fig.2.34.jpg} \caption{Flux density distribution for Model D during normal mode} \label{fig:Fig. 2.34} \end{figure} \begin{figure} \centering \includegraphics[height=8cm]{figures/Fig.2.35.jpg} \caption{Flux density distribution for Model D during fault mode} \label{fig:Fig. 2.35} \end{figure} \begin{figure} \centering \includegraphics[height=6cm]{figures/Fig.2.36.jpg} \caption{Flux density for Model D during normal and fault operations} \label{fig:Fig. 2.36} \end{figure} \begin{figure} \centering \includegraphics[height=8cm]{figures/Fig.2.37.jpg} \caption{Flux density distribution for Model E during normal operation} \label{fig:Fig. 2.37} \end{figure} A flux-density distribution can also be provided for Model E, as follows.
As can be perceived, the middle legs are not deeply saturated from the beginning because of the presence of the short-circuited windings at the top and bottom of the core, and a large dc current must be applied to the winding to achieve saturation. The middle legs become gradually saturated over time and the flux slowly disperses over the surface of the iron core, with the middle legs alternating between saturated and unsaturated modes. A summary of the proposed configurations and their specifications is given in Table \ref{table 2.3}. \begin{center} \begin{table} \caption{Summary of proposed SCFCL configurations} \label{table 2.3} \setlength{\leftmargini}{1cm} \begin{tabular}{| m{7cm} | m{6cm} |} \hline \centering \textbf{SCFCL Configuration} & \textbf{Design Specifications} \\ \hline \centering \includegraphics[height=2.5cm]{figures/Fig.2.11.jpg} & \begin{itemize} \item Every phase has two cores \item Partial magnetic separation of dc \& ac fluxes \item An air gap in the side leg \item Voltage is induced in the dc winding \item FCL impedance is reduced during normal conditions \end{itemize} \\ \hline \centering \includegraphics[height=2cm]{figures/Fig.2.9.jpg} & \begin{itemize} \item Every phase has two cores \item There is no induced voltage \item 100\% magnetic separation of dc \& ac fluxes \item An air gap in the side leg \item There is no deep saturation in the middle leg because of one ac winding \item FCL impedance is increased during normal conditions \end{itemize} \\ \hline \centering \includegraphics[height=2.5cm]{figures/Fig.2.13.jpg} & \begin{itemize} \item Every phase has two cores \item Partial magnetic separation of dc \& ac fluxes \item An air gap in the side leg \item Volume of winding material is reduced because of the short-circuited dc winding \end{itemize} \\ \hline \centering \includegraphics[height=2.5cm]{figures/Fig.2.28.jpg} & \begin{itemize} \item Existence of an air gap in the middle leg \item Partial magnetic separation of dc \& ac fluxes \item Single phase
inductive SCFCL (every phase has a core) \item Conduction of ac flux across the leg with the air gap is sufficient \end{itemize} \\ \hline \centering \includegraphics[height=3cm]{figures/Fig.2.29.jpg} & \begin{itemize} \item Partial magnetic separation of dc \& ac fluxes \item Single phase inductive SCFCL (every phase has a core) \item Changes in flux distribution because of winding positions \end{itemize} \\ \hline \end{tabular} \end{table} \end{center} \section{Conclusion} Nowadays, fault current limiters can have a considerable effect on any failure or outage in power systems. They are very beneficial for postponing the replacement of power devices and play an important role in mitigating fault impacts. Saturated-core FCLs are attractive because of their saturation features and the prompt transfer of flux between the different parts of an iron core. Suitable designs of this type of FCL in terms of material, volume, winding placement, etc. can help industrial engineers choose an efficient configuration for their application. This device can remarkably reduce the vulnerability of a power grid. This chapter presented an analysis of different SCFCL configurations whose designs depend on the number of windings, winding positions, etc. Various parameters of this protective device, including the electric current, magnetic flux density and magnetic field strength, were assessed and compared. A numerical analysis with the finite element method was carried out to provide comprehensive information on SCFCLs and to better demonstrate how these variables change during normal operation and fault mode.